Understanding Usage

Codewick’s AI features run on tokens — the fundamental unit that AI providers use to measure input and output. This guide explains how tokens are counted, how to monitor your usage, and how to make the most of your monthly budget.

Every interaction with an AI model consumes tokens. A token is roughly 3–4 characters of English text. Each request involves two types of tokens:

  • Input tokens — the code, context, and instructions sent to the model
  • Output tokens — the response generated by the model

Both input and output tokens count toward your monthly budget. Token counts are determined by the AI provider, not by Codewick.
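Because exact counts come from the provider's tokenizer, you can only estimate ahead of time. A minimal sketch using the 3–4-characters-per-token rule of thumb (the `estimate_tokens` helper and the 4-characters figure are illustrative assumptions, not Codewick's actual tokenizer):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate from character count.

    Real counts come from the AI provider's tokenizer; this is only a
    budgeting heuristic based on ~4 characters per English token.
    """
    return max(1, round(len(text) / chars_per_token))

# A 53-character prompt estimates to about 13 tokens.
prompt = "add input validation to the signup form's email field"
print(estimate_tokens(prompt))  # → 13
```

Short, specific prompts keep both the estimate and the real count small, which matters because input and output both bill against the same budget.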

The following features consume tokens from your monthly budget:

| Feature          | Token usage                           |
|------------------|---------------------------------------|
| Chat messages    | Input context + AI response           |
| AI debugging     | Code analysis + suggested fixes       |
| AI review        | Code review analysis + feedback       |
| Project creation | Initial scaffolding + file generation |
| Plan stage       | Architecture and task planning        |
| Test generation  | Test code creation                    |

These features work without consuming any tokens:

  • Code editing in the Monaco editor
  • Terminal commands
  • File explorer and file management
  • Git operations (commit, push, pull, branch)
  • Built-in browser and live preview
  • Checkpoint creation and restoration
  • All settings and configuration

Your current token usage is displayed in the status bar at the bottom of the Codewick window. It shows:

  • A percentage bar indicating how much of your monthly budget you’ve used
  • A plain-language label (e.g., “42% used” or “Low balance”)

The meter changes color as you approach your limit:

| Usage level | Color  | Meaning            |
|-------------|--------|--------------------|
| 0–79%       | Green  | Normal usage       |
| 80–94%      | Yellow | Budget getting low |
| 95–100%     | Red    | Nearly exhausted   |
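The color bands above amount to two thresholds. A minimal sketch of that mapping (the `meter_color` function name is illustrative, not part of Codewick's API):

```python
def meter_color(pct_used: float) -> str:
    """Map budget usage percentage to the meter's color band."""
    if pct_used >= 95:
        return "red"     # nearly exhausted
    if pct_used >= 80:
        return "yellow"  # budget getting low
    return "green"       # normal usage

# e.g. meter_color(42) is "green"; meter_color(96) is "red".
```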

Click the usage counter in the status bar to open a detailed breakdown. This view shows:

  • Per-stage token counts — how many tokens each pipeline stage (Plan, Code, Debug, Review, Test, Deploy) has consumed
  • Daily usage graph — your consumption pattern over the current billing cycle
  • Reset date — when your usage counter resets to zero and your budget replenishes

This helps you identify which activities consume the most tokens so you can adjust your workflow.
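Conceptually, the per-stage view is just raw usage records grouped by pipeline stage. A minimal sketch with hypothetical data (the record format and numbers are invented for illustration):

```python
from collections import Counter

# Hypothetical usage log: (pipeline stage, tokens consumed per request).
records = [
    ("Plan", 1200), ("Code", 5400), ("Code", 3100),
    ("Debug", 800), ("Review", 950), ("Test", 600),
]

# Sum tokens per stage to get the breakdown shown in the usage view.
per_stage = Counter()
for stage, tokens in records:
    per_stage[stage] += tokens

# Here the Code stage dominates: 5400 + 3100 = 8500 tokens.
```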

You can set a per-session spend limit to avoid accidentally burning through your budget in a single sitting.

  1. Go to Settings > AI & Models > Session spend limit.
  2. Set a token threshold (e.g., 20% of monthly budget).
  3. When you approach the limit, Codewick shows a warning before continuing.
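The check behind that warning can be sketched in a few lines. Everything here is an illustrative assumption — the function name, the example budget, and the idea that the warning fires at 90% of the session limit are not documented Codewick behavior:

```python
MONTHLY_BUDGET = 1_000_000  # tokens; purely illustrative figure

def session_limit_warning(session_tokens: int,
                          monthly_budget: int = MONTHLY_BUDGET,
                          limit_pct: float = 20.0,
                          warn_at: float = 0.9) -> bool:
    """True once a session has consumed warn_at (here 90%) of its
    spend limit, where the limit is limit_pct of the monthly budget."""
    session_limit = monthly_budget * limit_pct / 100
    return session_tokens >= session_limit * warn_at

# With a 1,000,000-token budget and a 20% session limit (200,000 tokens),
# the warning fires once the session passes 180,000 tokens.
```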

If your monthly token budget is exhausted:

  • AI features are disabled — chat, debugging, review, and code generation stop working.
  • Non-AI features continue normally — editor, terminal, git, browser, file explorer, and checkpoints are unaffected.
  • The usage meter displays your reset date so you know when the budget replenishes.
  • You’ll see an upgrade prompt suggesting a higher tier if you consistently hit your limit.

Switch to Cost priority mode in Settings > AI & Models when you don’t need the most powerful model. Cost mode uses efficient models that consume fewer tokens per request.

Vague prompts force the AI to generate longer, broader responses. A specific request like “add input validation to the signup form’s email field” uses fewer tokens than “improve the signup form.”

When chatting with the AI, use @ mentions to reference specific files or functions. This keeps the input context small instead of sending your entire project:

@components/LoginForm.tsx fix the password validation regex

Create a .codewickignore file in your project root to exclude files and directories from AI context. This works like .gitignore and prevents large or irrelevant files from inflating your token usage.

node_modules/
dist/
*.min.js
*.lock
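Since the file works like `.gitignore`, its effect can be approximated with glob matching. A simplified sketch — real gitignore-style matching (negation with `!`, anchoring, `**`) is more involved, and Codewick's exact rules may differ:

```python
import fnmatch

# Patterns from the example .codewickignore above.
IGNORE_PATTERNS = ["node_modules/", "dist/", "*.min.js", "*.lock"]

def is_ignored(path: str) -> bool:
    """Simplified ignore check: patterns ending in '/' match directories
    anywhere in the path; other patterns glob-match the file name."""
    name = path.rsplit("/", 1)[-1]
    for pattern in IGNORE_PATTERNS:
        if pattern.endswith("/"):
            if path.startswith(pattern) or f"/{pattern}" in path + "/":
                return True
        elif fnmatch.fnmatch(name, pattern):
            return True
    return False

# is_ignored("node_modules/react/index.js") → True (excluded from context)
# is_ignored("src/components/LoginForm.tsx") → False (still sent to the AI)
```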