Context Management

AI models have limited context windows — they can only “see” a certain amount of text at once. Codewick’s context management system automatically selects the most relevant code, files, and conversation history to send to each model call, so the AI always has what it needs without wasting tokens on irrelevant information.

Every time a pipeline stage runs, Codewick’s orchestration layer decides what context to include. This isn’t a fixed set of files — it changes based on the stage, the task, and what’s happening in your project.

The system considers:

  • Your message and recent conversation history
  • Files you’ve referenced (via @ mentions or recent edits)
  • Files relevant to the task (detected through imports, dependencies, and naming patterns)
  • Pinned context you’ve manually specified
  • Project metadata (tech stack, directory structure, configuration)

Different stages get different context. This is a deliberate optimization:

  • Orchestration gets a broad summary of the project and your full message, but not entire file contents. It needs to understand the big picture, not read every line.
  • Planning gets file structure, dependency maps, and the orchestration output. It reads file headers and exports, not full implementations.
  • Building gets the specific files it needs to create or modify, plus their direct dependencies. This is the most targeted context.
  • UI Generation gets component files, stylesheets, and layout context relevant to the frontend work.
  • Debugging gets error messages, stack traces, and the files referenced in them.
  • Review gets the code produced by earlier stages plus relevant project conventions.

This per-stage approach keeps token usage efficient while ensuring each stage has what it needs.
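The stage list above can be pictured as a stage-to-context table. This is a hypothetical Python sketch of the idea, not Codewick's actual implementation; the stage names mirror the list, but the context category names are illustrative:

```python
# Hypothetical sketch of per-stage context selection -- not Codewick's
# real implementation. Category names are illustrative only.
STAGE_CONTEXT = {
    "orchestration": {"project_summary", "full_message"},
    "planning": {"file_structure", "dependency_map", "orchestration_output"},
    "building": {"target_files", "direct_dependencies"},
    "ui_generation": {"component_files", "stylesheets", "layout_context"},
    "debugging": {"error_messages", "stack_traces", "referenced_files"},
    "review": {"produced_code", "project_conventions"},
}

def context_for(stage: str) -> set[str]:
    """Return the context categories a given stage receives."""
    return STAGE_CONTEXT[stage]
```

The point of the table shape is that each stage's context is a deliberate subset, not a shared pool: building sees file contents that orchestration never does.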

When you open a project in Codewick for the first time, it performs a project analysis scan:

  1. Tech stack detection — Reads package.json, requirements.txt, Cargo.toml, go.mod, Gemfile, and similar files to identify your languages, frameworks, and dependencies.
  2. File indexing — Catalogs every eligible file in the project for fast lookup during context selection.
  3. Structure mapping — Builds a map of your directory layout, key entry points, and configuration files.

This analysis runs once and updates incrementally as you make changes. It enables Codewick to make intelligent context decisions from your very first message.
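Tech stack detection (step 1) is essentially a lookup from well-known manifest filenames to ecosystems. A minimal sketch of that idea, using only the manifest files named above (the mapping and function are illustrative, not Codewick's code):

```python
# Hypothetical sketch of manifest-based tech stack detection.
# The manifest-to-ecosystem mapping mirrors the files listed above.
MANIFESTS = {
    "package.json": "JavaScript/Node",
    "requirements.txt": "Python",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
    "Gemfile": "Ruby",
}

def detect_stack(root_entries: list[str]) -> list[str]:
    """Return the ecosystems implied by manifest files in a project root."""
    return sorted({MANIFESTS[name] for name in root_entries if name in MANIFESTS})
```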

Codewick indexes your project within these boundaries:

  • Maximum 500 files or 100MB total, whichever limit is reached first.
  • Individual files over 500KB are excluded from indexing. These are typically generated files, data dumps, or binaries that wouldn’t be useful as AI context anyway.

When your project contains 300 or more indexed files, Codewick displays a warning in the workspace suggesting you review your context configuration. Large projects benefit from a well-configured .codewickignore file and intentional use of pinning.
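The limits above compose in a specific order: oversized files are skipped outright, while the file-count and total-size caps stop indexing once either is hit. A hedged sketch of that logic (the function and its return shape are hypothetical, but the thresholds are the documented ones):

```python
# Hypothetical sketch of the documented indexing limits: a 500KB per-file
# cap, a 500-file / 100MB project cap, and a 300-file review warning.
MAX_FILES = 500
MAX_TOTAL_BYTES = 100 * 1024 * 1024  # 100MB project cap
MAX_FILE_BYTES = 500 * 1024          # 500KB per-file cap
WARN_FILES = 300

def build_index(files: list[tuple[str, int]]) -> tuple[list[str], bool]:
    """files: (path, size_bytes) pairs. Returns (indexed_paths, show_warning)."""
    indexed, total = [], 0
    for path, size in files:
        if size > MAX_FILE_BYTES:
            continue  # oversized files are excluded, not counted against the caps
        if len(indexed) >= MAX_FILES or total + size > MAX_TOTAL_BYTES:
            break     # stop at whichever project-wide limit is reached first
        indexed.append(path)
        total += size
    return indexed, len(indexed) >= WARN_FILES
```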

Codewick automatically excludes certain directories and file types from indexing. You don’t need to configure these — they’re excluded by default:

  • Dependency directories — node_modules, .venv, vendor, Pods, .gradle
  • Build output — dist, build, .next, out, target, __pycache__
  • Binary files — Images, fonts, compiled binaries, archives
  • Lock files — package-lock.json, yarn.lock, Cargo.lock (too large, too noisy)
  • IDE and OS files — .idea, .vscode/settings.json, .DS_Store

These exclusions keep the index focused on source code that’s meaningful to AI models.
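A default exclusion check like this amounts to testing each path component against a known set of directory names, plus a set of exact filenames. A hypothetical sketch, populated from the lists above:

```python
# Hypothetical sketch of the default exclusion check. The directory and
# filename sets come from the lists above; the function is illustrative.
from pathlib import PurePosixPath

DEFAULT_EXCLUDED_DIRS = {
    "node_modules", ".venv", "vendor", "Pods", ".gradle",   # dependencies
    "dist", "build", ".next", "out", "target", "__pycache__",  # build output
    ".idea",                                                 # IDE metadata
}
DEFAULT_EXCLUDED_FILES = {"package-lock.json", "yarn.lock", "Cargo.lock", ".DS_Store"}

def excluded_by_default(path: str) -> bool:
    """True if any parent directory or the filename itself is excluded."""
    parts = PurePosixPath(path).parts
    return (any(p in DEFAULT_EXCLUDED_DIRS for p in parts[:-1])
            or parts[-1] in DEFAULT_EXCLUDED_FILES)
```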

For project-specific exclusions, create a .codewickignore file in your project root. It uses the same syntax as .gitignore:

# Exclude test fixtures
tests/fixtures/
# Exclude generated API clients
src/generated/
# Exclude large data files
data/*.csv
data/*.json
# Exclude specific config files
config/secrets.local.yaml

Files matched by .codewickignore are excluded from AI context entirely. They still appear in the file explorer and editor — the exclusion only affects what gets sent to AI models.
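Conceptually, matching a path against an ignore file means skipping comments and blank lines, then testing each remaining pattern. This simplified Python sketch covers only the two pattern styles shown in the example above (trailing-slash directory prefixes and globs); real gitignore semantics also include negation with `!`, anchoring, and `**`, which are not implemented here:

```python
# Simplified sketch of .codewickignore matching -- covers only directory
# prefixes ("tests/fixtures/") and globs ("data/*.csv"), not full
# gitignore semantics (no "!", anchoring, or "**").
from fnmatch import fnmatch

def load_patterns(text: str) -> list[str]:
    """Keep non-empty, non-comment lines from an ignore file."""
    patterns = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            patterns.append(line)
    return patterns

def is_ignored(path: str, patterns: list[str]) -> bool:
    for pat in patterns:
        if pat.endswith("/"):          # directory pattern: match any path under it
            if path.startswith(pat):
                return True
        elif fnmatch(path, pat):       # glob pattern
            return True
    return False
```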

You can explicitly reference files in your chat messages by typing @ followed by a filename. Codewick provides autocomplete as you type.

Can you refactor @utils/auth.ts to use the new token format from @types/auth.d.ts?

When you @ mention a file:

  • Its full contents are included in the context for every pipeline stage that runs.
  • It takes priority over automatically selected files if context space is tight.
  • Multiple @ mentions are supported in a single message.

This is the most direct way to tell Codewick “look at this specific file.”
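Extracting mentions from a message is a simple pattern match. A hypothetical sketch, assuming a mention is `@` followed by an unbroken path (word characters, dots, slashes, and hyphens); Codewick's actual parser may differ:

```python
# Hypothetical sketch of @ mention extraction. Assumes a mention is "@"
# followed by a path with no spaces; trailing punctuation is not captured.
import re

MENTION = re.compile(r"@([\w./-]+)")

def extract_mentions(message: str) -> list[str]:
    """Return the file paths @-mentioned in a chat message, in order."""
    return MENTION.findall(message)
```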

For files or information you want included in every AI interaction — not just the current message — use pinning.

  • Pin a file: Right-click a file in the explorer and select Pin to AI context, or use the pin icon in the editor tab bar.
  • Pin a note: In the chat panel, click the pin button on any message to pin its content as persistent context.
  • Pinned items persist across messages within the same task or conversation.
  • Pinned files are included in every pipeline stage’s context.
  • You can view and manage all pinned items in the Context sidebar panel.
  • Unpin items when they’re no longer relevant to free up context space.

Tips for optimizing context in large codebases


If you’re working in a project with hundreds of files, these practices will help Codewick give you better results while using fewer tokens:

  1. Create a .codewickignore file early. Exclude generated code, test fixtures, data files, and anything the AI doesn’t need.

  2. Use @ mentions for specific files. Don’t rely solely on automatic context selection for large projects — explicitly point to the files you’re working with.

  3. Unpin files you’re done with. Context pins from earlier in your session may no longer be relevant. Review your pins periodically.

  4. Start new conversations for new tasks. Conversation history accumulates as context. A fresh conversation gives the AI a clean slate.

  5. Keep files focused. Smaller, well-organized files are easier for the context system to select precisely. A 2,000-line utility file means the AI gets all 2,000 lines even if it only needs one function.

  6. Check the per-stage token breakdown. If input tokens are high relative to output tokens, you may have too much context being sent. See Token Usage & Budgets for how to check this.

  7. Use the project structure to your advantage. Codewick understands directory conventions. Keeping related files together helps the context system find what it needs.