Context Management
AI models have limited context windows — they can only “see” a certain amount of text at once. Codewick’s context management system automatically selects the most relevant code, files, and conversation history to send to each model call, so the AI always has what it needs without wasting tokens on irrelevant information.
How smart context works
Every time a pipeline stage runs, Codewick’s orchestration layer decides what context to include. This isn’t a fixed set of files — it changes based on the stage, the task, and what’s happening in your project.
The system considers:
- Your message and recent conversation history
- Files you’ve referenced (via @ mentions or recent edits)
- Files relevant to the task (detected through imports, dependencies, and naming patterns)
- Pinned context you’ve manually specified
- Project metadata (tech stack, directory structure, configuration)
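Conceptually, this gathering step can be pictured as building a priority-ordered candidate list. The sketch below is purely illustrative — Codewick’s internals aren’t public, so the `ContextCandidate` type, source names, and priority values are all assumptions:

```python
from dataclasses import dataclass

# Hypothetical sketch: type names and priority values are assumptions,
# not Codewick's actual implementation.
@dataclass
class ContextCandidate:
    source: str     # "message", "mention", "pinned", "recent_edit", "related"
    content: str
    priority: int   # lower = selected first when context space is tight

def gather_candidates(message, mentions, pinned, recent_edits, related):
    """Collect candidate context items, highest priority first."""
    candidates = [ContextCandidate("message", message, 0)]
    candidates += [ContextCandidate("mention", m, 1) for m in mentions]
    candidates += [ContextCandidate("pinned", p, 2) for p in pinned]
    candidates += [ContextCandidate("recent_edit", r, 3) for r in recent_edits]
    candidates += [ContextCandidate("related", f, 4) for f in related]
    return sorted(candidates, key=lambda c: c.priority)
```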
Per-stage context budgets
Different stages get different context. This is a deliberate optimization:
- Orchestration gets a broad summary of the project and your full message, but not entire file contents. It needs to understand the big picture, not read every line.
- Planning gets file structure, dependency maps, and the orchestration output. It reads file headers and exports, not full implementations.
- Building gets the specific files it needs to create or modify, plus their direct dependencies. This is the most targeted context.
- UI Generation gets component files, stylesheets, and layout context relevant to the frontend work.
- Debugging gets error messages, stack traces, and the files referenced in them.
- Review gets the code produced by earlier stages plus relevant project conventions.
This per-stage approach keeps token usage efficient while ensuring each stage has what it needs.
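Budget-constrained selection could be sketched roughly as follows. The `STAGE_BUDGETS` numbers are invented for this example — Codewick’s real per-stage limits are internal and may differ entirely:

```python
# Illustrative only: these token budgets are assumed values,
# not Codewick's actual per-stage limits.
STAGE_BUDGETS = {
    "orchestration": 4_000,
    "planning": 8_000,
    "building": 16_000,
    "debugging": 12_000,
}

def trim_to_budget(items, stage, estimate_tokens):
    """Keep items (assumed pre-sorted by priority) until the budget runs out."""
    budget = STAGE_BUDGETS[stage]
    kept, used = [], 0
    for item in items:
        cost = estimate_tokens(item)
        if used + cost > budget:
            continue  # skip items that don't fit; try smaller ones
        kept.append(item)
        used += cost
    return kept
```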
Project analysis on open
When you open a project in Codewick for the first time, it performs a project analysis scan:
- Tech stack detection — Reads `package.json`, `requirements.txt`, `Cargo.toml`, `go.mod`, `Gemfile`, and similar files to identify your languages, frameworks, and dependencies.
- File indexing — Catalogs every eligible file in the project for fast lookup during context selection.
- Structure mapping — Builds a map of your directory layout, key entry points, and configuration files.
This analysis runs once and updates incrementally as you make changes. It enables Codewick to make intelligent context decisions from your very first message.
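Manifest-based stack detection of this kind can be sketched with a simple filename-to-ecosystem lookup. This is an illustration of the idea, not Codewick’s actual detection table:

```python
from pathlib import Path

# Assumed mapping for illustration; Codewick likely recognizes more
# manifests than the ones listed in the docs.
MANIFESTS = {
    "package.json": "JavaScript/Node",
    "requirements.txt": "Python",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
    "Gemfile": "Ruby",
}

def detect_stack(project_root: str) -> set:
    """Return the ecosystems whose manifest files exist at the project root."""
    root = Path(project_root)
    return {eco for name, eco in MANIFESTS.items() if (root / name).exists()}
```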
File indexing limits
Codewick indexes your project within these boundaries:
- Maximum 500 files or 100MB total, whichever limit is reached first.
- Individual files over 500KB are excluded from indexing. These are typically generated files, data dumps, or binaries that wouldn’t be useful as AI context anyway.
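A minimal sketch of how these limits might be enforced during indexing — the traversal order and bookkeeping here are assumptions, but the thresholds match the documented limits:

```python
# Thresholds from the docs; enforcement logic below is an assumed sketch.
MAX_FILES = 500
MAX_TOTAL_BYTES = 100 * 1024 * 1024   # 100 MB total
MAX_FILE_BYTES = 500 * 1024           # 500 KB per file

def build_index(files):
    """files: iterable of (path, size_in_bytes) pairs."""
    index, total = [], 0
    for path, size in files:
        if size > MAX_FILE_BYTES:
            continue  # oversized files are excluded outright
        if len(index) >= MAX_FILES or total + size > MAX_TOTAL_BYTES:
            break     # stop at whichever global limit is reached first
        index.append(path)
        total += size
    return index
```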
Large project warning
When your project contains 300 or more indexed files, Codewick displays a warning in the workspace suggesting you review your context configuration. Large projects benefit from a well-configured `.codewickignore` file and intentional use of pinning.
Auto-exclusions
Codewick automatically excludes certain directories and file types from indexing. You don’t need to configure these — they’re excluded by default:
- Dependency directories — `node_modules`, `.venv`, `vendor`, `Pods`, `.gradle`
- Build output — `dist`, `build`, `.next`, `out`, `target`, `__pycache__`
- Binary files — Images, fonts, compiled binaries, archives
- Lock files — `package-lock.json`, `yarn.lock`, `Cargo.lock` (too large, too noisy)
- IDE and OS files — `.idea`, `.vscode/settings.json`, `.DS_Store`
These exclusions keep the index focused on source code that’s meaningful to AI models.
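An exclusion check over rules like these might look as follows. This is an approximate recreation — Codewick’s exact matching logic and complete exclusion lists are assumptions:

```python
from pathlib import PurePosixPath

# Rule sets reconstructed from the docs; BINARY_SUFFIXES is a small
# assumed sample, not an exhaustive list.
EXCLUDED_DIRS = {"node_modules", ".venv", "vendor", "Pods", ".gradle",
                 "dist", "build", ".next", "out", "target", "__pycache__",
                 ".idea"}
EXCLUDED_NAMES = {"package-lock.json", "yarn.lock", "Cargo.lock", ".DS_Store"}
BINARY_SUFFIXES = {".png", ".jpg", ".woff2", ".zip", ".exe", ".o"}

def is_auto_excluded(path: str) -> bool:
    """True if the path falls under any default exclusion rule."""
    p = PurePosixPath(path)
    if set(p.parts[:-1]) & EXCLUDED_DIRS:   # inside an excluded directory
        return True
    if p.name in EXCLUDED_NAMES:            # an excluded filename
        return True
    return p.suffix in BINARY_SUFFIXES      # a binary file type
```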
The .codewickignore file
For project-specific exclusions, create a `.codewickignore` file in your project root. It uses the same syntax as `.gitignore`:
```gitignore
# Exclude test fixtures
tests/fixtures/

# Exclude generated API clients
src/generated/

# Exclude large data files
data/*.csv
data/*.json

# Exclude specific config files
config/secrets.local.yaml
```

Files matched by `.codewickignore` are excluded from AI context entirely. They still appear in the file explorer and editor — the exclusion only affects what gets sent to AI models.
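A much-simplified matcher for ignore patterns like these can be sketched with `fnmatch`. Real `.gitignore` semantics (negation with `!`, anchoring, `**`) are richer, so treat this as an illustration only:

```python
from fnmatch import fnmatch

def is_ignored(path: str, patterns: list) -> bool:
    """Simplified gitignore-style matching; not a full implementation."""
    for pat in patterns:
        if pat.endswith("/"):
            # directory pattern: match the path or any nested subtree
            if path.startswith(pat) or ("/" + pat) in ("/" + path):
                return True
        elif fnmatch(path, pat) or fnmatch(path.rsplit("/", 1)[-1], pat):
            # match against the full path, then the basename
            return True
    return False
```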
@ mentions in chat
You can explicitly reference files in your chat messages by typing @ followed by a filename. Codewick provides autocomplete as you type.
```
Can you refactor @utils/auth.ts to use the new token format from @types/auth.d.ts?
```

When you @ mention a file:
- Its full contents are included in the context for every pipeline stage that runs.
- It takes priority over automatically selected files if context space is tight.
- Multiple @ mentions are supported in a single message.
This is the most direct way to tell Codewick “look at this specific file.”
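Extracting such references is essentially a tokenizing pass over the message. The regex below is hypothetical — Codewick’s actual rules (for example, how it handles filenames with spaces) aren’t documented here:

```python
import re

# Assumed pattern: an @ followed by word characters, dots, slashes,
# and hyphens. Trailing punctuation like "?" is not captured.
MENTION_RE = re.compile(r"@([\w./-]+)")

def extract_mentions(message: str) -> list:
    """Return the file paths referenced via @ mentions, in order."""
    return MENTION_RE.findall(message)
```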
Pinning context
For files or information you want included in every AI interaction — not just the current message — use pinning.
How to pin
- Pin a file: Right-click a file in the explorer and select Pin to AI context, or use the pin icon in the editor tab bar.
- Pin a note: In the chat panel, click the pin button on any message to pin its content as persistent context.
Pin behavior
- Pinned items persist across messages within the same task or conversation.
- Pinned files are included in every pipeline stage’s context.
- You can view and manage all pinned items in the Context sidebar panel.
- Unpin items when they’re no longer relevant to free up context space.
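The behavior above amounts to a small persistent store consulted on every stage. This sketch of the idea uses invented names (`PinStore`, `context_for_stage`) — it is not Codewick’s API:

```python
# Hypothetical sketch of pin persistence; class and method names
# are assumptions for illustration.
class PinStore:
    def __init__(self):
        self._pins = {}   # key (file path or note label) -> pinned content

    def pin(self, key: str, content: str) -> None:
        self._pins[key] = content

    def unpin(self, key: str) -> None:
        self._pins.pop(key, None)   # no-op if the key isn't pinned

    def context_for_stage(self) -> list:
        # pinned items are included in every pipeline stage's context
        return list(self._pins.values())
```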
Tips for optimizing context in large codebases
If you’re working in a project with hundreds of files, these practices will help Codewick give you better results while using fewer tokens:
- Create a `.codewickignore` file early. Exclude generated code, test fixtures, data files, and anything the AI doesn’t need.
- Use @ mentions for specific files. Don’t rely solely on automatic context selection for large projects — explicitly point to the files you’re working with.
- Unpin files you’re done with. Context pins from earlier in your session may no longer be relevant. Review your pins periodically.
- Start new conversations for new tasks. Conversation history accumulates as context. A fresh conversation gives the AI a clean slate.
- Keep files focused. Smaller, well-organized files are easier for the context system to select precisely. A 2,000-line utility file means the AI gets all 2,000 lines even if it only needs one function.
- Check the per-stage token breakdown. If input tokens are high relative to output tokens, you may have too much context being sent. See Token Usage & Budgets for how to check this.
- Use the project structure to your advantage. Codewick understands directory conventions. Keeping related files together helps the context system find what it needs.