The AI Pipeline
Every task you send to Codewick passes through a structured six-stage AI pipeline. Rather than firing off a single prompt to a single model, Codewick breaks your request into specialized phases — each handled by a model optimized for that kind of work.
This architecture means a planning model reasons about what to build, a coding model writes the code, and a review model audits the result — all behind a single chat message from you.
The six stages
| Stage | Purpose | What it does |
|---|---|---|
| Orchestration | Understand intent | Breaks your message into structured sub-tasks, determines which stages are needed, and assigns priority. |
| Planning | Scope the work | Defines file structure, maps dependencies, identifies which existing files need changes, and outlines an execution plan. |
| Building | Write code | Creates new files, edits existing ones, and implements the logic described in the plan. |
| UI Generation | Build interfaces | Generates frontend components, styles, and layouts. Handles markup, CSS, and component composition. |
| Debugging | Find and fix errors | Identifies syntax errors, runtime failures, and logic bugs. Traces root causes across files and suggests or applies fixes. |
| Review | Quality check | Audits the generated code for correctness, security issues, performance concerns, and adherence to your project’s conventions. |
How the pipeline runs
Stages execute sequentially. Orchestration always runs first to determine which subsequent stages are needed. Each stage receives the output of the previous one as context, building on the work done so far.
Not every task uses all six stages. A simple question about your code might only trigger Orchestration and Review. A request to “add a dark mode toggle” could run through all six. Codewick decides automatically based on what the Orchestration stage determines.
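This flow can be sketched as a short loop. This is an illustrative sketch, not Codewick's implementation — the stage names come from the table above, but `run_stage` and the context shape are assumptions:

```python
# Hypothetical sketch of sequential stage execution: orchestration runs
# first and selects the stages; each later stage sees accumulated context.
STAGES = ["orchestration", "planning", "building",
          "ui_generation", "debugging", "review"]

def run_pipeline(message, run_stage):
    """run_stage(name, context) -> stage output (placeholder callable)."""
    context = {"message": message}
    context["orchestration"] = run_stage("orchestration", context)
    selected = context["orchestration"]["stages"]  # e.g. {"building", "review"}
    for stage in STAGES[1:]:
        if stage in selected:
            context[stage] = run_stage(stage, context)
    return context
```

Because each stage writes its output back into the shared context, a later stage (like Review) can see everything produced before it.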
Pipeline progress bar
While a task is running, a progress bar appears at the top of the chat panel — a thin 3px amber bar divided into six segments, one for each stage. The currently active segment pulses to show where the pipeline is in its work.
- Filled segments — Stages that have completed successfully.
- Pulsing segment — The stage currently running.
- Dimmed segments — Stages that will be skipped for this task.
- Empty segments — Stages that haven’t run yet but are queued.
This gives you a quick visual sense of how far along your task is without needing to read any logs. The progress bar disappears once the pipeline finishes.
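The four segment styles above follow directly from the pipeline state. A minimal sketch of that mapping, assuming hypothetical state names (`completed`, `active`, `skipped`):

```python
# Illustrative mapping from pipeline state to the four segment styles
# described above; the state representation is an assumption.
def segment_states(completed, active, skipped, all_stages):
    states = []
    for stage in all_stages:
        if stage in completed:
            states.append("filled")      # completed successfully
        elif stage == active:
            states.append("pulsing")     # currently running
        elif stage in skipped:
            states.append("dimmed")      # will be skipped
        else:
            states.append("empty")       # queued, not yet run
    return states
```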
Seeing what happened
After a task completes, each AI response card in the chat displays a model label showing which model handled that stage. This transparency lets you see exactly which model wrote your code versus which one reviewed it.
Click the model label to expand a breakdown of all stages that ran, including timing information for each one. This is useful when you want to understand where the pipeline spent its time.
What each stage looks like in practice
Orchestration
This is the “brain” of the pipeline. It reads your message, examines the current state of your project, and produces a structured task breakdown. You won’t see the orchestration output directly — it feeds into the downstream stages.
The orchestrator decides:
- Which stages to activate for this task
- What priority and ordering to assign to sub-tasks
- What context each downstream stage will need
- Whether the task can be handled in a single pass or needs iteration
For complex requests like “build a user authentication system with OAuth support,” the orchestrator breaks this into multiple sub-tasks: create the auth service, add OAuth providers, build login/signup UI, write middleware, and add tests.
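A breakdown like that might look something like the following. The field names and priorities here are made up for illustration; Codewick doesn't document its internal task format:

```python
# Hypothetical orchestrator output for the auth-system request above.
auth_task = {
    "stages": ["planning", "building", "ui_generation", "debugging", "review"],
    "subtasks": [
        {"id": 1, "goal": "create the auth service", "priority": "high"},
        {"id": 2, "goal": "add OAuth providers", "priority": "high", "depends_on": [1]},
        {"id": 3, "goal": "build login/signup UI", "priority": "medium", "depends_on": [1]},
        {"id": 4, "goal": "write middleware", "priority": "medium", "depends_on": [2]},
        {"id": 5, "goal": "add tests", "priority": "low", "depends_on": [1, 2, 3, 4]},
    ],
    "single_pass": False,  # needs iteration across sub-tasks
}
```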
Planning
The planner outputs a concrete action plan: which files to create, which to modify, and in what order. For larger tasks, it also identifies potential conflicts (like two changes touching the same function) and sequences work to avoid them.
The plan includes:
- A list of files to create or modify
- The order of operations
- Dependency relationships between changes
- Estimated scope (how many files, how many lines)
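Sequencing changes so that nothing runs before its dependencies is, at its core, a topological sort. A minimal sketch, assuming a hypothetical plan format where each file maps to the files it depends on:

```python
# Simple depth-first topological ordering of planned file changes:
# every file is emitted only after all files it depends on.
def order_changes(plan):
    """plan: {filename: [filenames it depends on]} -> ordered list."""
    ordered, seen = [], set()

    def visit(f):
        if f in seen:
            return
        seen.add(f)
        for dep in plan.get(f, []):
            visit(dep)       # emit dependencies first
        ordered.append(f)

    for f in plan:
        visit(f)
    return ordered
```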
Building
The builder is where code gets written. It receives the plan and the relevant source files, then produces diffs or new files. This stage consumes the most tokens for code-heavy tasks.
The builder works with your project’s existing patterns. It reads your imports, naming conventions, and code style to produce code that fits in naturally. It handles everything from single-line fixes to multi-file feature implementations.
UI Generation
When your task involves frontend work, this stage handles component creation, styling, and layout. It understands common frameworks (React, Vue, Svelte, etc.) and generates idiomatic code for your stack.
The UI generator considers:
- Your existing component library and design patterns
- CSS methodology (Tailwind, CSS modules, styled-components, etc.)
- Responsive layout requirements
- Accessibility best practices
Debugging
If the builder’s output contains errors — or if you’ve asked Codewick to fix a bug — the debugging stage kicks in. It can read error messages, trace stack traces, and cross-reference multiple files to find root causes.
The debugger approaches problems methodically:
- Reads the error message or symptom description
- Identifies the file(s) and line(s) involved
- Traces the execution path to find the root cause
- Proposes a fix (or applies one, if within a building pipeline)
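The steps above can be sketched as a small function. Every callable here is a placeholder standing in for a pipeline capability, not a Codewick API:

```python
# Hedged sketch of the debugger's method: read the symptom, locate the
# files, trace to a root cause, then propose (or apply) a fix.
def debug(symptom, locate, trace, propose_fix, apply_fix=None):
    files = locate(symptom)              # identify the file(s) and line(s) involved
    root_cause = trace(symptom, files)   # walk the execution path
    fix = propose_fix(root_cause)
    if apply_fix is not None:            # within a building pipeline, apply it
        apply_fix(fix)
    return fix
```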
Review
The final stage audits everything produced by the earlier stages. It checks for:
- Correctness — Does the code do what was requested?
- Security — Are there injection vulnerabilities, exposed secrets, or unsafe patterns?
- Performance — Are there obvious inefficiencies like unnecessary re-renders or O(n^2) loops?
- Style — Does the code match the conventions already present in your project?
- Edge cases — Are there unhandled null values, empty arrays, or boundary conditions?
Review findings appear as annotations on the response card, flagged by severity (info, warning, or critical). Critical findings may trigger the debugging stage to apply automatic fixes.
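The severity rule above amounts to a simple predicate. The severity levels come from the doc; the finding shape is an assumption for illustration:

```python
# Illustrative review-finding shape and the rule that critical findings
# may re-trigger the debugging stage for automatic fixes.
SEVERITIES = ("info", "warning", "critical")

def needs_autofix(findings):
    """True if any finding is critical (may trigger the debugging stage)."""
    return any(f["severity"] == "critical" for f in findings)
```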
Pipeline behavior by task type
Different kinds of requests activate different stage combinations:
- “Explain this function” — Orchestration only (no code changes needed)
- “Add a login page” — All six stages
- “Fix the TypeError on line 42” — Orchestration, Debugging, Building, Review
- “Refactor this component” — Orchestration, Planning, Building, Review
- “Style the sidebar” — Orchestration, Planning, UI Generation, Review
- “Review my auth middleware” — Orchestration, Review
- “Create a REST API for users” — Orchestration, Planning, Building, Review
The pipeline adapts to what you need. You don’t have to think about stages — they run automatically behind the progress bar.
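Written out as data, the combinations above look like this. The table is purely illustrative — in practice Codewick selects stages dynamically per task rather than from a fixed map:

```python
# The stage combinations listed above, as a lookup table. Keys are
# shorthand task types invented for this example.
STAGE_COMBOS = {
    "explain":     ["orchestration"],
    "add feature": ["orchestration", "planning", "building",
                    "ui_generation", "debugging", "review"],
    "fix bug":     ["orchestration", "debugging", "building", "review"],
    "refactor":    ["orchestration", "planning", "building", "review"],
    "style":       ["orchestration", "planning", "ui_generation", "review"],
    "review":      ["orchestration", "review"],
}
```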
Errors and retries
If a stage encounters an error — for example, the building stage produces code that fails a basic syntax check — the pipeline can retry that stage. You’ll see the progress bar segment pulse again as the retry runs.
Retries happen automatically for transient issues (model timeouts, rate limits). For content errors (like invalid code), the debugging stage may activate to fix the issue before the pipeline continues.
If a stage fails after retries, the pipeline halts and you’ll see an error message explaining what went wrong and suggesting next steps.
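The retry behavior described above can be sketched as a small loop. The exception names and retry count are assumptions for the example:

```python
# Minimal retry sketch: transient errors retry the same stage, content
# errors hand off to debugging, and exhausted retries halt the pipeline.
class TransientError(Exception):
    """e.g. model timeout or rate limit."""

class ContentError(Exception):
    """e.g. generated code fails a syntax check."""

def run_with_retries(stage_fn, max_retries=2, debug_fn=None):
    for _attempt in range(max_retries + 1):
        try:
            return stage_fn()
        except TransientError:
            continue                  # retry the same stage automatically
        except ContentError as err:
            if debug_fn is not None:
                return debug_fn(err)  # debugging stage fixes the output
            raise
    raise RuntimeError("stage failed after retries; pipeline halts")
```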
Canceling a running pipeline
If you realize you’ve sent the wrong request or want to change direction, click the Stop button that appears next to the progress bar during execution. This cancels the current pipeline run. Any code already written by completed stages is preserved in the chat — you can review it even after canceling.
Token cost across stages
Each stage consumes tokens independently. Orchestration and Review tend to be lighter. Building and UI Generation are heavier since they produce more output. You can see a per-stage token breakdown by clicking the usage counter in your workspace.
The amount of context sent to each stage also affects token cost. Building receives the most file context, while Orchestration works from summaries. For details on how context is selected per stage, see Context Management.
For more on how token usage works, see Token Usage & Budgets.