# Pipeline Library
Overview of the 45 AI pipelines in GridWork HQ — categories, model routing, multi-pass execution, and how to trigger pipelines.
GridWork HQ ships with 45 AI pipelines organized across five categories. Each pipeline is a Markdown definition file in the Knowledge Vault that the pipeline server reads at runtime. No code changes are needed to add, modify, or remove pipelines.
## Pipeline Categories
| Category | Count | Purpose |
|---|---|---|
| Web Agency | 12 | Client delivery workflows — audits, builds, SEO, brand, content |
| Marketing | 8 | Lead generation, outreach, proposals, follow-ups |
| Design | 7 | Brand systems, style guides, design briefs, asset planning |
| Operations | 10 | Internal audits, archiving, knowledge maintenance, scope checks |
| General | 8 | Reports, specs, plans, and cross-category utilities |
## Model Routing
Pipelines use different Claude models depending on the complexity and purpose of the task:
| Model | Used For | Examples |
|---|---|---|
| Opus | Direct client deliverables, complex analysis | audit, propose, build, brand |
| Sonnet | Draft generation, structured output | content, seo, report, outreach |
| Haiku | Lightweight cron tasks, quick checks | kb-librarian, scope-audit, friday-update |
Model assignment is defined in each pipeline's Markdown frontmatter. You can override it by editing the `model` field in the pipeline definition file.
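For example, routing a pipeline to Opus is a one-line frontmatter change. This is a minimal sketch showing only the relevant field; the full frontmatter format is described under Pipeline Definition Format below:

```yaml
---
name: audit
model: opus   # route this pipeline to Opus instead of its current model
---
```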
## Multi-Pass Execution
Complex pipelines run in multiple passes rather than a single prompt. Each pass builds on the previous output:
- Context gathering — reads relevant files from the Knowledge Vault (client folder, templates, memory)
- Analysis — processes the input against gathered context
- Generation — produces structured output (Markdown, JSON, or both)
- Review — validates output against the pipeline's quality criteria
- Storage — saves results to the Knowledge Vault output directory
Not every pipeline uses all five passes. Simple pipelines like kb-librarian may only need two passes, while build or brand may use all five.
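The pass loop above can be sketched as follows. This is an illustrative sketch, not the pipeline server's actual code: the function names, the dict-based `state`, and the placeholder pass body are assumptions; only the pass names and the "each pass builds on the previous output" behavior come from the description above.

```python
# Pass names from the list above; a pipeline with `passes: N` runs the first N.
PASSES = ["context", "analysis", "generation", "review", "storage"]

def run_pass(name: str, state: dict) -> dict:
    # Placeholder pass body: a real pass would read the Knowledge Vault,
    # call the model, validate output, or store results depending on `name`.
    new_state = dict(state)
    new_state["output"] = f"{name} pass complete (prev: {state['output']})"
    return new_state

def run_pipeline(passes: int, input_text: str) -> dict:
    """Run the first `passes` passes; each pass builds on the previous output."""
    state = {"input": input_text, "output": None}
    for name in PASSES[:passes]:
        state = run_pass(name, state)
    return state
```

Under this sketch, a two-pass pipeline like kb-librarian would run only the context and analysis passes, while a five-pass pipeline like build would run all of them.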
## Triggering Pipelines

### From Mission Control
The Mission Control page in the dashboard shows cards for each pipeline with input fields and a Run button. Select a pipeline, provide the required input, and click Run to start a job.
### From the CLI
```bash
curl -X POST http://localhost:8750/pipelines/run \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"pipeline": "prospect", "input": "web design agency in Atlanta"}'
```

### From Cron
Automated pipelines are triggered by the cron scheduler. See Cron Configuration for schedule definitions.
## Pipeline Definition Format

Each pipeline is defined in `knowledge/system/rules/pipelines/`:
```markdown
---
name: prospect
description: "Research and qualify potential leads"
category: marketing
model: sonnet
passes: 3
inputs:
  - name: query
    type: text
    required: true
    description: "Business type or search query"
---

## Instructions

[Pipeline instructions for the AI agent...]
```

## Job Queue
The pipeline server processes a maximum of 3 jobs in parallel (configurable via `MAX_PARALLEL_PIPELINES`). Additional jobs are queued and processed in order. Job status streams back to the dashboard via SSE.
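The bounded-parallelism behavior can be sketched with a fixed-size worker pool over a FIFO queue. This is a simplified model under assumptions, not the pipeline server's implementation: the actual server runs pipeline jobs and streams status over SSE, while this sketch just records which jobs ran.

```python
import os
import queue
import threading

# Worker-pool size mirrors the server's limit: at most this many jobs
# run concurrently; extra jobs wait in FIFO order.
MAX_PARALLEL = int(os.environ.get("MAX_PARALLEL_PIPELINES", "3"))

jobs: "queue.Queue[str | None]" = queue.Queue()
results: list[str] = []
results_lock = threading.Lock()

def worker() -> None:
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut this worker down
            return
        with results_lock:
            results.append(f"ran {job}")  # stand-in for running the pipeline
        jobs.task_done()

threads = [threading.Thread(target=worker) for _ in range(MAX_PARALLEL)]
for t in threads:
    t.start()

# Four jobs against a limit of 3: the fourth waits until a worker frees up.
for name in ["prospect", "audit", "build", "seo"]:
    jobs.put(name)

jobs.join()                      # block until every queued job completes
for _ in threads:
    jobs.put(None)               # one sentinel per worker
for t in threads:
    t.join()
```

With `MAX_PARALLEL_PIPELINES=3`, the fourth job here is queued behind the first three rather than rejected, matching the behavior described above.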
## Adding a Pipeline

- Create a definition file in `knowledge/system/rules/pipelines/`
- Register the pipeline in the pipeline server's registry
- Add a card to Mission Control if the pipeline is user-triggered
- See AI Pipelines Overview for the full reference