Hot-reloadable prompts, structured reasoning, and chain workflows for your AI assistant.
Quick Start • What You Get • Syntax • Docs
Stop copy-pasting prompts. This server turns your prompt library into a programmable engine:
- Version Control — Prompts are YAML + templates in git. Track changes, review diffs.
- Hot Reload — Edit a template, run it immediately. No restarts.
- Structured Execution — Parses operators, injects methodology, enforces quality gates.
```mermaid
%%{init: {'theme': 'neutral', 'themeVariables': {'background':'#0b1224','primaryColor':'#e2e8f0','primaryBorderColor':'#1f2937','primaryTextColor':'#0f172a','lineColor':'#94a3b8','fontFamily':'"DM Sans","Segoe UI",sans-serif','fontSize':'14px','edgeLabelBackground':'#0b1224'}}}%%
flowchart TB
    classDef actor fill:#0f172a,stroke:#cbd5e1,stroke-width:1.5px,color:#f8fafc;
    classDef server fill:#111827,stroke:#fbbf24,stroke-width:1.8px,color:#f8fafc;
    classDef process fill:#e2e8f0,stroke:#1f2937,stroke-width:1.6px,color:#0f172a;
    classDef client fill:#f4d0ff,stroke:#a855f7,stroke-width:1.6px,color:#2e1065;
    classDef clientbg fill:#1a0a24,stroke:#a855f7,stroke-width:1.8px,color:#f8fafc;
    classDef decision fill:#fef3c7,stroke:#f59e0b,stroke-width:1.6px,color:#78350f;
    linkStyle default stroke:#94a3b8,stroke-width:2px

    User["1. User sends command"]:::actor
    Example[">>analyze @CAGEERF :: 'cite sources'"]:::actor
    User --> Example --> Parse

    subgraph Server["MCP Server"]
        direction TB
        Parse["2. Parse operators"]:::process
        Inject["3. Inject framework + gates"]:::process
        Render["4. Render prompt"]:::process
        Decide{"6. Route verdict"}:::decision
        Parse --> Inject --> Render
    end
    Server:::server

    subgraph Client["Claude (Client)"]
        direction TB
        Execute["5. Run prompt + check gates"]:::client
    end
    Client:::clientbg

    Render -->|"Prompt with gate criteria"| Execute
    Execute -->|"Verdict + output"| Decide
    Decide -->|"PASS → render next step"| Render
    Decide -->|"FAIL → render retry prompt"| Render
    Decide -->|"Done"| Result["7. Return to user"]:::actor
```
The feedback loop: You send a command with operators → Server parses and injects methodology/gates → Claude executes and self-evaluates → Server routes: next step (PASS), retry (FAIL), or return result (done).
```
/plugin install claude-prompts@minipuft
```

The plugin adds hooks that fix common issues:
| Problem | Hook Fix |
|---|---|
| Model ignores `>>analyze` | Detects syntax, suggests correct MCP call |
| Chain step forgotten | Injects `[Chain] Step 2/5 - continue` |
| Gate review skipped | Reminds `GATE_REVIEW: PASS\|FAIL` |
Raw MCP works, but models sometimes miss the syntax. The hooks catch that. → hooks/README.md
NPM (Claude Desktop, generic clients)
Add to your MCP config (`claude_desktop_config.json`, `.cursor/mcp.json`, etc.):
```json
{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest"]
    }
  }
}
```

Restart your client, then test: `resource_manager(resource_type:"prompt", action:"list")`
From Source
```shell
git clone https://github.com/minipuft/claude-prompts-mcp.git
cd claude-prompts-mcp/server && npm install && npm run build
```

Then point your config to `server/dist/index.js`.
Use your own prompts without cloning:
```json
{
  "mcpServers": {
    "claude-prompts": {
      "command": "npx",
      "args": ["-y", "claude-prompts@latest"],
      "env": {
        "MCP_RESOURCES_PATH": "/path/to/your/resources"
      }
    }
  }
}
```

Your resources directory can contain: `prompts/`, `gates/`, `methodologies/`, `styles/`.
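For example, a custom resources directory might be laid out like this (subdirectory names come from the list above; the comments are purely illustrative):

```text
resources/
├── prompts/          # prompt templates (hot-reloaded)
├── gates/            # quality gate definitions
├── methodologies/    # frameworks such as @CAGEERF
└── styles/           # formatting styles such as #analytical
```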
| Override Method | Example |
|---|---|
| All resources | `MCP_RESOURCES_PATH=/path/to/resources` |
| Just prompts | `MCP_PROMPTS_PATH=/path/to/prompts` |
| CLI flag (dev) | `--prompts=/path/to/prompts` |

Priority: CLI flags > individual env vars > `MCP_RESOURCES_PATH` > package defaults.
See CLI Configuration for all options.
Edit prompts, test immediately. Better yet—ask Claude to fix them:
```
User: "The code_review prompt is too verbose"
Claude: resource_manager(action:"update", id:"code_review", ...)
User: "Test it"
Claude: prompt_engine(command:">>code_review")   # Uses updated version instantly
```
Break complex tasks into steps with `-->`:

```
analyze code --> identify issues --> propose fixes --> generate tests
```
Each step's output flows to the next. Add quality gates between steps.
Inject structured thinking patterns:
```
@CAGEERF Review this architecture   # Context → Analysis → Goals → Execution → Evaluation → Refinement
@ReACT Debug this error             # Reason → Act → Observe loops
```
Quality criteria Claude self-checks:
```
Summarize this :: 'under 200 words' :: 'include key statistics'
```
Failed gates can retry automatically or pause for your decision.
Let Claude pick the right tools:
```
%judge Help me refactor this codebase
```
Claude analyzes available frameworks, gates, and styles, then applies the best combination.
Every update is versioned. Compare, rollback, undo:
```
resource_manager(action:"history", id:"code_review")
resource_manager(action:"rollback", id:"code_review", version:2, confirm:true)
```
| Symbol | Name | What It Does | Example |
|---|---|---|---|
| `>>` | Prompt | Execute template | `>>code_review` |
| `-->` | Chain | Pipe to next step | `step1 --> step2` |
| `@` | Framework | Inject methodology | `@CAGEERF` |
| `::` | Gate | Add quality criteria | `:: 'cite sources'` |
| `%` | Modifier | Toggle behavior | `%clean`, `%judge` |
| `#` | Style | Apply formatting | `#analytical` |
Modifiers:

- `%clean` — No framework/gate injection
- `%lean` — Gates only, skip framework
- `%guided` — Force framework injection
- `%judge` — Claude selects best resources
```
# Inline (quick)
Research AI :: 'use recent sources' --> Summarize :: 'be concise'

# With framework
@CAGEERF Explain React hooks :: 'include examples'

# Programmatic
prompt_engine({
  command: ">>code_review",
  gates: [{ name: "Security", criteria: ["No hardcoded secrets"] }]
})
```
| Severity | Behavior |
|---|---|
| Critical/High | Must pass (blocking) |
| Medium/Low | Warns, continues (advisory) |
See Gates Guide for full schema.
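As a rough sketch only, a gate definition file in `gates/` might look like the following. The field names are inferred from the programmatic `gates:` example above and the severity table, not taken from the official schema, so consult the Gates Guide before relying on them:

```yaml
# Hypothetical gate definition — field names are inferred, not the official schema.
name: security-review
severity: high          # critical/high block on failure; medium/low only warn
criteria:
  - "No hardcoded secrets"
  - "Inputs are validated before use"
```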
Customize via `server/config.json`:
| Section | Setting | Default | Description |
|---|---|---|---|
| `prompts` | `directory` | `prompts` | Prompts directory (hot-reloaded) |
| `frameworks` | `injection.systemPrompt` | enabled | Auto-inject methodology guidance |
| `gates` | `definitionsDirectory` | `gates` | Quality gate definitions |
| `execution` | `judge` | `true` | Enable `%judge` resource selection |
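Assembled from those Section/Setting names, a minimal `server/config.json` might look like the sketch below. The nesting is inferred from the dotted setting paths, so treat it as illustrative rather than the authoritative schema:

```json
{
  "prompts": { "directory": "prompts" },
  "frameworks": { "injection": { "systemPrompt": true } },
  "gates": { "definitionsDirectory": "gates" },
  "execution": { "judge": true }
}
```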
| Tool | Purpose |
|---|---|
| `prompt_engine` | Execute prompts with frameworks and gates |
| `resource_manager` | CRUD for prompts, gates, methodologies |
| `system_control` | Status, analytics, health checks |
```
prompt_engine(command:"@CAGEERF >>analysis topic:'AI safety'")
resource_manager(resource_type:"prompt", action:"list")
system_control(action:"status")
```

- MCP Tooling Guide — Full command reference
- Prompt Authoring — Template syntax and schema
- Chains — Multi-step workflows
- Gates — Quality validation
- Architecture — System internals
```shell
cd server
npm install && npm run build
npm test
npm run validate:all   # Full CI check
```

See CONTRIBUTING.md for details.