MCP Prompts
What Are MCP Prompts?
MCP prompts are reusable, parameterized prompt templates that AI assistants can discover and invoke. They provide structured guidance for common architecture analysis tasks.
Benefits:
- Discoverability — Clients can list available prompts without hardcoding
- Consistency — Standardized workflows across different AI assistants
- Composability — Prompts can reference MCP tools and resources
Planned Prompts
The following prompts are planned for future releases:
architecture_review
Purpose: Comprehensive architecture review with a guided checklist
Parameters:
- `project_path` (required) — Path to the project
- `focus_area` (optional) — `structure`, `evolution`, `security`, `ai`, or `all` (default)
Workflow:
- Run `analyze_architecture` with the appropriate preset
- Interpret metrics with best-practice thresholds:
  - `scc.max_cycle_size > 0` → Cycles detected (bad)
  - `propagation_cost.system.ratio > 0.15` → High propagation cost (refactor needed)
  - `centrality.module.betweenness_max > 0.5` → Hub modules (risk)
- Generate a report with:
- Executive summary (pass/fail, overall health score)
- Key findings (prioritized by severity)
- Actionable recommendations
- Trade-offs and next steps
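The threshold checks in the workflow above can be sketched as a small function. The metric keys mirror the thresholds listed; the input dict shape and the findings format are illustrative assumptions, not the tool's actual output schema.

```python
# Illustrative sketch: interpret architecture metrics against the
# best-practice thresholds listed above. The nested dict layout and
# finding strings are hypothetical.

def review_findings(metrics: dict) -> list[str]:
    findings = []
    if metrics["scc"]["max_cycle_size"] > 0:
        findings.append("Cycles detected (bad)")
    if metrics["propagation_cost"]["system"]["ratio"] > 0.15:
        findings.append("High propagation cost (refactor needed)")
    if metrics["centrality"]["module"]["betweenness_max"] > 0.5:
        findings.append("Hub modules (risk)")
    return findings

metrics = {
    "scc": {"max_cycle_size": 3},
    "propagation_cost": {"system": {"ratio": 0.22}},
    "centrality": {"module": {"betweenness_max": 0.4}},
}
print(review_findings(metrics))
# → ['Cycles detected (bad)', 'High propagation cost (refactor needed)']
```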
Example invocation:

```js
get_prompt({ "name": "architecture_review", "arguments": { "project_path": ".", "focus_area": "structure" } })
```

security_audit
Purpose: Security-focused architecture analysis
Parameters:
- `project_path` (required)
- `check_dependencies` (optional, default `true`) — Include CVE scanning
Workflow:
- Run `analyze_architecture({ preset: "security" })`
- Run `check_llm_integration()` (check for PII leakage risks)
- Analyze findings:
- Sensitive data flow violations
- Effect system violations (unauthorized side effects)
- LLM integration risks (PII in prompts, no encryption)
- Known CVEs in dependencies
- Produce a security report with:
- Critical vulnerabilities (require immediate attention)
- Warnings (should be addressed soon)
- Recommendations (best practices)
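The three report buckets above can be sketched as a simple grouping step. The finding titles and severity labels are hypothetical examples, not output from the actual tools.

```python
# Illustrative sketch: group security findings into the three report
# buckets described above. Severity labels and titles are hypothetical.

SEVERITY_BUCKETS = ("critical", "warning", "info")

def security_report(findings: list[dict]) -> dict[str, list[str]]:
    report = {bucket: [] for bucket in SEVERITY_BUCKETS}
    for f in findings:
        report[f["severity"]].append(f["title"])
    return report

findings = [
    {"severity": "critical", "title": "PII sent to LLM without redaction"},
    {"severity": "warning", "title": "Known CVE in transitive dependency"},
    {"severity": "info", "title": "Prefer parameterized prompt templates"},
]
report = security_report(findings)
print(report["critical"])  # → ['PII sent to LLM without redaction']
```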
Example invocation:

```js
get_prompt({ "name": "security_audit", "arguments": { "project_path": ".", "check_dependencies": true } })
```

refactoring_plan
Purpose: Generate a step-by-step refactoring plan
Parameters:
- `project_path` (required)
- `focus_metric` (optional) — `scc`, `centrality`, `propagation_cost`, or `auto` (default)
Workflow:
- Run `get_hotspots({ project_path, metric_type: focus_metric })`
- Run `suggest_refactors({ project_path })` (when implemented)
- Prioritize recommendations by:
- Impact: High-centrality modules first
- Effort: Low-hanging fruit (e.g., breaking cycles via barrel imports)
- Risk: Low-risk changes first (e.g., extract interface, no behavior change)
- Generate a plan with:
- Phase 1: Quick wins (low effort, high impact)
- Phase 2: Structural improvements (cycles, layer violations)
- Phase 3: Deep refactoring (split modules, redesign)
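The impact/effort/risk prioritization above can be sketched as a sort key: impact (here approximated by centrality) descending, then effort and risk ascending. The field names and numeric scales are illustrative assumptions.

```python
# Illustrative sketch: order refactoring candidates by the three axes
# above — impact first, then effort, then risk. Field names and
# scales are hypothetical.

def prioritize(candidates: list[dict]) -> list[str]:
    ranked = sorted(
        candidates,
        key=lambda c: (-c["centrality"], c["effort"], c["risk"]),
    )
    return [c["module"] for c in ranked]

candidates = [
    {"module": "src/utils.ts", "centrality": 0.2, "effort": 1, "risk": 1},
    {"module": "src/core/auth.ts", "centrality": 0.8, "effort": 3, "risk": 2},
    {"module": "src/api/index.ts", "centrality": 0.8, "effort": 1, "risk": 1},
]
print(prioritize(candidates))
# → ['src/api/index.ts', 'src/core/auth.ts', 'src/utils.ts']
```

Ties on impact fall back to the lower-effort, lower-risk change first, matching the "quick wins" phasing.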
Example invocation:

```js
get_prompt({ "name": "refactoring_plan", "arguments": { "project_path": ".", "focus_metric": "scc" } })
```

onboarding
Purpose: Help new developers understand the codebase
Parameters:
- `project_path` (required)
- `role` (optional) — `frontend`, `backend`, `fullstack`, or `general` (default)
Workflow:
- Run `analyze_architecture({ preset: "quick" })`
- Identify:
- Entry points: Modules with high fan-out (likely top-level)
- Core modules: Modules with high betweenness (central to architecture)
- Hotspots: High-churn modules (change frequently, focus here)
- Stable modules: Low-churn, low-centrality (safe to ignore initially)
- Generate an onboarding guide:
- “Start here” — Top 5 modules to read first
- “Core concepts” — Architectural patterns detected
- “Watch out for” — Hotspots, cycles, tech debt
- “Safe to ignore” — Generated code, dependencies
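The four onboarding categories above can be sketched as a classifier over per-module metrics. The thresholds, field names, and sample modules are hypothetical; real cutoffs would depend on the project.

```python
# Illustrative sketch: bucket modules into the onboarding categories
# above using fan-out, betweenness, and churn. Thresholds and field
# names are hypothetical.

def classify(module: dict) -> str:
    if module["fan_out"] >= 10:
        return "entry point"      # high fan-out, likely top-level
    if module["betweenness"] >= 0.5:
        return "core module"      # central to the architecture
    if module["churn"] >= 20:
        return "hotspot"          # changes frequently
    return "stable"               # safe to ignore initially

modules = {
    "src/main.ts": {"fan_out": 14, "betweenness": 0.1, "churn": 5},
    "src/core/db.ts": {"fan_out": 3, "betweenness": 0.7, "churn": 8},
    "src/api/users.ts": {"fan_out": 4, "betweenness": 0.2, "churn": 31},
    "src/gen/schema.ts": {"fan_out": 0, "betweenness": 0.0, "churn": 0},
}
for name, m in modules.items():
    print(name, "->", classify(m))
```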
Example invocation:

```js
get_prompt({ "name": "onboarding", "arguments": { "project_path": ".", "role": "backend" } })
```

pre_commit
Purpose: Pre-commit checklist for developers
Parameters:
- `project_path` (required)
- `changed_files` (optional) — Array of files in the commit
Workflow:
- Run `check_cycles({ project_path })`
- Run `evaluate_policy({ project_path })`
- If `changed_files` is provided:
  - Run `analyze_file_impact({ project_path, file_paths: changed_files })`
  - Check if any changed file has high centrality (risky)
- Return a pass/fail checklist:
- ✅ No circular dependencies
- ✅ No policy violations
- ✅ No high-risk changes (or warn if risky)
- ❌ Block commit if issues found
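The pass/fail gate above can be sketched by combining the three check results into one verdict. The input shapes (lists of cycles, violations, and risky files) are illustrative assumptions about the tools' output.

```python
# Illustrative sketch: combine the three checks above into a single
# pass/fail result, as the pre_commit prompt describes. Input shapes
# are hypothetical.

def pre_commit_gate(cycles: list, violations: list,
                    risky_files: list) -> tuple[bool, list[str]]:
    checklist = [
        ("No circular dependencies", not cycles),
        ("No policy violations", not violations),
        ("No high-risk changes", not risky_files),
    ]
    passed = all(ok for _, ok in checklist)
    lines = [f"{'✅' if ok else '❌'} {label}" for label, ok in checklist]
    return passed, lines

ok, lines = pre_commit_gate([], [], ["src/core/auth.ts"])
print(ok)  # → False (a high-centrality file changed, so block the commit)
```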
Example invocation:

```js
get_prompt({ "name": "pre_commit", "arguments": { "project_path": ".", "changed_files": ["src/core/auth.ts"] } })
```

llm_integration_health
Purpose: Deep dive into AI/ML integration quality
Parameters:
- `project_path` (required)
Workflow:
- Run `check_llm_integration({ project_path })`
- Run `analyze_architecture({ preset: "ai" })` for full AI metrics
- Analyze:
- Observability: Are LLM calls traced/logged?
- Security: PII redaction, prompt injection risks
- Cost control: Token tracking, budget alerts
- Resilience: Timeouts, retries, fallback models
- Architecture: RAG health, agent coordination, fine-tuning
- Generate a report:
- Health score (0-1)
- Critical issues (e.g., no PII redaction)
- Recommendations (add tracing, use prompt templates, etc.)
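One way to read the 0-1 health score above is as the share of passing checks across the five dimensions. This is an assumed scoring model for illustration; the real prompt may weight dimensions differently.

```python
# Illustrative sketch: derive a 0-1 health score as the fraction of
# passing checks across the five dimensions above. The check names
# and equal weighting are hypothetical.

def health_score(checks: dict[str, bool]) -> float:
    return sum(checks.values()) / len(checks)

checks = {
    "observability": True,   # LLM calls traced/logged
    "security": False,       # no PII redaction
    "cost_control": True,    # token usage tracked
    "resilience": False,     # no fallback model
    "architecture": True,    # RAG pipeline healthy
}
print(health_score(checks))  # → 0.6
```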
Example invocation:

```js
get_prompt({ "name": "llm_integration_health", "arguments": { "project_path": "." } })
```

Implementation Status
| Prompt | Status | ETA |
|---|---|---|
| `architecture_review` | 🚧 Planned | TBD |
| `security_audit` | 🚧 Planned | TBD |
| `refactoring_plan` | 🚧 Planned | TBD |
| `onboarding` | 🚧 Planned | TBD |
| `pre_commit` | 🚧 Planned | TBD |
| `llm_integration_health` | 🚧 Planned | TBD |
How Prompts Will Work
Once implemented, AI assistants will:
- Discover prompts: `list_prompts()`
- Fetch a prompt template: `get_prompt({ "name": "architecture_review", "arguments": { "project_path": "." } })`
- Execute the prompt: The server returns a structured prompt with embedded tool calls. The AI assistant executes it and formats results for the user.
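The discover-then-fetch flow can be mimicked with a minimal in-memory registry. This is a sketch of the pattern only, not the MCP SDK; the prompt template text is invented for illustration.

```python
# Illustrative sketch of the discover-then-fetch pattern, using an
# in-memory registry rather than a real MCP client/server pair.
# The template text is hypothetical.

PROMPTS = {
    "architecture_review": (
        "Review the architecture of {project_path}, "
        "focusing on {focus_area}."
    ),
}

def list_prompts() -> list[str]:
    # Discovery: clients see what's available without hardcoding.
    return sorted(PROMPTS)

def get_prompt(name: str, arguments: dict) -> str:
    # Fetch: fill the template with the caller's arguments.
    return PROMPTS[name].format(**arguments)

print(list_prompts())  # → ['architecture_review']
print(get_prompt("architecture_review",
                 {"project_path": ".", "focus_area": "structure"}))
```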
Custom Prompts
Once prompts are implemented, you’ll be able to define custom prompts in your `arxo.yaml`:

```yaml
prompts:
  my_custom_review:
    description: "Custom review focused on API design"
    steps:
      - tool: analyze_architecture
        preset: coupling
      - tool: get_hotspots
        metric_type: centrality
      - format: "Report on API coupling hotspots"
```

Why MCP Prompts?
Prompts provide a higher-level abstraction than tools:
| Feature | Tools | Prompts |
|---|---|---|
| Granularity | Low-level (run single analysis) | High-level (multi-step workflow) |
| Composition | Manual (AI chains tools) | Automatic (server defines workflow) |
| Consistency | Varies by AI | Standardized |
| Discoverability | AI must know about tools | AI discovers prompts |
Use tools when: You need fine-grained control (e.g., `check_cycles` only)
Use prompts when: You need a guided, multi-step workflow (e.g., a full architecture review)
Related Pages
- Tools — Low-level MCP tools
- Workflows — Example tool compositions (manual)
- Advanced — Custom tool composition patterns
- CLI Comparison — CLI equivalents for workflows