
Prompts


MCP prompts are reusable, parameterized prompt templates that AI assistants can discover and invoke. They provide structured guidance for common architecture analysis tasks.

Benefits:

  • Discoverability — Clients can list available prompts without hardcoding
  • Consistency — Standardized workflows across different AI assistants
  • Composability — Prompts can reference MCP tools and resources

The following prompts are planned for future releases:

architecture_review

Purpose: Comprehensive architecture review with a guided checklist

Parameters:

  • project_path (required) — Path to the project
  • focus_area (optional) — structure, evolution, security, ai, or all (default)

Workflow:

  1. Run analyze_architecture with the appropriate preset
  2. Interpret metrics with best-practice thresholds:
    • scc.max_cycle_size > 0 → Cycles detected (bad)
    • propagation_cost.system.ratio > 0.15 → High propagation cost (refactor needed)
    • centrality.module.betweenness_max > 0.5 → Hub modules (risk)
  3. Generate a report with:
    • Executive summary (pass/fail, overall health score)
    • Key findings (prioritized by severity)
    • Actionable recommendations
    • Trade-offs and next steps
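The threshold checks in step 2 can be sketched as a small function. This is a hypothetical illustration: the metric keys mirror the bullet names above, but the real analyzer output schema may differ.

```python
# Illustrative sketch of the step-2 threshold checks.
# The nested metric keys mirror the bullets above; they are an
# assumption, not a guaranteed analyze_architecture output schema.
def interpret_metrics(metrics: dict) -> list[str]:
    findings = []
    if metrics.get("scc", {}).get("max_cycle_size", 0) > 0:
        findings.append("Cycles detected (bad)")
    if metrics.get("propagation_cost", {}).get("system", {}).get("ratio", 0) > 0.15:
        findings.append("High propagation cost (refactor needed)")
    if metrics.get("centrality", {}).get("module", {}).get("betweenness_max", 0) > 0.5:
        findings.append("Hub modules (risk)")
    return findings
```

An empty findings list would map to a "pass" in the executive summary; each finding feeds the severity-ordered key findings.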

Example invocation:

get_prompt({
  "name": "architecture_review",
  "arguments": {
    "project_path": ".",
    "focus_area": "structure"
  }
})

security_audit

Purpose: Security-focused architecture analysis

Parameters:

  • project_path (required)
  • check_dependencies (optional, default true) — Include CVE scanning

Workflow:

  1. Run analyze_architecture({ preset: "security" })
  2. Run check_llm_integration() to check for PII leakage risks
  3. Analyze findings:
    • Sensitive data flow violations
    • Effect system violations (unauthorized side effects)
    • LLM integration risks (PII in prompts, no encryption)
    • Known CVEs in dependencies
  4. Produce a security report with:
    • Critical vulnerabilities (require immediate attention)
    • Warnings (should be addressed soon)
    • Recommendations (best practices)
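Step 4's report tiers amount to bucketing findings by severity. The sketch below is illustrative only; the `severity` and `message` fields are an assumed finding shape, not the actual tool output.

```python
# Illustrative only: bucket step-3 findings into the three report
# tiers above. The finding dict shape is an assumption.
SEVERITY_TIERS = {
    "critical": "Critical vulnerabilities (require immediate attention)",
    "warning": "Warnings (should be addressed soon)",
    "info": "Recommendations (best practices)",
}

def bucket_findings(findings: list[dict]) -> dict[str, list[str]]:
    report = {tier: [] for tier in SEVERITY_TIERS}
    for finding in findings:
        tier = finding.get("severity", "info")  # unrated findings become recommendations
        report.setdefault(tier, []).append(finding["message"])
    return report
```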

Example invocation:

get_prompt({
  "name": "security_audit",
  "arguments": {
    "project_path": ".",
    "check_dependencies": true
  }
})

refactoring_plan

Purpose: Generate a step-by-step refactoring plan

Parameters:

  • project_path (required)
  • focus_metric (optional) — scc, centrality, propagation_cost, or auto (default)

Workflow:

  1. Run get_hotspots({ project_path, metric_type: focus_metric })
  2. Run suggest_refactors({ project_path }) (when implemented)
  3. Prioritize recommendations by:
    • Impact: High-centrality modules first
    • Effort: Low-hanging fruit (e.g., breaking cycles via barrel imports)
    • Risk: Low-risk changes first (e.g., extract interface, no behavior change)
  4. Generate a plan with:
    • Phase 1: Quick wins (low effort, high impact)
    • Phase 2: Structural improvements (cycles, layer violations)
    • Phase 3: Deep refactoring (split modules, redesign)
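The impact/effort/risk prioritization in step 3 can be expressed as a simple scoring heuristic. This is a hypothetical sketch; the field names (`centrality`, `estimated_effort`, `risk`) are illustrative assumptions, not the suggest_refactors output format.

```python
# Hypothetical scoring heuristic for step 3: favor high impact,
# low effort, low risk. Field names are illustrative assumptions.
def priority_score(rec: dict) -> float:
    impact = rec.get("centrality", 0.0)        # impact: high-centrality modules first
    effort = rec.get("estimated_effort", 1.0)  # effort: low-hanging fruit first
    risk = rec.get("risk", 0.5)                # risk: low-risk changes first
    return impact / (effort * (1.0 + risk))

def prioritize(recommendations: list[dict]) -> list[dict]:
    # Highest-scoring recommendations land in Phase 1 (quick wins).
    return sorted(recommendations, key=priority_score, reverse=True)
```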

Example invocation:

get_prompt({
  "name": "refactoring_plan",
  "arguments": {
    "project_path": ".",
    "focus_metric": "scc"
  }
})

onboarding

Purpose: Help new developers understand the codebase

Parameters:

  • project_path (required)
  • role (optional) — frontend, backend, fullstack, or general (default)

Workflow:

  1. Run analyze_architecture({ preset: "quick" })
  2. Identify:
    • Entry points: Modules with high fan-out (likely top-level)
    • Core modules: Modules with high betweenness (central to architecture)
    • Hotspots: High-churn modules (change frequently, focus here)
    • Stable modules: Low-churn, low-centrality (safe to ignore initially)
  3. Generate an onboarding guide:
    • “Start here” — Top 5 modules to read first
    • “Core concepts” — Architectural patterns detected
    • “Watch out for” — Hotspots, cycles, tech debt
    • “Safe to ignore” — Generated code, dependencies
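Step 2's module identification is essentially a classification over three signals: fan-out, betweenness, and churn. The sketch below is an assumption for illustration; the thresholds and field names are not Arxo's actual schema.

```python
# Sketch of step 2's module classification; thresholds and field
# names are illustrative assumptions, not Arxo's actual schema.
def classify_module(module: dict) -> str:
    if module.get("fan_out", 0) > 10:
        return "entry point"   # high fan-out: likely top-level
    if module.get("betweenness", 0.0) > 0.5:
        return "core"          # high betweenness: central to the architecture
    if module.get("churn", 0) > 20:
        return "hotspot"       # changes frequently; focus here
    return "stable"            # low churn, low centrality; safe to ignore initially
```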

Example invocation:

get_prompt({
  "name": "onboarding",
  "arguments": {
    "project_path": ".",
    "role": "backend"
  }
})

pre_commit

Purpose: Pre-commit checklist for developers

Parameters:

  • project_path (required)
  • changed_files (optional) — Array of files in the commit

Workflow:

  1. Run check_cycles({ project_path })
  2. Run evaluate_policy({ project_path })
  3. If changed_files provided:
    • Run analyze_file_impact({ project_path, file_paths: changed_files })
    • Check if any changed file has high centrality (risky)
  4. Return a pass/fail checklist:
    • ✅ No circular dependencies
    • ✅ No policy violations
    • ✅ No high-risk changes (or warn if risky)
    • ❌ Block commit if issues found
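The pass/fail logic in step 4 can be sketched as follows. The inputs stand in for the results of check_cycles, evaluate_policy, and analyze_file_impact; this is an illustration of the aggregation, not the server's implementation.

```python
# Illustrative aggregation for step 4. The three inputs stand in
# for the results of check_cycles, evaluate_policy, and
# analyze_file_impact respectively.
def pre_commit_check(has_cycles: bool, policy_violations: int,
                     risky_files: list[str]) -> tuple[bool, list[str]]:
    checklist = [
        ("No circular dependencies", not has_cycles),
        ("No policy violations", policy_violations == 0),
        ("No high-risk changes", not risky_files),
    ]
    lines = [f"{'✅' if ok else '❌'} {label}" for label, ok in checklist]
    if risky_files:
        # High-centrality files warn rather than block outright.
        lines.append(f"⚠️ High-centrality files changed: {', '.join(risky_files)}")
    # Block the commit only on hard failures (cycles or policy violations).
    passed = not has_cycles and policy_violations == 0
    return passed, lines
```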

Example invocation:

get_prompt({
  "name": "pre_commit",
  "arguments": {
    "project_path": ".",
    "changed_files": ["src/core/auth.ts"]
  }
})

llm_integration_health

Purpose: Deep dive into AI/ML integration quality

Parameters:

  • project_path (required)

Workflow:

  1. Run check_llm_integration({ project_path })
  2. Run analyze_architecture({ preset: "ai" }) for full AI metrics
  3. Analyze:
    • Observability: Are LLM calls traced/logged?
    • Security: PII redaction, prompt injection risks
    • Cost control: Token tracking, budget alerts
    • Resilience: Timeouts, retries, fallback models
    • Architecture: RAG health, agent coordination, fine-tuning
  4. Generate a report:
    • Health score (0-1)
    • Critical issues (e.g., no PII redaction)
    • Recommendations (add tracing, use prompt templates, etc.)
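One hypothetical way to combine the five step-3 dimensions into the 0-1 health score mentioned above is an unweighted pass rate; the dimension names and equal weighting are illustrative assumptions.

```python
# A hypothetical aggregation of the five step-3 dimensions into the
# 0-1 health score above; equal weighting is an assumption.
def health_score(checks: dict[str, bool]) -> float:
    dims = ["observability", "security", "cost_control", "resilience", "architecture"]
    return sum(1.0 for d in dims if checks.get(d, False)) / len(dims)
```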

Example invocation:

get_prompt({
  "name": "llm_integration_health",
  "arguments": {
    "project_path": "."
  }
})

Prompt                 | Status     | ETA
architecture_review    | 🚧 Planned | TBD
security_audit         | 🚧 Planned | TBD
refactoring_plan       | 🚧 Planned | TBD
onboarding             | 🚧 Planned | TBD
pre_commit             | 🚧 Planned | TBD
llm_integration_health | 🚧 Planned | TBD

Once implemented, AI assistants will:

  1. Discover prompts:

    list_prompts()
  2. Fetch a prompt template:

    get_prompt({
      "name": "architecture_review",
      "arguments": { "project_path": "." }
    })
  3. Execute the prompt: The server returns a structured prompt with embedded tool calls. The AI assistant executes it and formats results for the user.


Once prompts are implemented, you’ll be able to define custom prompts in your arxo.yaml:

prompts:
  my_custom_review:
    description: "Custom review focused on API design"
    steps:
      - tool: analyze_architecture
        preset: coupling
      - tool: get_hotspots
        metric_type: centrality
      - format: "Report on API coupling hotspots"

Prompts provide a higher-level abstraction than tools:

Feature         | Tools                           | Prompts
Granularity     | Low-level (run single analysis) | High-level (multi-step workflow)
Composition     | Manual (AI chains tools)        | Automatic (server defines workflow)
Consistency     | Varies by AI                    | Standardized
Discoverability | AI must know about tools        | AI discovers prompts

Use tools when: You need fine-grained control (e.g., check_cycles only)

Use prompts when: You need a guided, multi-step workflow (e.g., full architecture review)