# list_presets

List all available analysis presets with their descriptions. Use this to discover which presets are available before calling `analyze_architecture`.
## Parameters

None. This tool takes no parameters.
## Response

Returns a JSON array of preset objects.

### Response Schema

```json
[
  {
    "name": "string",
    "description": "string"
  }
]
```

### Example

Request:

```json
{}
```

Response:

```json
[
  { "name": "quick", "description": "SCC, L1 overview, centrality" },
  { "name": "ci", "description": "SCC, smells, layer violations, architecture budgeting" },
  { "name": "risk", "description": "Ricci curvature, MSR, hotspot score, ownership, truck factor, causal discovery, semantic duplicates" },
  { "name": "coupling", "description": "Propagation cost, msr (hotspot + co-change), coupling analysis, decoupling level, clustered cost cochange, semantic duplicates" },
  { "name": "security", "description": "Security, sensitive data flow, effect violations" },
  { "name": "runtime", "description": "Cloud cost, traffic hotspot, critical path, runtime centrality, runtime drift, sensitive data flow, test coverage" },
  { "name": "ai", "description": "LLM integration, ML architecture, RAG architecture, finetuning architecture, agent architecture" },
  { "name": "rag", "description": "RAG architecture, LLM integration" },
  { "name": "full", "description": "All metrics" }
]
```

## Preset Details
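Since the response is plain JSON, a client can index it by name before picking a preset. A minimal sketch in Python (the `presets_json` literal is an abridged copy of the example response above; the lookup dict is our own illustration, not part of the tool):

```python
import json

# Abridged example response from list_presets (three of the nine presets).
presets_json = '''
[
  { "name": "quick", "description": "SCC, L1 overview, centrality" },
  { "name": "ci", "description": "SCC, smells, layer violations, architecture budgeting" },
  { "name": "full", "description": "All metrics" }
]
'''

presets = json.loads(presets_json)

# Index by name so a later analyze_architecture call can be validated first.
by_name = {p["name"]: p["description"] for p in presets}

print(by_name["quick"])
```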
### quick

- Speed: Fast (~1-5 seconds)
- Metrics: `scc`, `l1_overview`, `centrality`
- Use case: Pre-commit sanity check, quick health assessment
- Best for: Daily development workflow, pre-PR checks
### ci

- Speed: Moderate (~5-15 seconds)
- Metrics: `scc`, `smells`, `layer_violations`, `architecture_budgeting`
- Use case: CI/CD pipeline gate, policy enforcement
- Best for: Pull request checks, merge gates, architecture governance
### risk

- Speed: Moderate (~10-30 seconds)
- Metrics: `ricci_curvature`, `centrality`, `msr` (includes hotspot score), `ownership`, `truck_factor`, `causal_discovery`, `semantic_duplicates`
- Use case: Defect prediction, change risk assessment, knowledge risk
- Best for: Release planning, identifying risky modules, bus factor analysis
### coupling

- Speed: Slow (~30-90 seconds)
- Metrics: `propagation_cost`, `msr` (includes hotspot and co-change analysis), `coupling_analysis`, `decoupling_level`, `clustered_cost_cochange`, `semantic_duplicates`
- Use case: Deep coupling audit, refactoring planning
- Best for: Major refactoring initiatives, decoupling analysis, architectural reviews
### security

- Speed: Moderate (~10-20 seconds)
- Metrics: `security`, `sensitive_data_flow`, `effect_violations`
- Use case: Security audit, CVE detection, PII flow analysis
- Best for: Security reviews, compliance checks, vulnerability assessment
### runtime

- Speed: Moderate (~15-30 seconds)
- Metrics: `cloud_cost`, `traffic_hotspot`, `critical_path`, `centrality`, `runtime_drift`, `sensitive_data_flow`, `test_coverage`
- Use case: Runtime/performance analysis (requires telemetry data)
- Best for: Performance optimization, cloud cost analysis, observability review
### ai

- Speed: Moderate (~10-25 seconds)
- Metrics: `llm_integration`, `ml_architecture`, `rag_architecture`, `finetuning_architecture`, `agent_architecture`
- Use case: LLM/AI integration health check, ML pipeline audit
- Best for: AI-powered applications, LLM integration reviews, RAG system analysis
### rag

- Speed: Fast (~5-10 seconds)
- Metrics: `rag_architecture`, `llm_integration`
- Use case: RAG-specific analysis (retrieval, grounding, index staleness)
- Best for: RAG applications, vector database integration, retrieval quality
### full

- Speed: Very slow (~1-5 minutes for large projects)
- Metrics: All 52 metrics in the engine (14 OSS + 38 Pro)
- Use case: Comprehensive architecture review
- Best for: Quarterly reviews, architecture assessments, research
## Choosing a Preset

| Scenario | Recommended Preset |
|---|---|
| Pre-commit check | quick |
| Pull request gate | ci |
| Planning a refactor | coupling |
| Security review | security |
| AI system audit | ai or rag |
| Performance review | runtime |
| Quarterly architecture review | full |
| Identifying risky modules | risk |
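The table above is easy to encode as a lookup for automation. This sketch is our own illustration (the function and dict names are not part of the tool); the scenario-to-preset pairs come directly from the table:

```python
# Scenario → preset mapping, taken from the "Choosing a Preset" table.
RECOMMENDED = {
    "pre-commit check": "quick",
    "pull request gate": "ci",
    "planning a refactor": "coupling",
    "security review": "security",
    "ai system audit": "ai",  # or "rag" for RAG-specific work
    "performance review": "runtime",
    "quarterly architecture review": "full",
    "identifying risky modules": "risk",
}

def recommend(scenario: str) -> str:
    """Return the recommended preset, defaulting to the fast 'quick' preset."""
    return RECOMMENDED.get(scenario.strip().lower(), "quick")

print(recommend("Security review"))
```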
## Workflow

Use this tool to explore available presets before analysis:

1. `list_presets` → see all options
2. Pick the appropriate preset for your use case
3. `analyze_architecture({ preset: "..." })` → run the analysis

## Error Cases

This tool has no error cases; it always succeeds and returns the preset list.
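The three-step workflow can be sketched end to end. Here `call_tool` is a hypothetical stand-in for whatever tool-invocation method your MCP client exposes; the stub below simply replays the documented responses so the flow is runnable:

```python
# Hypothetical stand-in for a real MCP client's tool invocation.
def call_tool(name: str, arguments: dict):
    if name == "list_presets":
        return [
            {"name": "quick", "description": "SCC, L1 overview, centrality"},
            {"name": "ci", "description": "SCC, smells, layer violations, architecture budgeting"},
        ]
    if name == "analyze_architecture":
        # Placeholder result shape, for illustration only.
        return {"preset": arguments["preset"], "status": "ok"}
    raise ValueError(f"unknown tool: {name}")

# 1. Discover the available presets.
presets = call_tool("list_presets", {})
names = [p["name"] for p in presets]

# 2. Pick the appropriate preset for the use case (here: a PR gate).
preset = "ci" if "ci" in names else "quick"

# 3. Run the analysis with the chosen preset.
result = call_tool("analyze_architecture", {"preset": preset})
print(result["status"])
```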
## Performance

- Speed: Instant (<10ms)
- Caching: Not applicable (no computation)
## Related Tools

- `analyze_architecture`: run analysis with a preset
## Related Guides

- Presets Guide: detailed guide on all presets
- Choosing the Right Preset: decision guide
## CLI Equivalent

```shell
# List presets using the CLI
arxo analyze --help | grep -A 20 "presets"

# Or list via config
arxo config presets
```