# Presets
Arxo ships with presets for common analysis scenarios. Each preset selects an appropriate set of metrics and sets reasonable defaults.
## Available Presets
### Quick

Use Case: Fast feedback during development
Metrics:
- SCC (cycle detection)
- L1 Overview (high-level stats)
Characteristics:
- ⚡ Very fast (1-5 seconds)
- 🎯 Essential metrics only
- 💡 Quick health check
Usage:
```
arxo analyze --quick
# or
arxo analyze --preset quick
```

Example Output:

```
✓ 183 files analyzed
✓ No circular dependencies
✓ System size: 247 modules
```

### CI

Use Case: Continuous integration pipelines
Metrics:
- SCC
- Propagation Cost
- Smells
- Hierarchy
Characteristics:
- ⚡ Fast (10-30 seconds)
- 🚫 Fail-fast on violations
- 📊 Essential quality gates
Usage:
```
arxo analyze --preset ci --fail-fast
```

Recommended CI Config:

```yaml
preset: ci

policy:
  invariants:
    - metric: scc.max_cycle_size
      op: "=="
      value: 0
    - metric: propagation_cost.system.ratio
      op: "<="
      value: 0.20
    - metric: smells.cycles.severe_count
      op: "=="
      value: 0

run_options:
  fail_fast: true
  quiet: true
```

### Risk

Use Case: Identify technical debt and architectural risks
Metrics:
- SCC
- Propagation Cost
- Smells
- Centrality
- MSR (code churn, hotspots)
- Change Coupling
Characteristics:
- 🔍 Comprehensive risk analysis
- 📈 Evolution metrics
- 🎯 Identifies refactoring candidates
- ⏱️ Medium speed (30-60 seconds)
Usage:
```
arxo analyze --preset risk
```

What It Finds:
- Circular dependencies
- God components (high coupling)
- Change hotspots (frequently modified)
- Coupled changes (files that change together)
- Fragile modules (high propagation cost)
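Propagation cost, which this preset computes, is commonly defined as the density of the transitive closure of the dependency graph: the fraction of module pairs where a change in one can reach the other. A minimal sketch of that common definition (the toy graph and the self-reachability convention are illustrative assumptions, not Arxo's implementation):

```python
# Toy dependency graph: module -> modules it depends on.
# (Illustrative data, not Arxo's internal model.)
deps = {
    "a": ["b"],
    "b": ["c"],
    "c": [],
}

def propagation_cost(deps):
    """Density of the transitive closure: the fraction of module pairs
    (i, j) such that a change in j can reach i. Counts each module as
    reaching itself, per the common MacCormack-style definition."""
    n = len(deps)
    reachable = 0
    for start in deps:
        seen, stack = {start}, [start]
        while stack:
            node = stack.pop()
            for nxt in deps.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        reachable += len(seen)
    return reachable / (n * n)

print(round(propagation_cost(deps), 3))  # → 0.667
```

A chain of three modules yields 6 reachable pairs out of 9, i.e. roughly 0.67 — well above the 0.20 ceiling the CI preset's example invariant enforces, which is the point: long transitive chains make changes expensive.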
### Coupling

Use Case: Analyze dependencies and coupling
Metrics:
- `propagation_cost`
- `msr` (hotspot and co-change analysis)
- `coupling_analysis`
- `decoupling_level`
- `clustered_cost_cochange`
- `semantic_duplicates`
Characteristics:
- 🔗 Dependency focus
- 📊 Coupling metrics
- 🎯 Refactoring guidance
- ⏱️ Medium speed (30-60 seconds)
Usage:
```
arxo analyze --preset coupling
```

What It Measures:
- Direct coupling (imports)
- Transitive coupling (propagation)
- Co-change coupling from git history
- Duplication pressure from semantically similar modules
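Co-change coupling can be approximated by counting, per commit, how often pairs of files are modified together. A minimal sketch of the idea (the toy history and threshold are illustrative assumptions, not Arxo's algorithm):

```python
from collections import Counter
from itertools import combinations

# Toy git history: each entry is the set of files touched by one commit.
commits = [
    {"api.py", "models.py"},
    {"api.py", "models.py", "views.py"},
    {"models.py", "views.py"},
    {"api.py", "models.py"},
]

def cochange_pairs(commits, min_shared=2):
    """Count how often each pair of files changes in the same commit,
    keeping only pairs that co-change at least `min_shared` times."""
    counts = Counter()
    for files in commits:
        for pair in combinations(sorted(files), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_shared}

print(cochange_pairs(commits))
# → {('api.py', 'models.py'): 3, ('models.py', 'views.py'): 2}
```

Pairs that co-change often but have no import edge between them are the interesting finding: hidden coupling the static dependency graph cannot see.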
### Security

Use Case: Security-focused analysis
Metrics:
- SCC
- Security Vulnerabilities
- Sensitive Data Flow
- Effect Violations
Characteristics:
- 🔒 Security focus
- 🕵️ Data flow analysis
- ⚠️ Risk detection
- ⏱️ Medium speed (30-90 seconds)
Usage:
```
arxo analyze --preset security
```

What It Detects:
- Sensitive data leaks
- Unsafe data flows
- Missing input validation
- Side effect violations
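At its core, sensitive data flow detection is a reachability question: mark the modules that handle sensitive data as tainted, then check whether the taint can reach a forbidden sink. A minimal sketch of that idea (the module names, graph, and simple BFS are illustrative assumptions, not Arxo's implementation):

```python
from collections import deque

# Toy module-level data-flow graph: edges point from a module to the
# modules its data flows into.
edges = {
    "auth/login": ["core/session"],
    "core/session": ["http/logger", "db/users"],
    "http/logger": [],
    "db/users": [],
}
sources = {"auth/login"}   # modules that handle sensitive data
sinks = {"http/logger"}    # modules that must never receive it

def sensitive_flows(edges, sources, sinks):
    """Return (source, sink) pairs where tainted data can reach a sink."""
    flows = []
    for src in sources:
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node in sinks:
                flows.append((src, node))
            for nxt in edges.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return flows

print(sensitive_flows(edges, sources, sinks))
# → [('auth/login', 'http/logger')]
```

Here login data leaks into the logger via the session module — exactly the kind of indirect flow that is easy to miss in review.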
### Runtime

Use Case: Runtime and operational analysis
Metrics:
- Runtime Centrality
- Traffic Hotspots
- Critical Path
- Runtime Drift
- Sensitive Data Flow (prioritizes PII violations on hot paths when telemetry is present)
- Test Coverage (prioritizes undertested modules on hot paths when telemetry is present)
- Chaos Readiness
- Cloud Cost
Characteristics:
- 📊 Operational focus
- 🔍 Production insights
- 💰 Cost analysis
- ⏱️ Requires telemetry data
Usage:
```
arxo analyze --preset runtime --telemetry-path ./traces.json
```

Or configure telemetry in your config file (see Configuration):
```yaml
data:
  telemetry:
    source_path: ./traces.json
    format: otel_json # or zipkin_json, jaeger_json
```

Requirements:
- Trace data in one of: OTLP JSON, Zipkin JSON, or Jaeger JSON
- Set `data.telemetry.format` to match your exporter
- For span-to-code mapping: include `code.filepath` in span attributes/tags
See the Telemetry guide for collection and format details.
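For reference, a minimal sketch of what an OTLP JSON span carrying the `code.filepath` attribute might look like (trimmed to the fields relevant here; real exports also include trace/span IDs, timestamps, and resource attributes, and the file path shown is illustrative):

```json
{
  "resourceSpans": [{
    "scopeSpans": [{
      "spans": [{
        "name": "GET /invoices",
        "attributes": [
          { "key": "code.filepath", "value": { "stringValue": "src/billing/invoice.py" } }
        ]
      }]
    }]
  }]
}
```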
### AI

Use Case: AI/LLM application analysis
Metrics:
- LLM Integration
- RAG Architecture
- Agent Architecture
- ML Architecture
- Fine-tuning Architecture
Characteristics:
- 🤖 AI-specific patterns
- 💰 Cost tracking
- 🔒 PII detection
- 📊 Observability gaps
- ⏱️ Medium speed (30-60 seconds)
Usage:
```
arxo analyze --preset ai
```

What It Detects:
- LLM calls without tracing
- Missing cost tracking
- PII leakage risks
- Hardcoded prompts
- Missing evaluation harness
- Rate limit absence
### Full

Use Case: Comprehensive analysis (all metrics)
Metrics: All 50+ available metrics
Characteristics:
- 📊 Complete analysis
- 🔬 Maximum insight
- ⏱️ Slow (2-5 minutes)
- 💾 Large output
Usage:
```
arxo analyze --preset full
```

When to Use:
- Initial architecture audit
- Quarterly reviews
- Before major refactoring
- Comprehensive documentation
## Combining Presets with Custom Metrics

You can extend presets:
```
# Start with the CI preset, add a custom metric
arxo analyze --preset ci --metric llm_integration
```

Or in config:
```yaml
preset: ci

metrics: # Preset metrics + custom
  - id: llm_integration
  - id: rag_architecture
```

## Creating Custom Presets
While you cannot register new presets, you can create reusable configs:
`.arxo.custom.yaml`:
```yaml
# Custom "frontend" preset
metrics:
  - id: scc
  - id: propagation_cost
  - id: modularity
  - id: centrality
  - id: smells

data:
  import_graph:
    group_by: folder
    group_depth: 2
    exclude:
      - "**/test/**"
      - "**/*.test.tsx"

policy:
  invariants:
    - metric: scc.max_cycle_size
      op: "=="
      value: 0
    - metric: propagation_cost.system.ratio
      op: "<="
      value: 0.15

run_options:
  fail_fast: true
```

Use:
```
arxo analyze --config .arxo.custom.yaml
```

## Preset Comparison
| Preset | Speed | Metrics | Use Case |
|---|---|---|---|
| quick | ⚡⚡⚡ Very Fast (1-5s) | 2 | Development loop |
| ci | ⚡⚡ Fast (10-30s) | 4 | CI/CD gates |
| risk | ⚡ Medium (30-60s) | 6 | Tech debt audit |
| coupling | ⚡ Medium (30-60s) | 6 | Dependency analysis |
| security | ⚡ Medium (30-90s) | 4 | Security review |
| runtime | ⚡ Medium (varies) | 6 | Operational analysis |
| ai | ⚡ Medium (30-60s) | 5 | AI/LLM projects |
| full | 🐌 Slow (2-5min) | 50+ | Comprehensive audit |
## Preset Selection Guide

### For Development

- Pre-commit: `--quick`
- IDE integration: `--quick`
- Manual checks: `--preset ci`
### For CI/CD

- Pull Requests: `--preset ci --only-changed`
- Main branch: `--preset risk`
- Nightly: `--preset full`
### For Reviews

- Sprint review: `--preset risk`
- Architecture review: `--preset coupling`
- Security audit: `--preset security`
- Quarterly audit: `--preset full`
### For Specialized Projects

- AI/LLM apps: `--preset ai`
- Microservices: `--preset coupling`
- Legacy refactor: `--preset risk`
- Production monitoring: `--preset runtime`
## Environment-Specific Presets

Create different configs per environment:
Development (`.arxo.dev.yaml`):

```yaml
preset: quick

policy:
  invariants: [] # No enforcement
```

Staging (`.arxo.staging.yaml`):

```yaml
preset: ci

policy:
  invariants:
    - metric: scc.max_cycle_size
      op: "<="
      value: 3 # Allow small cycles
```

Production (`.arxo.prod.yaml`):

```yaml
preset: risk

policy:
  invariants:
    - metric: scc.max_cycle_size
      op: "=="
      value: 0 # Strict
```

## AI-Assisted Preset Selection
Use the MCP server with your AI assistant to discover and run presets conversationally:
```
You: "What analysis presets are available?"
AI:  [Calls list_presets] "8 presets available: quick, ci, risk..."

You: "Run a security analysis"
AI:  [Uses analyze_architecture with preset="security"] "Found 2 sensitive data flow violations..."
```

The MCP server supports all presets and can help you choose the right one. See MCP Workflows for examples.
## Next Steps

- CLI Reference - Command options
- Configuration Guide - Full config
- MCP Workflows - Preset selection with AI
- Policy Examples - Policy patterns
- Metrics Overview - Available metrics