
Presets

Arxo ships with presets for common analysis scenarios. Each preset selects appropriate metrics and sets reasonable defaults.

Preset: quick

Use Case: Fast feedback during development

Metrics:

  • SCC (cycle detection)
  • L1 Overview (high-level stats)

Characteristics:

  • ⚡ Very fast (1-5 seconds)
  • 🎯 Essential metrics only
  • 💡 Quick health check

Usage:

Terminal window
arxo analyze --quick
# or
arxo analyze --preset quick

Example Output:

✓ 183 files analyzed
✓ No circular dependencies
✓ System size: 247 modules

Preset: ci

Use Case: Continuous integration pipelines

Metrics:

  • SCC
  • Propagation Cost
  • Smells
  • Hierarchy

Characteristics:

  • ⚡ Fast (10-30 seconds)
  • 🚫 Fail-fast on violations
  • 📊 Essential quality gates

Usage:

Terminal window
arxo analyze --preset ci --fail-fast

Recommended CI Config:

.arxo.ci.yaml
preset: ci
policy:
  invariants:
    - metric: scc.max_cycle_size
      op: "=="
      value: 0
    - metric: propagation_cost.system.ratio
      op: "<="
      value: 0.20
    - metric: smells.cycles.severe_count
      op: "=="
      value: 0
run_options:
  fail_fast: true
  quiet: true
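As an illustration, the ci preset can also be wired into a hosted CI job. The GitHub Actions workflow below is a sketch, not an official integration: the install step assumes arxo is distributed via npm (adjust to however you actually install it), and the workflow file name is arbitrary.

```yaml
# .github/workflows/arxo.yml — illustrative sketch; the npm install
# step is an assumption about how arxo is distributed.
name: architecture-check
on: [pull_request]
jobs:
  arxo:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # full history, in case evolution metrics need git log
      - run: npm install -g arxo # hypothetical install method
      - run: arxo analyze --preset ci --fail-fast --config .arxo.ci.yaml
```

Pointing the job at a committed config file (here .arxo.ci.yaml) keeps the quality gates versioned alongside the code they guard.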

Preset: risk

Use Case: Identify technical debt and architectural risks

Metrics:

  • SCC
  • Propagation Cost
  • Smells
  • Centrality
  • MSR (code churn, hotspots)
  • Change Coupling

Characteristics:

  • 🔍 Comprehensive risk analysis
  • 📈 Evolution metrics
  • 🎯 Identifies refactoring candidates
  • ⏱️ Medium speed (30-60 seconds)

Usage:

Terminal window
arxo analyze --preset risk

What It Finds:

  • Circular dependencies
  • God components (high coupling)
  • Change hotspots (frequently modified)
  • Coupled changes (files that change together)
  • Fragile modules (high propagation cost)

Preset: coupling

Use Case: Analyze dependencies and coupling

Metrics:

  • propagation_cost
  • msr (hotspot and co-change analysis)
  • coupling_analysis
  • decoupling_level
  • clustered_cost_cochange
  • semantic_duplicates

Characteristics:

  • 🔗 Dependency focus
  • 📊 Coupling metrics
  • 🎯 Refactoring guidance
  • ⏱️ Medium speed (30-60 seconds)

Usage:

Terminal window
arxo analyze --preset coupling

What It Measures:

  • Direct coupling (imports)
  • Transitive coupling (propagation)
  • Co-change coupling from git history
  • Duplication pressure from semantically similar modules

Preset: security

Use Case: Security-focused analysis

Metrics:

  • SCC
  • Security Vulnerabilities
  • Sensitive Data Flow
  • Effect Violations

Characteristics:

  • 🔒 Security focus
  • 🕵️ Data flow analysis
  • ⚠️ Risk detection
  • ⏱️ Medium speed (30-90 seconds)

Usage:

Terminal window
arxo analyze --preset security

What It Detects:

  • Sensitive data leaks
  • Unsafe data flows
  • Missing input validation
  • Side effect violations

Preset: runtime

Use Case: Runtime and operational analysis

Metrics:

  • Runtime Centrality
  • Traffic Hotspots
  • Critical Path
  • Runtime Drift
  • Sensitive Data Flow (prioritizes PII violations on hot paths when telemetry is present)
  • Test Coverage (prioritizes undertested modules on hot paths when telemetry is present)
  • Chaos Readiness
  • Cloud Cost

Characteristics:

  • 📊 Operational focus
  • 🔍 Production insights
  • 💰 Cost analysis
  • ⏱️ Requires telemetry data

Usage:

Terminal window
arxo analyze --preset runtime --telemetry-path ./traces.json

Or configure telemetry in your config file (see Configuration):

data:
  telemetry:
    source_path: ./traces.json
    format: otel_json # or zipkin_json, jaeger_json

Requirements:

  • Trace data in one of: OTLP JSON, Zipkin JSON, or Jaeger JSON
  • Set data.telemetry.format to match your exporter
  • For span-to-code mapping: include code.filepath in span attributes/tags

See the Telemetry guide for collection and format details.
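To make the span-to-code mapping requirement concrete, here is a minimal sketch of what a trace file might look like in OTLP-style JSON. Only the code.filepath attribute is what the preset requires; the surrounding structure follows the standard OTLP JSON shape, and the span name, file path, and ID placeholders are invented for illustration.

```json
{
  "resourceSpans": [{
    "scopeSpans": [{
      "spans": [{
        "name": "GET /orders",
        "traceId": "0000000000000000000000000000abcd",
        "spanId": "00000000000ial1234",
        "attributes": [
          { "key": "code.filepath", "value": { "stringValue": "src/orders/handler.ts" } }
        ]
      }]
    }]
  }]
}
```

With code.filepath present on spans, runtime metrics can be attributed back to the modules in the import graph.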


Preset: ai

Use Case: AI/LLM application analysis

Metrics:

  • LLM Integration
  • RAG Architecture
  • Agent Architecture
  • ML Architecture
  • Fine-tuning Architecture

Characteristics:

  • 🤖 AI-specific patterns
  • 💰 Cost tracking
  • 🔒 PII detection
  • 📊 Observability gaps
  • ⏱️ Medium speed (30-60 seconds)

Usage:

Terminal window
arxo analyze --preset ai

What It Detects:

  • LLM calls without tracing
  • Missing cost tracking
  • PII leakage risks
  • Hardcoded prompts
  • Missing evaluation harness
  • Rate limit absence

Preset: full

Use Case: Comprehensive analysis (all metrics)

Metrics: All 50+ available metrics

Characteristics:

  • 📊 Complete analysis
  • 🔬 Maximum insight
  • ⏱️ Slow (2-5 minutes)
  • 💾 Large output

Usage:

Terminal window
arxo analyze --preset full

When to Use:

  • Initial architecture audit
  • Quarterly reviews
  • Before major refactoring
  • Comprehensive documentation

You can extend presets:

Terminal window
# Start with CI preset, add custom metric
arxo analyze --preset ci --metric llm_integration

Or in config:

preset: ci
metrics:
  # Preset metrics + custom
  - id: llm_integration
  - id: rag_architecture

While you cannot register new presets, you can create reusable configs:

.arxo.custom.yaml:

# Custom "frontend" preset
metrics:
  - id: scc
  - id: propagation_cost
  - id: modularity
  - id: centrality
  - id: smells
data:
  import_graph:
    group_by: folder
    group_depth: 2
    exclude:
      - "**/test/**"
      - "**/*.test.tsx"
policy:
  invariants:
    - metric: scc.max_cycle_size
      op: "=="
      value: 0
    - metric: propagation_cost.system.ratio
      op: "<="
      value: 0.15
run_options:
  fail_fast: true

Use:

Terminal window
arxo analyze --config .arxo.custom.yaml

| Preset   | Speed                    | Metrics | Use Case             |
|----------|--------------------------|---------|----------------------|
| quick    | ⚡⚡⚡ Very Fast (1-5s)   | 2       | Development loop     |
| ci       | ⚡⚡ Fast (10-30s)        | 4       | CI/CD gates          |
| risk     | ⚡ Medium (30-60s)        | 6       | Tech debt audit      |
| coupling | ⚡ Medium (30-60s)        | 6       | Dependency analysis  |
| security | ⚡ Medium (30-90s)        | 4       | Security review      |
| runtime  | ⚡ Medium (varies)        | 6       | Operational analysis |
| ai       | ⚡ Medium (30-60s)        | 5       | AI/LLM projects      |
| full     | 🐌 Slow (2-5min)         | 50+     | Comprehensive audit  |

Recommended presets by scenario:

  • Pre-commit: --quick
  • IDE integration: --quick
  • Manual checks: --preset ci
  • Pull Requests: --preset ci --only-changed
  • Main branch: --preset risk
  • Nightly: --preset full
  • Sprint review: --preset risk
  • Architecture review: --preset coupling
  • Security audit: --preset security
  • Quarterly audit: --preset full
  • AI/LLM apps: --preset ai
  • Microservices: --preset coupling
  • Legacy refactor: --preset risk
  • Production monitoring: --preset runtime
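One way to wire up the pre-commit recommendation is through the pre-commit framework as a local hook. This config is a sketch: it assumes the arxo binary is already on PATH, and the hook id and name are arbitrary.

```yaml
# .pre-commit-config.yaml — sketch; assumes arxo is installed and on PATH
repos:
  - repo: local
    hooks:
      - id: arxo-quick
        name: arxo quick check
        entry: arxo analyze --quick
        language: system
        pass_filenames: false
```

Because the quick preset runs in seconds, it is cheap enough to gate every commit without slowing the development loop.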

Create different configs per environment:

Development (.arxo.dev.yaml):

preset: quick
policy:
  invariants: [] # No enforcement

Staging (.arxo.staging.yaml):

preset: ci
policy:
  invariants:
    - metric: scc.max_cycle_size
      op: "<="
      value: 3 # Allow small cycles

Production (.arxo.prod.yaml):

preset: risk
policy:
  invariants:
    - metric: scc.max_cycle_size
      op: "=="
      value: 0 # Strict

Use the MCP server with your AI assistant to discover and run presets conversationally:

You: "What analysis presets are available?"
AI: [Calls list_presets] "8 presets available: quick, ci, risk..."
You: "Run a security analysis"
AI: [Uses analyze_architecture with preset="security"]
"Found 2 sensitive data flow violations..."

The MCP server supports all presets and can help you choose the right one. See MCP Workflows for examples.