# Advanced

This page covers advanced MCP server usage: custom tool composition, caching strategies, CI integration, and performance optimization.
## Custom Tool Composition

MCP tools are designed to be composable. AI assistants can chain multiple tools together to create sophisticated workflows.
### Pattern: Health Check → Deep Dive

Start with a fast check, then escalate if issues are found:
```text
1. check_cycles({ project_path: "." })
   ↓ If max_cycle_size > 0
2. get_hotspots({ project_path: "." })
   ↓ Get specific recommendations
3. suggest_refactors({ project_path: "." })
```
### Pattern: Impact-First Refactoring

Before changing a file, assess its blast radius:
```text
1. analyze_file_impact({ project_path: ".", file_paths: ["src/core/auth.ts"] })
   ↓ Check centrality metrics
2. If high betweenness (>0.5):
   - get_hotspots({ project_path: ".", metric_type: "centrality" })
   - Recommend splitting the file first
```
### Pattern: Policy-Gated Analysis

Enforce policies before running expensive analysis:
```text
1. evaluate_policy({ project_path: "." })
   ↓ If violations_count > 0
2. Stop and report violations
   ↓ Else
3. analyze_architecture({ project_path: ".", preset: "full" })
```
### Pattern: Incremental LLM Audit

Audit AI integrations in stages:
```text
1. check_llm_integration({ project_path: "." })
   ↓ If health_score < 0.5
2. analyze_architecture({ project_path: ".", preset: "ai" })
   ↓ Get full AI metrics (RAG, agents, fine-tuning)
3. Provide a prioritized fix list
```
## Caching Strategy

The MCP server uses an in-memory cache with configurable TTL and size limits. Understanding cache behavior is key to optimizing performance.
### Cache Key Composition

A cache entry is keyed by:
- Project path (absolute path)
- Config hash (SHA-256 of `arxo.yaml`)
- Preset name (e.g., `quick`, `ci`)
- Changed files hash (for `analyze_file_impact`)
**Implication:** Changing any of these invalidates the cache.
### Cache Tuning for Different Workflows

#### Development (Interactive AI Queries)
**Goal:** Fast feedback, tolerate slightly stale data
```json
{
  "command": "/path/to/arxo-mcp",
  "args": ["--cache-ttl", "600", "--max-cache-entries", "512000"]
}
```

- TTL: 10 minutes (600s) — code changes often, but not every second
- Size: 512k entries — enough for 3-5 large projects
#### CI/CD Pipeline

**Goal:** Always fresh data, no stale results
```json
{
  "command": "/path/to/arxo-mcp",
  "args": ["--cache-ttl", "0", "--max-cache-entries", "10000"]
}
```

- TTL: 0 seconds — disables caching (each run is fresh)
- Size: small entry limit (or disable caching entirely)
**Better approach for CI:** Use the CLI (`arxo analyze`) instead of MCP for CI pipelines. See CLI Comparison.
#### Long-Running Server (Shared Team Environment)

**Goal:** Maximize cache hits across multiple users/projects
```json
{
  "command": "/path/to/arxo-mcp",
  "args": ["--cache-ttl", "1800", "--max-cache-entries", "1000000"]
}
```

- TTL: 30 minutes (1800s) — balance freshness and reuse
- Size: 1M entries — accommodates many projects
### Cache Invalidation Strategies

The MCP server automatically invalidates cache entries when:
- The TTL expires
- The config file (`arxo.yaml`) changes
- The maximum number of cache entries is exceeded (LRU eviction)
**Manual invalidation:**
- Restart the MCP server (clears all cache)
- Change a config value temporarily (invalidates cache key)
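The eviction rules above (TTL expiry plus LRU once the entry limit is hit) can be modeled with a small sketch. This is illustrative, not the server's actual data structure:

```python
import time
from collections import OrderedDict

class TtlLruCache:
    """Minimal model of the invalidation rules described above:
    entries expire after `ttl` seconds, and the least recently
    used entry is evicted once `max_entries` is exceeded."""

    def __init__(self, ttl: float, max_entries: int):
        self.ttl = ttl
        self.max_entries = max_entries
        self._data: OrderedDict[str, tuple[float, object]] = OrderedDict()

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        ts, value = item
        if time.monotonic() - ts > self.ttl:
            del self._data[key]          # TTL expired
            return None
        self._data.move_to_end(key)      # mark as recently used
        return value

    def put(self, key, value):
        self._data[key] = (time.monotonic(), value)
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # LRU eviction

cache = TtlLruCache(ttl=600, max_entries=2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)              # evicts "a" (least recently used)
assert cache.get("a") is None
assert cache.get("c") == 3
```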
**Monitoring cache performance**

Enable debug logging:
```json
{
  "env": { "RUST_LOG": "arxo_mcp=debug" }
}
```

Look for:
```text
[INFO] Cache hit for project: /path/to/project, preset: ci
[INFO] Cache miss, running analysis for project: /path/to/project
```

**Good cache hit rate:** >70% for interactive development
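If you want a number rather than eyeballing logs, a small script can compute the hit rate from the log lines shown above. The exact log phrasing beyond `Cache hit`/`Cache miss` may vary:

```python
import re

def cache_hit_rate(log_text: str) -> float:
    """Compute the cache hit rate by counting 'Cache hit' vs
    'Cache miss' lines in the server's stderr output."""
    hits = len(re.findall(r"Cache hit", log_text))
    misses = len(re.findall(r"Cache miss", log_text))
    total = hits + misses
    return hits / total if total else 0.0

logs = """\
[INFO] Cache hit for project: /p, preset: ci
[INFO] Cache miss, running analysis for project: /p
[INFO] Cache hit for project: /p, preset: quick
[INFO] Cache hit for project: /p, preset: ci
"""
assert cache_hit_rate(logs) == 0.75   # 3 hits out of 4 lookups
```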
## Resource-Based Workflows

MCP resources provide lightweight read access to cached analysis results. Use them for:
### Quick Status Checks

Instead of running `analyze_architecture` (which may take seconds), read cached results:
```text
Resource: policy://violations/<project_path>
Use case: "How many policy violations do we have?"
Preset used: "ci"

Resource: metrics://current/<project_path>
Use case: "What's our current propagation cost?"
Preset used: "quick"
```
### Dashboard Integration

If you’re building a custom dashboard that polls architecture metrics:
- Run `analyze_architecture` once (e.g., on a schedule)
- Read `metrics://current/` to display results
- Cache hit → instant response
### Conditional Tool Execution

Read a resource first to decide if a full analysis is needed:
```text
1. Read policy://violations/<path>
   ↓ If violations_count == 0
2. Skip full analysis
   ↓ Else
3. Run analyze_architecture to get details
```
## Parallel Tool Execution

Some AI assistants support parallel MCP tool calls. When tools are independent (no data dependencies), run them in parallel for faster results.
### Safe Parallel Patterns

**Pattern 1: Multiple independent checks**
```text
Parallel:
- check_cycles({ project_path: "." })
- check_llm_integration({ project_path: "." })
- evaluate_policy({ project_path: "." })
```
Aggregate the results into a single health report.

**Pattern 2: Multi-project analysis**
```text
Parallel:
- analyze_architecture({ project_path: "./service-a" })
- analyze_architecture({ project_path: "./service-b" })
- analyze_architecture({ project_path: "./service-c" })
```
Compare results across services.
### Unsafe Parallel Patterns (Avoid)

**Anti-pattern 1: Sequential dependency**
```text
Parallel:
- check_cycles({ project_path: "." })
- suggest_refactors({ project_path: "." })  // Depends on cycle detection
```
**Problem:** `suggest_refactors` assumes cycles exist.

**Anti-pattern 2: Cache thrashing**
```text
Parallel:
- analyze_architecture({ preset: "quick" })
- analyze_architecture({ preset: "ci" })
- analyze_architecture({ preset: "full" })
```
**Problem:** All three analyses run simultaneously, so none benefits from the cache.
## CI Integration Patterns

While the CLI is recommended for CI, the MCP server can integrate with CI systems that support MCP clients.
### Pattern: Pull Request Architecture Review

**Workflow:**
1. AI assistant is triggered on PR open/update
2. MCP tools analyze the changed files:

   ```text
   analyze_file_impact({
     project_path: ".",
     file_paths: [/* PR changed files */]
   })
   ```

3. AI posts a comment with findings:
   - Blast radius (how many modules are affected)
   - Centrality score (risk of a breaking change)
   - Policy violations
   - Hotspots introduced
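A bot's comment step might look roughly like this sketch. The result field names (`blast_radius`, `centrality`, `violations`, `new_hotspots`) are hypothetical placeholders for whatever `analyze_file_impact` actually returns:

```python
def format_pr_comment(impact: dict) -> str:
    """Render the findings listed above as a Markdown PR comment.
    Field names are illustrative; adapt to the real output schema."""
    lines = ["## Architecture Review", ""]
    lines.append(f"- **Blast radius:** {impact['blast_radius']} modules affected")
    lines.append(f"- **Centrality:** {impact['centrality']:.2f}"
                 + (" (high-risk change)" if impact["centrality"] > 0.5 else ""))
    if impact["violations"]:
        lines.append(f"- **Policy violations:** {len(impact['violations'])}")
        lines.extend(f"  - {v}" for v in impact["violations"])
    if impact["new_hotspots"]:
        lines.append(f"- **New hotspots:** {', '.join(impact['new_hotspots'])}")
    return "\n".join(lines)

comment = format_pr_comment({
    "blast_radius": 12,
    "centrality": 0.62,
    "violations": ["core must not depend on features"],
    "new_hotspots": ["src/core/auth.ts"],
})
assert "12 modules" in comment and "high-risk" in comment
```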
**Implementation:**
- GitHub Actions: Use an MCP-compatible AI bot
- GitLab CI: Custom script invoking MCP server
- Jenkins: MCP client plugin
### Pattern: Architecture Budget Enforcement

**Goal:** Block PRs that degrade architecture quality

**Workflow:**
1. Run a baseline comparison:

   ```text
   analyze_with_baseline({
     project_path: ".",
     baseline_ref: "main"
   })
   ```

2. Check whether metrics degraded:
   - `scc.max_cycle_size` increased
   - `propagation_cost.system.ratio` (propagation cost) increased

3. Fail the check if regressions are detected
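The degradation check reduces to comparing two metric snapshots. A minimal sketch, assuming the metrics are available as flat key/value maps:

```python
def find_regressions(baseline: dict, current: dict) -> list[str]:
    """Compare the two metrics above between a baseline run (e.g. main)
    and the current branch; any increase counts as a regression.
    The flat metric paths are assumptions about the result shape."""
    watched = ["scc.max_cycle_size", "propagation_cost.system.ratio"]
    return [m for m in watched if current.get(m, 0) > baseline.get(m, 0)]

regressions = find_regressions(
    baseline={"scc.max_cycle_size": 0, "propagation_cost.system.ratio": 0.07},
    current={"scc.max_cycle_size": 3, "propagation_cost.system.ratio": 0.07},
)
assert regressions == ["scc.max_cycle_size"]
# A CI step would exit non-zero when `regressions` is non-empty.
```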
**Note:** `analyze_with_baseline` requires the full `arxo` binary. For now, use the CLI in CI instead.
### Pattern: Preset-Based Gates

**Goal:** Different quality bars for different scenarios

**Workflow:**
```text
If PR touches core/:
  - Run analyze_architecture({ preset: "coupling" })
  - Strict threshold: propagation_cost.system.ratio <= 0.08

If PR touches features/:
  - Run analyze_architecture({ preset: "quick" })
  - Relaxed threshold: scc.max_cycle_size <= 3

If PR touches ai/:
  - Run check_llm_integration()
  - Require health_score >= 0.7
```
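The routing logic above is just prefix matching on the changed paths. A minimal sketch, with thresholds copied from the workflow and everything else hypothetical:

```python
def select_gate(changed_files: list[str]) -> dict:
    """Route a PR to the appropriate preset/threshold based on which
    top-level area the change touches (first matching prefix wins)."""
    gates = [
        ("core/", {"preset": "coupling",
                   "check": "propagation_cost.system.ratio <= 0.08"}),
        ("features/", {"preset": "quick",
                       "check": "scc.max_cycle_size <= 3"}),
        ("ai/", {"preset": None,  # uses check_llm_integration instead
                 "check": "health_score >= 0.7"}),
    ]
    for prefix, gate in gates:
        if any(f.startswith(prefix) for f in changed_files):
            return gate
    # Default: the relaxed gate for everything else
    return {"preset": "quick", "check": None}

assert select_gate(["core/auth.ts"])["preset"] == "coupling"
assert select_gate(["ai/rag.py"])["check"] == "health_score >= 0.7"
```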
## Multi-Project / Monorepo Usage

### Pattern: Per-Package Analysis

For monorepos with multiple packages, analyze each independently:
```text
Parallel:
- analyze_architecture({ project_path: "./packages/frontend" })
- analyze_architecture({ project_path: "./packages/backend" })
- analyze_architecture({ project_path: "./packages/shared" })
```

Aggregate the results to identify:
- Which package has the most cycles
- Which package has the highest propagation cost
- Cross-package dependencies (if using import graph analysis)
### Pattern: Monorepo-Wide Analysis

Use the monorepo preset (available in the engine):
```text
analyze_architecture({
  project_path: ".",
  preset: "full"  // Includes the monorepo metric
})
```

The monorepo metric detects:
- Phantom dependencies (transitive deps not declared)
- Package boundary violations
- Unused internal packages
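The phantom-dependency check boils down to a set difference between what a package imports and what it declares. A conceptual sketch, not the engine's implementation:

```python
def phantom_dependencies(imports: set[str], declared: set[str]) -> set[str]:
    """A phantom dependency is a package that is imported somewhere
    but never declared in the package manifest -- the essence of
    the check described above."""
    return imports - declared

phantoms = phantom_dependencies(
    imports={"react", "lodash", "left-pad"},   # seen in source files
    declared={"react", "lodash"},              # listed in package.json
)
assert phantoms == {"left-pad"}
```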
## Performance Optimization

### Optimize Analysis Scope

**Exclude unnecessary paths:**
Create `.arxoignore` in the project root:
```text
# Dependencies
node_modules/
vendor/
.venv/
target/

# Generated code
dist/
build/
out/
generated/

# Tests (if not analyzing test coverage)
**/*.test.ts
**/*.spec.ts
__tests__/
```

**Impact:** 10x faster analysis for projects with large dependencies.
Choose the Right Preset
Section titled “Choose the Right Preset”| Preset | Use Case | Avg Time (100k LOC) |
|---|---|---|
quick | Interactive AI queries | 2-5 seconds |
ci | PR checks, pre-commit | 5-10 seconds |
risk | Periodic review | 10-20 seconds |
coupling | Deep refactoring analysis | 20-40 seconds |
full | Weekly/monthly review | 60+ seconds |
**Recommendation:** Start with `quick`; escalate to `ci` or `risk` only when needed.
### Use Incremental Analysis

The `analyze_file_impact` tool runs incremental analysis:
```text
analyze_file_impact({
  project_path: ".",
  file_paths: ["src/changed-file.ts"]
})
```

**Impact:** Only recomputes metrics affected by the changed file. 5-10x faster than a full re-analysis.
### Tune Cache Aggressively

For interactive workflows, increase the TTL:
```jsonc
{
  "args": ["--cache-ttl", "1800"]  // 30 minutes
}
```

**Trade-off:** You may see slightly stale results if code changes frequently, but analysis is instant on a cache hit.
### Parallelize Independent Projects

If analyzing multiple projects, run them in parallel (see Parallel Tool Execution).
**Example:** Monorepo with 5 packages — analyze all 5 simultaneously instead of sequentially. 5x speedup.
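A client-side sketch of the fan-out, using a stub in place of the real `analyze_architecture` tool call:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(project_path: str) -> dict:
    """Stand-in for an `analyze_architecture` MCP call; replace
    with a real tool invocation in your client."""
    return {"project": project_path, "max_cycle_size": 0}

packages = [f"./packages/pkg-{i}" for i in range(5)]

# Independent projects share no state, so they can run concurrently.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(analyze, packages))

assert len(results) == 5
assert results[0]["project"] == "./packages/pkg-0"
```

`map` preserves input order, so results line up with `packages` even though the calls complete in arbitrary order.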
## Security Considerations

### Untrusted Code Analysis

The MCP server runs analysis on the codebase but does not execute code. It’s safe to analyze untrusted code, but:
- **Risk:** Malicious code could trigger parser bugs (very rare, but possible)
- **Mitigation:** Run the MCP server in a sandboxed environment (Docker, VM)
### Access Control

The MCP server has read-only access to the filesystem. It cannot:
- Modify code
- Execute code
- Write files (except logs to stderr)
However, it can:
- Read any file in the project path
- Read the config file
**Recommendation:** Run the MCP server with the same permissions as your development environment (not root).
### Secrets in Config

Avoid putting secrets in `arxo.yaml`:
**Bad:**

```yaml
telemetry:
  api_key: sk-abc123... # Don't do this
```

**Good:**

```yaml
telemetry:
  api_key: ${ARCH0_API_KEY} # Read from environment
```

The MCP server does not perform environment variable substitution, so use the CLI for sensitive workflows.
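If you do need `${VAR}` expansion before invoking the CLI, a small preprocessor can supply it. This helper is illustrative, not a built-in feature:

```python
import os
import re

def expand_env(config_text: str) -> str:
    """Expand `${VAR}` placeholders from the environment before
    handing the config to the CLI. Unset variables are left
    untouched so missing secrets are visible rather than silent."""
    def repl(match: re.Match) -> str:
        return os.environ.get(match.group(1), match.group(0))
    return re.sub(r"\$\{([A-Z0-9_]+)\}", repl, config_text)

os.environ["ARCH0_API_KEY"] = "sk-from-env"
expanded = expand_env("telemetry:\n  api_key: ${ARCH0_API_KEY}\n")
assert "sk-from-env" in expanded
assert "${" not in expanded
```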
## Custom Presets

The MCP server respects custom presets defined in your `arxo.yaml`:
```yaml
presets:
  my-custom-preset:
    - scc
    - centrality
    - propagation_cost
```

Use it:
```text
analyze_architecture({
  "project_path": ".",
  "preset": "my-custom-preset"
})
```

**Caching:** Custom presets have unique cache keys, so they won’t conflict with built-in presets.
## Logging Best Practices

### Development

```json
{
  "env": { "RUST_LOG": "arxo_mcp=info" }
}
```

Logs:
- Tool calls
- Cache hits/misses
- Analysis duration
### Debugging

```json
{
  "env": { "RUST_LOG": "arxo_mcp=debug" }
}
```

Logs everything above, plus:
- Parameter details
- Config loading steps
- Cache key computation
### Production (CI/Shared Server)

```json
{
  "env": { "RUST_LOG": "arxo_mcp=warn" }
}
```

Logs only warnings and errors.
**Note:** Logs go to stderr, so they don’t interfere with JSON-RPC on stdout.
## Monitoring and Observability

### Metrics to Track

If running the MCP server as a long-running service, track:
- **Cache hit rate**
  - Parse logs for `Cache hit` vs `Cache miss`
  - Target: >70% for interactive workflows
- **Tool execution time**
  - Parse logs for `Analysis completed in X.Xs`
  - Alert if >30s for the `quick` preset
- **Error rate**
  - Count `[ERROR]` log lines
  - Alert on a non-zero error rate
- **Memory usage**

  ```shell
  ps aux | grep arxo-mcp | awk '{print $6}'
  ```
### Health Check Endpoint

The MCP server does not expose an HTTP health endpoint (it’s stdio-based). For health checks:

- **Process check:**

  ```shell
  pgrep arxo-mcp || echo "Server not running"
  ```

- **JSON-RPC ping:**

  ```shell
  echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | \
    timeout 5s /path/to/arxo-mcp > /dev/null && \
    echo "Healthy" || echo "Unhealthy"
  ```
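The same JSON-RPC ping can be scripted, for example from a monitor written in Python. Note that this skips the MCP `initialize` handshake and only checks that the process answers at all:

```python
import json
import subprocess

def ping_mcp(server_path: str, timeout: float = 5.0) -> bool:
    """Send a `tools/list` request over stdio and treat any JSON-RPC
    response with a matching id as healthy. A real MCP client would
    perform the `initialize` handshake first; this is a bare probe."""
    request = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
    try:
        proc = subprocess.run(
            [server_path], input=request + "\n",
            capture_output=True, text=True, timeout=timeout,
        )
    except (OSError, subprocess.TimeoutExpired):
        return False   # missing binary or hung server -> unhealthy
    for line in proc.stdout.splitlines():
        try:
            msg = json.loads(line)
        except ValueError:
            continue
        if msg.get("id") == 1:
            return True
    return False

# A missing binary is reported as unhealthy rather than raising:
assert ping_mcp("/nonexistent/arxo-mcp") is False
```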
## Related Pages

- Workflows — Common usage patterns
- Troubleshooting — Debugging and error resolution
- CLI Comparison — When to use CLI vs MCP
- Configuration — Server arguments and client setup