
Architecture Overview

This document provides a comprehensive overview of the MCP ADR Analysis Server architecture, including system components, data flows, and design decisions.


System Architecture

The MCP ADR Analysis Server is built as a Model Context Protocol (MCP) server that provides AI-powered architectural analysis capabilities to MCP clients.


Core Components

Transport Layer

The server communicates with MCP clients over stdio transport using the JSON-RPC 2.0 protocol.

// Message flow
Client → [JSON-RPC Request] → Server
Server → [JSON-RPC Response] → Client

Key Characteristics:

  • Bidirectional communication over stdin/stdout
  • Stateless request handling
  • Support for tool calls, resource reads, and prompt operations
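As a sketch of what one such exchange looks like on the wire, the shapes below follow JSON-RPC 2.0; the exact MCP payload fields are defined by the MCP specification, so treat the `params`/`result` contents here as illustrative assumptions:

```typescript
// Hypothetical tools/call exchange. Envelope fields (jsonrpc, id, method,
// params, result, error) come from JSON-RPC 2.0; the payload shapes inside
// params/result are illustrative, not the authoritative MCP schema.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

interface JsonRpcResponse {
  jsonrpc: "2.0";
  id: number;
  result?: unknown;
  error?: { code: number; message: string };
}

// The client writes one serialized request per line to the server's stdin.
const request: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "analyze_project_ecosystem", arguments: { projectPath: "." } },
};

// The server answers on stdout, echoing the request id.
const response: JsonRpcResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: { content: [{ type: "text", text: "...analysis summary..." }] },
};
```

Because request handling is stateless, each message carries everything the server needs; the `id` field is what ties a response back to its request.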

Tool Layer

The server exposes 73 tools organized by functionality:

| Category | Tools | Purpose |
|----------|-------|---------|
| Core Analysis | `analyze_project_ecosystem`, `get_architectural_context` | Project understanding |
| ADR Management | `suggest_adrs`, `generate_adr_from_decision` | ADR lifecycle |
| Security | `analyze_content_security`, `generate_content_masking` | Content protection |
| Deployment | `deployment_readiness`, `smart_git_push_v2` | Release validation |
| Research | `perform_research`, `generate_research_questions` | Information gathering |
| Workflow | `tool_chain_orchestrator`, `troubleshoot_guided_workflow` | Process automation |

Utility Layer

Shared utilities that tools depend on:


Data Flow

Analysis Request Flow

Knowledge Graph Flow


Directory Structure

src/
├── index.ts                       # MCP server entry point
├── tools/                         # Tool implementations (73 tools)
│   ├── adr-suggestion-tool.ts
│   ├── smart-score-tool.ts
│   ├── deployment-readiness-tool.ts
│   ├── content-masking-tool.ts
│   └── ...
├── utils/                         # Shared utilities
│   ├── ai-executor.ts             # OpenRouter integration
│   ├── knowledge-graph-manager.ts
│   ├── cache.ts                   # Multi-level caching
│   ├── enhanced-logging.ts
│   └── ...
└── types/                         # TypeScript type definitions

Design Decisions

1. MCP Protocol Compliance

Decision: Build as a pure MCP server rather than a standalone CLI tool.

Rationale:

  • Native integration with AI assistants (Claude, Cursor, Cline)
  • Standardized communication protocol
  • Ecosystem compatibility with other MCP servers

Trade-offs:

  • Requires MCP client for full functionality
  • Additional protocol overhead vs direct API

2. Dual Execution Modes

Decision: Support both full AI execution and prompt-only fallback.

Rationale:

  • Lower barrier to entry (no API key required to try)
  • Flexibility for users with their own AI access
  • Cost control for users

Trade-offs:

  • More complex tool implementations
  • Two code paths to maintain
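The two code paths can be sketched as a single dispatch point: run the analysis through a model when credentials are available, otherwise return the generated prompt for the client's own model to execute. The environment-variable and helper names below are illustrative assumptions, not the server's actual API:

```typescript
// Sketch of dual-mode execution. The real tools build richer prompts and call
// OpenRouter via ai-executor.ts; this shows only the branching structure.
type ToolResult =
  | { mode: "ai"; text: string }
  | { mode: "prompt-only"; prompt: string };

function runAnalysis(projectSummary: string, apiKey: string | undefined): ToolResult {
  const prompt = `Analyze this project and suggest ADRs:\n${projectSummary}`;
  if (!apiKey) {
    // Prompt-only fallback: no API key required; the MCP client's model
    // can execute the returned prompt itself.
    return { mode: "prompt-only", prompt };
  }
  // Full AI execution would call the model here; stubbed for the sketch.
  return { mode: "ai", text: `[model output for: ${prompt.slice(0, 40)}]` };
}
```

Keeping the branch at one well-defined point is what makes the two paths maintainable despite the added complexity.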

3. Tree-sitter for Code Analysis

Decision: Use tree-sitter for semantic code understanding.

Rationale:

  • Accurate AST parsing vs regex patterns
  • Support for 50+ languages
  • Incremental parsing for efficiency

Trade-offs:

  • Native module compilation during install
  • Larger package size
  • Build may fail in restricted network environments

4. Local-First Architecture

Decision: Process everything locally, and reach out to external services only when strictly needed.

Rationale:

  • Privacy: code never leaves the machine unless explicitly synced
  • Speed: local analysis is faster than API calls
  • Offline capability: core features work without internet

Trade-offs:

  • Limited to local compute resources
  • Some features require external services (AI, web search)

5. Multi-Level Caching

Decision: Implement caching at multiple levels (memory, disk, session).

Rationale:

  • Avoid redundant analysis of unchanged code
  • Reduce API calls and costs
  • Faster repeated operations

Trade-offs:

  • Cache invalidation complexity
  • Disk space usage
  • Potential stale data issues
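A minimal sketch of the keying scheme behind this design, assuming a content-addressed cache: keys are derived from the analyzed content itself, so unchanged code always hits regardless of file path, and invalidation is automatic when content changes. The class and method names are illustrative; the actual `cache.ts` also persists entries to disk and tracks sessions:

```typescript
import { createHash } from "node:crypto";

// Memory level of a content-addressed cache. Same input -> same key, so
// re-analyzing an unchanged file is a lookup, not a recomputation.
class AnalysisCache {
  private memory = new Map<string, string>();

  keyFor(content: string): string {
    return createHash("sha256").update(content).digest("hex");
  }

  get(content: string): string | undefined {
    return this.memory.get(this.keyFor(content));
  }

  set(content: string, analysis: string): void {
    this.memory.set(this.keyFor(content), analysis);
    // A disk level would additionally write the entry under the key here,
    // surviving process restarts at the cost of disk space.
  }
}
```

Content addressing sidesteps part of the invalidation problem (changed content simply produces a new key), though stale entries for deleted files still need eviction.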

Comparison with Alternatives

| Approach | Our Design | Alternative: Standalone CLI | Alternative: Web Service |
|----------|------------|-----------------------------|--------------------------|
| Integration | Native MCP | Manual workflow | API calls |
| AI Access | Automatic | User-managed | Provider-managed |
| Privacy | Local-first | Local | Cloud-based |
| Offline | Partial | Full | None |
| Setup | MCP config | Direct install | Account required |

Extension Points

The architecture supports extension through:

Adding New Tools

// 1. Create tool file in src/tools/
export const myNewTool = {
  name: 'my_new_tool',
  description: 'Does something useful',
  inputSchema: {
    /* JSON Schema */
  },
  handler: async params => {
    /* Implementation */
  },
};

// 2. Register in src/index.ts
server.setRequestHandler(CallToolRequestSchema, async request => {
  // Tool dispatch logic
});

Adding New Utilities

// 1. Create utility in src/utils/
export class MyUtility {
  // Shared functionality
}

// 2. Import and use in tools
import { MyUtility } from '../utils/my-utility';

Custom AI Models

// Configure via environment variables
AI_MODEL=openai/gpt-4-turbo # or any OpenRouter-supported model
AI_TEMPERATURE=0.3
AI_MAX_TOKENS=4096

Performance Characteristics

| Operation | Typical Latency | Notes |
|-----------|-----------------|-------|
| Tool dispatch | < 10 ms | Local routing |
| Code parsing | 50-500 ms | Depends on file size |
| Cached analysis | < 50 ms | Cache hit |
| AI analysis | 2-10 s | Network + model inference |
| Web search | 3-15 s | Depends on content |

Optimization Strategies

  • Incremental parsing: Only re-parse changed files
  • Parallel processing: Analyze multiple files concurrently
  • Smart caching: Content-addressed cache for reproducibility
  • Lazy loading: Load utilities on demand
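The parallel-processing strategy can be sketched as a bounded worker pool: files are analyzed concurrently, but the number of in-flight analyses is capped so large repositories do not spawn unbounded work. `analyzeFile` here is a stand-in for the real per-file analysis, and the function name is illustrative:

```typescript
// Analyze many files with at most `concurrency` analyses in flight.
async function analyzeAll(
  files: string[],
  analyzeFile: (f: string) => Promise<string>,
  concurrency = 4,
): Promise<Map<string, string>> {
  const results = new Map<string, string>();
  const queue = [...files];
  // Each worker repeatedly pulls the next file until the queue is drained.
  const worker = async () => {
    for (let f = queue.shift(); f !== undefined; f = queue.shift()) {
      results.set(f, await analyzeFile(f));
    }
  };
  await Promise.all(Array.from({ length: concurrency }, worker));
  return results;
}
```

A shared queue keeps workers busy even when file sizes (and thus analysis times) vary widely, unlike fixed chunking.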

Security Model

Security Principles:

  • All file paths validated against project root
  • Automatic detection and masking of secrets
  • No execution of arbitrary code from analyzed projects
  • API keys stored in environment, never logged
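The first principle above, path validation against the project root, can be sketched as follows; the function name is illustrative, not the server's actual helper. The key point is to resolve the requested path first, so traversal sequences like `../` cannot escape the root:

```typescript
import { resolve, sep } from "node:path";

// Reject any requested path that resolves outside the project root.
function isInsideProject(projectRoot: string, requested: string): boolean {
  const root = resolve(projectRoot);
  const target = resolve(root, requested);
  // Require the root itself or a path strictly under it; appending the
  // separator prevents "/repo-evil" from matching root "/repo".
  return target === root || target.startsWith(root + sep);
}
```

Checking the resolved path, rather than string-scanning for `..`, also handles absolute paths and redundant segments correctly.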


Further Reading


Questions about the architecture? → Open an Issue