# Architecture Overview
This document provides a comprehensive overview of the MCP ADR Analysis Server architecture, including system components, data flows, and design decisions.
## System Architecture
The MCP ADR Analysis Server is built as a Model Context Protocol (MCP) server that provides AI-powered architectural analysis capabilities to MCP clients.
### Core Components

#### Transport Layer

The server communicates with MCP clients over stdio transport using the JSON-RPC 2.0 protocol.
```
// Message flow
Client → [JSON-RPC Request] → Server
Server → [JSON-RPC Response] → Client
```
**Key Characteristics:**
- Bidirectional communication over stdin/stdout
- Stateless request handling
- Support for tool calls, resource reads, and prompt operations
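As an illustration, each request travels over stdin/stdout as a single serialized JSON-RPC 2.0 message. The sketch below builds one such envelope for a tool call; the argument payload (`projectPath`) is a hypothetical example, not the tool's documented schema:

```typescript
// Sketch: the JSON-RPC 2.0 envelope an MCP client sends to the server
// when invoking a tool via the `tools/call` method.
interface JsonRpcRequest {
  jsonrpc: '2.0';
  id: number;
  method: string;
  params: { name: string; arguments: Record<string, unknown> };
}

function makeToolCall(
  id: number,
  tool: string,
  args: Record<string, unknown>
): JsonRpcRequest {
  return { jsonrpc: '2.0', id, method: 'tools/call', params: { name: tool, arguments: args } };
}

// Each message is serialized as JSON and written to the transport.
const request = makeToolCall(1, 'analyze_project_ecosystem', { projectPath: '.' });
const wire = JSON.stringify(request);
```

The response travels back the same way, keyed by the same `id`, which is what makes the stateless request handling above possible.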
#### Tool Layer
The server exposes 73 tools organized by functionality:
| Category | Tools | Purpose |
|---|---|---|
| Core Analysis | analyze_project_ecosystem, get_architectural_context | Project understanding |
| ADR Management | suggest_adrs, generate_adr_from_decision | ADR lifecycle |
| Security | analyze_content_security, generate_content_masking | Content protection |
| Deployment | deployment_readiness, smart_git_push_v2 | Release validation |
| Research | perform_research, generate_research_questions | Information gathering |
| Workflow | tool_chain_orchestrator, troubleshoot_guided_workflow | Process automation |
#### Utility Layer

Tools depend on a set of shared utilities, including AI execution (`ai-executor.ts`), knowledge graph management, multi-level caching, and enhanced logging (see the directory structure below).
### Data Flow

#### Analysis Request Flow

#### Knowledge Graph Flow

### Directory Structure
```
src/
├── index.ts                      # MCP server entry point
├── tools/                        # Tool implementations (73 tools)
│   ├── adr-suggestion-tool.ts
│   ├── smart-score-tool.ts
│   ├── deployment-readiness-tool.ts
│   ├── content-masking-tool.ts
│   └── ...
├── utils/                        # Shared utilities
│   ├── ai-executor.ts            # OpenRouter integration
│   ├── knowledge-graph-manager.ts
│   ├── cache.ts                  # Multi-level caching
│   ├── enhanced-logging.ts
│   └── ...
└── types/                        # TypeScript type definitions
```
## Design Decisions

### 1. MCP Protocol Compliance

**Decision:** Build as a pure MCP server rather than a standalone CLI tool.

**Rationale:**
- Native integration with AI assistants (Claude, Cursor, Cline)
- Standardized communication protocol
- Ecosystem compatibility with other MCP servers
**Trade-offs:**
- Requires MCP client for full functionality
- Additional protocol overhead vs direct API
### 2. Dual Execution Modes

**Decision:** Support both full AI execution and prompt-only fallback.

**Rationale:**
- Lower barrier to entry (no API key required to try)
- Flexibility for users with their own AI access
- Cost control for users
**Trade-offs:**
- More complex tool implementations
- Two code paths to maintain
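The two code paths share a common shape: build the prompt either way, then execute it only when credentials are available. The sketch below illustrates that pattern; the names (`ToolResult`, `buildPrompt`, `callModel`) are hypothetical, not the server's actual API:

```typescript
// Sketch of the dual-mode pattern: run the prompt through an AI model
// when an API key is configured, otherwise return the prompt itself so
// the MCP client's own AI can execute it. All names are illustrative.
interface ToolResult {
  mode: 'ai' | 'prompt-only';
  content: string;
}

async function runTool(
  buildPrompt: () => string,
  callModel: (prompt: string) => Promise<string>,
  apiKey: string | undefined
): Promise<ToolResult> {
  const prompt = buildPrompt();
  if (!apiKey) {
    // No API key: hand the prompt back for the caller's own model.
    return { mode: 'prompt-only', content: prompt };
  }
  // Full execution path: send the prompt to the configured model.
  return { mode: 'ai', content: await callModel(prompt) };
}
```

Keeping the prompt construction identical in both branches is what limits the maintenance cost of the second code path.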
### 3. Tree-sitter for Code Analysis

**Decision:** Use tree-sitter for semantic code understanding.

**Rationale:**
- Accurate AST parsing vs regex patterns
- Support for 50+ languages
- Incremental parsing for efficiency
**Trade-offs:**
- Native module compilation during install
- Larger package size
- Build may fail in restricted network environments
### 4. Local-First Architecture

**Decision:** Process everything locally, using external services only when needed.

**Rationale:**
- Privacy: code never leaves the machine unless explicitly synced
- Speed: local analysis is faster than API calls
- Offline capability: core features work without internet
**Trade-offs:**
- Limited to local compute resources
- Some features require external services (AI, web search)
### 5. Multi-Level Caching

**Decision:** Implement caching at multiple levels (memory, disk, session).

**Rationale:**
- Avoid redundant analysis of unchanged code
- Reduce API calls and costs
- Faster repeated operations
**Trade-offs:**
- Cache invalidation complexity
- Disk space usage
- Potential stale data issues
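A common way to combine the levels is a read-through lookup: check memory first, fall back to disk, and promote disk hits back into memory. The sketch below shows that pattern; the class and method names are illustrative, and a `Map` stands in for the on-disk store:

```typescript
// Sketch of a two-level read-through cache (memory over disk).
// Names are hypothetical; a Map stands in for the disk layer.
class TwoLevelCache {
  private memory = new Map<string, string>();
  constructor(private disk: Map<string, string> = new Map()) {}

  get(key: string): string | undefined {
    const hit = this.memory.get(key);
    if (hit !== undefined) return hit;   // L1 hit: in-process memory
    const fromDisk = this.disk.get(key);
    if (fromDisk !== undefined) {
      this.memory.set(key, fromDisk);    // promote to memory for next time
      return fromDisk;
    }
    return undefined;                    // miss: caller re-runs the analysis
  }

  set(key: string, value: string): void {
    this.memory.set(key, value);
    this.disk.set(key, value);           // write through so results persist
  }

  invalidate(key: string): void {
    this.memory.delete(key);             // both levels must be cleared,
    this.disk.delete(key);               // or stale data survives a restart
  }
}
```

The `invalidate` method is where the complexity noted above concentrates: every level must be cleared together, and deciding *when* a cached analysis is stale (e.g. after a file changes) is the hard part.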
## Comparison with Alternatives
| Approach | Our Design | Alternative: Standalone CLI | Alternative: Web Service |
|---|---|---|---|
| Integration | Native MCP | Manual workflow | API calls |
| AI Access | Automatic | User-managed | Provider-managed |
| Privacy | Local-first | Local | Cloud-based |
| Offline | Partial | Full | None |
| Setup | MCP config | Direct install | Account required |
## Extension Points

The architecture supports extension through:

### Adding New Tools
```typescript
// 1. Create tool file in src/tools/
export const myNewTool = {
  name: 'my_new_tool',
  description: 'Does something useful',
  inputSchema: {
    /* JSON Schema */
  },
  handler: async (params: unknown) => {
    /* Implementation */
  },
};

// 2. Register in src/index.ts
server.setRequestHandler(CallToolRequestSchema, async request => {
  // Dispatch to the matching tool by name
  if (request.params.name === myNewTool.name) {
    return myNewTool.handler(request.params.arguments);
  }
  // ... existing tool dispatch logic
});
```
### Adding New Utilities

```typescript
// 1. Create utility in src/utils/
export class MyUtility {
  // Shared functionality
}

// 2. Import and use in tools
import { MyUtility } from '../utils/my-utility';
```
### Custom AI Models

```bash
# Configure via environment variables
AI_MODEL=openai/gpt-4-turbo   # or any OpenRouter-supported model
AI_TEMPERATURE=0.3
AI_MAX_TOKENS=4096
```
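Inside the server, these variables would typically be read once with defaults applied. The sketch below shows one way to do that; the function name, config shape, and default values are illustrative assumptions, not the server's actual implementation:

```typescript
// Sketch: load model configuration from environment variables with
// fallback defaults. Names and defaults here are illustrative.
interface AiConfig {
  model: string;
  temperature: number;
  maxTokens: number;
}

function loadAiConfig(env: Record<string, string | undefined>): AiConfig {
  return {
    model: env['AI_MODEL'] ?? 'openai/gpt-4-turbo',
    temperature: Number(env['AI_TEMPERATURE'] ?? '0.3'),
    maxTokens: Number(env['AI_MAX_TOKENS'] ?? '4096'),
  };
}
```

In the server itself this would be called with `process.env`; centralizing the defaults in one place keeps every tool's AI calls consistent.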