
AI Architecture Concepts

Understanding the AI integration patterns, execution pipeline, and intelligent analysis capabilities of the MCP ADR Analysis Server.


Overview

The MCP ADR Analysis Server integrates AI capabilities at multiple levels to provide intelligent architectural analysis. This document explains the core AI concepts, how they interact, and the design decisions behind the architecture.

Key Concepts

  • Dual Execution Modes: Full AI mode vs. prompt-only mode for flexibility
  • Knowledge Graph: Persistent memory of architectural relationships
  • Tree-sitter Integration: Semantic code understanding for accurate analysis
  • Confidence Scoring: Quantified reliability of analysis results
  • Cascading Data Sources: Multi-tier information retrieval strategy

AI Execution Pipeline

The server supports two execution modes, each with distinct characteristics:

Full Mode (AI-Powered)

When EXECUTION_MODE=full and an OpenRouter API key is configured, the server executes AI analysis directly:

Process Flow:

  1. Request Reception: MCP client sends tool invocation via JSON-RPC
  2. Context Assembly: Relevant project context, ADRs, and code snippets gathered
  3. AI Execution: Structured prompt sent to OpenRouter with assembled context
  4. Response Processing: AI response parsed, validated, and formatted
  5. Result Delivery: Structured result returned to MCP client
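The five steps above can be sketched as a single async pipeline. This is an illustrative sketch, not the server's actual internals: `assembleContext`, `callOpenRouter`, and `parseResponse` are hypothetical helper names, stubbed out here so the flow is runnable.

```typescript
// Hypothetical sketch of the full-mode pipeline (helper names are not the real API).
interface ToolInvocation {
  tool: string;
  args: Record<string, unknown>;
}

interface ToolResult {
  content: string;
  confidence: number;
}

// Stub helpers standing in for real context assembly, the OpenRouter call,
// and response validation.
async function assembleContext(inv: ToolInvocation): Promise<string> {
  return `project context for ${inv.tool}`;
}

async function callOpenRouter(prompt: string): Promise<string> {
  return `AI response to: ${prompt}`;
}

function parseResponse(raw: string): ToolResult {
  return { content: raw, confidence: 0.9 };
}

// Steps 1–5: receive request, assemble context, execute AI,
// parse/validate the response, and return a structured result.
async function handleToolInvocation(inv: ToolInvocation): Promise<ToolResult> {
  const context = await assembleContext(inv); // step 2
  const prompt = `${inv.tool}\n\n${context}`; // structured prompt
  const raw = await callOpenRouter(prompt);   // step 3
  return parseResponse(raw);                  // steps 4–5
}
```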

Prompt-Only Mode (Fallback)

When no API key is configured, the server generates prompts for manual AI execution:

This mode allows users to:

  • Explore available tools without cost
  • Use their preferred AI interface
  • Maintain control over AI interactions
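The mode switch can be pictured as a small branch on whether an API key is present. A minimal sketch, assuming hypothetical names (`execute`, `ExecutionResult`) rather than the server's real internals:

```typescript
// Illustrative mode selection: full AI execution when a key is configured,
// otherwise hand the generated prompt back to the user.
type ExecutionResult =
  | { mode: 'full'; result: string }
  | { mode: 'prompt-only'; prompt: string };

function execute(prompt: string, apiKey?: string): ExecutionResult {
  if (apiKey) {
    // Full mode: the prompt would be sent to OpenRouter here.
    return { mode: 'full', result: `executed: ${prompt}` };
  }
  // Fallback: return the prompt for manual use in any AI interface.
  return { mode: 'prompt-only', prompt };
}
```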

OpenRouter Integration

The server uses OpenRouter as its AI gateway, providing access to multiple AI models through a unified API.

Why OpenRouter?

| Benefit | Description |
|---|---|
| Model Diversity | Access Claude, GPT-4, Llama, and other models with one API key |
| Automatic Routing | Intelligent model selection based on request type |
| Cost Management | Unified billing and usage tracking |
| Fallback Support | Automatic failover if the primary model is unavailable |

Configuration

// Environment variables
OPENROUTER_API_KEY   // Required for full mode
AI_MODEL             // Model selection (default: anthropic/claude-3-sonnet)
AI_TEMPERATURE       // Response consistency (default: 0.3)
AI_MAX_TOKENS        // Maximum response length (default: 4096)
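Loading this configuration might look like the following sketch. The `loadConfig` helper is hypothetical; only the variable names and defaults come from the table above.

```typescript
// Hypothetical config loader applying the documented defaults.
interface AIConfig {
  apiKey?: string;     // undefined → prompt-only mode
  model: string;
  temperature: number;
  maxTokens: number;
}

function loadConfig(env: Record<string, string | undefined>): AIConfig {
  return {
    apiKey: env.OPENROUTER_API_KEY,                    // required for full mode
    model: env.AI_MODEL ?? 'anthropic/claude-3-sonnet',
    temperature: Number(env.AI_TEMPERATURE ?? 0.3),
    maxTokens: Number(env.AI_MAX_TOKENS ?? 4096),
  };
}
```

Call it as `loadConfig(process.env)`; passing the environment in explicitly keeps the loader easy to test.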

Request Structure

interface AIExecutorRequest {
  prompt: string;           // Structured analysis prompt
  context: ProjectContext;  // Assembled project context
  parameters: {
    temperature: number;
    maxTokens: number;
    model: string;
  };
}

Knowledge Graph Architecture

The Knowledge Graph is a persistent memory system that tracks relationships between architectural artifacts.

Graph Structure

Node Types

| Node Type | Description | Properties |
|---|---|---|
| ADR | Architectural Decision Record | title, status, date, context |
| CodeFile | Source code file | path, language, functions |
| Decision | Specific architectural choice | rationale, alternatives, consequences |
| Technology | Framework or tool | name, version, category |
| Pattern | Architectural pattern | name, type, implementation |

Edge Types

| Edge Type | Description | Example |
|---|---|---|
| IMPLEMENTS | Code implements a decision | jwt.ts IMPLEMENTS "Use JWT tokens" |
| DEPENDS_ON | ADR depends on another | API Design DEPENDS_ON Auth Strategy |
| USES | Code uses a technology | routes.ts USES Express |
| SUPERSEDES | ADR replaces another | ADR-005 SUPERSEDES ADR-002 |

Graph Operations

// Add a relationship
await knowledgeGraph.addRelationship({
  from: 'adr-001',
  to: 'src/auth/jwt.ts',
  type: 'IMPLEMENTS',
  metadata: { confidence: 0.95 },
});

// Query related code
const relatedCode = await knowledgeGraph.query({
  from: 'adr-001',
  edgeType: 'IMPLEMENTS',
  depth: 2,
});
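To make these operations concrete, here is a minimal in-memory sketch of a graph supporting `addRelationship` and a depth-limited, edge-typed `query`. It is illustrative only and not the server's actual (persistent, async) implementation:

```typescript
// Minimal in-memory knowledge graph (illustrative, not the real persistent store).
interface Edge {
  from: string;
  to: string;
  type: string;
  metadata?: { confidence?: number };
}

class KnowledgeGraph {
  private edges: Edge[] = [];

  addRelationship(edge: Edge): void {
    this.edges.push(edge);
  }

  // Follow edges of one type from a start node, up to `depth` hops.
  query(opts: { from: string; edgeType: string; depth: number }): string[] {
    const seen = new Set<string>();
    let frontier = [opts.from];
    for (let hop = 0; hop < opts.depth; hop++) {
      const next: string[] = [];
      for (const node of frontier) {
        for (const e of this.edges) {
          if (e.from === node && e.type === opts.edgeType && !seen.has(e.to)) {
            seen.add(e.to);
            next.push(e.to);
          }
        }
      }
      frontier = next;
    }
    return [...seen];
  }
}
```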

Tree-sitter Integration

Tree-sitter provides incremental parsing for semantic code understanding.

Why Tree-sitter?

  • Language-agnostic: Supports 50+ programming languages
  • Incremental: Efficient re-parsing on code changes
  • Accurate: Full AST access for precise analysis
  • Fast: Written in C with language bindings

Analysis Capabilities

Extracted Information

| Element | Use Case |
|---|---|
| Functions | Understand code structure, link to ADRs |
| Imports | Detect dependencies, technology stack |
| Classes | Identify patterns, architectural boundaries |
| Comments | Extract documentation, inline decisions |
| Types | Understand data models, contracts |
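Extraction boils down to walking the parsed AST and collecting nodes of interest. Tree-sitter's real node API (e.g. `SyntaxNode` in node-tree-sitter) differs; the simplified node shape below is an assumption used purely to illustrate the traversal:

```typescript
// Simplified AST walk; the node shape is illustrative, not tree-sitter's real API.
interface AstNode {
  type: string;      // e.g. 'function_declaration', 'import_statement'
  name?: string;     // identifier, when the node has one
  children: AstNode[];
}

// Collect the names of all nodes of a given kind, depth-first.
function collect(root: AstNode, kind: string): string[] {
  const out: string[] = [];
  const walk = (n: AstNode): void => {
    if (n.type === kind && n.name) out.push(n.name);
    n.children.forEach(walk);
  };
  walk(root);
  return out;
}
```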

Code Linking

Tree-sitter enables Smart Code Linking, which connects ADR decisions to their implementation:

// Find code related to an ADR decision
const relatedCode = await findRelatedCode(
  'docs/adrs/001-auth-system.md',
  'We will implement JWT authentication',
  projectPath,
  {
    useTreeSitter: true,
    extractFunctions: true,
    maxResults: 10,
  }
);

Confidence Scoring

Every analysis result includes a confidence score (0-1) indicating reliability.

Scoring Factors

| Factor | Weight | Description |
|---|---|---|
| Source Quality | 30% | Reliability of the information source |
| Context Completeness | 25% | Amount of relevant context available |
| Pattern Matching | 20% | Strength of detected patterns |
| Consistency | 15% | Agreement across multiple sources |
| Recency | 10% | How current the information is |
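Given the weights in the table, the overall score is a straightforward weighted sum. A minimal sketch (the function name is illustrative; only the factors and weights come from the table):

```typescript
// Confidence as a weighted sum of the five factors above (each scored 0–1).
interface Factors {
  sourceQuality: number;
  contextCompleteness: number;
  patternMatching: number;
  consistency: number;
  recency: number;
}

const WEIGHTS: Factors = {
  sourceQuality: 0.3,
  contextCompleteness: 0.25,
  patternMatching: 0.2,
  consistency: 0.15,
  recency: 0.1,
};

function confidenceScore(f: Factors): number {
  return (Object.keys(WEIGHTS) as (keyof Factors)[]).reduce(
    (sum, k) => sum + WEIGHTS[k] * f[k],
    0
  );
}
```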

Score Interpretation

| Range | Meaning | Recommended Action |
|---|---|---|
| 0.9 - 1.0 | High confidence | Trust result, proceed with implementation |
| 0.7 - 0.89 | Good confidence | Review for edge cases |
| 0.5 - 0.69 | Moderate confidence | Validate with additional research |
| Below 0.5 | Low confidence | Manual verification required |

Implementation

interface AnalysisResult {
  data: any;
  confidence: number;
  factors: {
    sourceQuality: number;
    contextCompleteness: number;
    patternMatching: number;
    consistency: number;
    recency: number;
  };
  recommendations: string[];
}

Cascading Data Sources

The perform_research tool uses a cascading strategy to find answers:

Priority Order

  1. Project Files: Most authoritative, highest confidence
  2. Knowledge Graph: Architectural context and history
  3. Environment: Runtime configuration and deployment
  4. Web Search: External information (lowest confidence, requires Firecrawl)
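The cascade above can be sketched as trying each source tier in order and returning the first answer, tagged with that tier's baseline confidence. All names here (`Source`, `cascadingResearch`) are hypothetical, not the tool's real interface:

```typescript
// Illustrative cascade: consult sources in priority order, stop at the first hit.
interface ResearchAnswer {
  source: string;
  answer: string;
  confidence: number;
}

interface Source {
  name: string;
  confidence: number; // baseline confidence for this tier
  lookup: (question: string) => string | undefined;
}

function cascadingResearch(
  question: string,
  sources: Source[]
): ResearchAnswer | undefined {
  for (const s of sources) {
    const answer = s.lookup(question);
    if (answer !== undefined) {
      return { source: s.name, answer, confidence: s.confidence };
    }
  }
  return undefined; // no tier could answer
}
```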

Design Decisions

Decision 1: Dual Execution Modes

Problem: Users have varying needs: some want immediate results, while others prefer control over AI interactions.

Solution: Support both full AI execution and prompt-only modes, switching based on configuration.

Trade-offs:

  • Increased complexity in tool implementation
  • Better user flexibility and adoption
  • No vendor lock-in for AI provider

Decision 2: Local Knowledge Graph

Problem: AI models lack persistent memory between sessions.

Solution: Maintain a local knowledge graph that persists architectural relationships.

Trade-offs:

  • Additional storage requirements
  • Requires periodic synchronization
  • Enables offline context and faster responses

Decision 3: Tree-sitter for Code Analysis

Problem: Need accurate code understanding across multiple languages.

Solution: Use tree-sitter for AST-based analysis instead of regex patterns.

Trade-offs:

  • Native module compilation required during install
  • More accurate than pattern matching
  • Language support limited to tree-sitter grammars


Further Reading


Questions about AI architecture? → Open an Issue