
🛠️ Tool Design Philosophy

Understanding the design principles and architecture behind the MCP ADR Analysis Server's 37 specialized tools.


🎯 Overview

The MCP ADR Analysis Server implements a comprehensive suite of 37 specialized tools designed around the principle of "intelligent automation with human oversight." Each tool is carefully crafted to provide specific capabilities while maintaining consistency, reliability, and extensibility.

Key Design Principles

  • Single Responsibility - Each tool has one clear purpose
  • Consistent Interface - Uniform parameter and response patterns
  • Intelligent Defaults - Sensible defaults with full customization
  • Error Resilience - Graceful handling of edge cases
  • Extensibility - Easy to add new tools and capabilities

๐Ÿ—๏ธ Architecture and Designโ€‹

Tool Architecture Pattern

Every tool follows the same layered execution pattern: input validation, permission checks, tool-specific processing, and response formatting. The "How It Works" section below walks through each phase of that pipeline.

Tool Categories

🔍 Analysis Tools (8 tools)

  • Project ecosystem analysis
  • Architectural context extraction
  • Environment analysis
  • ADR discovery and review

📝 Generation Tools (6 tools)

  • ADR creation from various sources
  • TODO extraction and management
  • Bootstrap script generation
  • Interactive planning workflows

🔒 Security Tools (5 tools)

  • Content security analysis
  • Intelligent content masking
  • Custom pattern configuration
  • Security validation

📊 Validation Tools (4 tools)

  • Project health scoring
  • Progress tracking and comparison
  • Deployment readiness analysis
  • Guidance generation

🛠️ Workflow Tools (5 tools)

  • Troubleshooting workflows
  • Development guidance
  • Tool chain orchestration
  • Smart Git operations

🔬 Research Tools (4 tools)

  • Research question generation
  • Template creation
  • Research integration
  • Knowledge incorporation

📁 File Operations (3 tools)

  • File reading and writing
  • Directory listing
  • Cache management

⚙️ Configuration Tools (2 tools)

  • Output masking configuration
  • Action confirmation requests
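
One way to picture this catalogue is as a registry that maps tool names to categories, which is what makes uniform orchestration and discovery possible. The sketch below is illustrative (the category type and registry object are not part of the server's API); the tool names are ones that appear later on this page:

// Illustrative registry: tool names grouped by category.
type ToolCategory =
  | 'analysis'
  | 'generation'
  | 'security'
  | 'validation'
  | 'workflow'
  | 'research'
  | 'file-operations'
  | 'configuration';

const toolRegistry: Record<string, ToolCategory> = {
  analyze_project_ecosystem: 'analysis',
  get_architectural_context: 'analysis',
  generate_adrs_from_prd: 'generation',
  analyze_content_security: 'security',
  validate_content_masking: 'security',
  troubleshoot_guided_workflow: 'workflow',
  get_development_guidance: 'workflow',
  // ...remaining tools omitted
};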

🔄 How It Works

Tool Execution Pipeline

Phase 1: Input Processing

abstract class BaseTool {
  async execute(args: ToolArgs): Promise<ToolResult> {
    try {
      // 1. Input validation
      const validatedArgs = await this.validateInput(args);

      // 2. Authentication and authorization
      await this.checkPermissions(validatedArgs);

      // 3. Tool-specific processing
      const result = await this.processTool(validatedArgs);

      // 4. Response formatting
      return this.formatResponse(result);
    } catch (error) {
      return this.handleError(error);
    }
  }

  protected abstract validateInput(args: ToolArgs): Promise<ValidatedArgs>;
  protected abstract processTool(args: ValidatedArgs): Promise<ToolResult>;
}
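
Because execute() wraps everything in a try/catch and delegates failures to handleError, callers can rely on the returned promise resolving with a structured result rather than throwing. A usage sketch (the concrete tool and argument values are illustrative):

// Illustrative usage inside an async context; assumes ToolResult carries the
// success/data/error fields standardized in Phase 3.
async function runAnalysis(tool: BaseTool): Promise<void> {
  const result = await tool.execute({ projectPath: './my-project', analysisType: 'quick' });

  if (result.success) {
    console.log('Analysis data:', result.data);
  } else {
    console.error(`Tool failed: ${result.error?.code} - ${result.error?.message}`);
  }
}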

Phase 2: Tool Implementation Pattern

import { z } from 'zod';

class AnalyzeProjectEcosystemTool extends BaseTool {
  protected async validateInput(args: ToolArgs): Promise<ValidatedArgs> {
    // 1. Schema validation
    const schema = z.object({
      projectPath: z.string().min(1),
      analysisType: z.enum(['quick', 'comprehensive', 'deep']),
      includeDependencies: z.boolean().optional().default(true),
      maxDepth: z.number().min(1).max(10).optional().default(5),
    });

    const validated = schema.parse(args);

    // 2. Business logic validation
    await this.validateProjectPath(validated.projectPath);
    await this.checkAnalysisPermissions(validated);

    return validated;
  }

  protected async processTool(args: ValidatedArgs): Promise<ToolResult> {
    // 1. Check cache
    const cacheKey = this.generateCacheKey(args);
    const cached = await this.getFromCache(cacheKey);
    if (cached) return cached;

    // 2. Perform analysis
    const analysis = await this.performAnalysis(args);

    // 3. Cache results
    await this.setCache(cacheKey, analysis);

    return analysis;
  }

  private async performAnalysis(args: ValidatedArgs): Promise<AnalysisResult> {
    const [structure, dependencies, patterns] = await Promise.all([
      this.analyzeProjectStructure(args.projectPath),
      this.analyzeDependencies(args.projectPath),
      this.identifyArchitecturalPatterns(args.projectPath),
    ]);

    return {
      structure,
      dependencies,
      patterns,
      confidence: this.calculateConfidence(structure, dependencies, patterns),
      recommendations: await this.generateRecommendations(structure, patterns),
    };
  }
}
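
The caching helpers used in processTool (generateCacheKey, getFromCache, setCache) are not shown above. A minimal sketch of deterministic key generation, assuming Node's built-in crypto module and a flat arguments object:

import { createHash } from 'node:crypto';

// Hypothetical helper: hash the tool name plus its key-sorted arguments so the
// same request always maps to the same cache entry.
function generateCacheKey(toolName: string, args: Record<string, unknown>): string {
  const canonical = JSON.stringify(
    Object.fromEntries(Object.entries(args).sort(([a], [b]) => a.localeCompare(b)))
  );
  return `${toolName}:${createHash('sha256').update(canonical).digest('hex')}`;
}

// e.g. generateCacheKey('analyze_project_ecosystem', { projectPath: '.', analysisType: 'quick' })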

Phase 3: Response Standardization

interface StandardToolResponse {
  success: boolean;
  data?: any;
  error?: {
    code: string;
    message: string;
    details?: any;
  };
  metadata: {
    toolName: string;
    executionTime: number;
    cacheHit: boolean;
    timestamp: string;
  };
}

class ResponseFormatter {
  formatSuccess(data: any, metadata: ToolMetadata): StandardToolResponse {
    return {
      success: true,
      data: this.applyContentMasking(data),
      metadata: {
        toolName: metadata.toolName,
        executionTime: metadata.executionTime,
        cacheHit: metadata.cacheHit,
        timestamp: new Date().toISOString(),
      },
    };
  }

  formatError(error: ToolError, metadata: ToolMetadata): StandardToolResponse {
    return {
      success: false,
      error: {
        code: error.code,
        message: error.message,
        details: error.details,
      },
      metadata: {
        toolName: metadata.toolName,
        executionTime: metadata.executionTime,
        cacheHit: false,
        timestamp: new Date().toISOString(),
      },
    };
  }
}
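
For illustration, a successful response from the ecosystem analysis tool would look roughly like this; the field values are invented, and executionTime is assumed to be in milliseconds:

// Illustrative only: a hand-written StandardToolResponse instance.
const exampleResponse: StandardToolResponse = {
  success: true,
  data: {
    patterns: ['layered-architecture'],
    confidence: 0.87,
  },
  metadata: {
    toolName: 'analyze_project_ecosystem',
    executionTime: 2450, // assumed unit: milliseconds
    cacheHit: false,
    timestamp: '2025-01-01T12:00:00.000Z',
  },
};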

Tool Orchestration

Tool Chain Pattern:

class ToolChainOrchestrator {
  async executeChain(chain: ToolChain): Promise<ChainResult> {
    const results: ToolResult[] = [];
    const context: ChainContext = {};

    for (const step of chain.steps) {
      try {
        // 1. Prepare context for this step
        const stepContext = this.prepareContext(context, step);

        // 2. Execute tool
        const result = await this.executeTool(step.tool, stepContext);

        // 3. Update context with results
        context[step.name] = result;
        results.push(result);

        // 4. Check for early termination
        if (step.terminateOnError && !result.success) {
          break;
        }
      } catch (error) {
        return this.handleChainError(error, results);
      }
    }

    return this.aggregateResults(results);
  }
}
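
A usage sketch for the orchestrator. The ToolChain shape is inferred from the fields the loop reads (steps, name, tool, terminateOnError), and the step names and tools are only examples:

// Hypothetical chain definition, inferred from the orchestrator's step fields.
const readinessChain: ToolChain = {
  steps: [
    { name: 'ecosystem', tool: 'analyze_project_ecosystem', terminateOnError: true },
    { name: 'security', tool: 'analyze_content_security', terminateOnError: true },
    { name: 'guidance', tool: 'get_development_guidance', terminateOnError: false },
  ],
};

// Inside an async context:
const chainResult = await new ToolChainOrchestrator().executeChain(readinessChain);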

💡 Design Decisions

Decision 1: Uniform Tool Interface

Problem: Different tools had inconsistent interfaces, making them hard to use and maintain.
Solution: Standardize all tools on consistent input/output patterns and error handling.
Trade-offs:

  • ✅ Pros: Easier to use, maintain, and extend; consistent user experience
  • ❌ Cons: Some tools may have slightly more complex interfaces than needed

Decision 2: Schema-First Validation

Problem: Runtime errors from invalid inputs were hard to debug and handle.
Solution: Use Zod schemas for comprehensive input validation with detailed error messages.
Trade-offs:

  • ✅ Pros: Clear validation errors, type safety, self-documenting interfaces
  • ❌ Cons: Additional complexity, larger bundle size

Decision 3: Intelligent Caching

Problem: Repeated operations were slow and expensive, especially AI-powered analysis.
Solution: Implement multi-level caching with intelligent invalidation (sketched after the trade-offs below).
Trade-offs:

  • ✅ Pros: Dramatically faster response times, reduced costs
  • ❌ Cons: Cache invalidation complexity, potential stale data

Decision 4: Error Recovery Patterns

Problem: Tool failures could cascade and cause system-wide issues.
Solution: Implement graceful error handling with recovery mechanisms (see the sketch after the trade-offs).
Trade-offs:

  • ✅ Pros: System resilience, better user experience
  • ❌ Cons: More complex error handling logic

🔧 Tool Development Guidelines

Creating New Tools

1. Tool Template:

export class YourNewTool extends BaseTool {
  static readonly NAME = 'your_new_tool';
  static readonly DESCRIPTION = 'Brief description of what this tool does';

  protected async validateInput(args: ToolArgs): Promise<ValidatedArgs> {
    // Define your validation schema
    const schema = z.object({
      // Your parameters here
    });

    return schema.parse(args);
  }

  protected async processTool(args: ValidatedArgs): Promise<ToolResult> {
    // Implement your tool logic
    return {
      success: true,
      data: {
        // Your result data
      },
    };
  }
}

2. Testing Requirements (a minimal test sketch follows this list):

  • Unit tests for all public methods
  • Integration tests with real data
  • Error condition testing
  • Performance benchmarks
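
A minimal unit test for the template above, assuming a Vitest or Jest-style runner; the imports, arguments, and assertions are illustrative:

import { describe, expect, it } from 'vitest'; // or '@jest/globals'

describe('YourNewTool', () => {
  it('resolves with a structured error for invalid input', async () => {
    const tool = new YourNewTool();
    // execute() catches validation failures and resolves with success: false
    await expect(tool.execute({} as ToolArgs)).resolves.toMatchObject({ success: false });
  });

  it('produces a successful result for valid input', async () => {
    const tool = new YourNewTool();
    const result = await tool.execute({ /* valid parameters for your schema */ } as ToolArgs);
    expect(result.success).toBe(true);
  });
});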

3. Documentation Requirements:

  • Clear parameter descriptions
  • Usage examples
  • Error condition documentation
  • Performance characteristics

Tool Naming Conventions

Analysis Tools: analyze_* or get_*

  • analyze_project_ecosystem
  • get_architectural_context

Generation Tools: generate_* or create_*

  • generate_adrs_from_prd
  • create_rule_set

Security Tools: analyze_content_security, validate_*

  • analyze_content_security
  • validate_content_masking

Workflow Tools: troubleshoot_*, get_*_guidance

  • troubleshoot_guided_workflow
  • get_development_guidance

📊 Tool Performance Metrics

Current Tool Performance

| Tool Category    | Average Response Time | Cache Hit Rate | Error Rate |
| ---------------- | --------------------- | -------------- | ---------- |
| Analysis Tools   | 2.5s                  | 75%            | 2%         |
| Generation Tools | 8.2s                  | 60%            | 5%         |
| Security Tools   | 1.8s                  | 85%            | 1%         |
| Validation Tools | 3.1s                  | 70%            | 3%         |
| Workflow Tools   | 4.5s                  | 65%            | 4%         |
| Research Tools   | 12.3s                 | 45%            | 8%         |

Optimization Targets

  • Response Time: <5s for 95% of requests
  • Cache Hit Rate: >70% across all tools
  • Error Rate: <3% across all tools
  • Availability: 99.9% uptime


📚 Further Reading

Questions about tool design? → Open an Issue