
🤖 AI-Powered Workflow Orchestration Guide

This guide covers the advanced AI-powered features of the MCP ADR Analysis Server, including intelligent tool orchestration, human override capabilities, and systematic troubleshooting workflows.

🧠 Overview of AI-Powered Features

The MCP ADR Analysis Server includes several AI-powered tools that leverage OpenRouter.ai and advanced prompt engineering to provide intelligent workflow automation:

Core AI Tools

  • Tool Chain Orchestrator - Dynamic tool sequencing based on user intent
  • Troubleshooting Workflow - Systematic failure analysis with test plan generation
  • Smart Project Scoring - Cross-tool health assessment with AI optimization

Key Benefits

  • Hallucination Prevention - Reality checks and structured planning prevent AI confusion
  • Dynamic Workflow Generation - AI analyzes context to generate optimal tool sequences
  • Intelligent Fallbacks - Template-based approaches when AI services are unavailable
  • Cross-Tool Coordination - Tools work together for comprehensive project insights

🔗 Tool Chain Orchestrator

The Tool Chain Orchestrator is the central AI-powered planning system that generates intelligent tool execution sequences.

Core Operations

Generate Execution Plan

{
  "operation": "generate_plan",
  "userRequest": "Complete architectural analysis and generate implementation roadmap",
  "includeContext": true,
  "optimizeFor": "comprehensive"
}

What it does:

  • Analyzes user intent using OpenRouter.ai
  • Generates structured tool execution sequences
  • Provides dependency analysis and execution order
  • Includes confidence scoring and alternative approaches
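The plan format itself isn't reproduced here, but a plan with dependency analysis can be executed with an ordinary topological sort. A minimal sketch (the `plan` dictionary shape and its dependencies are hypothetical, not the orchestrator's actual schema):

```python
from graphlib import TopologicalSorter

# Hypothetical plan shape: each tool maps to the tools that must run first.
plan = {
    "analyze_project_ecosystem": [],
    "discover_existing_adrs": ["analyze_project_ecosystem"],
    "suggest_adrs": ["discover_existing_adrs"],
    "smart_score": ["suggest_adrs"],
}

# Resolve an execution order that respects every dependency.
order = list(TopologicalSorter(plan).static_order())
print(order)
# → ['analyze_project_ecosystem', 'discover_existing_adrs', 'suggest_adrs', 'smart_score']
```

Whatever schema the orchestrator returns, the dependency analysis it provides is what makes this kind of mechanical ordering possible.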

Analyze User Intent

{
  "operation": "analyze_intent",
  "userRequest": "Help me understand why my deployment is failing",
  "extractGoals": true
}

Use Cases:

  • Understanding complex user requests
  • Goal extraction from natural language
  • Intent classification for workflow selection

Suggest Relevant Tools

{
  "operation": "suggest_tools",
  "userRequest": "I need to improve my project's architectural documentation",
  "maxSuggestions": 5
}

Common Workflow Patterns

When an LLM gets confused, or when you simply want a systematic workflow, use these proven tool sequences:

Complete Project Analysis

Tools: analyze_project_ecosystem → discover_existing_adrs → suggest_adrs → smart_score

Generate Documentation

Tools: generate_adr_todo → generate_deployment_guidance → generate_rules

Security Audit and Fix

Tools: analyze_content_security → suggest_adrs → generate_rules → validate_rules

Deployment Readiness

Tools: compare_adr_progress → smart_score → generate_deployment_guidance → smart_git_push

Systematic Troubleshooting

Tools: troubleshoot_guided_workflow → smart_score → compare_adr_progress

Complete TODO Management

Tools: generate_adr_todo → manage_todo → compare_adr_progress → smart_score
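When the AI service is unavailable, these sequences can double as template fallbacks. A sketch of naive keyword-based selection (the matching logic is purely illustrative, not the server's actual intent analysis):

```python
# Map common intents to the proven tool sequences above.
WORKFLOW_PATTERNS = {
    "security": ["analyze_content_security", "suggest_adrs",
                 "generate_rules", "validate_rules"],
    "deploy": ["compare_adr_progress", "smart_score",
               "generate_deployment_guidance", "smart_git_push"],
    "troubleshoot": ["troubleshoot_guided_workflow", "smart_score",
                     "compare_adr_progress"],
    "analysis": ["analyze_project_ecosystem", "discover_existing_adrs",
                 "suggest_adrs", "smart_score"],
}

def pick_pattern(user_request: str) -> list[str]:
    """Naive keyword match; the real orchestrator uses AI intent analysis."""
    text = user_request.lower()
    for keyword, tools in WORKFLOW_PATTERNS.items():
        if keyword in text:
            return tools
    return WORKFLOW_PATTERNS["analysis"]  # default to a full analysis

print(pick_pattern("run a security audit on the codebase"))
```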

Advanced Features

Reality Check for Hallucination Detection

{
  "operation": "reality_check",
  "sessionContext": "Previous conversation context",
  "detectConfusion": true
}

Hallucination Indicators:

  • Repetitive tool suggestions
  • Circular reasoning patterns
  • Requests for non-existent tools
  • Inconsistent parameter suggestions
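Two of these indicators lend themselves to simple heuristics. An illustrative sketch (the `KNOWN_TOOLS` set and the repetition threshold are assumptions, not the server's actual detector):

```python
from collections import Counter

# Assumed subset of real tool names; the server knows the full registry.
KNOWN_TOOLS = {"analyze_project_ecosystem", "suggest_adrs", "smart_score",
               "generate_rules", "troubleshoot_guided_workflow"}

def confusion_signals(suggestions: list[str]) -> list[str]:
    """Flag two indicators: repetitive suggestions and non-existent tools."""
    signals = []
    counts = Counter(suggestions)
    if any(n >= 3 for n in counts.values()):  # threshold is illustrative
        signals.append("repetitive tool suggestions")
    unknown = [s for s in suggestions if s not in KNOWN_TOOLS]
    if unknown:
        signals.append(f"non-existent tools requested: {unknown}")
    return signals

print(confusion_signals(
    ["smart_score", "smart_score", "smart_score", "adr_magic_fixer"]))
```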

Session Guidance for Long Conversations

{
  "operation": "session_guidance",
  "conversationLength": "long",
  "provideSummary": true
}

Provides:

  • Conversation summary and progress
  • Suggested next steps
  • Warning about potential confusion
  • Reset recommendations

🚨 Human Override System

The Human Override system forces AI-powered planning when an LLM gets confused or stuck in a loop.

When to Use Human Override

Indicators that you need a human override:

  • LLM asking repetitive questions
  • Circular conversation patterns
  • LLM claiming tools don't exist
  • Inconsistent or contradictory suggestions
  • LLM seems "stuck" or confused

Core Operations

Force AI Planning

{
  "taskDescription": "Set up complete ADR infrastructure with deployment pipeline",
  "forceExecution": true,
  "includeContext": true
}

What it does:

  • Bypasses current LLM confusion
  • Forces fresh AI analysis through OpenRouter.ai
  • Generates structured execution plans
  • Provides clear command schemas for LLM consumption

Extract Goals from Natural Language

{
  "taskDescription": "The build is broken and tests are failing, need to fix everything",
  "forceExecution": true,
  "extractGoals": true
}

Goal Extraction Example:

  • Input: "The build is broken and tests are failing, need to fix everything"
  • Extracted Goals: ["Fix build issues", "Resolve test failures", "Validate system stability"]
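When OpenRouter.ai is unavailable, a template-based fallback can roughly approximate goal extraction. A sketch (purely illustrative; the AI path produces cleaner, action-phrased goals like those above):

```python
import re

def extract_goals_fallback(task: str) -> list[str]:
    """Crude fallback: split on commas and 'and', keep non-trivial fragments.
    Not the server's extractor -- just a template-based approximation."""
    fragments = re.split(r",|\band\b", task)
    return [f.strip() for f in fragments if len(f.strip()) > 3]

print(extract_goals_fallback(
    "The build is broken and tests are failing, need to fix everything"))
# → ['The build is broken', 'tests are failing', 'need to fix everything']
```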

Integration with Knowledge Graph

The Human Override system integrates with the Knowledge Graph to:

  • Track human intervention patterns
  • Record forced execution contexts
  • Analyze effectiveness of override strategies
  • Provide analytics on LLM confusion patterns

🔧 Troubleshooting Guided Workflow

Systematic problem-solving with ADR/TODO alignment and AI-powered test plan generation.

Supported Failure Types

  • test_failure - Unit/integration test failures
  • deployment_failure - Production deployment issues
  • build_failure - Compilation or build process failures
  • runtime_error - Application runtime exceptions
  • performance_issue - Performance degradation problems
  • security_issue - Security vulnerabilities or breaches

Core Operations

Analyze Failure

{
  "operation": "analyze_failure",
  "failureType": "deployment_failure",
  "description": "Kubernetes deployment failing with connection timeouts",
  "severity": "high",
  "context": {
    "environment": "production",
    "lastWorkingVersion": "v2.1.0",
    "errorDetails": "Connection timeout after 30s"
  }
}

Generate AI-Powered Test Plan

{
  "operation": "generate_test_plan",
  "failureType": "build_failure",
  "description": "TypeScript compilation failing after dependency update",
  "severity": "medium"
}

AI-Generated Output:

{
  "testPlan": {
    "diagnosticCommands": [
      "npx tsc --noEmit --listFiles",
      "npm ls typescript",
      "npx tsc --showConfig"
    ],
    "validationSteps": [
      "Verify TypeScript version compatibility",
      "Check for conflicting type definitions",
      "Validate tsconfig.json configuration"
    ],
    "expectedOutcomes": [
      "TypeScript version should be compatible with dependencies",
      "No duplicate type definitions should exist",
      "Configuration should match project requirements"
    ]
  }
}
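A test plan in this shape can be executed mechanically. A sketch of a diagnostic runner (the injectable `runner` parameter is an assumption added for dry-running, not part of the server):

```python
import shlex
import subprocess

# Abbreviated test plan in the shape shown above.
test_plan = {
    "diagnosticCommands": ["npx tsc --noEmit --listFiles", "npm ls typescript"],
    "validationSteps": ["Verify TypeScript version compatibility"],
}

def run_diagnostics(plan, runner=None):
    """Run each diagnostic command and collect its exit code.
    `runner` is injectable so the plan can be dry-run in tests."""
    runner = runner or (lambda cmd: subprocess.run(
        shlex.split(cmd), capture_output=True).returncode)
    return {cmd: runner(cmd) for cmd in plan["diagnosticCommands"]}

# Dry run with a stub runner that pretends every command succeeded:
results = run_diagnostics(test_plan, runner=lambda cmd: 0)
print(results)
```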

Full Workflow Integration

{
  "operation": "full_workflow",
  "failureType": "security_issue",
  "description": "Potential sensitive data exposure in logs",
  "severity": "critical"
}

Full Workflow Steps:

  1. Analyze failure with AI-powered assessment
  2. Generate specific test plan with commands
  3. Execute analyze_content_security for sensitive data detection
  4. Run suggest_adrs for security-related architectural decisions
  5. Use generate_rules to create security compliance rules
  6. Execute smart_score to assess security posture improvement

📊 Smart Project Scoring System

Cross-tool coordination for comprehensive project health assessment.

Core Operations

Recalculate All Scores

{
  "operation": "recalculate_scores",
  "updateSources": true,
  "includeOptimization": true
}

Synchronize Cross-Tool Scores

{
  "operation": "sync_scores",
  "rebalanceWeights": true,
  "focusAreas": ["todo", "architecture", "security", "deployment"]
}

Comprehensive Health Diagnostics

{
  "operation": "diagnose_scores",
  "includeRecommendations": true,
  "dataFreshness": true
}

AI-Driven Weight Optimization

{
  "operation": "optimize_weights",
  "projectType": "web-application",
  "optimizationGoal": "deployment_readiness"
}

Score Components

Default Scoring Weights:

  • TODO Completion: 30%
  • Architecture Compliance: 25%
  • Security Posture: 20%
  • Deployment Readiness: 15%
  • Code Quality: 10%
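A health score built from these weights is a weighted average of the per-component scores. A sketch of the arithmetic (the component keys, 0-100 scale, and example scores are assumptions, not the server's exact implementation):

```python
# Default weights from the list above; they must sum to 1.0.
DEFAULT_WEIGHTS = {
    "todo_completion": 0.30,
    "architecture_compliance": 0.25,
    "security_posture": 0.20,
    "deployment_readiness": 0.15,
    "code_quality": 0.10,
}

def project_health(component_scores: dict[str, float],
                   weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Weighted average of per-component scores (each assumed 0-100)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * component_scores[k] for k in weights)

# Hypothetical component scores for illustration:
scores = {"todo_completion": 80, "architecture_compliance": 70,
          "security_posture": 90, "deployment_readiness": 60,
          "code_quality": 75}
print(round(project_health(scores), 1))  # → 76.0
```

The project-specific optimization below amounts to shifting these weights, e.g. raising security and architecture weights for enterprise applications.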

Project-Specific Optimization:

  • Startup Projects: Higher weight on TODO completion and deployment readiness
  • Enterprise Applications: Higher weight on security and architecture compliance
  • Open Source: Higher weight on code quality and documentation

🔄 Best Practices for AI-Powered Workflows

1. Start with Tool Orchestration

For complex tasks, always begin with:

{
  "operation": "generate_plan",
  "userRequest": "Your specific goal",
  "includeContext": true
}

2. Use Human Override When Stuck

If the conversation becomes circular:

{
  "taskDescription": "Clear description of what you want to achieve",
  "forceExecution": true
}

3. Leverage Troubleshooting for Problems

For any failure or issue:

{
  "operation": "full_workflow",
  "failureType": "appropriate_type",
  "description": "Specific problem description"
}

4. Monitor Health Continuously

Regular health checks:

{
  "operation": "diagnose_scores",
  "includeRecommendations": true
}

5. Validate with Reality Checks

Prevent AI confusion:

{
  "operation": "reality_check",
  "detectConfusion": true
}

🚀 Advanced Integration Patterns

Workflow Chaining

Combine AI-powered tools for comprehensive workflows:

  1. Initial Planning: tool_chain_orchestrator generates plan
  2. Execution: Follow generated tool sequence
  3. Validation: smart_score assesses results
  4. Troubleshooting: troubleshoot_guided_workflow if issues arise
  5. Restart: Start a fresh session if confusion occurs

Context Preservation

Maintain context across tool executions:

  • Include conversation history in orchestration requests
  • Use knowledge graph to track intent progressions
  • Leverage smart scoring for continuous assessment

Fallback Strategies

Always have fallbacks when AI is unavailable:

  • Predefined task patterns in orchestrator
  • Template-based test plans in troubleshooting
  • Manual workflow guides for common scenarios
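The fallback pattern itself is straightforward: attempt the AI path, and drop to a template on failure. A sketch (both planner callables are hypothetical stand-ins):

```python
def generate_plan_with_fallback(user_request, ai_planner, template_planner):
    """Prefer AI planning; fall back to a template plan if the AI call fails.
    Both planners are hypothetical callables returning a list of tool names."""
    try:
        return ai_planner(user_request), "ai"
    except Exception:
        return template_planner(user_request), "template"

# Simulate an unavailable AI service:
def broken_ai(_request):
    raise ConnectionError("OpenRouter.ai unreachable")

plan, source = generate_plan_with_fallback(
    "full analysis", broken_ai,
    lambda _request: ["analyze_project_ecosystem", "smart_score"])
print(source, plan)  # → template ['analyze_project_ecosystem', 'smart_score']
```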

🎯 Common Scenarios and Solutions

Scenario 1: LLM Keeps Asking the Same Questions

Solution: Use Human Override

{
  "taskDescription": "Complete architectural analysis for microservices project",
  "forceExecution": true
}

Scenario 2: Complex Multi-Step Task

Solution: Use Tool Orchestration

{
  "operation": "generate_plan",
  "userRequest": "Migrate monolith to microservices with full documentation",
  "optimizeFor": "comprehensive"
}

Scenario 3: Deployment Keeps Failing

Solution: Use Troubleshooting Workflow

{
  "operation": "full_workflow",
  "failureType": "deployment_failure",
  "description": "Specific deployment error details"
}

Scenario 4: Project Health Declining

Solution: Use Smart Scoring Diagnostics

{
  "operation": "diagnose_scores",
  "includeRecommendations": true,
  "focusAreas": ["todo", "architecture", "deployment"]
}

🔮 Future Enhancements

Planned AI Improvements

  1. Enhanced Hallucination Detection - More sophisticated confusion pattern recognition
  2. Learning from Patterns - AI learns from successful workflow patterns
  3. Cross-Project Insights - Share knowledge across different projects
  4. Advanced Context Understanding - Better long-term conversation context
  5. Predictive Troubleshooting - Anticipate issues before they occur

Integration Roadmap

  • CI/CD Integration - Automated workflow orchestration in pipelines
  • IDE Extensions - Real-time AI guidance during development
  • Team Collaboration - Shared AI insights and workflow patterns
  • Advanced Analytics - Project health prediction and optimization

Need Help?

  • Check the main README for basic setup
  • Review specific tool guides for detailed parameters
  • Open an issue on GitHub for AI-related problems