# ADR-015: APE (Automatic Prompt Engineering) Optimization Strategy
## Status

Accepted

## Date

2025-12-12
## Context

The Automatic Prompt Engineering (APE) module (`src/utils/automatic-prompt-engineering.ts`) is a 926-line utility that generates delegation prompts for AI-driven prompt optimization. Analysis identified several optimization opportunities aligned with Anthropic's MCP best practices.
### Current State Issues
- Token Overhead: Each APE call generates ~2,643 tokens with 40-60% redundancy
- Hardcoded Descriptions: 22 description strings across 4 inline maps regenerated on every call
- No Template Reuse: Prompts regenerated from scratch for each request (0% reuse)
- Memory Footprint: ~1.2KB of description strings loaded even when not used
### Research Findings

Based on analysis of Anthropic's MCP documentation and the APE research listed under References, the following key principles were identified:
- Progressive Discovery: Load tool definitions on-demand, not upfront
- Code-Based Interfaces: Use functions instead of verbose prompt templates
- Filter at Execution: Process data in execution environment before passing to model
- Template Caching: Reuse static portions of prompts across calls
## Decision
Implement a phased APE optimization approach:
### Phase 1: Extract Description Constants (Implemented)

Created `src/config/ape-descriptions.ts` containing:

- `STRATEGY_DESCRIPTIONS`: Generation strategy descriptions
- `CRITERION_DESCRIPTIONS`: Evaluation criterion descriptions
- `SELECTION_STRATEGY_DESCRIPTIONS`: Selection strategy descriptions
- `TOOL_OPTIMIZATION_PRIORITIES`: Tool-specific optimization priorities
- Helper functions with fallbacks for safe access
- Template caching utilities for future optimization
Updated `src/utils/automatic-prompt-engineering.ts` to:
- Import descriptions from centralized config
- Remove 4 inline description maps (~90 lines removed)
- Use cached helper functions for description access
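A minimal sketch of what the extracted config module might look like. The constant names come from the ADR; the example entries, the fallback text, and the helper name `getStrategyDescription` are illustrative assumptions, not the actual implementation.

```typescript
// Hypothetical sketch of src/config/ape-descriptions.ts.
// Entry keys/values shown here are placeholders for illustration.
export const STRATEGY_DESCRIPTIONS: Record<string, string> = {
  "few-shot": "Generate prompt variants from a handful of curated examples",
  "zero-shot": "Generate prompt variants directly from the task description",
};

const FALLBACK_DESCRIPTION = "No description available";

// Safe accessor: unknown keys fall back to a default string instead of
// returning undefined, so callers never embed "undefined" in a prompt.
export function getStrategyDescription(strategy: string): string {
  return STRATEGY_DESCRIPTIONS[strategy] ?? FALLBACK_DESCRIPTION;
}
```

Centralizing the strings this way means they load once per module import and can later be localized or cached without touching the main APE module.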
### Phase 2: Template Caching (Infrastructure Ready)

Added template caching infrastructure:

- `getCachedTemplateSection()`: Cache static template portions
- `clearTemplateCache()`: Clear cache for testing/updates
- `getTemplateCacheStats()`: Monitor cache effectiveness
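The three utilities above could be backed by something as simple as a keyed `Map`. Only the function names come from the ADR; the builder-callback signature, internal `Map`, and stats shape below are assumptions for illustration.

```typescript
// Hypothetical template cache backing the Phase 2 utilities.
const templateCache = new Map<string, string>();
let hits = 0;
let misses = 0;

// Return the cached section for `key`, building it once on first use.
export function getCachedTemplateSection(key: string, build: () => string): string {
  const cached = templateCache.get(key);
  if (cached !== undefined) {
    hits++;
    return cached;
  }
  misses++;
  const section = build();
  templateCache.set(key, section);
  return section;
}

// Reset cache state, e.g. between test runs or after description updates.
export function clearTemplateCache(): void {
  templateCache.clear();
  hits = 0;
  misses = 0;
}

// Expose counters so cache effectiveness can be monitored.
export function getTemplateCacheStats(): { size: number; hits: number; misses: number } {
  return { size: templateCache.size, hits, misses };
}
```

Because the static portions of an APE prompt dominate its length, reusing them across calls is what would unlock the 30-40% token reduction estimated below.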
### Future Phases (Not Yet Implemented)

#### Phase 3: Separate Configuration Delivery

- Send config as JSON metadata separate from the prompt
- Enable template reuse across configurations
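One possible shape for Phase 3, sketched under assumptions: the `ApeRequest` interface and `buildRequest` helper are hypothetical names, not part of the current codebase.

```typescript
// Hypothetical Phase 3 request shape: the prompt template stays static
// (and therefore cacheable), while per-call settings travel as JSON.
interface ApeRequest {
  prompt: string; // reusable template text, identical across calls
  config: Record<string, unknown>; // per-call settings, not inlined in the prompt
}

function buildRequest(template: string, config: Record<string, unknown>): ApeRequest {
  // The template is passed through untouched, so identical templates
  // can be deduplicated or cached across differing configurations.
  return { prompt: template, config };
}
```

Keeping configuration out of the prompt body is what makes the cached template sections from Phase 2 reusable across requests with different settings.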
#### Phase 4: Progressive Tool Discovery

- Add context-aware tool filtering to `ListToolsRequest`
- Implement `search_tools` with detail levels
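Progressive discovery could look like the sketch below: return only the requested level of detail so full JSON schemas are loaded on demand rather than upfront. The `DetailLevel` union, the sample tool entry, and the `searchTools` signature are assumptions for illustration.

```typescript
// Illustrative search_tools-style lookup with detail levels.
type DetailLevel = "name" | "summary" | "full";

interface ToolInfo {
  name: string;
  summary: string;
  schema: object; // full JSON schema, only returned when explicitly requested
}

const TOOLS: ToolInfo[] = [
  { name: "optimize_prompt", summary: "Run APE optimization on a prompt", schema: { type: "object" } },
];

function searchTools(query: string, detail: DetailLevel = "name") {
  return TOOLS.filter((t) => t.name.includes(query)).map((t) => {
    if (detail === "name") return { name: t.name };
    if (detail === "summary") return { name: t.name, summary: t.summary };
    return t; // "full": include the schema only when asked for
  });
}
```

Callers that just need to know which tools exist pay only for names; the verbose schemas that dominate `ListToolsRequest` payloads are fetched only at `"full"` detail.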
## Consequences

### Positive
- Memory Reduction: ~1.2KB savings per module load from extracted constants
- Code Reduction: ~90 lines removed from main APE module
- Maintainability: Centralized descriptions easier to update/translate
- Caching Ready: Infrastructure in place for 30-40% token reduction
- MCP Alignment: Better aligned with Anthropic's best practices
### Negative

- Additional File: New `ape-descriptions.ts` file to maintain
- Import Overhead: Slight increase in import complexity
- Migration Risk: Changes to the established APE module
## Metrics
| Metric | Before | After | Improvement |
|---|---|---|---|
| APE Module Lines | 926 | ~836 | -10% |
| Inline Description Maps | 4 | 0 | -100% |
| Hardcoded Strings | 22 | 0 (in main module) | Centralized |
| Template Reuse | 0% | Ready for 30-40% | Infrastructure |
## Implementation

### Files Created

- `src/config/ape-descriptions.ts`: Centralized APE descriptions and caching utilities

### Files Modified

- `src/utils/automatic-prompt-engineering.ts`: Import from config, remove inline maps
### Testing

- Existing APE tests (`tests/ape.test.ts`) verify that functionality is preserved
- No changes to the APE public API
## References
- Anthropic MCP Introduction
- Anthropic Code Execution with MCP
- Automatic Prompt Engineer (APE) Guide
- APE Research Summary - DeepLearning.AI
- MCP Best Practices - Mike's Blog
## Related ADRs
- ADR-002: AI Integration and Advanced Prompting Strategy
- ADR-014: CE-MCP Architecture