This document provides API reference documentation for the key services and infrastructure components in the MCP GitHub Project Manager.
## AIServiceFactory

Factory for creating AI service instances with the Vercel AI SDK. Provides model access, resilience features, and graceful degradation.

Location: `src/services/ai/AIServiceFactory.ts`
### getInstance()

Get the singleton instance of `AIServiceFactory`.

```typescript
static getInstance(): AIServiceFactory
```

Returns: The singleton `AIServiceFactory` instance.

Example:

```typescript
const factory = AIServiceFactory.getInstance();
```

### getModel()

Get an AI model instance for a specific use case.
```typescript
getModel(type: 'main' | 'research' | 'fallback' | 'prd'): LanguageModel | null
```

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `type` | string | Model type: `'main'`, `'research'`, `'fallback'`, or `'prd'` |

Returns: A `LanguageModel` instance, or `null` if the model is unavailable.

Example:

```typescript
const model = factory.getModel('main');
if (model) {
  const result = await generateText({ model, prompt: 'Hello' });
}
```

### getMainModel()

Get the main AI model for general task generation.
```typescript
getMainModel(): LanguageModel | null
```

### getResearchModel()

Get the research AI model for enhanced analysis.

```typescript
getResearchModel(): LanguageModel | null
```

### getFallbackModel()

Get the fallback AI model used when the main model fails.

```typescript
getFallbackModel(): LanguageModel | null
```

### getPRDModel()

Get the PRD AI model for PRD generation.

```typescript
getPRDModel(): LanguageModel | null
```

### getBestAvailableModel()

Get the best available model with fallback logic.

```typescript
getBestAvailableModel(): LanguageModel | null
```

Tries models in order: main → fallback → prd → research.
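The fallback order can be expressed as a small selection helper. A sketch (the `ModelType` and `pickBestModel` names are illustrative, not part of the actual API):

```typescript
type ModelType = 'main' | 'fallback' | 'prd' | 'research';

// Order in which the best-available lookup tries the configured models.
const FALLBACK_ORDER: ModelType[] = ['main', 'fallback', 'prd', 'research'];

// Pick the first configured model; returns null when none is available.
function pickBestModel(available: Set<ModelType>): ModelType | null {
  for (const type of FALLBACK_ORDER) {
    if (available.has(type)) return type;
  }
  return null;
}
```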
### isAIAvailable()

Check whether any AI models are configured and available.

```typescript
isAIAvailable(): boolean
```

Returns: `true` if at least one model is available.

### validateConfiguration()

Validate the AI service configuration.

```typescript
validateConfiguration(): ConfigStatus
```

Returns: A configuration status object.

```typescript
interface ConfigStatus {
  isValid: boolean;
  availableModels: string[];
  unavailableModels: string[];
  warnings: string[];
}
```

### enableResilience()

Enable resilience features for AI calls.
```typescript
enableResilience(config?: AIResilienceConfig): void
```

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `config.maxRetries` | number | Maximum retry attempts (default: 3) |
| `config.timeoutMs` | number | Timeout per operation in ms (default: 30000) |
| `config.halfOpenAfterMs` | number | Time before the circuit half-opens, in ms (default: 30000) |
| `config.consecutiveFailures` | number | Consecutive failures before the circuit opens (default: 5) |

Example:

```typescript
factory.enableResilience({
  maxRetries: 2,
  timeoutMs: 15000
});
```

### isResilienceEnabled()

Check if resilience is enabled.
```typescript
isResilienceEnabled(): boolean
```

### getCircuitState()

Get the current circuit breaker state.

```typescript
getCircuitState(): 'closed' | 'open' | 'half-open' | 'disabled'
```

### executeWithResilience()

Execute an AI operation with resilience protection.
```typescript
async executeWithResilience<T>(
  operation: () => Promise<T>,
  fallback?: () => T | DegradedResult
): Promise<T | DegradedResult>
```

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `operation` | function | Async operation to execute |
| `fallback` | function | Optional fallback for graceful degradation |

Returns: The operation result, or a `DegradedResult` if the fallback was used.

Example:

```typescript
factory.enableResilience();
const result = await factory.executeWithResilience(
  () => generateText({ model, prompt: 'Analyze this' }),
  () => ({ degraded: true, message: 'Using cached response' })
);
if ('degraded' in result) {
  console.log('AI unavailable:', result.message);
} else {
  console.log('AI response:', result);
}
```

## ProjectManagementService

Central facade for all project management operations. Delegates to specialized services.
Location: `src/services/ProjectManagementService.ts`

```typescript
constructor(factory: GitHubRepositoryFactory)
```

Create a new GitHub project.

```typescript
async createProject(params: CreateProjectParams): Promise<Project>
```

Get project details.

```typescript
async getProject(projectId: string): Promise<Project | null>
```

Update a project.

```typescript
async updateProject(projectId: string, params: UpdateProjectParams): Promise<Project>
```

Create a new issue.

```typescript
async createIssue(params: CreateIssueParams): Promise<Issue>
```

Create a new milestone.

```typescript
async createMilestone(params: CreateMilestoneParams): Promise<Milestone>
```

Create a new sprint.

```typescript
async createSprint(params: CreateSprintParams): Promise<Sprint>
```

Note: See `src/services/ProjectManagementService.ts` for the full list of 34+ methods.
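The facade pattern the service follows can be sketched in miniature (all names below are hypothetical, greatly simplified stand-ins for the real delegation to specialized services):

```typescript
// A facade exposes one entry point and delegates to specialized collaborators.
interface IssueStore {
  create(title: string): { id: number; title: string };
}

class InMemoryIssueStore implements IssueStore {
  private nextId = 1;
  create(title: string) {
    return { id: this.nextId++, title };
  }
}

class ProjectFacade {
  constructor(private issues: IssueStore) {}
  // Delegates issue creation to the specialized store.
  createIssue(title: string) {
    return this.issues.create(title);
  }
}
```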
## HealthService

Centralized health check logic for system monitoring.

Location: `src/infrastructure/health/HealthService.ts`

```typescript
constructor(deps?: HealthServiceDependencies)
```

Dependencies:

```typescript
interface HealthServiceDependencies {
  aiFactory?: AIServiceFactory;
  aiResilience?: AIResiliencePolicy;
  cache?: ResourceCache;
}
```

### check()

Perform a comprehensive health check.
```typescript
async check(): Promise<HealthStatus>
```

Returns: The complete health status.

```typescript
interface HealthStatus {
  status: 'healthy' | 'degraded' | 'unhealthy';
  timestamp: string;
  uptime: number;
  services: {
    github: {
      connected: boolean;
      rateLimit?: { remaining: number; limit: number; };
    };
    ai: {
      available: boolean;
      circuitState: 'closed' | 'open' | 'half-open' | 'disabled';
      models: { available: string[]; unavailable: string[]; };
    };
    cache: {
      entries: number;
      persistenceEnabled: boolean;
      lastPersist?: string;
    };
  };
}
```

Status determination:

- `unhealthy`: GitHub is not connected
- `degraded`: AI is unavailable or the circuit is open
- `healthy`: All services operational
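These rules amount to a simple precedence check. A sketch (the `deriveStatus` helper is illustrative, not the actual implementation):

```typescript
type Status = 'healthy' | 'degraded' | 'unhealthy';

// Precedence: a GitHub outage trumps AI degradation, which trumps healthy.
function deriveStatus(
  githubConnected: boolean,
  aiAvailable: boolean,
  circuitOpen: boolean
): Status {
  if (!githubConnected) return 'unhealthy';
  if (!aiAvailable || circuitOpen) return 'degraded';
  return 'healthy';
}
```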
Example:

```typescript
const healthService = new HealthService({
  aiFactory: AIServiceFactory.getInstance(),
  cache: ResourceCache.getInstance()
});
const status = await healthService.check();
if (status.status === 'degraded') {
  console.log('System running in degraded mode');
}
```

## CircuitBreakerService

Wraps the Cockatiel circuit breaker for resilient operations.
Location: `src/infrastructure/resilience/CircuitBreakerService.ts`

```typescript
constructor(name: string, config?: CircuitBreakerConfig)
```

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `name` | string | Identifier used in logging |
| `config.halfOpenAfter` | number | Time before the circuit tests recovery (default: 30000 ms) |
| `config.consecutiveFailures` | number | Consecutive failures before the circuit opens (default: 5) |
### execute()

Execute an operation through the circuit breaker.

```typescript
async execute<T>(fn: () => Promise<T>): Promise<T>
```

Behavior:

- Circuit closed: the operation executes normally
- Circuit open: the operation fails fast without executing
- Circuit half-open: the operation executes to test recovery
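The state transitions can be illustrated with a minimal breaker (a simplified sketch; the real service wraps Cockatiel's implementation):

```typescript
class MiniBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private maxFailures: number, private halfOpenAfterMs: number) {}

  // Closed until maxFailures consecutive failures; open until the
  // half-open window elapses, after which a trial call is allowed.
  state(now: number): 'closed' | 'open' | 'half-open' {
    if (this.failures < this.maxFailures) return 'closed';
    return now - this.openedAt >= this.halfOpenAfterMs ? 'half-open' : 'open';
  }

  recordFailure(now: number): void {
    this.failures++;
    if (this.failures === this.maxFailures) this.openedAt = now;
  }

  recordSuccess(): void {
    this.failures = 0;
  }
}
```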
### getState()

Get the current circuit state.

```typescript
getState(): 'closed' | 'open' | 'half-open'
```

### isOpen()

Check whether the circuit is currently blocking requests.

```typescript
isOpen(): boolean
```

Example:
```typescript
const breaker = new CircuitBreakerService('API', {
  consecutiveFailures: 3,
  halfOpenAfter: 10000
});
try {
  const result = await breaker.execute(() => fetchAPI());
} catch (error) {
  if (breaker.isOpen()) {
    console.log('Circuit is open, service unavailable');
  }
}
```

## AIResiliencePolicy

Composed resilience policy for AI service calls.
Location: `src/infrastructure/resilience/AIResiliencePolicy.ts`

```typescript
constructor(config?: AIResilienceConfig)
```

Configuration:

```typescript
interface AIResilienceConfig {
  maxRetries?: number;          // Default: 3
  timeoutMs?: number;           // Default: 30000
  halfOpenAfterMs?: number;     // Default: 30000
  consecutiveFailures?: number; // Default: 5
}
```

### execute()

Execute an operation with full resilience protection.
```typescript
async execute<T>(
  operation: () => Promise<T>,
  fallbackFn?: () => T | DegradedResult
): Promise<T | DegradedResult>
```

Protection layers (outer to inner):

- Fallback: catches all failures
- Retry: retries with exponential backoff
- Circuit breaker: prevents cascading failures
- Timeout: ensures timely completion
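The layering can be sketched as nested wrappers (a simplified illustration of the composition, not the Cockatiel-based implementation; the circuit-breaker layer is omitted for brevity):

```typescript
type Op<T> = () => Promise<T>;

// Innermost layer: reject if the operation exceeds ms.
const withTimeout = <T>(op: Op<T>, ms: number): Op<T> => () =>
  Promise.race([
    op(),
    new Promise<T>((_, rej) => setTimeout(() => rej(new Error('timeout')), ms)),
  ]);

// Retry layer: re-run the wrapped operation up to `retries` extra times.
const withRetry = <T>(op: Op<T>, retries: number): Op<T> => async () => {
  let lastErr: unknown;
  for (let i = 0; i <= retries; i++) {
    try { return await op(); } catch (e) { lastErr = e; }
  }
  throw lastErr;
};

// Outermost layer: on any failure, return the fallback value.
const withFallback = <T>(op: Op<T>, fallback: () => T): Op<T> => async () => {
  try { return await op(); } catch { return fallback(); }
};
```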
### getCircuitState()

Get the circuit breaker state.

```typescript
getCircuitState(): 'closed' | 'open' | 'half-open'
```

### isCircuitOpen()

Check whether the circuit is open.

```typescript
isCircuitOpen(): boolean
```

### getConfig()

Get the resolved configuration.

```typescript
getConfig(): Readonly<Required<AIResilienceConfig>>
```

Example:
```typescript
const policy = new AIResiliencePolicy({
  maxRetries: 2,
  timeoutMs: 5000
});
const result = await policy.execute(
  () => aiService.generateText(prompt),
  () => ({ degraded: true, message: 'AI unavailable' })
);
```

## CorrelationContext

AsyncLocalStorage-based correlation ID tracking for request tracing.
Location: `src/infrastructure/observability/CorrelationContext.ts`

### startTrace()

Start a new trace for an operation.

```typescript
async function startTrace<T>(
  operation: string,
  fn: () => Promise<T>
): Promise<T>
```

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `operation` | string | Name of the operation being traced |
| `fn` | function | Async operation to execute |

Logs to stderr:

```json
{"timestamp": "...", "type": "trace", "correlationId": "uuid", "operation": "...", "status": "start"}
{"timestamp": "...", "type": "trace", "correlationId": "uuid", "operation": "...", "status": "success", "durationMs": 123}
```

### getCorrelationId()

Get the current correlation ID.
```typescript
function getCorrelationId(): string | undefined
```

Returns: The correlation ID if called within a trace, `undefined` otherwise.

### getTraceContext()

Get the full trace context.

```typescript
function getTraceContext(): TraceContext | undefined

interface TraceContext {
  correlationId: string;
  startTime: number;
  operation: string;
}
```

Example:
```typescript
await startTrace('processRequest', async () => {
  console.log(`Trace ID: ${getCorrelationId()}`);
  // Any nested calls can access the same correlation ID
  return await processData();
});
```

## ResourceCache

In-memory cache with optional persistence.
Location: `src/infrastructure/cache/ResourceCache.ts`

### getInstance()

Get the singleton cache instance.

```typescript
static getInstance(): ResourceCache
```

### set()

Cache a resource.

```typescript
async set<T>(
  type: ResourceType,
  id: string,
  value: T,
  options?: ResourceCacheOptions
): Promise<void>
```

Options:

```typescript
interface ResourceCacheOptions {
  ttl?: number;           // Time-to-live in ms (default: 1 hour)
  tags?: string[];        // Tags for filtering
  namespaces?: string[];  // Namespaces for grouping
}
```

### get()

Get a cached resource.
```typescript
async get<T extends Resource>(
  type: ResourceType,
  id: string,
  options?: ResourceCacheOptions
): Promise<T | null>
```

### getByType()

Get all resources of a type.

```typescript
async getByType<T extends Resource>(
  type: ResourceType,
  options?: ResourceCacheOptions
): Promise<T[]>
```

### getByTag()

Get resources by tag.

```typescript
async getByTag<T extends Resource>(
  tag: string,
  type?: ResourceType,
  options?: ResourceCacheOptions
): Promise<T[]>
```

### enablePersistence()

Enable cache persistence to disk.

```typescript
enablePersistence(filePath?: string): void
```

Default path: `~/.cache/mcp-github-pm/resource-cache.json`

### persist()

Manually trigger cache persistence.

```typescript
async persist(): Promise<void>
```

### getStats()

Get cache statistics.
```typescript
getStats(): CacheStats

interface CacheStats {
  size: number;
  tagCount: number;
  typeCount: number;
  namespaceCount: number;
  persistenceEnabled: boolean;
  lastPersist?: string;
}
```

Example:
```typescript
const cache = ResourceCache.getInstance();
cache.enablePersistence();
await cache.set('project', 'proj_123', projectData, {
  ttl: 3600000,
  tags: ['active']
});
const project = await cache.get('project', 'proj_123');
const activeProjects = await cache.getByTag('active', 'project');
```

## TracingLogger

Logger that includes the correlation ID in every JSON log entry.
Location: `src/infrastructure/observability/TracingLogger.ts`

```typescript
constructor(context?: string)
```

Parameters:

| Parameter | Type | Description |
|---|---|---|
| `context` | string | Logger context name (e.g., a service name) |

Log an info-level message.

```typescript
info(message: string, data?: Record<string, unknown>): void
```

Log a warning-level message.

```typescript
warn(message: string, data?: Record<string, unknown>): void
```

Log an error-level message.

```typescript
error(message: string, error?: Error, data?: Record<string, unknown>): void
```

Log a debug-level message.

```typescript
debug(message: string, data?: Record<string, unknown>): void
```

Log format:
```json
{
  "timestamp": "2024-01-31T12:00:00.000Z",
  "level": "info",
  "correlationId": "uuid-from-trace",
  "context": "MyService",
  "message": "Operation completed",
  "data": { "key": "value" }
}
```

Example:
```typescript
const logger = new TracingLogger('MyService');
await startTrace('operation', async () => {
  logger.info('Starting operation', { input: data });
  // ...
  logger.info('Operation complete', { result: output });
});
```

## Types

### DegradedResult

Returned when the AI service uses a fallback.
```typescript
interface DegradedResult {
  degraded: true;
  message: string;
}
```

### HealthStatus

System health status from `HealthService`.
```typescript
interface HealthStatus {
  status: 'healthy' | 'degraded' | 'unhealthy';
  timestamp: string;
  uptime: number;
  services: ServiceHealthStatus;
}
```

### CircuitBreakerState

Circuit breaker state.

```typescript
type CircuitBreakerState = 'closed' | 'open' | 'half-open';
```

### ResourceType

Supported resource types for caching.
```typescript
type ResourceType =
  | 'project'
  | 'issue'
  | 'pull_request'
  | 'milestone'
  | 'sprint'
  | 'label'
  | 'user'
  | 'comment'
  | 'review';
```

## Environment Variables

| Variable | Description | Required |
|---|---|---|
| GITHUB_TOKEN | GitHub personal access token | Yes |
| ANTHROPIC_API_KEY | Anthropic API key for Claude models | No* |
| OPENAI_API_KEY | OpenAI API key for GPT models | No* |
| GOOGLE_API_KEY | Google API key for Gemini models | No* |
| PERPLEXITY_API_KEY | Perplexity API key | No* |
| AI_MAIN_MODEL | Main AI model (e.g., claude-3-5-sonnet-20241022) | No |
| AI_RESEARCH_MODEL | Research AI model | No |
| AI_FALLBACK_MODEL | Fallback AI model | No |
| AI_PRD_MODEL | PRD generation AI model | No |
*At least one AI provider key is required for AI features.
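The requirements in the table can be checked at startup. A sketch (the `validateEnv` helper is hypothetical, not part of the actual API):

```typescript
// Any one of these keys enables AI features, per the table above.
const AI_KEYS = ['ANTHROPIC_API_KEY', 'OPENAI_API_KEY', 'GOOGLE_API_KEY', 'PERPLEXITY_API_KEY'];

// Returns a list of warnings; empty means the environment is usable.
function validateEnv(env: Record<string, string | undefined>): string[] {
  const warnings: string[] = [];
  if (!env.GITHUB_TOKEN) warnings.push('GITHUB_TOKEN is required');
  if (!AI_KEYS.some((k) => env[k])) warnings.push('No AI provider key set; AI features disabled');
  return warnings;
}
```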
Models are auto-detected based on name:
| Prefix | Provider |
|---|---|
| `claude-` | Anthropic |
| `gpt-`, `o1` | OpenAI |
| `gemini-` | Google |
| `llama`, `sonar`, `perplexity` | Perplexity |
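Prefix-based detection can be sketched as follows (the `detectProvider` helper is illustrative; the real mapping lives in the factory):

```typescript
type Provider = 'anthropic' | 'openai' | 'google' | 'perplexity' | null;

// Map a model name to its provider by prefix/substring, per the table above.
function detectProvider(model: string): Provider {
  if (model.startsWith('claude-')) return 'anthropic';
  if (model.startsWith('gpt-') || model.startsWith('o1')) return 'openai';
  if (model.startsWith('gemini-')) return 'google';
  if (/llama|sonar|perplexity/.test(model)) return 'perplexity';
  return null;
}
```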
Example .env:
GITHUB_TOKEN=ghp_xxxxx
ANTHROPIC_API_KEY=sk-ant-xxxxx
AI_MAIN_MODEL=claude-3-5-sonnet-20241022
AI_FALLBACK_MODEL=gpt-4o| Component | Location |
|---|---|
| AIServiceFactory | src/services/ai/AIServiceFactory.ts |
| ProjectManagementService | src/services/ProjectManagementService.ts |
| HealthService | src/infrastructure/health/HealthService.ts |
| CircuitBreakerService | src/infrastructure/resilience/CircuitBreakerService.ts |
| AIResiliencePolicy | src/infrastructure/resilience/AIResiliencePolicy.ts |
| CorrelationContext | src/infrastructure/observability/CorrelationContext.ts |
| ResourceCache | src/infrastructure/cache/ResourceCache.ts |
| TracingLogger | src/infrastructure/observability/TracingLogger.ts |
| DI Container | src/container.ts |
Generated: 2026-01-31 MCP GitHub Project Manager v1.0