API Reference

This document provides API reference documentation for the key services and infrastructure components in the MCP GitHub Project Manager.

Table of Contents

  1. Services
  2. Infrastructure
  3. Types
  4. Configuration

Services

AIServiceFactory

Factory for creating AI service instances with the Vercel AI SDK. Provides model access, resilience features, and graceful degradation.

Location: src/services/ai/AIServiceFactory.ts

Static Methods

getInstance()

Get the singleton instance of AIServiceFactory.

static getInstance(): AIServiceFactory

Returns: The singleton AIServiceFactory instance

Example:

const factory = AIServiceFactory.getInstance();

Instance Methods

getModel(type)

Get an AI model instance for a specific use case.

getModel(type: 'main' | 'research' | 'fallback' | 'prd'): LanguageModel | null

Parameters:

Parameter Type Description
type string Model type: 'main', 'research', 'fallback', or 'prd'

Returns: LanguageModel instance or null if unavailable

Example:

const model = factory.getModel('main');
if (model) {
  const result = await generateText({ model, prompt: 'Hello' });
}

getMainModel()

Get the main AI model for general task generation.

getMainModel(): LanguageModel | null

getResearchModel()

Get the research AI model for enhanced analysis.

getResearchModel(): LanguageModel | null

getFallbackModel()

Get the fallback AI model, used when the main model fails.

getFallbackModel(): LanguageModel | null

getPRDModel()

Get the PRD AI model for PRD generation.

getPRDModel(): LanguageModel | null

getBestAvailableModel()

Get the best available model with fallback logic.

getBestAvailableModel(): LanguageModel | null

Tries models in order: main -> fallback -> prd -> research
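The ordering can be sketched as a simple preference loop. This is illustrative only: `pickBestModel` and its lookup callback are hypothetical stand-ins, not part of the actual factory API.

```typescript
// Illustrative sketch of the documented preference order; not the actual source.
type ModelType = 'main' | 'research' | 'fallback' | 'prd';

function pickBestModel<M>(lookup: (type: ModelType) => M | null): M | null {
  // Documented order: main -> fallback -> prd -> research.
  const order: ModelType[] = ['main', 'fallback', 'prd', 'research'];
  for (const type of order) {
    const model = lookup(type);
    if (model) return model;
  }
  return null; // nothing configured
}
```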

isAIAvailable()

Check if any AI models are configured and available.

isAIAvailable(): boolean

Returns: true if at least one model is available

validateConfiguration()

Validate the AI service configuration.

validateConfiguration(): ConfigStatus

Returns: Configuration status object

interface ConfigStatus {
  isValid: boolean;
  availableModels: string[];
  unavailableModels: string[];
  warnings: string[];
}

enableResilience(config?)

Enable resilience features for AI calls.

enableResilience(config?: AIResilienceConfig): void

Parameters:

Parameter Type Description
config.maxRetries number Max retry attempts (default: 3)
config.timeoutMs number Timeout per operation (default: 30000)
config.halfOpenAfterMs number Circuit half-open time (default: 30000)
config.consecutiveFailures number Failures before circuit opens (default: 5)

Example:

factory.enableResilience({
  maxRetries: 2,
  timeoutMs: 15000
});

isResilienceEnabled()

Check if resilience is enabled.

isResilienceEnabled(): boolean

getCircuitState()

Get the current circuit breaker state.

getCircuitState(): 'closed' | 'open' | 'half-open' | 'disabled'

executeWithResilience(operation, fallback?)

Execute an AI operation with resilience protection.

async executeWithResilience<T>(
  operation: () => Promise<T>,
  fallback?: () => T | DegradedResult
): Promise<T | DegradedResult>

Parameters:

Parameter Type Description
operation function Async operation to execute
fallback function Optional fallback for graceful degradation

Returns: Operation result or DegradedResult

Example:

factory.enableResilience();

const result = await factory.executeWithResilience(
  () => generateText({ model, prompt: 'Analyze this' }),
  () => ({ degraded: true, message: 'Using cached response' })
);

if ('degraded' in result) {
  console.log('AI unavailable:', result.message);
} else {
  console.log('AI response:', result);
}

ProjectManagementService

Central facade for all project management operations. Delegates to specialized services.

Location: src/services/ProjectManagementService.ts

Constructor

constructor(factory: GitHubRepositoryFactory)

Key Methods

createProject(params)

Create a new GitHub project.

async createProject(params: CreateProjectParams): Promise<Project>

getProject(projectId)

Get project details.

async getProject(projectId: string): Promise<Project | null>

updateProject(projectId, params)

Update a project.

async updateProject(projectId: string, params: UpdateProjectParams): Promise<Project>

createIssue(params)

Create a new issue.

async createIssue(params: CreateIssueParams): Promise<Issue>

createMilestone(params)

Create a new milestone.

async createMilestone(params: CreateMilestoneParams): Promise<Milestone>

createSprint(params)

Create a new sprint.

async createSprint(params: CreateSprintParams): Promise<Sprint>

Note: See src/services/ProjectManagementService.ts for the full list of 34+ methods.


HealthService

Centralized health check logic for system monitoring.

Location: src/infrastructure/health/HealthService.ts

Constructor

constructor(deps?: HealthServiceDependencies)

Dependencies:

interface HealthServiceDependencies {
  aiFactory?: AIServiceFactory;
  aiResilience?: AIResiliencePolicy;
  cache?: ResourceCache;
}

Methods

check()

Perform a comprehensive health check.

async check(): Promise<HealthStatus>

Returns: Complete health status

interface HealthStatus {
  status: 'healthy' | 'degraded' | 'unhealthy';
  timestamp: string;
  uptime: number;
  services: {
    github: {
      connected: boolean;
      rateLimit?: { remaining: number; limit: number; };
    };
    ai: {
      available: boolean;
      circuitState: 'closed' | 'open' | 'half-open' | 'disabled';
      models: { available: string[]; unavailable: string[]; };
    };
    cache: {
      entries: number;
      persistenceEnabled: boolean;
      lastPersist?: string;
    };
  };
}

Status determination:

  • unhealthy: GitHub is not connected
  • degraded: AI unavailable or circuit is open
  • healthy: All services operational

Example:

const healthService = new HealthService({
  aiFactory: AIServiceFactory.getInstance(),
  cache: ResourceCache.getInstance()
});

const status = await healthService.check();
if (status.status === 'degraded') {
  console.log('System running in degraded mode');
}
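The three status rules amount to a small decision function. The sketch below is illustrative; `determineStatus` is a hypothetical helper, not part of HealthService's actual code.

```typescript
// Illustrative mapping of the documented status rules.
type Status = 'healthy' | 'degraded' | 'unhealthy';

function determineStatus(input: {
  githubConnected: boolean;
  aiAvailable: boolean;
  circuitState: 'closed' | 'open' | 'half-open' | 'disabled';
}): Status {
  if (!input.githubConnected) return 'unhealthy'; // GitHub down wins
  if (!input.aiAvailable || input.circuitState === 'open') return 'degraded';
  return 'healthy'; // all services operational
}
```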

Infrastructure

CircuitBreakerService

Wraps a Cockatiel circuit breaker for resilient operations.

Location: src/infrastructure/resilience/CircuitBreakerService.ts

Constructor

constructor(name: string, config?: CircuitBreakerConfig)

Parameters:

Parameter Type Description
name string Identifier for logging
config.halfOpenAfter number Time before circuit tests recovery (default: 30000ms)
config.consecutiveFailures number Failures before circuit opens (default: 5)

Methods

execute(fn)

Execute an operation through the circuit breaker.

async execute<T>(fn: () => Promise<T>): Promise<T>

Behavior:

  • Circuit closed: Operation executes normally
  • Circuit open: Operation fails fast without executing
  • Circuit half-open: Operation executes to test recovery
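These three behaviors can be sketched with a minimal breaker. This is illustrative only: the real CircuitBreakerService delegates to Cockatiel, and `MiniBreaker` is a hypothetical stand-in.

```typescript
// Minimal circuit-breaker sketch of the closed/open/half-open behavior.
type State = 'closed' | 'open' | 'half-open';

class MiniBreaker {
  private state: State = 'closed';
  private failures = 0;
  private openedAt = 0;
  private readonly consecutiveFailures: number;
  private readonly halfOpenAfter: number;

  constructor(consecutiveFailures = 5, halfOpenAfter = 30_000) {
    this.consecutiveFailures = consecutiveFailures;
    this.halfOpenAfter = halfOpenAfter;
  }

  getState(): State {
    if (this.state === 'open' && Date.now() - this.openedAt >= this.halfOpenAfter) {
      this.state = 'half-open'; // time to test recovery
    }
    return this.state;
  }

  async execute<T>(fn: () => Promise<T>): Promise<T> {
    if (this.getState() === 'open') {
      throw new Error('circuit open: failing fast'); // open: do not execute
    }
    try {
      const result = await fn();
      this.failures = 0;
      this.state = 'closed'; // success closes a half-open circuit
      return result;
    } catch (err) {
      this.failures++;
      if (this.failures >= this.consecutiveFailures || this.state === 'half-open') {
        this.state = 'open';
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}
```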

getState()

Get the current circuit state.

getState(): 'closed' | 'open' | 'half-open'

isOpen()

Check if circuit is blocking requests.

isOpen(): boolean

Example:

const breaker = new CircuitBreakerService('API', {
  consecutiveFailures: 3,
  halfOpenAfter: 10000
});

try {
  const result = await breaker.execute(() => fetchAPI());
} catch (error) {
  if (breaker.isOpen()) {
    console.log('Circuit is open, service unavailable');
  }
}

AIResiliencePolicy

Composed resilience policy for AI service calls.

Location: src/infrastructure/resilience/AIResiliencePolicy.ts

Constructor

constructor(config?: AIResilienceConfig)

Configuration:

interface AIResilienceConfig {
  maxRetries?: number;        // Default: 3
  timeoutMs?: number;         // Default: 30000
  halfOpenAfterMs?: number;   // Default: 30000
  consecutiveFailures?: number; // Default: 5
}

Methods

execute(operation, fallback?)

Execute an operation with full resilience protection.

async execute<T>(
  operation: () => Promise<T>,
  fallbackFn?: () => T | DegradedResult
): Promise<T | DegradedResult>

Protection layers (outer to inner):

  1. Fallback - catches all failures
  2. Retry - retries with exponential backoff
  3. Circuit Breaker - prevents cascading failures
  4. Timeout - ensures timely completion
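The layering can be sketched by nesting plain functions, fallback outermost and timeout innermost. This is illustrative only (the real AIResiliencePolicy composes Cockatiel policies, and the circuit-breaker layer is omitted here for brevity):

```typescript
// Illustrative composition of fallback -> retry -> timeout.
async function withTimeout<T>(fn: () => Promise<T>, ms: number): Promise<T> {
  let timer!: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error('timeout')), ms);
  });
  try {
    return await Promise.race([fn(), timeout]);
  } finally {
    clearTimeout(timer); // avoid leaking the timer on success
  }
}

async function withRetry<T>(fn: () => Promise<T>, maxRetries: number): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Exponential backoff between attempts (capped for the sketch).
      await new Promise((r) => setTimeout(r, Math.min(2 ** attempt * 100, 2000)));
    }
  }
  throw lastErr;
}

async function executeSketch<T, F>(
  operation: () => Promise<T>,
  fallback: () => F,
  { maxRetries = 3, timeoutMs = 30_000 } = {},
): Promise<T | F> {
  try {
    // Fallback wraps retry, which wraps the timeout-guarded operation.
    return await withRetry(() => withTimeout(operation, timeoutMs), maxRetries);
  } catch {
    return fallback(); // graceful degradation
  }
}
```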

getCircuitState()

Get circuit breaker state.

getCircuitState(): 'closed' | 'open' | 'half-open'

isCircuitOpen()

Check if circuit is open.

isCircuitOpen(): boolean

getConfig()

Get the resolved configuration.

getConfig(): Readonly<Required<AIResilienceConfig>>

Example:

const policy = new AIResiliencePolicy({
  maxRetries: 2,
  timeoutMs: 5000
});

const result = await policy.execute(
  () => aiService.generateText(prompt),
  () => ({ degraded: true, message: 'AI unavailable' })
);

CorrelationContext

AsyncLocalStorage-based correlation ID tracking for request tracing.

Location: src/infrastructure/observability/CorrelationContext.ts

Functions

startTrace(operation, fn)

Start a new trace for an operation.

async function startTrace<T>(
  operation: string,
  fn: () => Promise<T>
): Promise<T>

Parameters:

Parameter Type Description
operation string Name of the operation being traced
fn function Async operation to execute

Logs to stderr:

{"timestamp": "...", "type": "trace", "correlationId": "uuid", "operation": "...", "status": "start"}
{"timestamp": "...", "type": "trace", "correlationId": "uuid", "operation": "...", "status": "success", "durationMs": 123}

getCorrelationId()

Get the current correlation ID.

function getCorrelationId(): string | undefined

Returns: Correlation ID if within a trace, undefined otherwise

getTraceContext()

Get the full trace context.

function getTraceContext(): TraceContext | undefined

interface TraceContext {
  correlationId: string;
  startTime: number;
  operation: string;
}

Example:

await startTrace('processRequest', async () => {
  console.log(`Trace ID: ${getCorrelationId()}`);
  // Any nested calls can access the same correlation ID
  return await processData();
});
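A sketch of how such functions can be built on Node's AsyncLocalStorage; this is illustrative, not the actual CorrelationContext source (trace logging is omitted):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';
import { randomUUID } from 'node:crypto';

interface TraceContext {
  correlationId: string;
  startTime: number;
  operation: string;
}

const storage = new AsyncLocalStorage<TraceContext>();

function getCorrelationId(): string | undefined {
  return storage.getStore()?.correlationId;
}

async function startTrace<T>(operation: string, fn: () => Promise<T>): Promise<T> {
  const ctx: TraceContext = {
    correlationId: randomUUID(),
    startTime: Date.now(),
    operation,
  };
  // Every await inside fn sees the same context, including nested calls.
  return storage.run(ctx, fn);
}
```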

ResourceCache

In-memory cache with optional persistence.

Location: src/infrastructure/cache/ResourceCache.ts

Static Methods

getInstance()

Get the singleton cache instance.

static getInstance(): ResourceCache

Instance Methods

set(type, id, value, options?)

Cache a resource.

async set<T>(
  type: ResourceType,
  id: string,
  value: T,
  options?: ResourceCacheOptions
): Promise<void>

Options:

interface ResourceCacheOptions {
  ttl?: number;           // Time-to-live in ms (default: 1 hour)
  tags?: string[];        // Tags for filtering
  namespaces?: string[];  // Namespaces for grouping
}

get(type, id, options?)

Get a cached resource.

async get<T extends Resource>(
  type: ResourceType,
  id: string,
  options?: ResourceCacheOptions
): Promise<T | null>

getByType(type, options?)

Get all resources of a type.

async getByType<T extends Resource>(
  type: ResourceType,
  options?: ResourceCacheOptions
): Promise<T[]>

getByTag(tag, type?, options?)

Get resources by tag.

async getByTag<T extends Resource>(
  tag: string,
  type?: ResourceType,
  options?: ResourceCacheOptions
): Promise<T[]>

enablePersistence(filePath?)

Enable cache persistence to disk.

enablePersistence(filePath?: string): void

Default path: ~/.cache/mcp-github-pm/resource-cache.json

persist()

Manually trigger cache persistence.

async persist(): Promise<void>

getStats()

Get cache statistics.

getStats(): CacheStats

interface CacheStats {
  size: number;
  tagCount: number;
  typeCount: number;
  namespaceCount: number;
  persistenceEnabled: boolean;
  lastPersist?: string;
}

Example:

const cache = ResourceCache.getInstance();
cache.enablePersistence();

await cache.set('project', 'proj_123', projectData, {
  ttl: 3600000,
  tags: ['active']
});

const project = await cache.get('project', 'proj_123');
const activeProjects = await cache.getByTag('active', 'project');
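The TTL semantics above can be sketched with a simple in-memory map. This is illustrative only: the real ResourceCache also supports tags, namespaces, and persistence, and `MiniCache` is a hypothetical stand-in.

```typescript
// Minimal TTL-cache sketch with lazy expiry on read.
interface Entry<T> {
  value: T;
  expiresAt: number;
}

class MiniCache {
  private store = new Map<string, Entry<unknown>>();

  set<T>(type: string, id: string, value: T, ttl = 3_600_000): void {
    // Default TTL of 1 hour, matching the documented default.
    this.store.set(`${type}:${id}`, { value, expiresAt: Date.now() + ttl });
  }

  get<T>(type: string, id: string): T | null {
    const entry = this.store.get(`${type}:${id}`);
    if (!entry) return null;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(`${type}:${id}`); // lazily evict expired entries
      return null;
    }
    return entry.value as T;
  }
}
```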

TracingLogger

Logger that includes the correlation ID in every JSON log entry.

Location: src/infrastructure/observability/TracingLogger.ts

Constructor

constructor(context?: string)

Parameters:

Parameter Type Description
context string Logger context name (e.g., service name)

Methods

info(message, data?)

Log info level message.

info(message: string, data?: Record<string, unknown>): void

warn(message, data?)

Log warning level message.

warn(message: string, data?: Record<string, unknown>): void

error(message, error?, data?)

Log error level message.

error(message: string, error?: Error, data?: Record<string, unknown>): void

debug(message, data?)

Log debug level message.

debug(message: string, data?: Record<string, unknown>): void

Log format:

{
  "timestamp": "2024-01-31T12:00:00.000Z",
  "level": "info",
  "correlationId": "uuid-from-trace",
  "context": "MyService",
  "message": "Operation completed",
  "data": { "key": "value" }
}

Example:

const logger = new TracingLogger('MyService');

await startTrace('operation', async () => {
  logger.info('Starting operation', { input: data });
  // ...
  logger.info('Operation complete', { result: output });
});
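Building an entry of this shape is a one-liner over JSON.stringify. The sketch below is illustrative: `makeLogEntry` is hypothetical, and the correlation ID is passed in explicitly, whereas the real TracingLogger reads it from the active trace.

```typescript
// Illustrative construction of the documented log-entry shape.
function makeLogEntry(
  level: 'info' | 'warn' | 'error' | 'debug',
  context: string,
  message: string,
  data?: Record<string, unknown>,
  correlationId?: string,
): string {
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    level,
    ...(correlationId ? { correlationId } : {}), // omitted outside a trace
    context,
    message,
    ...(data ? { data } : {}),
  });
}
```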

Types

DegradedResult

Returned when the AI service uses a fallback.

interface DegradedResult {
  degraded: true;
  message: string;
}
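Since resilient calls return a `T | DegradedResult` union, a type guard is a convenient way to narrow the result. The guard below is a hypothetical helper, not part of the library:

```typescript
interface DegradedResult {
  degraded: true;
  message: string;
}

// Hypothetical type guard for narrowing a T | DegradedResult union.
function isDegraded(result: unknown): result is DegradedResult {
  return (
    typeof result === 'object' &&
    result !== null &&
    (result as DegradedResult).degraded === true
  );
}
```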

HealthStatus

System health status from HealthService.

interface HealthStatus {
  status: 'healthy' | 'degraded' | 'unhealthy';
  timestamp: string;
  uptime: number;
  services: ServiceHealthStatus;
}

CircuitBreakerState

Circuit breaker state.

type CircuitBreakerState = 'closed' | 'open' | 'half-open';

ResourceType

Supported resource types for caching.

type ResourceType =
  | 'project'
  | 'issue'
  | 'pull_request'
  | 'milestone'
  | 'sprint'
  | 'label'
  | 'user'
  | 'comment'
  | 'review';

Configuration

Environment Variables

Variable Description Required
GITHUB_TOKEN GitHub personal access token Yes
ANTHROPIC_API_KEY Anthropic API key for Claude models No*
OPENAI_API_KEY OpenAI API key for GPT models No*
GOOGLE_API_KEY Google API key for Gemini models No*
PERPLEXITY_API_KEY Perplexity API key No*
AI_MAIN_MODEL Main AI model (e.g., claude-3-5-sonnet-20241022) No
AI_RESEARCH_MODEL Research AI model No
AI_FALLBACK_MODEL Fallback AI model No
AI_PRD_MODEL PRD generation AI model No

*At least one AI provider key is required for AI features.

Model Configuration

Providers are auto-detected from the model name prefix:

Prefix Provider
claude- Anthropic
gpt-, o1 OpenAI
gemini- Google
llama, sonar, perplexity Perplexity
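The prefix matching in the table can be sketched as below. This is an illustrative, hypothetical helper, not the project's actual detection code:

```typescript
// Illustrative provider detection from a model-name prefix.
type Provider = 'anthropic' | 'openai' | 'google' | 'perplexity' | 'unknown';

function detectProvider(model: string): Provider {
  if (model.startsWith('claude-')) return 'anthropic';
  if (model.startsWith('gpt-') || model.startsWith('o1')) return 'openai';
  if (model.startsWith('gemini-')) return 'google';
  if (/^(llama|sonar|perplexity)/.test(model)) return 'perplexity';
  return 'unknown'; // unrecognized prefix
}
```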

Example .env:

GITHUB_TOKEN=ghp_xxxxx
ANTHROPIC_API_KEY=sk-ant-xxxxx
AI_MAIN_MODEL=claude-3-5-sonnet-20241022
AI_FALLBACK_MODEL=gpt-4o

Source Files

Component Location
AIServiceFactory src/services/ai/AIServiceFactory.ts
ProjectManagementService src/services/ProjectManagementService.ts
HealthService src/infrastructure/health/HealthService.ts
CircuitBreakerService src/infrastructure/resilience/CircuitBreakerService.ts
AIResiliencePolicy src/infrastructure/resilience/AIResiliencePolicy.ts
CorrelationContext src/infrastructure/observability/CorrelationContext.ts
ResourceCache src/infrastructure/cache/ResourceCache.ts
TracingLogger src/infrastructure/observability/TracingLogger.ts
DI Container src/container.ts

Generated: 2026-01-31 MCP GitHub Project Manager v1.0