
API Reference

Complete API documentation for the Three-Layer AI Framework.

Layer 1: UX Automation

RAGChatbot

A chatbot with Retrieval-Augmented Generation capabilities.

from src.layer1.rag_chatbot import RAGChatbot

RAGChatbot(
    knowledge_base: str,
    model: str = "gpt-4",
    embedding_model: str = "text-embedding-ada-002",
    temperature: float = 0.7,
    max_tokens: int = 1000
)

Parameters

  • knowledge_base (str): Path to knowledge base directory
  • model (str, optional): OpenAI model name. Default: "gpt-4"
  • embedding_model (str, optional): Embedding model. Default: "text-embedding-ada-002"
  • temperature (float, optional): Sampling temperature. Default: 0.7
  • max_tokens (int, optional): Maximum response length. Default: 1000

Methods

chat(query: str) -> str

Process a user query and return a response.

response = bot.chat("What is your return policy?")

add_documents(documents: List[str]) -> None

Add documents to the knowledge base.

bot.add_documents([
    "./policies/return-policy.pdf",
    "./faq/common-questions.md"
])

clear_history() -> None

Clear conversation history.

bot.clear_history()
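Under the hood, RAG retrieval ranks knowledge-base documents by embedding similarity to the query before passing them to the model. A minimal sketch of that ranking step (illustrative only, not the framework's actual implementation):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k documents most similar to the query."""
    scored = sorted(
        enumerate(doc_vecs),
        key=lambda iv: cosine_similarity(query_vec, iv[1]),
        reverse=True,
    )
    return [i for i, _ in scored[:k]]
```

The retrieved documents are then injected into the prompt as context, which is what grounds the chatbot's answers in the knowledge base.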

Layer 2: Data Intelligence

KnowledgeGraph

Build and query enterprise knowledge graphs.

from src.layer2.knowledge_graph import KnowledgeGraph

KnowledgeGraph(
    database: str = "neo4j",
    uri: str = "bolt://localhost:7687",
    username: str = "neo4j",
    password: str = "password"
)

Methods

ingest_documents(documents: List[str]) -> None

Ingest documents into the knowledge graph.

kg.ingest_documents([
    "./data/policies/*.pdf",
    "./data/processes/*.docx"
])

query(query_text: str) -> List[Dict]

Query the graph using natural language.

results = kg.query("Find all processes related to onboarding")
# Returns: [{"entity": "...", "relationship": "...", "score": 0.95}]

add_relationship(from_entity: str, to_entity: str, relationship_type: str) -> None

Manually add a relationship.

kg.add_relationship(
    from_entity="Employee",
    to_entity="Department",
    relationship_type="WORKS_IN"
)
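Conceptually, each call records a directed, typed edge, i.e. a (from_entity, relationship_type, to_entity) triple. A toy in-memory model of that structure (illustrative; the framework stores these in Neo4j):

```python
from collections import defaultdict

# Adjacency map: entity -> list of (relationship_type, target) edges
triples = defaultdict(list)

def add_relationship(from_entity, to_entity, relationship_type):
    """Record a directed, typed edge between two entities."""
    triples[from_entity].append((relationship_type, to_entity))

add_relationship("Employee", "Department", "WORKS_IN")
```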

DataPipeline

Real-time data pipeline management.

from src.layer2.data_pipeline import DataPipeline

DataPipeline(
    source: str,
    destination: str,
    schedule: str = "realtime"
)

Methods

add_transformation(name: str, function: Callable) -> None

Add a transformation function.

def clean_data(df):
    return df.dropna()

pipeline.add_transformation("clean", clean_data)

start() -> None

Start the pipeline.

pipeline.start()

stop() -> None

Stop the pipeline.

pipeline.stop()
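Registered transformations run in registration order on each record flowing through the pipeline. A minimal stand-in shows the chaining behavior (this class is illustrative, not the framework's implementation):

```python
class TransformChain:
    """Stand-in showing how named transformations chain in order."""

    def __init__(self):
        self._steps = []

    def add_transformation(self, name, function):
        self._steps.append((name, function))

    def apply(self, record):
        # Each step receives the output of the previous one.
        for _name, fn in self._steps:
            record = fn(record)
        return record

chain = TransformChain()
chain.add_transformation("strip", str.strip)
chain.add_transformation("lower", str.lower)
```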

Layer 3: Strategic Intelligence

StrategicForecastingEngine

Generate strategic forecasts and scenarios.

from src.layer3.azure_ai_foundry import StrategicForecastingEngine

StrategicForecastingEngine(
    workspace: str,
    compute_target: str = "cpu-cluster"
)

Methods

train_forecast(data, horizon: int, frequency: str, metrics: List[str])

Train a forecasting model.

model = engine.train_forecast(
    data=historical_data,
    horizon=12,  # months
    frequency="M",
    metrics=["revenue", "costs"]
)

predict(model, periods: int) -> DataFrame

Generate predictions.

forecast = engine.predict(model, periods=12)

generate_scenarios(base_assumptions: Dict, uncertainty_ranges: Dict) -> List[Dict]

Generate strategic scenarios.

scenarios = engine.generate_scenarios(
    base_assumptions={"growth": 0.05},
    uncertainty_ranges={"growth": (0.02, 0.08)}
)
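The pattern here is to expand each uncertainty range into low/base/high cases around the base assumption. A simplified sketch of that expansion (illustrative only; the engine's actual scenario logic may differ):

```python
def sketch_scenarios(base_assumptions, uncertainty_ranges):
    """Expand each uncertainty range into low/base/high scenarios."""
    scenarios = []
    for name, pick in (("low", 0), ("base", None), ("high", 1)):
        assumptions = dict(base_assumptions)
        for key, bounds in uncertainty_ranges.items():
            if pick is not None:
                # bounds is a (low, high) tuple; pick one end.
                assumptions[key] = bounds[pick]
        scenarios.append({"name": name, "assumptions": assumptions})
    return scenarios

scenarios = sketch_scenarios({"growth": 0.05}, {"growth": (0.02, 0.08)})
```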

DashboardGenerator

Generate executive dashboards and reports.

from src.layer3.executive_dashboard import DashboardGenerator

generator = DashboardGenerator()

Methods

create_board_report(data_sources, time_period, include_forecast=True)

Create a board-ready report.

report = generator.create_board_report(
    data_sources=["finance", "sales"],
    time_period="Q4_2024",
    include_forecast=True
)

save_pdf(filename: str) -> None

Save the report as a PDF. Note that this is called on the report object returned by create_board_report, not on the generator itself.

report.save_pdf("board_report.pdf")

Common Data Types

ChatMessage

{
    "role": "user" | "assistant" | "system",
    "content": str,
    "timestamp": datetime
}
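Constructing a valid ChatMessage can be wrapped in a small helper that enforces the role field (an illustrative sketch, not part of the framework's API):

```python
from datetime import datetime, timezone

VALID_ROLES = {"user", "assistant", "system"}

def make_chat_message(role, content):
    """Build a ChatMessage dict, validating the role field."""
    if role not in VALID_ROLES:
        raise ValueError(f"invalid role: {role!r}")
    return {
        "role": role,
        "content": content,
        "timestamp": datetime.now(timezone.utc),
    }

msg = make_chat_message("user", "What is your return policy?")
```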

Forecast

{
    "metric": str,
    "periods": List[datetime],
    "values": List[float],
    "confidence_lower": List[float],
    "confidence_upper": List[float]
}
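The confidence_lower and confidence_upper lists are aligned element-wise with values, so per-period band widths fall out directly (the numbers below are made up for illustration):

```python
forecast = {
    "metric": "revenue",
    "periods": [],  # datetimes omitted for brevity
    "values": [100.0, 110.0],
    "confidence_lower": [90.0, 95.0],
    "confidence_upper": [110.0, 125.0],
}

# Width of the confidence band in each period; a widening band
# signals growing forecast uncertainty over the horizon.
widths = [hi - lo for hi, lo in zip(forecast["confidence_upper"],
                                    forecast["confidence_lower"])]
```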

Scenario

{
    "name": str,
    "assumptions": Dict[str, float],
    "forecast": Dict[str, List[float]],
    "probability": float
}

Error Handling

All methods raise descriptive exceptions:

from src.layer1.exceptions import (
    ChatbotError,
    KnowledgeBaseError,
    ModelError
)

try:
    response = bot.chat("Hello")
except ChatbotError as e:
    print(f"Chatbot error: {e}")
except ModelError as e:
    print(f"Model error: {e}")

Configuration

Environment Variables

# Azure OpenAI
AZURE_OPENAI_ENDPOINT=https://...
AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4

# Azure ML
AZURE_ML_WORKSPACE=...
AZURE_ML_SUBSCRIPTION=...

# Database
NEO4J_URI=bolt://localhost:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=...
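Reading these variables at startup with sensible fallbacks keeps configuration out of code. A sketch for the Neo4j block, using the defaults documented above (the helper name is illustrative):

```python
import os

def load_neo4j_config(env=os.environ):
    """Read Neo4j connection settings from the environment,
    falling back to the documented defaults."""
    return {
        "uri": env.get("NEO4J_URI", "bolt://localhost:7687"),
        "username": env.get("NEO4J_USERNAME", "neo4j"),
        "password": env.get("NEO4J_PASSWORD", ""),
    }
```

The same pattern applies to the Azure OpenAI and Azure ML variables.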

Rate Limits

Service          Limit       Burst
---------------  ----------  -------------
Azure OpenAI     10 req/s    100 req/min
Knowledge Graph  100 req/s   1000 req/min
Forecasting      1 req/min   10 req/hour
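Clients that may hit these limits should retry with exponential backoff. A generic sketch (illustrative; the rate-limit error is modelled here as RuntimeError, and the framework may ship its own handling):

```python
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call fn, retrying with exponentially growing delays when a
    rate-limit error (modelled as RuntimeError) is raised."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            sleep(base_delay * (2 ** attempt))
```

The sleep parameter is injectable so the behavior can be tested without real waiting.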

Changelog

v1.0.0 (2024-09-01)

  • Initial release
  • Layer 1, 2, 3 core functionality
  • Examples and documentation

Support

For API questions:


Last updated: 2024-09-02