Complete API documentation for the Three-Layer AI Framework.

## Layer 1: RAGChatbot

A chatbot with Retrieval-Augmented Generation capabilities.
### Constructor

```python
from src.layer1.rag_chatbot import RAGChatbot

bot = RAGChatbot(
    knowledge_base: str,
    model: str = "gpt-4",
    embedding_model: str = "text-embedding-ada-002",
    temperature: float = 0.7,
    max_tokens: int = 1000
)
```

**Parameters:**

- `knowledge_base` (str): Path to knowledge base directory
- `model` (str, optional): OpenAI model name. Default: `"gpt-4"`
- `embedding_model` (str, optional): Embedding model. Default: `"text-embedding-ada-002"`
- `temperature` (float, optional): Sampling temperature. Default: `0.7`
- `max_tokens` (int, optional): Maximum response length. Default: `1000`
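Before each model call, a RAG chatbot embeds the query, retrieves the most similar knowledge-base chunks, and prepends them as context. The retrieval step can be illustrated in isolation; this is a toy sketch with hand-written 2-d vectors standing in for real embeddings, not the framework's implementation:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, chunks, k=2):
    # Rank knowledge-base chunks by similarity to the query embedding
    scored = sorted(chunks, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return [c["text"] for c in scored[:k]]

# Toy 2-d "embeddings" standing in for text-embedding-ada-002 vectors
chunks = [
    {"text": "Returns accepted within 30 days.", "vec": [0.9, 0.1]},
    {"text": "Shipping takes 3-5 business days.", "vec": [0.1, 0.9]},
    {"text": "Refunds are issued to the original card.", "vec": [0.8, 0.3]},
]
context = retrieve([1.0, 0.2], chunks, k=2)
print(context)
```

The retrieved `context` strings are what would be injected into the model prompt alongside the user's question.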
### `chat()`

Process a user query and return a response.

```python
response = bot.chat("What is your return policy?")
```

### `add_documents()`

Add documents to the knowledge base.
```python
bot.add_documents([
    "./policies/return-policy.pdf",
    "./faq/common-questions.md"
])
```

### `clear_history()`

Clear conversation history.
```python
bot.clear_history()
```

## Layer 2: KnowledgeGraph

Build and query enterprise knowledge graphs.
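Conceptually, the knowledge graph stores (entity, relationship, entity) triples and returns scored matches for queries. A minimal in-memory stand-in, illustrative only (the real class documented below is backed by Neo4j and accepts natural-language queries):

```python
class ToyGraph:
    """In-memory triple store mimicking the KnowledgeGraph surface."""

    def __init__(self):
        self.triples = []

    def add_relationship(self, from_entity, to_entity, relationship_type):
        self.triples.append((from_entity, relationship_type, to_entity))

    def query(self, relationship_type):
        # The real query() accepts natural language; here we match on
        # relationship type only, returning the documented result shape.
        return [
            {"entity": f, "relationship": r, "score": 1.0}
            for (f, r, t) in self.triples
            if r == relationship_type
        ]

kg = ToyGraph()
kg.add_relationship("Employee", "Department", "WORKS_IN")
kg.add_relationship("Department", "Company", "PART_OF")
print(kg.query("WORKS_IN"))
# [{'entity': 'Employee', 'relationship': 'WORKS_IN', 'score': 1.0}]
```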
### Constructor

```python
from src.layer2.knowledge_graph import KnowledgeGraph

kg = KnowledgeGraph(
    database: str = "neo4j",
    uri: str = "bolt://localhost:7687",
    username: str = "neo4j",
    password: str = "password"
)
```

### `ingest_documents()`

Ingest documents into the knowledge graph.
```python
kg.ingest_documents([
    "./data/policies/*.pdf",
    "./data/processes/*.docx"
])
```

### `query()`

Query the graph using natural language.
```python
results = kg.query("Find all processes related to onboarding")
# Returns: [{"entity": "...", "relationship": "...", "score": 0.95}]
```

### `add_relationship()`

Manually add a relationship.
```python
kg.add_relationship(
    from_entity="Employee",
    to_entity="Department",
    relationship_type="WORKS_IN"
)
```

## Layer 2: DataPipeline

Real-time data pipeline management.
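The pipeline pattern documented below amounts to source → ordered transformations → destination. A minimal in-memory sketch of that pattern (a stand-in for illustration, not the framework's implementation):

```python
class ToyPipeline:
    """Applies named transformations to records in order."""

    def __init__(self, source, destination):
        self.source = source            # iterable of records
        self.destination = destination  # list to append results to
        self.transformations = []

    def add_transformation(self, name, func):
        self.transformations.append((name, func))

    def start(self):
        for record in self.source:
            for _name, func in self.transformations:
                record = func(record)
                if record is None:      # transformation dropped the record
                    break
            if record is not None:
                self.destination.append(record)

out = []
pipeline = ToyPipeline(source=[{"x": 1}, {"x": None}, {"x": 3}], destination=out)
# Analogue of the clean_data/dropna step below: drop records with missing values
pipeline.add_transformation("clean", lambda r: r if None not in r.values() else None)
pipeline.start()
print(out)  # [{'x': 1}, {'x': 3}]
```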
### Constructor

```python
from src.layer2.data_pipeline import DataPipeline

pipeline = DataPipeline(
    source: str,
    destination: str,
    schedule: str = "realtime"
)
```

### `add_transformation()`

Add a transformation function.
```python
def clean_data(df):
    return df.dropna()

pipeline.add_transformation("clean", clean_data)
```

### `start()`

Start the pipeline.
```python
pipeline.start()
```

### `stop()`

Stop the pipeline.

```python
pipeline.stop()
```

## Layer 3: StrategicForecastingEngine

Generate strategic forecasts and scenarios.
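Scenario generation as documented below boils down to sampling each assumption from its uncertainty range and compounding it over the horizon. A self-contained sketch of that idea (illustrative only; the real engine trains models on Azure ML, and the function and field names mirror the documented Scenario structure):

```python
import random

def generate_scenarios(base_value, growth_range, periods, n_scenarios, seed=0):
    # Sample a growth rate per scenario and compound it over the horizon,
    # mirroring base_assumptions / uncertainty_ranges in the real API.
    rng = random.Random(seed)
    lo, hi = growth_range
    scenarios = []
    for i in range(n_scenarios):
        growth = rng.uniform(lo, hi)
        forecast = [base_value * (1 + growth) ** t for t in range(1, periods + 1)]
        scenarios.append({
            "name": f"scenario_{i}",
            "assumptions": {"growth": growth},
            "forecast": {"revenue": forecast},
            "probability": 1.0 / n_scenarios,   # uniform weighting
        })
    return scenarios

scenarios = generate_scenarios(100.0, (0.02, 0.08), periods=12, n_scenarios=3)
for s in scenarios:
    print(s["name"], round(s["assumptions"]["growth"], 3),
          round(s["forecast"]["revenue"][-1], 1))
```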
### Constructor

```python
from src.layer3.azure_ai_foundry import StrategicForecastingEngine

engine = StrategicForecastingEngine(
    workspace: str,
    compute_target: str = "cpu-cluster"
)
```

### `train_forecast()`

Train a forecasting model.
```python
model = engine.train_forecast(
    data=historical_data,
    horizon=12,  # months
    frequency="M",
    metrics=["revenue", "costs"]
)
```

### `predict()`

Generate predictions.
```python
forecast = engine.predict(model, periods=12)
```

### `generate_scenarios()`

Generate strategic scenarios.
```python
scenarios = engine.generate_scenarios(
    base_assumptions={"growth": 0.05},
    uncertainty_ranges={"growth": (0.02, 0.08)}
)
```

## Layer 3: DashboardGenerator

Generate executive dashboards and reports.
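At its core, a board report is a set of sections assembled from named data sources. A toy sketch of that assembly step (illustrative only; the real generator documented below renders PDFs, and the helper name and data values here are invented for the example):

```python
def create_board_report(data, data_sources, time_period, include_forecast=False):
    # Pull the requested sources into titled sections
    sections = [{"title": src.title(), "rows": data[src]} for src in data_sources]
    if include_forecast:
        sections.append({"title": "Forecast", "rows": ["(forecast placeholder)"]})
    return {"period": time_period, "sections": sections}

data = {
    "finance": ["Revenue up 8% QoQ"],
    "sales": ["120 new enterprise accounts"],
}
report = create_board_report(data, ["finance", "sales"], "Q4_2024",
                             include_forecast=True)
print([s["title"] for s in report["sections"]])  # ['Finance', 'Sales', 'Forecast']
```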
### Constructor

```python
from src.layer3.executive_dashboard import DashboardGenerator

generator = DashboardGenerator()
```

### `create_board_report()`

Create a board-ready report.
```python
report = generator.create_board_report(
    data_sources=["finance", "sales"],
    time_period="Q4_2024",
    include_forecast=True
)
```

### `save_pdf()`

Save report as PDF.
```python
report.save_pdf("board_report.pdf")
```

## Data Structures

### Message

```python
{
    "role": "user" | "assistant" | "system",
    "content": str,
    "timestamp": datetime
}
```

### Forecast

```python
{
    "metric": str,
    "periods": List[datetime],
    "values": List[float],
    "confidence_lower": List[float],
    "confidence_upper": List[float]
}
```

### Scenario

```python
{
    "name": str,
    "assumptions": Dict[str, float],
    "forecast": Dict[str, List[float]],
    "probability": float
}
```

## Error Handling

All methods raise descriptive exceptions:
```python
from src.layer1.exceptions import (
    ChatbotError,
    KnowledgeBaseError,
    ModelError
)

try:
    response = bot.chat("Hello")
except ChatbotError as e:
    print(f"Chatbot error: {e}")
except ModelError as e:
    print(f"Model error: {e}")
```

## Environment Variables

```bash
# Azure OpenAI
AZURE_OPENAI_ENDPOINT=https://...
AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4

# Azure ML
AZURE_ML_WORKSPACE=...
AZURE_ML_SUBSCRIPTION=...

# Database
NEO4J_URI=bolt://localhost:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=...
```

## Rate Limits

| Service | Limit | Burst |
|---|---|---|
| Azure OpenAI | 10 req/s | 100 req/min |
| Knowledge Graph | 100 req/s | 1000 req/min |
| Forecasting | 1 req/min | 10 req/hour |
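When a limit in the table above is exceeded, callers should retry with backoff rather than fail immediately. A minimal sketch of exponential backoff with jitter (illustrative; `call_with_backoff`, the `flaky` endpoint, and the use of `RuntimeError` as the rate-limit error are all placeholders for your client code):

```python
import random
import time

def call_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff plus jitter on RuntimeError."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError:                 # stand-in for a rate-limit error
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Simulated endpoint that fails twice, then succeeds
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

result = call_with_backoff(flaky, base_delay=0.01)
print(result, attempts["n"])  # ok 3
```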
## Changelog

- Initial release
- Layer 1, 2, 3 core functionality
- Examples and documentation
## Support

For API questions:

- Email: 2maree@gmail.com
- GitHub Issues: Report an issue

Last updated: 2024-09-02