
Connect your AI app directly to your data with a full-stack solution. Fully connected Agentic Graph RAG pipelines mean you can focus on fine-tuning your app and not building data infrastructure.


Data-to-AI, Simplified.


🚀 Getting Started 📺 YouTube 🧠 Knowledge Cores ⚙️ API Docs 🧑‍💻 CLI Docs 💬 Discord 📖 Blog

The AI App Problem: Everything in Between

Building enterprise AI applications is hard. You're not just connecting APIs with a protocol; you're wrangling a complex ecosystem:

  • Data Silos: Connecting to and managing data from various sources (databases, APIs, files) is a nightmare.
  • LLM Integration: Choosing, integrating, and managing different LLMs adds another layer of complexity.
  • Deployment Headaches: Deploying, scaling, and monitoring your AI application is a constant challenge.
  • Knowledge Graph Construction: Taking raw knowledge and structuring it so it can be efficiently retrieved.
  • Vector Database Juggling: Setting up and optimizing a vector database for efficient data retrieval is crucial but complex.
  • Data Pipelines: Building robust ETL pipelines to prepare and transform your data is time-consuming.
  • Data Management: As your app grows, so does your data, meaning storage and retrieval become much more complex.
  • Prompt Engineering: Building, testing, and deploying prompts for specific use cases.
  • Reliability: With every new connection, the complexity ramps up, meaning a simple error can bring the entire system crashing down.

What is TrustGraph?

TrustGraph removes the biggest headache of building an AI app: connecting and managing all the data, deployments, and models. As a full-stack platform, TrustGraph simplifies the development and deployment of data-driven AI applications. TrustGraph is a complete solution, handling everything from data ingestion to deployment, so you can focus on building innovative AI experiences.

architecture

The Stack Layers

  • 📄 Data Ingest: Bulk ingest documents such as .pdf, .txt, and .md
  • 🪓 Adjustable Chunking: Choose your chunking algorithm and parameters
  • 🔌 No-code LLM Integration: Anthropic, AWS Bedrock, AzureAI, AzureOpenAI, Cohere, Google AI Studio, Google VertexAI, Llamafiles, Ollama, and OpenAI
  • 📖 Automated Knowledge Graph Building: No need for complex ontologies and manual graph building
  • 🔢 Knowledge Graph to Vector Embeddings Mappings: Connect knowledge graph enhanced data directly to vector embeddings
  • ❓ Natural Language Data Retrieval: Automatically perform a semantic similarity search and subgraph extraction for the context of LLM generative responses
  • 🧠 Knowledge Cores: Modular data sets with semantic relationships that can be saved and quickly loaded on demand
  • 🤖 Agent Manager: Define custom tools used by a ReAct-style Agent Manager that fully controls the response flow, including the ability to perform Graph RAG requests
  • 📚 Multiple Knowledge Graph Options: Full integration with Memgraph, FalkorDB, Neo4j, or Cassandra
  • 🧮 Multiple VectorDB Options: Full integration with Qdrant, Pinecone, or Milvus
  • 🎛️ Production Grade: Reliability, scalability, and accuracy
  • 🔍 Observability and Telemetry: Get insights into system performance with Prometheus and Grafana
  • 🎻 Orchestration: Fully containerized with Docker or Kubernetes
  • 🥞 Stack Manager: Control and scale the stack with confidence with Apache Pulsar
  • ☁️ Cloud Deployments: AWS and Google Cloud
  • 🪴 Customizable and Extensible: Tailor for your data and use cases
  • 🖥️ Configuration Builder: Build the YAML configuration with drop-down menus and selectable parameters
  • 🕵️ Test Suite: A simple UI to fully test TrustGraph performance

Why Use TrustGraph?

  • Accelerate Development: TrustGraph instantly connects your data and app, keeping you laser-focused on your users.
  • Reduce Complexity: Eliminate the pain of integrating disparate tools and technologies.
  • Focus on Innovation: Spend your time building your core AI logic, not managing infrastructure.
  • Improve Data Relevance: Ensure your LLM has access to the right data, at the right time.
  • Scale with Confidence: Deploy and scale your AI applications reliably and efficiently.
  • Full RAG Solution: Focus on optimizing your responses, not building RAG pipelines.

Quickstart Guide 🚀

Developer APIs and CLI

See the API Developer's Guide for more information.

For users, TrustGraph has the following interfaces:

The TrustGraph CLI installs the commands for interacting with a running TrustGraph instance, along with the Python SDK. The Configuration Builder enables customization of TrustGraph deployments prior to launching. The REST API can be accessed through port 8088 of the TrustGraph host machine with JSON request and response bodies.
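For example, with a deployment running on the local machine, a Graph RAG question could be sent to the REST API with curl. This is only a sketch: the endpoint path and request fields below are illustrative placeholders, so consult the API Docs for the actual paths and schema.

# illustrative request; the endpoint path and body fields are placeholders
curl -X POST http://localhost:8088/api/v1/graph-rag \
  -H "Content-Type: application/json" \
  -d '{"query": "What are the top 3 takeaways from the document?"}'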

Install the TrustGraph CLI

pip3 install trustgraph-cli==0.20.9

Note

The TrustGraph CLI version must match the desired TrustGraph release version.
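To confirm that the installed CLI matches the release you plan to deploy, standard pip tooling works:

# show the currently installed trustgraph-cli version
pip3 show trustgraph-cli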

Configuration Builder

TrustGraph is endlessly customizable by editing the YAML launch files. The Configuration Builder provides a quick and intuitive tool for building a custom configuration that deploys with Docker, Podman, Minikube, or Google Cloud. There is a Configuration Builder for both the latest and stable TrustGraph releases.

The Configuration Builder has 4 important sections:

  • Component Selection ✅: Choose from the available deployment platforms, LLMs, graph store, VectorDB, chunking algorithm, chunking parameters, and LLM parameters
  • Customization 🧰: Customize the prompts for the LLM System, Data Extraction Agents, and Agent Flow
  • Test Suite 🕵️: Add the Test Suite to the configuration, available on port 8888
  • Finish Deployment 🚀: Download the launch YAML files with deployment instructions

The Configuration Builder will generate the YAML files in deploy.zip. Once deploy.zip has been downloaded and unzipped, launching TrustGraph is as simple as navigating to the deploy directory and running:

docker compose up -d
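For instance, assuming deploy.zip was downloaded to the current directory and Docker Compose was selected in the Configuration Builder, the whole first launch is roughly the following (directory names inside the zip may vary with the selected platform):

# unpack the generated configuration and start the stack
unzip deploy.zip
cd deploy
docker compose up -d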

Tip

Docker is the recommended container orchestration platform when first getting started with TrustGraph.

When finished, shutting down TrustGraph is as simple as:

docker compose down -v

System Restarts

The -v flag destroys all data on shutdown. To restart the system with its data intact, keep the volumes by shutting down without the -v flag:

docker compose down

With the volumes preserved, restarting the system is as simple as:

docker compose up -d

All data previously in TrustGraph will be saved and usable on restart.

Test Suite

If added to the build in the Configuration Builder, the Test Suite will be available at port 8888. The Test Suite has the following capabilities:

  • Graph RAG Chat 💬: Graph RAG queries in a chat interface
  • Vector Search 🔎: Semantic similarity search with cosine similarity scores
  • Semantic Relationships 🕵️: See semantic relationships in a list structure
  • Graph Visualizer 🌐: Visualize semantic relationships in 3D
  • Data Loader 📂: Directly load .pdf, .txt, or .md into the system with document metadata
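With the stack running locally and the Test Suite included in the build, the UI is reachable in a browser at port 8888:

# open the Test Suite UI on a local deployment
open http://localhost:8888        # macOS
xdg-open http://localhost:8888    # Linux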

Example TrustGraph Notebooks

Prebuilt Configuration Files

TrustGraph YAML files are available here. Download deploy.zip for the desired release version.

Release Type    Release Version
Latest          0.20.11
Stable          0.20.9

TrustGraph is fully containerized and is launched with a YAML configuration file. Unzipping the deploy.zip will add the deploy directory with the following subdirectories:

  • docker-compose
  • minikube-k8s
  • gcp-k8s

Note

As more integrations have been added, the number of possible combinations of configurations has become quite large. It is recommended to use the Configuration Builder to build your deployment configuration. Each directory contains YAML configuration files for the default component selections.

Docker:

docker compose -f <launch-file.yaml> up -d

Kubernetes:

kubectl apply -f <launch-file.yaml>

TrustGraph is designed to be modular to support as many LLMs and environments as possible. A natural fit for a modular architecture is to decompose functions into a set of modules connected through a pub/sub backbone. Apache Pulsar serves as this pub/sub backbone. Pulsar acts as the data broker, managing data processing queues connected to processing modules.

Pulsar Workflows

  • For processing flows, Pulsar accepts the output of a processing module and queues it for input to the next subscribed module.
  • For services such as LLMs and embeddings, Pulsar provides a client/server model. A Pulsar queue is used as the input to the service. When processed, the output is then delivered to a separate queue where a client subscriber can request that output.
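The queues themselves are ordinary Pulsar topics, so the model can be explored with the stock Pulsar client tooling. A minimal sketch, assuming a reachable Pulsar broker (for example, from inside the Pulsar container) and using hypothetical topic names rather than TrustGraph's actual queue names:

# subscribe to a hypothetical output queue; -n 0 means consume until interrupted
pulsar-client consume persistent://public/default/example-output -s my-subscription -n 0

# publish a request onto a hypothetical input queue from another shell
pulsar-client produce persistent://public/default/example-input --messages "hello"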

Data Extraction Agents

TrustGraph extracts knowledge from documents into an ultra-dense knowledge graph using 3 autonomous data extraction agents. These agents focus on individual elements needed to build the knowledge graph. The agents are:

  • Topic Extraction Agent
  • Entity Extraction Agent
  • Relationship Extraction Agent

The agent prompts are built through templates, enabling customized data extraction agents for a specific use case. The data extraction agents are launched automatically with the loader commands.

PDF file:

tg-load-pdf <document.pdf>

Text or Markdown file:

tg-load-text <document.txt>

Graph RAG Queries

Once the knowledge graph and embeddings have been built or a knowledge core has been loaded, RAG queries are launched with a single line:

tg-invoke-graph-rag -q "What are the top 3 takeaways from the document?"

Agent Flow

Invoking the Agent Flow will use a ReAct-style approach that combines Graph RAG and text completion requests to think through a problem solution.

tg-invoke-agent -v -q "Write a blog post on the top 3 takeaways from the document."

Tip

Adding -v to the agent request will return all of the agent manager's thoughts and observations that led to the final response.

API Documentation

Developing on TrustGraph using APIs

Deploy and Manage TrustGraph

๐Ÿš€๐Ÿ™ Full Deployment Guide ๐Ÿš€๐Ÿ™

TrustGraph Developer's Guide

Developing for TrustGraph