🔥 trustgraph-ai/trustgraph
The context development platform. Store, enrich, and retrieve structured knowledge with graph-native infrastructure, semantic retrieval, and portable context cores.
Building applications that need to know things requires more than a database. TrustGraph is the context development platform: graph-native infrastructure for storing, enriching, and retrieving structured knowledge at any scale. Think Supabase, but built around context graphs: multi-model storage, semantic retrieval pipelines, portable context cores, and a full developer toolkit out of the box. Deploy locally or in the cloud. No unnecessary API keys. Just context, engineered.
The platform:
- Multi-model and multimodal database system
  - Tabular/relational, key-value, document, graph, and vector
  - Images, video, and audio
- Automated data ingest and loading
  - Quick ingest with semantic similarity retrieval
  - Ontology structuring for precision retrieval
- Out-of-the-box RAG pipelines
  - DocumentRAG, GraphRAG, and OntologyRAG
- 3D GraphViz for exploring context
- Fully agentic system
  - Single agent, multi-agent, and MCP integration
- Run anywhere
  - Deploy locally with Docker or in the cloud with Kubernetes
- Support for all major LLMs
  - API support for Anthropic, Cohere, Gemini, Mistral, OpenAI, and others
  - Model inferencing with vLLM, Ollama, TGI, LM Studio, and Llamafiles
- Developer friendly
  - REST API Docs, WebSocket API Docs, Python API Docs, CLI Docs
No API Keys Required
How many times have you cloned a repo and opened the .env.example to find dozens of API keys for third-party dependencies required just to make the services work? There are only three things in TrustGraph that might need an API key:
- Third-party LLM services like Anthropic, Cohere, Gemini, Mistral, OpenAI, etc.
- Third-party OCR like Mistral OCR
- The API key you set for the TrustGraph API gateway
Everything else is included.
- Managed multi-model storage in Cassandra
- Managed vector embedding storage in Qdrant
- Managed file and object storage in Garage (S3-compatible)
- Managed high-speed pub/sub messaging fabric with Pulsar
- Complete LLM inferencing stack for open LLMs with vLLM, TGI, Ollama, LM Studio, and Llamafiles
Quickstart
```shell
npx @trustgraph/config
```
TrustGraph downloads as Docker containers and can be run locally with Docker, Podman, or Minikube. The config tool will generate:
- deploy.zip with either a docker-compose.yaml file for a Docker/Podman deploy or resources.yaml for Kubernetes
- Deployment instructions as INSTALLATION.md
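A typical local deploy then looks something like the sketch below. This assumes the Docker Compose output; the generated INSTALLATION.md is the authoritative set of steps for your configuration.

```shell
# Unpack the generated deployment bundle
unzip deploy.zip

# Docker Compose deploy (or substitute `podman compose`)
docker compose -f docker-compose.yaml up -d

# For the Kubernetes output instead:
# kubectl apply -f resources.yaml
```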
Quickstart video: Quickstart.mp4
For a browser-based quickstart, try the Configuration Terminal.
Table of Contents
- What is a Context Graph?
- Context Graphs in Action
- Getting Started
- Context Cores
- Tech Stack
- Observability & Telemetry
- Contributing
- License
- Support & Community
What is a Context Graph?
Watch the "What is a Context Graph?" video.
Context Graphs in Action
Watch the "Context Graphs in Action" video.
Getting Started with TrustGraph
- Getting Started Guides
- Using the Workbench
- Developer APIs and CLI
- Deployment Guides
Workbench
The Workbench provides tools for all major features of TrustGraph and runs on port 8888 by default.
- Vector Search: Search the installed knowledge bases
- Agentic, GraphRAG and LLM Chat: Chat interface for agents, GraphRAG queries, or direct to LLMs
- Relationships: Analyze deep relationships in the installed knowledge bases
- Graph Visualizer: 3D GraphViz of the installed knowledge bases
- Library: Staging area for installing knowledge bases
- Flow Classes: Workflow preset configurations
- Flows: Create custom workflows and adjust LLM parameters at runtime
- Knowledge Cores: Manage reusable knowledge bases
- Prompts: Manage and adjust prompts at runtime
- Schemas: Define custom schemas for structured data knowledge bases
- Ontologies: Define custom ontologies for unstructured data knowledge bases
- Agent Tools: Define tools with collections, knowledge cores, MCP connections, and tool groups
- MCP Tools: Connect to MCP servers
TypeScript Library for UIs
Three libraries provide quick UI integration with TrustGraph services:
- @trustgraph/client
- @trustgraph/react-state
- @trustgraph/react-provider
Context Cores
A Context Core is a portable, versioned bundle of context that you can ship between projects and environments, pin in production, and reuse across agents. It packages the “stuff agents need to know” (structured knowledge + embeddings + evidence + policies) into a single artifact, so you can treat context like code: build it, test it, version it, promote it, and roll it back. TrustGraph is built to support this kind of end-to-end context engineering and orchestration workflow.
What’s inside a Context Core
A Context Core typically includes:
- Ontology (your domain schema) and mappings
- Context Graph (entities, relationships, supporting evidence)
- Embeddings / vector indexes for fast semantic entry-point lookup
- Source manifests + provenance (where facts came from, when, and how they were derived)
- Retrieval policies (traversal rules, freshness, authority ranking)
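Treating context like code becomes concrete once the bundle has a manifest you can version and diff. The sketch below is purely illustrative — the class and field names are hypothetical, not TrustGraph's actual core format — but it shows the shape of the artifact the list above describes.

```python
from dataclasses import dataclass, field

# Hypothetical manifest for a Context Core bundle. All names here are
# illustrative assumptions; TrustGraph's real core format may differ.
@dataclass
class ContextCoreManifest:
    name: str                  # core identifier, e.g. "support-kb"
    version: str               # pin in production, roll back like code
    ontology: str              # path to the domain schema and mappings
    graph: str                 # path to the exported context graph
    embeddings: str            # path to vector indexes for semantic entry points
    sources: list[str] = field(default_factory=list)        # provenance manifests
    retrieval_policies: dict = field(default_factory=dict)  # traversal, freshness, ranking

# Build, version, and promote the same artifact across environments.
core = ContextCoreManifest(
    name="support-kb",
    version="1.4.0",
    ontology="ontology.ttl",
    graph="graph.nq",
    embeddings="vectors.qdrant",
    sources=["sources.json"],
    retrieval_policies={"freshness_days": 30},
)
print(core.name, core.version)
```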
Tech Stack
TrustGraph provides component flexibility to optimize agent workflows.
LLM APIs
- Anthropic
- AWS Bedrock
- AzureAI
- AzureOpenAI
- Cohere
- Google AI Studio
- Google VertexAI
- Mistral
- OpenAI
LLM Orchestration
- LM Studio
- Llamafiles
- Ollama
- TGI
- vLLM
Multi-model storage
- Apache Cassandra
VectorDB
- Qdrant
File and Object Storage
- Garage
Observability
- Prometheus
- Grafana
- Loki
Data Streaming
- Apache Pulsar
Clouds
- AWS
- Azure
- Google Cloud
- OVHcloud
- Scaleway
Observability & Telemetry
Once the platform is running, access the Grafana dashboard at:
http://localhost:3000
Default credentials are:
user: admin
password: admin

The default Grafana dashboard tracks the following:
Telemetry
- LLM Latency
- Error Rate
- Service Request Rates
- Queue Backlogs
- Chunking Histogram
- Error Source by Service
- Rate Limit Events
- CPU Usage by Service
- Memory Usage by Service
- Models Deployed
- Token Throughput (tokens/second)
- Cost Throughput (cost/second)
Contributing
Developer's Guide
License
TrustGraph is licensed under Apache 2.0.
Copyright 2024-2025 TrustGraph
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Support & Community
- Bug Reports & Feature Requests: Discord
- Discussions & Questions: Discord
- Documentation: Docs