Open-source context retrieval layer for AI agents
Local persistent memory store for LLM applications including Claude Desktop, GitHub Copilot, Codex, Antigravity, etc.
14-stage Fusion Pipeline for LLM token compression — reversible compression, AST-aware code analysis, intelligent content routing. Zero LLM inference cost. MIT licensed.
Semantica 🧠 — A framework for building semantic layers, context graphs, and decision intelligence systems with explainability and provenance.
🤖 Official Interactive Tutorial for OpenHarness – Zero to Hero in 12 Chapters | Learn OpenHarness like Claude Code: Agent Loop, Tools, Memory, Multi-Agent | An interactive AI agent tutorial for complete beginners
Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes.
Open-source protocol suite standardizing LLM, Vector, Graph, and Embedding infrastructure across LangChain, LlamaIndex, AutoGen, CrewAI, Semantic Kernel, and MCP. 3,330+ conformance tests. One protocol. Any framework. Any provider.
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require approvals, and produce audit-ready decision trails.
Grov automatically captures the context from your private AI sessions and syncs it to a shared team memory. It auto-injects relevant memories across developers and future sessions, saving tokens and time spent on tasks.
Local-first AI conversation memory hub to capture, search, summarize, and export chats across major AI platforms.
Route inference across LLM providers. Track cost per request.
AI Infrastructure Engineer Learning Track - Production ML infrastructure curriculum (2-4 years experience)
The open source, no-code MCP Server for AI-Native API Access
Distributed data mesh for real-time access, migration, and replication across diverse databases — built for AI, security, and scale.
A Rust runtime that unifies relational tables, graph relationships, and vector embeddings in a single tensor-based storage layer with distributed consensus and semantic search
One API for 25+ LLMs, OpenAI, Anthropic, Bedrock, Azure. Caching, guardrails & cost controls. Go-native LiteLLM & Kong AI Gateway alternative.
Open-source AI agent runtime built in Rust. Define once, deploy isolated instances per tenant with built-in memory, encrypted vault, self-extending skills, cron, and multi-channel support.
Self-hosted orchestration layer for autonomous AI agent teams. Shared memory, heartbeat scheduling, vault-first secrets, and cross-model peer review — one command to deploy.
MachineAuth provides authentication and permission infrastructure that allows AI agents to securely access APIs, tools, and services.
Zero trust LLM gateway. OpenAI-compatible proxy with semantic routing and load balancing across OpenAI, Anthropic, Ollama, vLLM, and any compatible backend. Identity-based access, virtual API keys, and end-to-end encryption via OpenZiti