Agentic Memory

Agents that remember across time.

An open-source, temporal knowledge graph-backed memory layer. Extract information from agent interactions, codebases, and web research exactly when needed—without overloading the context window.

The State of Agentic Memory

This section explores the fundamental tension in modern AI infrastructure: context management. As we push more data into LLM context windows, we face a tradeoff between retaining critical information and maintaining high reasoning quality. The tradeoff below shows how the context window dilemma manifests in production workloads.

The Context Window Dilemma

The dilemma spans a spectrum from Aggressive Pruning at one end to Infinite Context at the other. In the balanced middle, retrieval is optimal: the agent has enough context to operate without losing focus, and reasoning quality stays high.

Models degrade when flooded with irrelevant data. Agentic Memory solves this by keeping state out of the prompt until it is explicitly needed.
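As an illustration of that principle, here is a minimal, self-contained sketch (not the Agentic Memory API) of selecting only query-relevant memories under a token budget; the keyword-overlap scoring and character-based token estimate are deliberately naive stand-ins:

```python
# Minimal sketch: keep state out of the prompt by selecting only the
# memories relevant to the current query, under a fixed token budget.
# Scoring and token counting are naive stand-ins for real retrieval.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)

def select_context(memories: list[str], query: str, budget: int) -> list[str]:
    """Greedy selection: rank by naive keyword overlap, then pack the
    highest-scoring memories until the token budget is exhausted."""
    q_terms = set(query.lower().split())
    scored = sorted(
        memories,
        key=lambda m: len(q_terms & set(m.lower().split())),
        reverse=True,
    )
    picked, used = [], 0
    for mem in scored:
        cost = estimate_tokens(mem)
        if used + cost <= budget:
            picked.append(mem)
            used += cost
    return picked

memories = [
    "The user switched the ORM to SQLModel.",
    "Lunch preferences: burritos on Fridays.",
    "Migrations must be generated using Alembic.",
]
print(select_context(memories, "which ORM do we use?", budget=15))
```

Only the ORM memory fits the budget here; everything else stays out of the prompt until it is actually needed.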

4-Stage Hybrid GraphRAG Pipeline

Agentic Memory does not just dump text into a vector database. This section breaks down our opinionated Temporal Knowledge Graph architecture: four stages that transform raw multi-modal inputs into a structured, queryable, and time-aware intelligence layer.

Stage 1: Chunking & ID

Content is cleanly chunked, and deterministic hashing ensures that re-ingesting the same file or conversation is idempotent.
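A minimal sketch of what deterministic, content-hashed chunk IDs buy you; the fixed-size chunking and hash truncation below are illustrative assumptions, not the library's actual splitter:

```python
import hashlib

# Minimal sketch of idempotent ingestion: a chunk's ID is a hash of its
# content, so re-ingesting the same file yields the same IDs and repeat
# writes become no-ops. Chunk size and normalization are assumptions.

def chunk_text(text: str, size: int = 200) -> list[str]:
    # Naive fixed-size character chunking; real splitters respect structure.
    return [text[i:i + size] for i in range(0, len(text), size)]

def chunk_id(content: str) -> str:
    # Deterministic: same content, same ID, regardless of when it arrives.
    return hashlib.sha256(content.strip().encode("utf-8")).hexdigest()[:16]

def ingest(store: dict[str, str], text: str) -> int:
    """Insert chunks keyed by content hash; returns the count of new chunks."""
    new = 0
    for chunk in chunk_text(text):
        cid = chunk_id(chunk)
        if cid not in store:
            store[cid] = chunk
            new += 1
    return new

store: dict[str, str] = {}
doc = "Use FastAPI for the service layer. " * 10
print(ingest(store, doc))  # first ingest adds new chunks
print(ingest(store, doc))  # re-ingest is idempotent: 0 new chunks
```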

Stage 2: Extraction

The LLM makes a pass over each chunk, extracting structured entities (People, Concepts) and the claims and relationships between them.
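The extraction target can be pictured with a small schema sketch; the `Entity` and `Claim` dataclasses and the hard-coded `extract` function below are hypothetical stand-ins for the LLM pass, not the library's real types:

```python
from dataclasses import dataclass

# Minimal sketch of an extraction target schema. In the real pipeline an
# LLM fills these structures from each chunk; here the "extractor" is a
# hard-coded stand-in so the shape of the output is concrete.

@dataclass(frozen=True)
class Entity:
    name: str
    kind: str  # e.g. "Person", "Concept"

@dataclass(frozen=True)
class Claim:
    subject: Entity
    predicate: str
    obj: Entity
    source_chunk: str  # provenance back to the chunk the claim came from

def extract(chunk_id: str, text: str) -> list[Claim]:
    # Stand-in for the LLM pass: a real extractor returns whatever
    # entities and relationships the model finds in `text`.
    user = Entity("user", "Person")
    sqlmodel = Entity("SQLModel", "Concept")
    return [Claim(user, "uses", sqlmodel, chunk_id)]

claims = extract("chunk-001", "Switch to SQLModel.")
print(claims[0].predicate, claims[0].obj.name)
```

Keeping `source_chunk` on every claim is what later makes explicit citations from graph traversals possible.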

Stage 3: Temporal Resolution

Each claim is classified as a Fact, Event, Instruction, or Task. Old, contradictory memories are marked as superseded rather than deleted, creating a version chain.

Stage 4: Graph Upsert

Claims are written to the graph database as nodes and edges. Raw text and facts are embedded into a vector index asynchronously.
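The upsert step can be sketched in memory with MERGE-style semantics (as in Cypher) plus a queue standing in for the asynchronous embedding hand-off; the `Graph` class below is illustrative, not the actual storage layer:

```python
from collections import deque

# Minimal sketch of idempotent graph upserts: nodes and edges are keyed
# so that re-writing the same claim is a no-op. The embed_queue suggests
# the async path where text is vectorized off the hot write path.

class Graph:
    def __init__(self) -> None:
        self.nodes: dict[str, dict] = {}
        self.edges: set[tuple[str, str, str]] = set()
        self.embed_queue: deque[str] = deque()  # texts awaiting vectorization

    def upsert_node(self, name: str, **props) -> None:
        # MERGE semantics: create if absent, update properties if present.
        self.nodes.setdefault(name, {}).update(props)

    def upsert_edge(self, src: str, rel: str, dst: str, text: str) -> None:
        self.upsert_node(src)
        self.upsert_node(dst)
        if (src, rel, dst) not in self.edges:
            self.edges.add((src, rel, dst))
            self.embed_queue.append(text)  # embedded later, asynchronously

g = Graph()
g.upsert_edge("user", "USES", "SQLModel", "The user switched to SQLModel.")
g.upsert_edge("user", "USES", "SQLModel", "The user switched to SQLModel.")
print(len(g.edges), len(g.embed_queue))  # → 1 1 (duplicate write is a no-op)
```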

Beyond Pure Vector Search

This visualization compares standard vector-based RAG architectures against our Hybrid GraphRAG pipeline. Relying purely on vector similarity often fails for complex, multi-hop queries.

By integrating a Graph Database (Neo4j/SpacetimeDB) with temporal metadata, Agentic Memory excels at tracking evolving state ("When did we switch to SQLModel?"), multi-hop logic, and exact keyword precision, outperforming standard baselines on complex long-running agent benchmarks.

  • ✔️ Avoids token burn on retrieval strategies
  • ✔️ Explicit citations from graph traversals
  • ✔️ Native temporal supersession handling
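One common way to combine the two retrievers is reciprocal rank fusion (RRF); the sketch below hard-codes the vector and graph rankings as stand-ins for real retrieval calls, and Agentic Memory's actual fusion strategy may differ:

```python
# Minimal sketch of hybrid retrieval: fuse a vector-similarity ranking
# with a graph-traversal ranking using reciprocal rank fusion (RRF).

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Score each document by the sum of 1/(k + rank) across rankings."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["SQLModel note", "FastAPI scaffold", "Alembic rule"]
graph_hits = ["ORM supersession chain", "SQLModel note"]  # multi-hop traversal
print(rrf([vector_hits, graph_hits]))
```

A document that both retrievers surface ("SQLModel note") outranks anything found by only one of them, which is exactly the behavior you want for multi-hop queries that pure vector similarity misses.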

What you can build

Agentic Memory acts as the universal memory layer across diverse architectures. Below are the core modules available in the open-source framework, representing practical deployment scenarios from individual coding assistants to team-wide institutional knowledge bases.

💻

Codebase Memory

CodeMemory ingests entire repositories into a graph representation. Agents trace function calls, understand architectural decisions, and retain tribal knowledge that usually lives only in senior engineers' heads.
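A toy version of that first ingestion step, using Python's `ast` module to record caller-to-callee edges for direct function calls; real codebase ingestion would also resolve imports, methods, and cross-file references:

```python
import ast

# Minimal sketch of turning code into a graph: parse a module and record
# caller -> callee edges for direct calls to named functions.

SOURCE = """
def load(path):
    return open(path).read()

def ingest(path):
    data = load(path)
    return parse(data)
"""

def call_edges(source: str) -> set[tuple[str, str]]:
    edges: set[tuple[str, str]] = set()
    tree = ast.parse(source)
    for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
        for node in ast.walk(fn):
            # Only direct name calls; attribute calls like x.read() need
            # real resolution and are skipped in this sketch.
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                edges.add((fn.name, node.func.id))
    return edges

print(call_edges(SOURCE))
```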

🌐

Deep Web Research

WebMemory gives research agents the ability to crawl, chunk, and extract structured claims from the web, building a factual graph of a domain rather than dumping HTML context.

💬

Continuous Chat

ChatMemory provides standalone agents persistent memory across sessions. Your agent remembers previous debugging steps, personal preferences, and evolving project requirements.

🤝

Shared Team Memory

Share a Neo4j database so that knowledge learned by one person's coding agent is available to everyone's tools, establishing genuine institutional sovereignty over AI-generated context.

Programmatic Interaction

Agentic Memory can be integrated via the Python SDK, run as an MCP server for tools like Claude Desktop and Cursor, or accessed via our browser extension proxy. Your data lives in your own Neo4j, SpacetimeDB, or local SQLite instances.

```python
from agentic_memory.core import AgenticMemory, MemoryConfig

# Initialize the memory client
config = MemoryConfig(storage_backend="neo4j", namespace="project-alpha")
memory = AgenticMemory(config)

# Ingest -- extract temporal facts and claims
await memory.ingest([
    {"role": "user", "content": "Use FastAPI."},
    {"role": "assistant", "content": "Scaffolded API."},
    {"role": "user", "content": "Switch to SQLModel."},
], session_id="session-001")

# Remember -- explicitly store a rule
await memory.remember(
    content="Migrations must be generated using Alembic.",
    memory_type="instruction",
)

# Recall -- retrieve synthesized answers using Hybrid GraphRAG
results = await memory.recall("What ORM is used?")
print(results.answer)
# "The user is currently using SQLModel, adopted after the initial FastAPI scaffold."
```