A Product by 13afoundry
Experimental

Persistent Memory
for AI Agents.

SuperMemory gives your AI semantic memory that persists across sessions. Store knowledge, recall context, and build on past conversations.

Works with: Claude Desktop (Pro or Team) · Claude Web (Pro or Team) · ChatGPT (Plus or Team) · Cursor (any plan) · Gemini CLI (API key)

Tools: 💾 store_memory · 🔍 retrieve_memory · 🗑 delete_memory

6 Ways AI Memory Helps You

Real examples of what SuperMemory does for your AI assistant.

🛠 01 · Remember Your Preferences (Preference)

Your AI learns your coding style, tool preferences, and workflow habits once — and remembers them forever.

store_memory — User says "I prefer TypeScript with strict mode"
embed — Preference stored with semantic embedding
retrieve_memory — Next session: AI searches for coding preferences
apply — Generates code using your preferred style
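The first step above, expressed as an MCP tool call over JSON-RPC, might look like the request below. The `content` and `tags` argument names are illustrative — check the server's published tool schema for the exact fields:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "store_memory",
    "arguments": {
      "content": "User prefers TypeScript with strict mode",
      "tags": ["preference", "coding-style"]
    }
  }
}
```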
📚 02 · Build on Past Conversations (Context)

Pick up exactly where you left off. Your AI recalls decisions, discussions, and context from previous sessions.

store_memory — Save "decided to use Postgres for auth service"
tag — Tagged with: architecture, auth-service
retrieve_memory — Days later: "What database did we choose?"
recall — Returns decision with full context
🎓 03 · Store Learned Skills (Skill, Procedure)

Teach your AI a procedure once — deploy scripts, debugging workflows, build steps — and it remembers how.

store_memory — Save deployment procedure: build, test, deploy to staging
tag — Tagged: skill, deployment, staging
retrieve_memory — "How do I deploy to staging?"
execute — AI follows stored procedure step by step
🗺 04 · Track Project Context (Fact, Context)

Architecture decisions, file conventions, API patterns — your AI keeps a living knowledge base of your project.

store_memory — Save API naming convention: /api/v1/{resource}
store_memory — Save "frontend uses React 19 with RSC"
retrieve_memory — "Create a new users endpoint"
generate — Follows project conventions automatically
🔧 05 · Cross-Session Debugging (Skill, Fact)

Remember past bugs, solutions, and workarounds. Your AI never solves the same problem twice from scratch.

store_memory — Save "CORS fix: add credentials: include + server allow-origin"
embed — Indexed with semantic meaning of the fix
retrieve_memory — Weeks later: "Getting CORS errors again"
fix — Instantly recalls the exact solution
📖 06 · Personal Knowledge Base (Fact)

Store research, meeting notes, reference material — anything you want your AI to know without re-explaining.

store_memory — Save meeting notes, API docs, team decisions
store_memory — Save research findings and benchmarks
retrieve_memory — "What did the team decide about caching?"
answer — Surfaces relevant knowledge across all stored memories

Ready to try it?

Add persistent memory to your AI in under a minute.

View Setup Guide

How It Works

Built on modern retrieval technology for fast, accurate memory recall.

🧮 Vector Embeddings — semantic understanding
🔍 Semantic Search — find by meaning
🏆 Cross-Encoder Reranking — precision refinement
🗃 Flexible Storage — SQLite or Firestore

Semantic, Not Keyword

Memories are embedded as vectors using OpenAI or Gemini models. Search finds memories by meaning, not exact words — "deployment process" finds your CI/CD procedure even if you never used that phrase.
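The "deployment process" lookup from that example might be issued as the MCP request below. The `query` and `top_k` argument names are illustrative assumptions, not the server's confirmed schema:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "retrieve_memory",
    "arguments": {
      "query": "deployment process",
      "top_k": 5
    }
  }
}
```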

Precision Reranking

Retrieval runs as a two-stage pipeline: a fast vector-similarity search narrows the candidates, then a cross-encoder reranks them for precision. You get the most relevant memories, not just the closest vectors.

Local or Cloud

Run locally with SQLite for full privacy, or deploy to the cloud with Firestore for access across devices. Same MCP interface either way — your AI doesn't need to know the difference.

Setup Guide

Add SuperMemory to your AI client in seconds.

💡
How SuperMemory Works

SuperMemory is an MCP server that gives any AI assistant persistent semantic memory. It runs alongside your AI client and provides three tools:

  1. store_memory — Save any text with optional tags. The server generates vector embeddings automatically.
  2. retrieve_memory — Search by meaning (semantic search) or by ID. Filter by tag. Results are reranked for precision.
  3. delete_memory — Remove outdated memories to keep your knowledge base clean.

Your AI learns to use these tools naturally. Say "remember this for next time" and it stores a memory. Ask about something from weeks ago and it retrieves the relevant context.
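Under the hood, each of these tools is invoked through the standard MCP `tools/call` request. As a sketch, a tag-filtered retrieval for the database decision from earlier might look like this (the `query` and `tag` field names are illustrative — the server's actual schema may differ):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "retrieve_memory",
    "arguments": {
      "query": "auth service database decision",
      "tag": "architecture"
    }
  }
}
```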

Add SuperMemory to your Claude Desktop configuration file:

claude_desktop_config.json
{
  "mcpServers": {
    "supermemory": {
      "command": "npx",
      "args": ["@nicepkg/supermemory"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
  1. Open Claude Desktop → Settings → Developer → Edit Config
  2. Paste the configuration above
  3. Replace sk-... with your OpenAI API key (for embeddings)
  4. Restart Claude Desktop

Connect SuperMemory to Claude on the web using the hosted server:

  1. Go to claude.ai → Settings → Integrations
  2. Click Add Integration
  3. Enter the MCP server URL:
https://supermemory.13afoundry.com/mcp

The hosted server uses Firestore for cloud-persistent storage. Your memories are available across all sessions.

ChatGPT supports MCP servers through Developer Mode:

  1. Open ChatGPT → Profile → Settings → Developer Mode
  2. In the developer console, add a new MCP server
  3. Enter the server URL:
https://supermemory.13afoundry.com/mcp

Once connected, ChatGPT can store and retrieve memories across conversations.

Add SuperMemory to your Cursor MCP configuration:

~/.cursor/mcp.json
{
  "mcpServers": {
    "supermemory": {
      "command": "npx",
      "args": ["@nicepkg/supermemory"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
  1. Create or edit ~/.cursor/mcp.json (Mac/Linux) or %USERPROFILE%\.cursor\mcp.json (Windows)
  2. Paste the configuration above
  3. Restart Cursor

SuperMemory works with any application that supports the Model Context Protocol.

For stdio-based clients (local): run the server with npx:

npx @nicepkg/supermemory

For HTTP-based clients (cloud): point to the hosted endpoint:

https://supermemory.13afoundry.com/mcp

Environment variables: set OPENAI_API_KEY for embeddings (or GEMINI_API_KEY with EMBEDDING_PROVIDER=gemini to use Google's models instead).
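For example, the same Claude Desktop or Cursor config can be switched to Google's embedding models by swapping the env block, using the variables named above:

```json
{
  "mcpServers": {
    "supermemory": {
      "command": "npx",
      "args": ["@nicepkg/supermemory"],
      "env": {
        "GEMINI_API_KEY": "...",
        "EMBEDDING_PROVIDER": "gemini"
      }
    }
  }
}
```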