Persistent AI Memory

Give your AI permanent memory

Context that persists across every AI conversation. Works with Claude, GPT, and Gemini.


Same conversation, different experience

Watch how the same project evolves over three days

Monday (no memory)

You: I'm building a React app with TypeScript for my startup.

AI Assistant: I'll help you with your React TypeScript project.

Every conversation starts from scratch.

See it in action

Real examples with Gemini CLI and Claude Code


Simple integration

Native SDKs with full TypeScript support

JavaScript
import { Recall } from 'r3';

// Zero configuration - works immediately
const recall = new Recall();

// Remember work context
await recall.add({
  content: 'Dashboard uses Next.js 14, TypeScript, and Tailwind CSS',
  userId: 'work'
});

// Remember personal context
await recall.add({
  content: 'Kids: Emma (8, loves robotics), Josh (5, into dinosaurs)',
  userId: 'personal'
});

// AI remembers across sessions
const context = await recall.search({
  query: 'What framework am I using?',
  userId: 'work'
});
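
The retrieved memories can then be folded into whatever prompt you send to the model. The sketch below is illustrative only: it assumes search() resolves to an array of entries that expose the stored text as a content field, which may not match the SDK's actual return shape.

// Illustrative sketch - assumes search() resolves to an array of { content } entries
const memories = context.map((entry) => entry.content).join('\n');

const prompt = [
  'Known project context:',
  memories,
  '',
  'Question: Which framework should new components use?'
].join('\n');

// Send `prompt` to Claude, GPT, or Gemini through their own client SDKs.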

Open-source memory layer

Every feature addresses a real pain point from day-to-day AI-assisted coding

AI Intelligence Engine

Real vector embeddings, entity extraction, and knowledge graphs - all running locally

Semantic Search

Find memories by meaning, not just keywords

Knowledge Graph

Build connections between people, projects, and technologies

<10ms Latency

Lightning-fast local processing with optimized embeddings

Redis-powered caching

In-memory data store for sub-millisecond response times

Automatic failover

Works offline with local Redis, syncs when online

Efficient storage

Compressed entries with automatic TTL management

MCP protocol compatible

Works with Claude Desktop, Gemini CLI, and any MCP client (a configuration sketch follows this feature list)

TypeScript SDK

Full type definitions with IntelliSense support

Local-first architecture

Embedded Redis server, no external dependencies
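
To give a rough idea of the MCP integration mentioned above: Claude Desktop registers memory servers in its claude_desktop_config.json under an mcpServers entry. Everything below is a placeholder sketch, not the project's documented setup; the server name, command, and package are assumptions (the 'r3' package name is reused from the SDK example), so check the project docs for the actual entry point.

{
  "mcpServers": {
    "recall": {
      "command": "npx",
      "args": ["-y", "r3"]
    }
  }
}

Gemini CLI accepts a similar mcpServers block in its own settings file, and any other MCP client can point at the same server.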

Redis caching. Mem0 persistence. Zero configuration.
Start building context-aware AI applications.