AI Intelligence Features

r3 v1.3.0 introduces powerful AI intelligence features that automatically enhance your memory storage with semantic understanding, entity extraction, and knowledge graph construction.

Overview

The AI intelligence features are enabled by default and provide:

  • Real vector embeddings for semantic search
  • Automatic entity extraction from text
  • Relationship mapping between entities
  • Knowledge graph construction
  • Multi-factor relevance scoring

All processing happens 100% locally with no external API calls, maintaining privacy and speed.
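
As a quick illustration, here is a minimal sketch of storing a memory and searching it by meaning, using the recall.add and recall.search calls covered later on this page (the import path for Recall is assumed):

TypeScript
import { Recall } from "r3"; // assumed import path

const recall = new Recall(); // AI features are on by default

// Store a memory; entities, relationships, and embeddings are extracted automatically
await recall.add({
  content: "Sarah from Marketing works on the Dashboard project with React",
  userId: "demo",
});

// Search by meaning rather than exact keywords
const results = await recall.search({
  query: "who works on our frontend?",
  limit: 3,
});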

Entity Extraction

Every memory is automatically analyzed to extract meaningful entities:

Extracted Entity Types

  • People - Names and references to individuals
  • Organizations - Companies, teams, groups
  • Technologies - Programming languages, frameworks, tools
  • Projects - Project names and initiatives
  • Dates - Temporal references and timelines
  • Places - Locations and geographical references

Example

TypeScript
const memory =
  "Sarah from Marketing works on the Dashboard project with React and TypeScript";

// Automatically extracts:
// - People: ["Sarah"]
// - Organizations: ["Marketing"]
// - Projects: ["Dashboard"]
// - Technologies: ["React", "TypeScript"]
// - Relationships: [
//     { from: "Sarah", to: "Marketing", type: "WORKS_FOR" },
//     { from: "Dashboard", to: "React", type: "USES" }
//   ]

Semantic Search

Search memories by meaning, not just keywords:

How It Works

  1. Vector Embeddings - Each memory is converted to a 384-dimensional vector
  2. Semantic Similarity - Memories are ranked by cosine similarity between their embedding vectors (see the sketch after this list)
  3. Multi-factor Scoring - Combines multiple relevance signals
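
Semantic similarity between two embedding vectors is measured with cosine similarity (the same metric the vector store uses; see Technical Details). A minimal, self-contained sketch, independent of r3's internals:

TypeScript
// Cosine similarity between two embedding vectors (illustrative helper, not part of the r3 API)
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}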

Relevance Scoring Algorithm

TypeScript
// Final score calculation
const relevanceScore =
  semanticSimilarity * 0.5 +   // Meaning-based matching
  keywordOverlap * 0.2 +       // Traditional text matching
  entityOverlap * 0.15 +       // Shared entities
  recencyBonus * 0.1 +         // Prefer recent memories
  accessFrequency * 0.05;      // Popular memories rank higher

Example Usage

TypeScript
// Semantic search finds related concepts
const results = await recall.search({
  query: "machine learning and AI",
  limit: 5,
});

// Will find memories about:
// - "neural networks and deep learning"
// - "artificial intelligence applications"
// - "ML models and training data"
// Even without exact keyword matches!

Knowledge Graph

Build a connected graph of your knowledge:

Graph Structure

TypeScript
interface KnowledgeGraph {
  nodes: Array<{
    id: string;
    type: "person" | "organization" | "technology" | "project";
    name: string;
    mentions: number;
  }>;

  edges: Array<{
    from: string;
    to: string;
    type: RelationshipType;
    confidence: number;
  }>;
}

Relationship Types

  • WORKS_FOR - Person works at organization
  • MANAGES - Person manages person/project
  • USES - Project uses technology
  • BUILT_WITH - Created using technology
  • DEPENDS_ON - Technical dependency
  • INTEGRATES_WITH - System integration
  • LOCATED_IN - Geographical relationship
  • PART_OF - Hierarchical relationship
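
For example, given a graph in the shape defined above, you can filter its edges by relationship type (illustrative snippet; graph is assumed to be a KnowledgeGraph value returned by a query):

TypeScript
// List which technologies each project uses
const usesEdges = graph.edges.filter((edge) => edge.type === "USES");

for (const edge of usesEdges) {
  console.log(`${edge.from} uses ${edge.to} (confidence: ${edge.confidence})`);
}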

MCP Tools

When r3 runs as an MCP server with Claude or other LLMs, the following tools are available:

Bash
# Extract entities from text
extract_entities(text: string)

# Query knowledge graph
get_knowledge_graph(
  entity_type?: string,
  entity_name?: string,
  relationship_type?: string,
  limit?: number
)

# Find connections between entities
find_connections(
  from_entity: string,
  to_entity?: string,
  max_depth?: number
)
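
For context, an MCP client such as Claude invokes these tools with a standard "tools/call" request; a hypothetical payload for find_connections is shown below (the wire format is normally handled by the MCP SDK, so this is purely illustrative):

TypeScript
// Illustrative MCP "tools/call" request body (JSON-RPC 2.0)
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "find_connections",
    arguments: {
      from_entity: "Sarah",
      to_entity: "React",
      max_depth: 3,
    },
  },
};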

Performance

All AI features are optimized for speed:

Operation            | Latency | Notes
Embedding generation | <5ms    | 384-dimensional vectors
Entity extraction    | <10ms   | Using wink-nlp
Semantic search      | <10ms   | For 1000+ memories
Graph traversal      | <5ms    | BFS with depth limit
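
The graph traversal row refers to a breadth-first search bounded by a depth limit; a simplified sketch of that idea over the KnowledgeGraph edges defined earlier (not r3's actual implementation):

TypeScript
// Depth-limited BFS over knowledge-graph edges (illustrative only)
function findReachable(graph: KnowledgeGraph, start: string, maxDepth: number): Set<string> {
  const visited = new Set<string>([start]);
  let frontier = [start];

  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const node of frontier) {
      for (const edge of graph.edges) {
        if (edge.from === node && !visited.has(edge.to)) {
          visited.add(edge.to);
          next.push(edge.to);
        }
      }
    }
    frontier = next;
  }
  return visited;
}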

Configuration

Default Mode (AI Enabled)

TypeScript
// AI features are enabled by default
const recall = new Recall();

// Or explicitly
const recall = new Recall({
  intelligenceMode: "enhanced",
});

Basic Mode (Opt-out)

TypeScript
// Disable AI features if needed
const recall = new Recall({
  intelligenceMode: "basic",
});

// Or via environment variable:
//   INTELLIGENCE_MODE=basic npx r3

// Or via CLI flag:
//   npx r3 --basic

Technical Details

Embedding Model

  • Model: all-MiniLM-L6-v2
  • Dimensions: 384
  • Library: transformers.js
  • Processing: CPU-optimized
  • Cache: Embeddings are cached for reuse
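
Under the hood this likely corresponds to a transformers.js feature-extraction pipeline; a rough sketch assuming the @xenova/transformers package (r3 handles this internally, so you normally never call it yourself):

TypeScript
import { pipeline } from "@xenova/transformers"; // assumed package name for transformers.js

// Load the embedding model once (downloaded on first use, then cached locally)
const embed = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");

// Mean-pooled, normalized embedding for a piece of text (384 numbers)
const output = await embed("Sarah works on the Dashboard project", {
  pooling: "mean",
  normalize: true,
});
const vector = Array.from(output.data);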

NLP Engine

  • Library: wink-nlp
  • Model: wink-eng-lite-web-model
  • Features: Tokenization, POS tagging, NER
  • Language: English
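
A rough sketch of what entity extraction with wink-nlp looks like (standard wink-nlp usage; r3's actual extraction pipeline may add its own rules on top):

TypeScript
import winkNLP from "wink-nlp";
import model from "wink-eng-lite-web-model";

const nlp = winkNLP(model);
const doc = nlp.readDoc("Sarah from Marketing works on the Dashboard project.");

// Built-in named entities (dates, times, quantities, etc.) with their types
const entities = doc.entities().out(nlp.its.detail);
// e.g. [{ value: "...", type: "..." }]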

Vector Storage

  • Library: Vectra
  • Index: Local file-based
  • Search: Cosine similarity
  • Updates: Incremental indexing

Privacy & Security

  • 100% Local Processing - No data leaves your machine
  • No External APIs - All models run locally
  • Cached Models - Downloaded once, used offline
  • Encrypted Storage - Optional encryption for vectors

Examples

Building a Personal Knowledge Base

TypeScript
// Store memories with automatic intelligence
await recall.add({
  content:
    "Met with Dr. Chen about the AI research project. She suggested using transformer models for better accuracy.",
  userId: "researcher",
});

// Later, find connections
const connections = await recall.findConnections({
  from: "Dr. Chen",
  to: "transformer models",
});
// Returns: Dr. Chen -> AI research project -> transformer models

Project Context Management

TypeScript
// Store project context
await recall.add({
  content:
    "Dashboard project uses React v18, TypeScript v5, and connects to PostgreSQL database on AWS RDS.",
  userId: "project-dashboard",
});

// Query technology stack
const techStack = await recall.getKnowledgeGraph({
  entityType: "technology",
  userId: "project-dashboard",
});

Team Knowledge Sharing

TypeScript
// Store team information
await recall.add({
  content:
    "Sarah leads the frontend team and reports to Mike. She specializes in React and accessibility.",
  userId: "team",
});

// Find team relationships
const teamGraph = await recall.getKnowledgeGraph({
  relationshipType: "REPORTS_TO",
  userId: "team",
});

Troubleshooting

High Memory Usage

The embedding model uses roughly 100 MB of RAM. To reduce memory usage:

TypeScript
// Use basic mode for low-memory environments
const recall = new Recall({
  intelligenceMode: "basic",
});

Slow First Load

Models (~50 MB) are downloaded on first use. This is a one-time download; subsequent loads use the local cache.

Entity Extraction Accuracy

For better extraction:

  • Use proper capitalization for names
  • Include context around entities
  • Use full sentences when possible

What's Next

Planned enhancements include:

  • Multi-language support
  • Custom entity types
  • Graph visualization API
  • Clustering and topic modeling
  • Incremental learning from feedback