AI Intelligence Features
r3 v1.3.0 introduces powerful AI intelligence features that automatically enhance your memory storage with semantic understanding, entity extraction, and knowledge graph construction.
Overview
The AI intelligence features are enabled by default and provide:
- Real vector embeddings for semantic search
- Automatic entity extraction from text
- Relationship mapping between entities
- Knowledge graph construction
- Multi-factor relevance scoring
All processing happens 100% locally with no external API calls, maintaining privacy and speed.
Entity Extraction
Every memory is automatically analyzed to extract meaningful entities:
Extracted Entity Types
- People - Names and references to individuals
- Organizations - Companies, teams, groups
- Technologies - Programming languages, frameworks, tools
- Projects - Project names and initiatives
- Dates - Temporal references and timelines
- Places - Locations and geographical references
Example
```typescript
const memory =
  "Sarah from Marketing works on the Dashboard project with React and TypeScript";

// Automatically extracts:
// - People: ["Sarah"]
// - Organizations: ["Marketing"]
// - Projects: ["Dashboard"]
// - Technologies: ["React", "TypeScript"]
// - Relationships: [
//     { from: "Sarah", to: "Marketing", type: "WORKS_FOR" },
//     { from: "Dashboard", to: "React", type: "USES" }
//   ]
```
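As a rough illustration of how rule-based entity spotting can work, the sketch below flags capitalized tokens as candidate entities. This is a deliberately naive heuristic for illustration only; r3's actual extraction uses wink-nlp's NER, and `spotCandidateEntities` is a hypothetical helper, not part of the API.

```typescript
// Hypothetical sketch: naive capitalization-based entity spotting.
// r3's real pipeline uses wink-nlp NER; this only illustrates the idea.
function spotCandidateEntities(text: string): string[] {
  return text
    .split(/\s+/)
    // Strip punctuation so "project." and "project" compare the same.
    .map((token) => token.replace(/[^A-Za-z]/g, ""))
    // Keep words that start with an uppercase letter.
    .filter((word) => word.length > 1 && word[0] === word[0].toUpperCase());
}

const found = spotCandidateEntities(
  "Sarah from Marketing works on the Dashboard project with React and TypeScript"
);
// found: ["Sarah", "Marketing", "Dashboard", "React", "TypeScript"]
```

A real extractor also classifies each candidate (person vs. technology vs. project), which requires part-of-speech context rather than capitalization alone.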
Semantic Search
Search memories by meaning, not just keywords:
How It Works
- Vector Embeddings - Each memory is converted to a 384-dimensional vector
- Semantic Similarity - Find memories with similar meaning
- Multi-factor Scoring - Combines multiple relevance signals
Relevance Scoring Algorithm
```typescript
// Final score: weighted sum of five signals, each normalized to [0, 1]
const relevanceScore =
  semanticSimilarity * 0.5 +  // Meaning-based matching
  keywordOverlap * 0.2 +      // Traditional text matching
  entityOverlap * 0.15 +      // Shared entities
  recencyBonus * 0.1 +        // Prefer recent memories
  accessFrequency * 0.05;     // Popular memories rank higher
```
Example Usage
```typescript
// Semantic search finds related concepts
const results = await recall.search({
  query: "machine learning and AI",
  limit: 5,
});

// Will find memories about:
// - "neural networks and deep learning"
// - "artificial intelligence applications"
// - "ML models and training data"
// Even without exact keyword matches!
```
Knowledge Graph
Build a connected graph of your knowledge:
Graph Structure
```typescript
interface KnowledgeGraph {
  nodes: Array<{
    id: string;
    type: "person" | "organization" | "technology" | "project";
    name: string;
    mentions: number;
  }>;

  edges: Array<{
    from: string;
    to: string;
    type: RelationshipType;
    confidence: number;
  }>;
}
```
Relationship Types
- WORKS_FOR - Person works at organization
- MANAGES - Person manages person/project
- USES - Project uses technology
- BUILT_WITH - Created using technology
- DEPENDS_ON - Technical dependency
- INTEGRATES_WITH - System integration
- LOCATED_IN - Geographical relationship
- PART_OF - Hierarchical relationship
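The `RelationshipType` referenced in the graph interface above is not defined on this page; a plausible TypeScript definition covering the listed types would be the following union (an assumption based on this list, not necessarily r3's exact declaration):

```typescript
// Assumed definition of RelationshipType, derived from the list above.
type RelationshipType =
  | "WORKS_FOR"
  | "MANAGES"
  | "USES"
  | "BUILT_WITH"
  | "DEPENDS_ON"
  | "INTEGRATES_WITH"
  | "LOCATED_IN"
  | "PART_OF";

// Example edge using the type:
const exampleEdge: { from: string; to: string; type: RelationshipType; confidence: number } = {
  from: "Dashboard",
  to: "React",
  type: "USES",
  confidence: 0.92,
};
```

A string-literal union keeps edge construction type-safe while matching the plain strings shown in the extraction example.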
MCP Tools
When using r3 as an MCP server with Claude or other LLMs:
```
# Extract entities from text
extract_entities(text: string)

# Query knowledge graph
get_knowledge_graph(
  entity_type?: string,
  entity_name?: string,
  relationship_type?: string,
  limit?: number
)

# Find connections between entities
find_connections(
  from_entity: string,
  to_entity?: string,
  max_depth?: number
)
```
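Connection finding of this kind is typically a depth-limited breadth-first search over the graph's edges (the Performance table below notes "BFS with depth limit"). Here is a minimal sketch of that idea; the `Edge` shape and `findPath` helper are illustrative assumptions, not r3's internal representation.

```typescript
// Sketch of depth-limited BFS between two entities. Edges are treated as
// undirected for connection-finding; `maxDepth` bounds the path length.
type Edge = { from: string; to: string };

function findPath(edges: Edge[], start: string, goal: string, maxDepth = 3): string[] | null {
  const queue: string[][] = [[start]]; // each entry is a partial path
  const visited = new Set<string>([start]);
  while (queue.length > 0) {
    const path = queue.shift()!;
    const node = path[path.length - 1];
    if (node === goal) return path;
    if (path.length > maxDepth) continue; // enforce the depth limit
    for (const e of edges) {
      const next = e.from === node ? e.to : e.to === node ? e.from : null;
      if (next !== null && !visited.has(next)) {
        visited.add(next);
        queue.push([...path, next]);
      }
    }
  }
  return null; // no connection within maxDepth hops
}
```

BFS guarantees the first path found is the shortest, which is usually what "how are these entities connected?" queries want.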
Performance
All AI features are optimized for speed:
| Operation | Latency | Notes |
|---|---|---|
| Embedding generation | <5ms | 384-dimensional vectors |
| Entity extraction | <10ms | Using wink-nlp |
| Semantic search | <10ms | For 1000+ memories |
| Graph traversal | <5ms | BFS with depth limit |
Configuration
Default Mode (AI Enabled)
```typescript
// AI features are enabled by default
const recall = new Recall();

// Or explicitly
const recall = new Recall({
  intelligenceMode: "enhanced",
});
```
Basic Mode (Opt-out)
```typescript
// Disable AI features if needed
const recall = new Recall({
  intelligenceMode: "basic",
});
```

```bash
# Or via environment variable
INTELLIGENCE_MODE=basic npx r3

# Or via CLI flag
npx r3 --basic
```
Technical Details
Embedding Model
- Model: all-MiniLM-L6-v2
- Dimensions: 384
- Library: transformers.js
- Processing: CPU-optimized
- Cache: Embeddings are cached for reuse
NLP Engine
- Library: wink-nlp
- Model: wink-eng-lite-web-model
- Features: Tokenization, POS tagging, NER
- Language: English
Vector Storage
- Library: Vectra
- Index: Local file-based
- Search: Cosine similarity
- Updates: Incremental indexing
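The cosine-similarity search mentioned above reduces to `dot(a, b) / (|a| * |b|)` over the stored vectors. A generic sketch (not Vectra's internal code) looks like this:

```typescript
// Cosine similarity between two equal-length vectors.
// Values near 1 mean the underlying memories are semantically close;
// orthogonal (unrelated) vectors score 0.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

A search over the index then amounts to scoring the query vector against each stored vector and keeping the top-k results; Vectra's file-based index handles that ranking for you.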
Privacy & Security
- 100% Local Processing - No data leaves your machine
- No External APIs - All models run locally
- Cached Models - Downloaded once, used offline
- Encrypted Storage - Optional encryption for vectors
Examples
Building a Personal Knowledge Base
```typescript
// Store memories with automatic intelligence
await recall.add({
  content:
    "Met with Dr. Chen about the AI research project. She suggested using transformer models for better accuracy.",
  userId: "researcher",
});

// Later, find connections
const connections = await recall.findConnections({
  from: "Dr. Chen",
  to: "transformer models",
});
// Returns: Dr. Chen -> AI research project -> transformer models
```
Project Context Management
```typescript
// Store project context
await recall.add({
  content:
    "Dashboard project uses React v18, TypeScript v5, and connects to PostgreSQL database on AWS RDS.",
  userId: "project-dashboard",
});

// Query technology stack
const techStack = await recall.getKnowledgeGraph({
  entityType: "technology",
  userId: "project-dashboard",
});
```
Team Knowledge Sharing
```typescript
// Store team information
await recall.add({
  content:
    "Sarah leads the frontend team and reports to Mike. She specializes in React and accessibility.",
  userId: "team",
});

// Find team relationships
const teamGraph = await recall.getKnowledgeGraph({
  relationshipType: "REPORTS_TO",
  userId: "team",
});
```
Troubleshooting
High Memory Usage
The embedding model uses ~100MB RAM. To reduce memory:
```typescript
// Use basic mode for low-memory environments
const recall = new Recall({
  intelligenceMode: "basic",
});
```
Slow First Load
Models (~50MB) are downloaded on first use. This happens only once; subsequent runs load them from the local cache.
Entity Extraction Accuracy
For better extraction:
- Use proper capitalization for names
- Include context around entities
- Use full sentences when possible
What's Next
Future enhancements planned:
- Multi-language support
- Custom entity types
- Graph visualization API
- Clustering and topic modeling
- Incremental learning from feedback