Johnny

v0.2.0

Semantic Memory System for RAG Applications

A lightweight, project-agnostic semantic memory system designed to provide the retrieval layer for RAG (Retrieval-Augmented Generation) applications. Named after Johnny Mnemonic from the film of the same name, it enables storing facts, conversations, or documents with auto-generated embeddings and querying them by semantic similarity.

Tags: RAG · semantic-search · embeddings · vector · pgvector · prisma · OpenAI · AI · LLM

View on GitHub · View on npm

Quick Install

npm install @ticktockbent/johnny
License: MIT
Language: TypeScript

Key Features

Everything you need to add semantic memory capabilities to your AI applications.

Vector Embeddings

Powered by OpenAI's text-embedding-3-small model with 1536 dimensions for accurate semantic matching.

PostgreSQL + pgvector

Persistent storage and efficient similarity search using the pgvector extension.

Namespace Isolation

Separate memory spaces per user, project, or context for clean data organization.

Usage Tracking

Mention counts and cooldowns to prevent repetition in your AI applications.

TTL Expiration

Automatic cleanup of stale memories with configurable time-to-live settings.

Prisma Integration

Seamless database operations with full TypeScript support.
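To see how these features fit together, here is a minimal setup sketch. The MemoryService class name, its constructor options (prisma, openaiApiKey), and the ttl field are assumptions made for illustration, not a verbatim API; check the GitHub documentation for the exact initialization options.

import { MemoryService } from '@ticktockbent/johnny'; // assumed export name
import { PrismaClient } from '@prisma/client';

// Hypothetical wiring: Prisma talks to a PostgreSQL database with the
// pgvector extension enabled; embeddings come from OpenAI.
const memory = new MemoryService({
  prisma: new PrismaClient(),
  openaiApiKey: process.env.OPENAI_API_KEY // assumed option names
});

// Namespaces isolate memories per user, project, or context.
await memory.store({
  namespace: 'user-42',
  content: 'Prefers concise answers with code samples',
  type: 'fact',
  tags: ['preferences'],
  ttl: 60 * 60 * 24 * 30 // assumed field: expire after 30 days
});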

API Reference

Simple, intuitive methods for storing and retrieving semantic memories.

store()

Add single memories with metadata

storeMany()

Batch insert multiple memories

search()

Find semantically similar memories with filtering

get()

Retrieve specific memory by ID

update()

Modify metadata without re-embedding

delete()

Remove individual or grouped memories

recordMention()

Track memory usage for analytics

pruneExpired()

Automatic deletion of expired records
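Taken together, a typical write/read cycle might look like the sketch below. The method names come from the list above and the payload shapes mirror the mock example further down; option names such as limit and the update/delete signatures are assumptions, so treat this as an outline rather than exact reference code.

// Assumes `memory` is an initialized service instance.

// Batch insert related memories in one call.
await memory.storeMany([
  { namespace: 'project-x', content: 'API rate limit is 100 requests/min', type: 'fact' },
  { namespace: 'project-x', content: 'Deploys happen every Friday', type: 'fact' }
]);

// Semantic search scoped to a namespace.
const results = await memory.search({
  namespace: 'project-x',
  query: 'how often do we deploy?',
  limit: 5 // assumed option
});

// Fetch, update metadata (no re-embedding), and track usage.
const match = await memory.get(results[0].id);
await memory.update(match.id, { tags: ['deployment'] }); // assumed signature
await memory.recordMention(match.id);

// Periodic cleanup of expired memories.
await memory.pruneExpired();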

Built for Testing

Johnny exports a MockMemoryService for unit testing without database or API dependencies. It uses deterministic hash-based embeddings and maintains full API compatibility with the production service.

  • No database required for tests
  • No OpenAI API calls needed
  • Deterministic embeddings for predictable results
  • Full API compatibility
import { MockMemoryService } from '@ticktockbent/johnny';

// Create a mock service for testing
const memory = new MockMemoryService();

// Use the same API as production
await memory.store({
  namespace: 'test',
  content: 'Test memory content',
  type: 'fact',
  tags: ['test']
});

// Search returns predictable results
const results = await memory.search({
  namespace: 'test',
  query: 'memory content'
});
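In a test suite, the mock drops in wherever the production service would be injected. The example below uses Vitest purely for illustration (any runner works) and queries with the exact stored text, since hash-based embeddings map identical text to identical vectors; the array-shaped result is an assumption based on the snippet above.

import { describe, it, expect } from 'vitest';
import { MockMemoryService } from '@ticktockbent/johnny';

describe('semantic recall', () => {
  it('returns stored memories for a matching query', async () => {
    const memory = new MockMemoryService();

    await memory.store({
      namespace: 'test',
      content: 'Test memory content',
      type: 'fact',
      tags: ['test']
    });

    // Identical text yields identical hash embeddings, so results are stable.
    const results = await memory.search({
      namespace: 'test',
      query: 'Test memory content'
    });

    expect(results.length).toBeGreaterThan(0);
  });
});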

Ready to Get Started?

Check out the documentation on GitHub or install the package from npm to start building semantic memory into your AI applications.