Memoire
// FEATURES

Everything your AI agents
need to remember

Sessions, facts, documentation, and project activity captured, enriched, governed, and retrieved across every tool your team uses.

Platform layers
Capture

Sessions, facts, docs, and activity collected from the tools your team already uses.

Assemble

Relevant context ranked and packed into prompts without blowing the token budget.

Operate

Memi and other agents can work on top of the same context layer instead of starting cold.

// CAPABILITIES

A context system, not a bag of point features

Each layer solves a different continuity problem, from remembering work to grounding answers in exact references to feeding the right context into the right execution phase.

// MEMORY

SESSION MEMORY

Every AI session is captured and indexed. Prompts, tool calls, decisions, code changes, and summaries are stored with full provenance. When an agent starts a new session, it picks up exactly where the last one stopped.

  • Automatic session capture across all connected tools
  • Timeline-based event storage with metadata
  • SPO (Subject-Predicate-Object) fact extraction
  • Contradiction detection and fact superseding
  • Token-budgeted prompt assembly from memory
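The contradiction-detection and superseding behavior described above can be sketched as follows. This is a minimal illustration, not Memoire's implementation: the `Fact` and `FactStore` names and the same-subject-same-predicate superseding rule are assumptions for the sake of the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Fact:
    subject: str
    predicate: str
    object: str
    superseded: bool = False
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class FactStore:
    """Toy SPO store: a new fact with the same subject and predicate
    but a different object supersedes the older fact."""

    def __init__(self):
        self.facts: list[Fact] = []

    def add(self, subject: str, predicate: str, obj: str) -> None:
        for f in self.facts:
            if (not f.superseded and f.subject == subject
                    and f.predicate == predicate and f.object != obj):
                f.superseded = True  # contradiction detected: retire the old fact
        self.facts.append(Fact(subject, predicate, obj))

    def active(self) -> list[Fact]:
        return [f for f in self.facts if not f.superseded]

store = FactStore()
store.add("api", "uses_auth", "session cookies")
store.add("api", "uses_auth", "OAuth 2.0")  # contradicts the first fact
print([(f.subject, f.predicate, f.object) for f in store.active()])
```

Superseded facts are kept rather than deleted, which preserves the provenance trail the session memory relies on.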
// SHARING

CROSS-CLIENT MEMORY

Cursor, Claude Code, Windsurf, Codex, and any MCP-compatible tool share one unified memory backend. Context discovered in one tool is instantly available to all others. No more re-explaining your architecture when you switch tools.

  • Unified memory layer via MCP protocol
  • Works with Cursor, Claude Code, Windsurf, Codex
  • Team-wide memory sharing with access controls
  • Real-time sync across all connected clients
  • Zero config required per client
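For a sense of what wiring an MCP-compatible client to a shared memory server typically looks like, here is a hypothetical client configuration entry. The server name, command, and package name are placeholders, not Memoire's documented setup.

```python
import json

# Hypothetical MCP client entry. "memoire-mcp" is a placeholder
# package name, not a real documented command.
mcp_config = {
    "mcpServers": {
        "memoire": {
            "command": "npx",
            "args": ["-y", "memoire-mcp"],
        }
    }
}
print(json.dumps(mcp_config, indent=2))
```

Because every client points at the same server entry, context written from one tool is readable from all the others.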
// DOCS

VERSION-AWARE DOCUMENTATION

Memoire reads your package.json, detects your dependency versions, and retrieves documentation matching your exact versions. No more answers based on stale training data or wrong API signatures.

  • Automatic dependency detection from package.json
  • Version-matched documentation retrieval
  • Indexed and searchable doc corpus
  • LanceDB vector search for semantic queries
  • Supports npm, PyPI, and custom doc sources
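The dependency-detection step can be sketched as below: parse package.json, merge the dependency maps, and strip semver range operators so each package resolves to a base version that can key a doc lookup. A minimal sketch; the function name and normalization rule are assumptions.

```python
import json
import re

def detect_versions(package_json_text: str) -> dict[str, str]:
    """Map each dependency to its base version by stripping semver
    range operators (^, ~, >=, etc.) from the version spec."""
    pkg = json.loads(package_json_text)
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    return {name: re.sub(r"^[\^~>=<\s]+", "", spec) for name, spec in deps.items()}

sample = '{"dependencies": {"react": "^18.2.0", "zod": "~3.22.4"}}'
print(detect_versions(sample))  # {'react': '18.2.0', 'zod': '3.22.4'}
```

The resulting version key is what lets retrieval prefer docs for React 18.2 over whatever version dominated the model's training data.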
// CONTEXT

PROMPT ASSEMBLY

Memoire ranks and assembles search results, timelines, facts, and documentation into a single token-budgeted prompt block. Your agents get the most relevant context without blowing token limits.

  • Intelligent ranking by relevance and recency
  • Token budget enforcement
  • Multi-source aggregation (memory, docs, connectors)
  • Configurable context windows
  • Automatic summarization for older context
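Ranking plus budget enforcement boils down to a greedy packing loop, sketched below. The scoring function, the word-count token estimate, and the snippet shape are illustrative assumptions, not Memoire's actual algorithm.

```python
def assemble(snippets, budget, est_tokens=lambda s: len(s.split())):
    """Greedy packing: rank candidate snippets by a combined
    relevance+recency score, then add them until the budget is spent."""
    ranked = sorted(snippets, key=lambda s: s["relevance"] + s["recency"], reverse=True)
    picked, used = [], 0
    for s in ranked:
        cost = est_tokens(s["text"])
        if used + cost <= budget:  # skip anything that would bust the budget
            picked.append(s["text"])
            used += cost
    return "\n\n".join(picked)

snippets = [
    {"text": "Auth uses OAuth 2.0 with PKCE", "relevance": 0.9, "recency": 0.8},
    {"text": "CI runs on every push", "relevance": 0.4, "recency": 0.9},
    {"text": "Old migration notes " * 20, "relevance": 0.3, "recency": 0.1},
]
prompt = assemble(snippets, budget=15)
```

A real system would use an actual tokenizer for the cost estimate, but the shape of the loop is the same: rank, then fill until the budget runs out.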
// GOVERNED

GOVERNED MEMORY INJECTION

Based on the Governed Memory paper (arXiv:2603.17787), Memoire uses a dual-model architecture with tiered access: research phases get broad context, coding phases get only conventions and relevant code, and summary phases get none. This reduces token usage by approximately 38% while improving output quality.

  • Per-phase memory injection policies
  • Dual-model architecture (router + executor)
  • Entity isolation for multi-tenant safety
  • Tiered access: broad, conventions-only, none
  • 38% average token savings measured
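The tiered, per-phase policy described above can be expressed as a simple filter over memory sources. A minimal sketch: the phase names follow the text (research is broad, coding is conventions-only, summary gets nothing), but the source keys and policy table are assumptions.

```python
# Per-phase injection policy: which memory sources each phase may see.
POLICIES = {
    "research": {"facts", "docs", "sessions", "code"},  # broad context
    "coding":   {"conventions", "code"},                # narrow: conventions + code
    "summary":  set(),                                  # no injection at all
}

def inject(phase: str, memory: dict) -> dict:
    """Return only the memory sources the phase's policy allows."""
    allowed = POLICIES.get(phase, set())
    return {k: v for k, v in memory.items() if k in allowed}

memory = {
    "facts": ["api uses OAuth 2.0"],
    "docs": ["react@18.2.0 hooks reference"],
    "conventions": ["2-space indent", "named exports only"],
    "sessions": ["2024-05-01: refactored auth module"],
}
```

Filtering at injection time, rather than trusting the executor model to ignore irrelevant context, is what produces the token savings: tokens that a phase cannot use are never sent.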
// MEMI

AI ENGINEER IN SLACK

Memi is your AI engineer that lives in Slack. Post "@memi add OAuth login" and it researches, plans, codes, tests, creates a PR, and posts the result back to your thread. Full transparency, full control.

  • Mention @memi in any Slack channel
  • Automatic research, planning, and coding
  • Test and lint verification with retries
  • PR creation with full context
  • Thread-based progress updates
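The research-plan-code-test-PR sequence with retries can be sketched as a phase pipeline. The phase names come from the description above; the handler interface and retry count are illustrative assumptions, not Memi's internals.

```python
# Hypothetical sketch of a phase pipeline with retry-on-failure.
PHASES = ["research", "plan", "code", "test", "create_pr", "post_update"]

def run_task(task, handlers, max_retries=2):
    """Run each phase in order; retry a failing phase up to
    max_retries times before aborting the whole task."""
    results = {}
    for phase in PHASES:
        for _attempt in range(max_retries + 1):
            ok, output = handlers[phase](task, results)
            if ok:
                results[phase] = output
                break
        else:  # every attempt failed: stop and report the failing phase
            return {"status": "failed", "phase": phase, **results}
    return {"status": "done", **results}

# Stub handlers that always succeed, for illustration.
handlers = {p: (lambda task, results, p=p: (True, f"{p} complete")) for p in PHASES}
result = run_task("add OAuth login", handlers)
```

Each phase sees the accumulated results of the earlier ones, which mirrors how the test phase needs the code phase's output and the PR phase needs both.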

Give your AI agents a memory

Bring shared context, current docs, and governed retrieval into the workflows your team already uses.