Memoire
How it works

Concepts

The mental model you need to get the most out of Memi. None of this is required reading — but if you're going to ship serious work through Memi, knowing what's happening behind the curtain helps.

Procedures

Every task Memi takes on runs through a procedure — a sequence of phases (research → plan → code → verify → PR → demo → summary). Each phase is a separate prompt with a tight, role-specific system message and only the tools it needs. That's why Memi rarely sprawls or hallucinates: research-phase Memi can't edit files, code-phase Memi can't open new browser tabs, and so on.

There are five built-in procedures: code, debugger, question, pr-review, and batch. The brain picks one when classifying your message; you can override by being explicit ("just review this PR" → pr-review).
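The phase-and-allowlist idea can be sketched in a few lines. Everything below (the `Phase` dataclass, the tool names, the prompts) is an illustrative assumption, not Memi's actual internals:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Phase:
    name: str
    system_prompt: str
    tools: frozenset  # only these tools are exposed to the model in this phase

# Hypothetical "code" procedure: research can read and browse but not write;
# the coding phase can edit files but cannot open browser tabs.
CODE_PROCEDURE = [
    Phase("research", "You investigate the codebase.", frozenset({"read_file", "search", "browse"})),
    Phase("plan", "You write an implementation plan.", frozenset({"read_file"})),
    Phase("code", "You implement the plan.", frozenset({"read_file", "edit_file", "run_command"})),
    Phase("verify", "You run checks and report results.", frozenset({"run_command"})),
]

def allowed(phase: Phase, tool: str) -> bool:
    """A tool call outside the phase's allowlist is simply rejected."""
    return tool in phase.tools
```

The point of the structure is that the restriction is enforced by construction: a phase never even sees the tools it isn't meant to use.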

Governed memory

Memi has a persistent memory layer (vectors + structured blocks) that survives across conversations. Each phase pulls a tiered slice of that memory — research gets broad recall, the coding phase only sees coding conventions, the summary phase sees nothing. This saves about 38% of tokens vs. dumping all memory into every call, and stops unrelated context from leaking into a focused subroutine.
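Tiered recall amounts to filtering the memory store per phase. The block names below come from the docs; the tier map itself and the function shape are assumptions for illustration:

```python
# Hypothetical tier map: which memory blocks each phase may see.
MEMORY_TIERS = {
    "research": {"company", "learnings", "preferences", "codebase"},  # broad recall
    "code": {"codebase"},                                             # coding conventions only
    "summary": set(),                                                 # sees nothing
}

def memory_slice(phase: str, memory: dict) -> dict:
    """Return only the memory blocks this phase is allowed to read."""
    visible = MEMORY_TIERS.get(phase, set())
    return {k: v for k, v in memory.items() if k in visible}
```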

You can read and tune what's in memory from the dashboard — the four core blocks are company, learnings, preferences, and codebase. If you want Memi to remember "our test runner is vitest, not jest," just tell it once in Slack — it writes that to preferences automatically.

BYO coding agents

Memi doesn't generate code itself. It spawns Claude Code or Codex as a subprocess and uses your credentials. Two reasons:

  1. Cost. The biggest line item in any code-shipping AI is the model call. By keeping that on your account, your Memoire bill stays small and predictable — you pay for orchestration credits, not raw token volume.
  2. Trust. You decide which model touches your code, and you can audit usage in your provider's dashboard exactly like a human dev's. We never see your code or your prompts; we just see "agent ran for 4m 12s, exited cleanly."
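In spirit, the runner is a thin subprocess wrapper: spawn the user's own agent CLI, discard its output, and keep only exit status and duration. This is a sketch of that idea, not Memoire's actual runner:

```python
import subprocess
import time

def run_coding_agent(cmd: list, timeout_s: int = 1800) -> dict:
    """Spawn the user's own coding agent (e.g. a Claude Code or Codex CLI)
    as a child process; it reads the user's API credentials from the
    environment. Only the exit code and wall-clock duration are recorded;
    the transcript stays between the agent and the provider."""
    start = time.monotonic()
    # Output is deliberately discarded here: the orchestrator observes
    # "ran for Xs, exited with code N" and nothing else.
    proc = subprocess.run(cmd, timeout=timeout_s,
                          stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return {"exit_code": proc.returncode,
            "duration_s": round(time.monotonic() - start, 1)}
```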

Verification + retry

After every code phase, Memi runs the project's test, lint, and type-check commands. If they fail, it loops back into the coding phase with the failure output as context, up to four times, before giving up. Most teams set the loop limit lower ("just try twice, then ask me") via workspace settings.
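The loop itself is simple enough to sketch. Function names, the feedback mechanism, and the return shape are illustrative assumptions, not Memi's API:

```python
def verify_and_retry(run_code_phase, run_checks, max_attempts: int = 4) -> dict:
    """Run the code phase, then the project's test/lint/type-check commands.
    On failure, feed the check output back into another code phase, up to
    max_attempts times (configurable lower in workspace settings)."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        run_code_phase(feedback)
        ok, output = run_checks()
        if ok:
            return {"passed": True, "attempts": attempt}
        feedback = output  # failure output becomes context for the next attempt
    return {"passed": False, "attempts": max_attempts}
```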

Demo capture

For tasks that actually change behaviour, Memi spins up a Screenbox sandbox after the PR is ready, walks through the new flow with a real cursor on a real desktop, and posts the resulting screen recording in your Slack thread. This is the "demo video" you've seen — it isn't a render or a synthesis, it's the actual feature running.

Credits

Memoire bills in credits: one currency for everything Memi does on our infrastructure (orchestration, memory, demo capture). Coding-agent tokens (Claude/Codex) are billed by your provider, not us — you bring your own key and they bill you directly. The trial gives you 10,000 credits for 14 days; pricing for ongoing plans is on the pricing page.

Reliability

The gateway has graceful shutdown, a process watchdog that kills runaway runners after a configurable timeout, LLM retry with exponential backoff, and a per-block memory mutex so concurrent edits never corrupt state. Health is exposed at /dashboard/status — version, Fly machine, recent errors, and live event log.
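Of those mechanisms, retry with exponential backoff is the most generic. A minimal sketch (function name, delay constants, and retry count are assumptions, not the gateway's actual values):

```python
import random
import time

def retry_with_backoff(call, max_retries: int = 5, base_delay_s: float = 0.5):
    """Retry a flaky call (e.g. an LLM request) with exponential backoff
    plus a little jitter. Delays grow as base * 2**attempt; the final
    failure is re-raised to the caller."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay_s * (2 ** attempt) + random.uniform(0, 0.1))
```

Jitter matters here: without it, many runners retrying a shared upstream would hammer it in lockstep.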