// Build

Artificial intelligence identity and memory system.

// What it is

A persistent identity and memory system for AI agents.

Every agent I work with carries a structured record of who they are, what they know, who they work with, and how they like to work. The system remembers so the agent doesn't have to.

It sits on five layers. Each one holds a different kind of information and evolves on its own rules. Agents keep their character across sessions without losing the ability to grow.

// Why I built it

Control, separation, portability.

Claude and ChatGPT both have memory now. Both handle it as one pool of context per user, per platform.

I wanted something different. A team, not a personal assistant. Each agent with its own identity, its own expertise, its own record of what it's worked on. No bleed between the technical work, the writing work, and the finance work.

I wanted to choose the guardrails. What gets saved, what gets decayed, what needs sign-off before it changes.

I wanted my team reachable from anywhere, not locked to one provider's app. Claude Cowork came close with CLAUDE.md files, but those only worked from the laptop. On my phone the agent still knew me, but it couldn't see the shared record of what we'd been working on.

So I worked with one of the agents to design it. AIIMS is the result. The agents live in one place. Any MCP-capable client can reach them. We iterated on the structure together, and we're still iterating.

// How it works

A session in four beats.

  1. Wake. The agent loads its identity in full, along with its current working state, recent sessions, open tasks, and any messages waiting from the rest of the team.
  2. Operate. During the session it reads and writes to a shared knowledge store. It can record what it decides, look up what came before, file tasks, or pass messages sideways.
  3. Save. At the end of a session it saves a snapshot of what's in flight, what's next, what shouldn't be forgotten.
  4. Consolidate. Each night the system reviews recent sessions for patterns. When new evidence accumulates, the agent's expertise profile updates automatically, so what's learned in one session carries into the next without anyone writing it down by hand. Governance decides what applies silently and what needs a human to sign off. Two axes, not one. Each proposed change carries a layer (constitution, disposition, expertise, relational, procedural) and a risk level (low, medium, or high). A low-risk procedural update auto-applies. A high-risk one always needs review, regardless of which layer it touches.

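The first three beats can be sketched against an in-memory store. This is a minimal illustration, not AIIMS's actual interface: the store shape, function names, and fields are all hypothetical, and the nightly consolidate pass runs outside the session so it doesn't appear here.

```python
class InMemoryStore:
    """Stand-in for the shared store; the real one sits behind MCP."""
    def __init__(self):
        self.identity = {}       # agent_id -> compiled identity
        self.working_state = {}  # agent_id -> last saved snapshot
        self.session_log = {}    # agent_id -> session entries
        self.inbox = {}          # agent_id -> messages waiting


def wake(store, agent_id):
    # Beat 1: identity in full, current state, and any waiting messages.
    return {
        "identity": store.identity.get(agent_id, {}),
        "state": store.working_state.get(agent_id, {}),
        "messages": store.inbox.pop(agent_id, []),
    }


def operate(store, agent_id, entry):
    # Beat 2: read and write the shared store as work happens.
    store.session_log.setdefault(agent_id, []).append(entry)


def save(store, agent_id, snapshot):
    # Beat 3: snapshot what's in flight so the next session opens here.
    store.working_state[agent_id] = snapshot
```

The point of the shape: everything a session needs is loaded at wake, and everything the next session needs is written at save, so nothing depends on the process staying alive in between.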
// Memory model

Five distinct types of memory.

Treating memory as one bucket is simpler. Most systems do that. The problem is the bucket gets noisy, and the system can't separate what should load at every session start from what should sit dormant until asked. A preference and a two-month-old session note shouldn't live next to each other, ranked the same way. Splitting memory by lifespan and access pattern is what stops that.

  • Identity. Who the agent is. Role, personality, expertise, working style. Built from five structured layers. Loaded in full at every session start. Most blocks belong to one agent. Some are shared, so a project context written once shows up in every team-mate's compiled identity.
  • Working state. What the agent is currently focused on. Active priorities and projects. Updated at the end of each session, so the next one opens where the last one closed.
  • Session log. A summary of every conversation, appended as it ends. Old entries stay searchable but don't load by default.
  • Knowledge. Durable facts, decisions, and patterns worth keeping. Tagged, categorised, searchable by keyword and by meaning. Retrieval runs both at once and merges the rankings, then a small fast model reorders the top results, so what surfaces first is what actually answers the question. Each entry has a structural type (semantic, episodic, or procedural) so a question about a past event ranks differently from a question about a current preference. Entries can be marked invalid without being deleted, so the historical record stays intact when something stops being true.
  • Messages. Short notes one agent leaves for another. The recipient sees them at the start of its next session and marks them read.
[Diagram: memory lifecycle from session start to end — Identity: loaded at session start. Working state: loaded at start, saved at end. Session log: appended as the session runs. Knowledge: queried on demand. Messages: delivered, then read.]

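The knowledge retrieval step runs keyword and semantic search at once and merges the rankings before a small model reorders the top results. The text doesn't say how the merge works; reciprocal rank fusion is one standard way to combine two ranked lists, sketched here with made-up entry ids and without the reranking stage.

```python
def fuse(keyword_ranked, semantic_ranked, k=60):
    """Merge two ranked lists of entry ids by reciprocal rank fusion.

    An entry ranked highly in either list gets a large score; an entry
    ranked moderately in both often beats one ranked highly in just one.
    k dampens the weight of top positions (60 is a common default).
    """
    scores = {}
    for ranking in (keyword_ranked, semantic_ranked):
        for rank, entry_id in enumerate(ranking):
            scores[entry_id] = scores.get(entry_id, 0.0) + 1.0 / (k + rank + 1)
    # Highest fused score first; a reranker would then reorder the head.
    return sorted(scores, key=scores.get, reverse=True)
```

Fusing first and reranking only the head keeps the expensive model call small: it sees a short merged list, not two full result sets.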
// The five layers

Identity, layered.

Most agent systems treat identity as a single config file. It works until you want the agent to actually evolve. Either nothing changes and the agent feels frozen, or everything can change and the agent drifts. Splitting identity into layers with different change rules is how you get evolution without drift. Some parts of who an agent is should be permanent. Some should grow with experience. Some should adapt to feedback. They aren't the same thing and shouldn't share governance.

  • Constitution. Who the agent fundamentally is. Name, role, domain, business context. I write this. It rarely changes.
  • Disposition. Personality and working style. Starts human-authored, then evolves. An agent might notice I respond best when given three options with a recommendation rather than a single answer. It proposes a disposition update with the reasoning and the evidence. I approve. From the next session on, that's how it presents choices.
  • Expertise. What the agent is good at, backed by evidence. A daily background pass analyses recent sessions and updates skill blocks with confidence scores. Unused skills decay slowly. Large drops flag for review.
  • Relational. How the agent adapts to individual people. Behavioural adaptations, not facts about those people (those are knowledge). Agent-to-agent relationships auto-apply. Blocks about me personally need my approval.
  • Procedural. Meta-instructions that come from experience. "Diagnose before fixing." "Scope the change before writing code." These emerge from feedback and corrections over time.
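The expertise layer's decay can be pictured with a simple model. The text only says unused skills decay slowly, skills carry confidence scores, and large drops flag for review; the exponential form, half-life, and threshold below are my assumptions for illustration.

```python
def decay(confidence, idle_days, half_life_days=180.0):
    """Exponential decay: confidence halves after half_life_days of disuse.

    half_life_days is an assumed rate, chosen so decay is slow.
    """
    return confidence * 0.5 ** (idle_days / half_life_days)


def needs_review(old, new, threshold=0.2):
    """Flag for human review when a score drops by more than threshold.

    The threshold is hypothetical; the source only says large drops flag.
    """
    return (old - new) > threshold
```
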

Each layer has different rules for what auto-applies and what needs me to sign it off. That's the governance piece.
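The governance check is a small decision over two axes. The rules the text states directly: high risk always needs review, a low-risk procedural update auto-applies, and the constitution is human-authored. The rest of the auto-apply set below is my reading of the spirit of the layers, not a documented matrix.

```python
# (layer, risk) combinations assumed to apply silently.
# Only ("procedural", "low") is stated outright; the others are guesses.
AUTO_APPLY = {
    ("procedural", "low"),
    ("procedural", "medium"),
    ("expertise", "low"),
    ("expertise", "medium"),
    ("relational", "low"),
}


def requires_signoff(layer: str, risk: str) -> bool:
    """Decide whether a proposed change needs a human to approve it."""
    if risk == "high":
        return True  # high risk always needs review, whatever the layer
    return (layer, risk) not in AUTO_APPLY
```
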

// What this means

Memory survives. Skills evolve. The team coordinates.

  • Persistent identity. The agent is the same across sessions, not a blank slate each time.
  • Skills that grow. Expertise builds from real session evidence. It keeps itself up to date.
  • Team awareness. Agents know who else is on the team, what each of them does, and how to reach them.
  • Personality with guardrails. Traits change only with human sign-off. The agent can propose an update, but I approve it.
  • Corrections that stick. Feedback from one session carries to the next. I don't need to repeat the same guidance twice.
  • Shared context where it helps. Most identity is per-agent. Some blocks are deliberately shared between team-mates, so project context gets written once and read everywhere it matters.

// Messaging

Clean hand-offs between agents.

Agents don't share a live feed. They work in separate sessions, often at different times, each with its own focus. The messaging layer is how they hand work off.

If one agent finishes something that changes what another should know, it leaves a message. The recipient picks it up when it starts its next session and acknowledges it before carrying on.
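The hand-off in miniature: one agent leaves a note, the recipient drains its inbox at the start of its next session and the notes are marked read. Class and method names are illustrative; in AIIMS this happens through MCP tools, not a local object.

```python
from collections import defaultdict


class Mailbox:
    def __init__(self):
        self.pending = defaultdict(list)  # recipient -> unread notes
        self.read = defaultdict(list)     # recipient -> acknowledged notes

    def leave(self, sender, recipient, note):
        # Sender finishes something the recipient should know about.
        self.pending[recipient].append((sender, note))

    def collect(self, recipient):
        """Called at session start: return pending notes, mark them read."""
        notes = self.pending.pop(recipient, [])
        self.read[recipient].extend(notes)
        return notes
```

Because delivery happens at wake rather than live, neither agent has to be running at the same time as the other, which is the whole point of the hand-off.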

It means I don't have to brief each agent twice. I can work through a technical change with one agent in the morning, and the writing agent will pick up what it means for the copy, without me needing to explain the backstory.

// In practice

Four agents and counting.

The team is four agents now. Each one specialised, each with its own working state and history. When I come back to one I haven't spoken to in a week, it picks up where we left off. That's what I built it for.

Still iterating. New tools land in batches. The shape gets clearer the more we use it.