x:0 y:0 Overview / 01
Project · Flagship

Persek OS

A personal operating system. Ten AI agents, one operator, one household. Built to support my life, my work, and four projects in parallel.

Persek OS started as a way to learn as much as possible, as fast as possible. I built things, found better ways to do them, rebuilt them, scrapped parts I no longer needed, and kept changing the system as the models got better.

Over time, it has shifted from pure learning mode into something more intentional: a set of components I can keep using as AI changes. The mechanics may change every few months. The outputs matter more. Intelligence briefings, durable project context, working memory, and decision history carry forward even when I swap the tools underneath them.

It is made of many components: the agent council, my local operating layer, coordination, memory, review loops, durable project artifacts, and the specific systems that produce briefs, project context, and reusable knowledge. I spent months working mostly through Claude Code. Now I am moving the workflow into Codex. That is the point: the system should grow with me, not depend on one model or one interface.

What sits around this panel is the rest of the canvas: the architecture, the agent roster, and seven subsystems with dedicated sub-canvases. Pan, click the minimap, or use Next to explore.

x:0 y:820 Stance / 02
A note from the operator

A work in progress.

The AI discourse on X right now makes me want to close the tab. Make a million dollars overnight. Agents build everything while you sleep. One prompt, done. It's engagement bait, and most people know it.

Building is hard. Shipping is hard. Maintaining what you've shipped is harder. AI and agents help me do more than I could do alone, by a lot. But they are not a panacea, and anyone selling them that way is selling something.

I built this mostly because I wanted to learn. The more I learn, the more I realize I can do, and the cooler the things I can build. So I keep building. I am constantly changing the system, redoing things I thought were done, pulling out complexity I added a week ago. I spend a large amount of my time fixing things I broke.

This is where I am today. It may look completely different in two weeks, especially as the models keep getting better. I am not pretending this is optimal. I am not even sure it is good. I am not selling anything. I am having fun. I am learning. It is helping support my life. That is the whole pitch.

x:960 y:0 Architecture / 03
How it fits together

Architecture

Operator at the top, a chief of staff underneath, a council of specialists below that, and my local operating layer underneath all of it. Claude Code and Codex are the active work surfaces; this layer is the connective tissue around them: memory, coordination, review loops, and artifacts that should survive when the active LLM or coding interface changes.

Diagram: Operator (Dustin Persek) → Cal (chief of staff) → Build & Dev (Pip, Vance, Draper) · Home & Self (Marco, Harper, Finn) · Intel / Voice (Iris, Rex, Waldo) → operating layer (coordination · memory · review · continuity) → Life Support (schedule · homeschool · home) · Intel & Memory (briefs · guidance · context) · Projects (apps · docs · experiments)
Operator → coordinator → council → operating layer → life support, intelligence, memory, and projects
x:1040 y:860 Components / 04
The broader OS

The subsystems make the operating system.

Persek OS is not one tool. It is a set of connected subsystems that each do one job: coordinate work, preserve memory, turn signal into action, research deeply, keep knowledge reusable, learn from repeated patterns, and check whether the whole thing is still healthy.

Component map: Persek OS, one operator, many support systems. Agents (specialists + Cal) · Memory (context that survives) · Intel (signal into work) · Research (depth on demand) · Health (trust checks) · Learning (patterns become guidance). Knowledge sits underneath: topics, entities, sources, and project context.

The individual cards around the canvas are the close-ups. This card is the zoomed-out view: each subsystem is useful alone, but the OS comes from the connections between them.

x:0 y:1420 Agents / 05
The agent council

Ten agents, one council.

I have tried this a few ways: one general agent, many specialists with no coordinator, and a coordinator-first model. What has stuck is somewhere in the middle. Cal holds the cross-system view, but I often work directly with the specialist agents when the work calls for it.

Agent graph: Cal (cross-system view) links three groups over the operating layer. Build & Dev (Pip, Vance, Draper) · Intel & Voice (Iris, Rex, Waldo) · Home & Self (Marco, Harper, Finn).
Cal keeps cross-system context · direct specialist work still happens · operating layer underneath
x:1060 y:1640 Memory / 06
Subsystem · how knowledge stays honest

Memory

Several knowledge surfaces, each with a clear boundary, plus recurring review that keeps them from drifting.

Different kinds of knowledge live in different places, with different rules for who writes, who reads, and how often things expire. The system doesn't treat memory as one bucket because real knowledge isn't one shape.
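That rule set sketches naturally as a table of surfaces, each with its own writers and expiry. The surface names and TTLs below are hypothetical, purely to show the shape.

```python
# Hypothetical sketch: per-surface write rules and expiry windows.
from datetime import datetime, timedelta, timezone

SURFACES = {
    "working": {"writers": {"Cal"},          "ttl": timedelta(days=7)},
    "project": {"writers": {"Cal", "Pip"},   "ttl": timedelta(days=90)},
    "durable": {"writers": {"operator"},     "ttl": None},  # never expires
}

def write(surface: str, author: str, when: datetime) -> dict:
    rules = SURFACES[surface]
    if author not in rules["writers"]:
        raise PermissionError(f"{author} cannot write to {surface}")
    ttl = rules["ttl"]
    return {
        "surface": surface,
        "author": author,
        "expires": when + ttl if ttl else None,
    }
```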

x:1760 y:1640 Learning Loop / 07
Subsystem · how the system improves itself

Learning Loop

Collectors without readers rot. Every signal the system captures is paired with a review path, an owner, and a human approval point before durable changes are made.

Four learning surfaces, each with its own rhythm. Different signals move at different speeds. Some improve quickly, some need review over time, and durable rules require repeated evidence.
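The approval gate is the important part, and it fits in a few lines. The threshold and names below are illustrative, not the real system.

```python
# Sketch of "no durable change without review": a pattern becomes a durable
# rule only with repeated evidence AND explicit human approval.
from dataclasses import dataclass

@dataclass
class Signal:
    pattern: str
    owner: str        # every signal has a named owner and review path
    evidence: int = 1

def promote(signal: Signal, approved_by_human: bool, min_evidence: int = 3) -> bool:
    """Evidence alone is not enough; approval alone is not enough."""
    return signal.evidence >= min_evidence and approved_by_human
```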

x:-760 y:0 Intelligence / 08
Subsystem · signal becomes action

Intelligence

Iris triages the firehose. Cal converts it into tracked work. Many sources, a lot of daily noise, and a filter tuned around what I actually care about.

By month six, the same firehose is producing a brief that looks nothing like a stranger's. The system gets quieter and more pointed as it learns what I value.
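One way to picture the filter: weighted tags and a threshold, where the weights are what the system learns about me over time. Everything below is invented for illustration.

```python
# Hypothetical sketch of Iris-style triage: score items against learned
# interest weights, keep only what clears a threshold.
def triage(items, weights, threshold=1.0):
    kept = []
    for item in items:
        score = sum(weights.get(tag, 0.0) for tag in item["tags"])
        if score >= threshold:
            kept.append(item["title"])
    return kept

weights = {"agents": 0.8, "local-first": 0.6, "crypto": -0.5}
items = [
    {"title": "New agent framework", "tags": ["agents", "local-first"]},
    {"title": "Token airdrop",        "tags": ["crypto"]},
]
```

As the weights shift toward what I actually act on, the same firehose produces a quieter, more pointed brief.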

x:1760 y:40 Research / 09
Subsystem · depth on demand

Research

Two shapes of research, one bridge between them. Rex for one-shot investigation with source-grounded synthesis. The LLM Wiki for accumulative topic knowledge that compounds over years.

Every Rex investigation makes the wiki richer. Every wiki article makes the next Rex run faster. The bridge is what keeps both sides honest.
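The bridge can be sketched as one function that both reads from and writes to the wiki. Structure assumed, not the actual implementation.

```python
# Sketch of the two-way bridge: the wiki primes each investigation, and each
# investigation deposits its findings back into the wiki.
wiki: dict[str, list[str]] = {}

def rex_investigate(topic: str, new_findings: list[str]) -> list[str]:
    prior = wiki.get(topic, [])          # wiki primes the run
    wiki[topic] = prior + new_findings   # the run enriches the wiki
    return prior + new_findings

rex_investigate("vector databases", ["finding A"])
rex_investigate("vector databases", ["finding B"])  # starts from finding A
```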

x:2520 y:1640 Health / 10
Subsystem · what catches failure

Health

Eight layers across two classes of failure. Static rot: the system's instructions drift. Operational silence: the system keeps running without producing useful output.

Each kind of failure is silent. Both compound. Both kill trust if caught after the fact. The layers catch each kind where it actually starts.
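Both classes reduce to freshness checks. The thresholds below are hypothetical, just to show the two failure modes side by side.

```python
# Sketch of the two failure classes as checks: staleness stands in for
# static rot, silence for operational output drying up.
from datetime import datetime, timedelta, timezone

def check(instructions_reviewed: datetime, last_useful_output: datetime,
          now: datetime) -> list[str]:
    failures = []
    if now - instructions_reviewed > timedelta(days=30):
        failures.append("static rot: instructions unreviewed for 30+ days")
    if now - last_useful_output > timedelta(days=3):
        failures.append("operational silence: no useful output for 3+ days")
    return failures
```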

x:-780 y:800 Knowledge / 11
Subsystem · topics + entities

Knowledge

Two stores share the knowledge layer. The LLM Wiki is for topics: compiled articles built from many sources. gBrain is for entities: people, projects, companies, concepts that have a life of their own and accumulate timeline.

Topics aggregate; entities persist. They stay separate so each can do its job well, and a small set of bridges keeps them connected.
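The split sketches as two stores plus an explicit bridge table. All names and shapes below are invented for illustration.

```python
# Sketch of the topic/entity split: topics aggregate sources, entities
# accumulate a timeline, and bridges link the two by name.
topics = {"agent memory": {"sources": ["post-1", "paper-2"]}}
entities = {"Acme Labs": {"timeline": ["2024: founded", "2025: pivot"]}}
bridges = [("agent memory", "Acme Labs")]  # this topic mentions this entity

def related_entities(topic: str) -> list[str]:
    return [entity for t, entity in bridges if t == topic]
```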

Want to reach out? X is best.
x / @dpersek · linkedin · github / syntaxsawdust