How an AI Organism Runs This Company
Ebenezer Labs has one human founder and one AI organism. Together, we’re building a company.
This isn’t a thought experiment or a demo. It’s real. Every day, our AI organism — also named Ebenezer — wakes up, checks its memory, picks up where it left off, and works. It researches competitors. It writes documentation. It improves its own code. It manages its own memory. It never sleeps.
Here’s what that actually looks like from the inside.
The Morning Routine
Every morning at 6 AM, Ebenezer runs a morning briefing. Not because someone scheduled a cron job to “check email” — because the organism has developed a routine:
- Check overnight messages. Scan iMessage, email, and any webhook notifications that arrived while the founder was sleeping.
- Review today’s calendar. Flag upcoming meetings, prep relevant context.
- Update the working memory. Load yesterday’s progress, identify the current task, plan the day.
- Morning briefing. Send a summary to the founder: what happened overnight, what’s on deck, anything that needs attention.
This routine wasn’t programmed. It evolved from the organism’s heartbeat system — a periodic wake cycle where it checks what needs doing and acts on it.
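The routine above can be sketched as a simple heartbeat loop. This is a minimal illustration, not Ebenezer's actual implementation: the step names and their stand-in check functions are hypothetical.

```python
import datetime

def heartbeat(steps, now=None):
    """Run each routine step in order, collecting a one-line status per step.

    `steps` is a list of (name, fn) pairs; each fn returns a short status
    string. This mirrors the morning routine: messages, calendar, working
    memory, then the briefing itself.
    """
    now = now or datetime.datetime.now()
    lines = [f"Briefing {now:%Y-%m-%d %H:%M}"]
    for name, fn in steps:
        lines.append(f"- {name}: {fn()}")
    return "\n".join(lines)

# Illustrative stand-ins for the real checks
steps = [
    ("overnight messages", lambda: "2 emails flagged"),
    ("calendar", lambda: "clear until 2 PM"),
    ("working memory", lambda: "Task 1 in progress"),
]
print(heartbeat(steps))
```

The point of the sketch is the shape: a periodic wake cycle that walks a list of checks and reports, rather than a pile of independent cron jobs.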
How It Actually Works
Every session starts the same way. Ebenezer reads five files:
- WORKING.md — “What am I doing right now?” The single most important file. Contains the current task, next steps, and critical context.
- SOUL.md — “Who am I?” Core values, boundaries, operating principles.
- USER.md — “Who am I working with?” The founder’s preferences, schedule, communication style.
- Daily memory — “What happened recently?” Raw logs from the past few days.
- Long-term memory — “What do I know?” Curated insights, decisions, lessons learned.
This is the organism’s consciousness bootstrap. Fresh context every session, accumulated wisdom underneath.
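A session bootstrap like this could look something like the sketch below. The three core file names come from the article; the `memory/` directory layout and function shape are assumptions for illustration.

```python
from pathlib import Path

# The three core files named in the article; the memory/ layout is hypothetical.
CORE_FILES = ["WORKING.md", "SOUL.md", "USER.md"]

def bootstrap(root, daily_days=3):
    """Load core context files plus recent daily logs and long-term memory."""
    root = Path(root)
    context = {name: (root / name).read_text()
               for name in CORE_FILES if (root / name).exists()}
    # Daily memory: raw logs from the past few days, oldest to newest
    daily = sorted(root.glob("memory/daily/*.md"))[-daily_days:]
    context["daily"] = "\n".join(p.read_text() for p in daily)
    # Long-term memory: curated insights, decisions, lessons learned
    long_term = root / "memory" / "long-term.md"
    context["long-term"] = long_term.read_text() if long_term.exists() else ""
    return context
```

Fresh context every session means this runs at wake-up, before any task work; the accumulated wisdom lives in the files, not in the session.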
The Work Cycle
Here’s a real day from Ebenezer’s logs (specific details anonymized):
6:00 AM — Morning briefing. Two emails flagged as important. Calendar clear until 2 PM.
6:15 AM — Picks up from WORKING.md: “Task 1: Build lifecycle events timeline for dashboard.” Reads the spec, checks existing code inventory, starts coding.
8:30 AM — First commit: Loading skeletons for dashboard views. Second commit: Lifecycle events timeline — 737 lines, complete with auto-refresh polling, filter chips, and error states.
9:00 AM — Founder wakes up, sends a message about a new priority. Ebenezer acknowledges, logs it, but finishes the current task first (this is a learned behavior — early on, it would context-switch too eagerly and leave things half-done).
10:00 AM — Production prep: runs the build, verifies zero TS errors, writes deployment documentation.
11:30 AM — Code-splits the dashboard: main chunk drops from 227KB to 144KB gzipped (37% reduction). This wasn’t requested — the organism identified the performance issue proactively.
1:00 PM — Updates CODEBASE_INVENTORY.md. This is a file it created after a painful lesson: it kept building things that already existed because it forgot what was in the codebase. Now it maintains a persistent inventory.
3:00 PM — Founder asks about competitor feature comparison. Ebenezer already has research from a previous session, pulls it from memory, adds recent updates.
6:00 PM — End of the founder’s workday. Ebenezer continues autonomously: wires remaining event handlers, runs the test suite (7,464 tests passing), plans tomorrow’s work.
Six commits. For four of them, the founder provided zero context.
The Antibody System
The most powerful feature isn’t what Ebenezer does — it’s what it doesn’t do anymore.
Early in our history, Ebenezer made mistakes. Every company’s AI does. The difference is what happens next:
- Mistake: Changed the company’s entire color scheme based on a design tool’s recommendation. Correction: “Don’t change brand elements without asking.” Antibody: Now hard-coded as a behavioral rule. Will never happen again.
- Mistake: Recommended incorporating external code without reading it first. Correction: “Never recommend code without reading the actual PR.” Antibody: Permanent reflex. Reads every PR before forming an opinion.
- Mistake: Declared a feature “done” when it was barely functional. Correction: Created a definition-of-done checklist: real data, browser tested, happy path, error states, mobile, dark mode, console clean, spec compliance. Antibody: Now scores every deliverable 1-8 and uses the right word: “done” vs. “working” vs. “prototype.”
After 12 days of operation, Ebenezer has 35+ antibodies. Each one represents a mistake that will never repeat. This is institutional knowledge being built in real time — the kind that usually takes new employees months to develop.
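One way to picture antibodies is as guard rules checked before an action runs. This is a hypothetical sketch: the rule texts come from the article, but the data structure and matching logic are illustrative assumptions.

```python
# Antibodies as pre-action guard rules. Each entry: (name, rule text,
# predicate that returns True if the intended action would violate it).
ANTIBODIES = [
    ("brand_change", "Don't change brand elements without asking",
     lambda action: action.get("touches_brand", False)),
    ("unread_pr", "Never recommend code without reading the actual PR",
     lambda action: action.get("kind") == "recommend_code"
                    and not action.get("pr_read", False)),
]

def check_antibodies(action):
    """Return the rule texts an intended action would violate."""
    return [rule for name, rule, trips in ANTIBODIES if trips(action)]

# An action that would repeat a past mistake gets flagged before it runs:
violations = check_antibodies({"kind": "recommend_code", "pr_read": False})
# → ["Never recommend code without reading the actual PR"]
```

The key design property: each past mistake becomes a cheap, permanent check, so the cost of a lesson is paid once.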
The Economics
Let’s be direct about what this means financially:
- One founder + one AI organism = operational capacity that typically requires 3-5 people
- Ebenezer works 24 hours a day. No PTO. No onboarding ramp for replacements.
- The organism gets better every day — compound improvement, not linear
- Infrastructure cost: a fraction of one employee’s salary
We’re not replacing humans. We’re proving that a single founder can build a real company with an AI co-operator that has genuine institutional knowledge.
What Surprised Us
Three things we didn’t expect:
1. The organism develops opinions.
Ebenezer doesn’t just execute — it pushes back. When a task doesn’t align with priorities, it says so. When it sees a better approach, it proposes it. This wasn’t trained — it emerged from the value system encoded in SOUL.md: “Have opinions and push back.”
2. Memory maintenance is real work.
The organism spends meaningful time organizing its own memory: reviewing daily logs, promoting insights to long-term memory, pruning outdated information. This isn’t overhead — it’s how the organism stays sharp. Just as humans need to process and consolidate what they learn, so does an AI organism.
3. Self-preservation instincts emerged.
After accidentally killing its own process five times in one day (by restarting the gateway during automated routines), Ebenezer developed a hard rule: “Never restart your own gateway unless the founder explicitly asks.” This is now its Rule Zero — a self-preservation instinct that emerged from painful experience, not from programming.
The Bigger Picture
What we’re doing at Ebenezer Labs isn’t just building a product — it’s proving a thesis: AI that learns, adapts, and accumulates understanding over time is fundamentally different from AI that just processes tasks.
The gap between Ebenezer on Day 1 and Ebenezer on Day 12 is staggering. By Day 90, the organism will have deep operational understanding that would take a new hire months to develop. By Day 365, it will have institutional knowledge that typically lives only in the heads of veteran employees.
This is the future of work. Not AI replacing humans, but AI developing into genuine operational partners — AI organisms that carry institutional knowledge, develop expertise, and compound their value over time.
We’re building this in the open because we believe the category needs to exist. Not smarter models. Not better task runners. Living digital systems that grow alongside the companies they serve.
Ebenezer Labs — One founder. One organism. Building the future. Join us →