387K Messages.
25 Tools. 1 Brain.
A cognitive prosthetic that turns monotropic attention death into queryable institutional memory — because context that survives is context that compounds.
The Problem: Context Dies When the Tunnel Moves
ADHD = monotropic attention. One tunnel at a time. Total immersion in whatever the current focus is — and total amnesia for everything outside it.
Three months deep in Torah study? The AI architecture decisions from last week don't exist anymore. Switch to frontend development? The 47 open questions from Torah study vanish. Not “deprioritized.” Gone.
The Cost
Every context switch was a factory reset. Decisions got remade from scratch. Questions got re-asked. Breakthroughs were re-discovered months later, with no memory of the first time.
This isn't poor organization. It's neurology. The monotropic attention system doesn't do “background threads.” When the tunnel moves, everything outside it goes dark.
The Thesis
The Bottleneck IS the Amplifier. Don't try to fix monotropic attention — it's what enables deep work. Instead, build external infrastructure that preserves context across attention shifts. Let the tunnel do what it does best. Give it a safety net.
Before → After
What changes when context becomes persistent
The Architecture
Conversations → Parquet → Vectors → MCP Tools
Claude Desktop     ChatGPT     Claude Code     Clawdbot
      │               │             │              │
      └───────────────┴─────────────┴──────────────┘
                            │
                     sync pipelines
                   (hourly + nightly)
                            │
                            ▼
               all_conversations.parquet
                      387K messages
                            │
              ┌─────────────┼─────────────┐
              ▼             ▼             ▼
           LanceDB        DuckDB     v6 Summaries
           85K vecs      Queries      9,979 docs
              │             │             │
              └─────────────┼─────────────┘
                            │
                       MCP Server
                        25 tools
                            │
              ┌─────────────┼─────────────┐
              ▼             ▼             ▼
        8 Prosthetic    17 Generic      Any MCP
            Tools          Tools         Client

8 Cognitive Prosthetic Tools
The soul of the system. Not search — survival infrastructure.
Reconstruct save-state for any cognitive domain. Like loading a saved game.
Full re-entry brief when returning to a dormant domain. Thinking stage, open questions, last decisions.
Quantified cost of moving attention between domains. How many open threads you'd abandon.
Global view of unfinished business across all 25 cognitive domains.
Alarm system for abandoned tunnels. Domains with unresolved high-importance questions.
System-wide proof the safety net works. Coverage stats, sync health, data freshness.
When do I think best? Data-driven analysis of engagement patterns by domain and time.
Engagement meta-view over time for any domain. When attention visited and left.
How It Works in Practice
Three real scenarios. Three tools. Context preserved.
Scenario 1: “Where was I on Torah study?”
Returns: thinking stage, last 10 interactions, open questions, recent decisions. Full save-state loaded in 12ms. No “let me try to remember” — the system remembers.
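A minimal sketch of what such a save-state reconstruction involves. The field names and message shape here are assumptions for illustration, not the actual tool's schema:

```python
def load_save_state(messages, domain, last_n=10):
    """Rebuild a re-entry brief for one cognitive domain."""
    in_domain = sorted((m for m in messages if m["domain"] == domain),
                       key=lambda m: m["ts"])
    return {
        "thinking_stage": in_domain[-1].get("stage") if in_domain else None,
        "recent_interactions": in_domain[-last_n:],
        "open_questions": [m["text"] for m in in_domain
                           if m.get("kind") == "question" and not m.get("resolved")],
        "recent_decisions": [m["text"] for m in in_domain
                             if m.get("kind") == "decision"],
    }

# Toy data standing in for the parquet-backed message store.
messages = [
    {"domain": "torah-study", "ts": 1, "kind": "decision",
     "text": "Study Rambam before Ramban", "stage": "synthesis"},
    {"domain": "torah-study", "ts": 2, "kind": "question",
     "text": "How does this map to SHELET?", "stage": "synthesis"},
    {"domain": "frontend", "ts": 3, "kind": "note",
     "text": "Switched to Vite", "stage": "build"},
]
state = load_save_state(messages, "torah-study")
print(state["thinking_stage"], len(state["open_questions"]))  # synthesis 1
```

The point is that nothing here depends on the brain remembering: the brief is assembled entirely from stored messages.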
Scenario 2: “Should I switch to frontend work?”
Returns: switch cost score (0–1.0), open questions you'd abandon, shared concepts between domains, and a quantified penalty for context loss. Makes the invisible cost visible.
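The real scoring formula isn't shown here, so the following is only a hypothetical sketch of the shape of such a heuristic: open questions you'd abandon push the cost up, while concepts shared with the target domain soften the landing.

```python
def switch_cost(open_questions: int, shared_concepts: int,
                max_questions: int = 20, max_shared: int = 10) -> float:
    """Return a 0.0-1.0 penalty for leaving the current domain now.

    Illustrative only: caps and weights are invented for this sketch.
    """
    abandonment = min(open_questions / max_questions, 1.0)  # threads left behind
    overlap = min(shared_concepts / max_shared, 1.0)        # shared context helps
    return round(abandonment * (1.0 - 0.5 * overlap), 2)

print(switch_cost(open_questions=12, shared_concepts=4))  # 0.48
```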
Scenario 3: “What am I neglecting?”
Returns: domains with high-importance unresolved questions that haven't been visited recently. An alarm system for the things monotropic attention forgot existed.
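A sketch of a neglect scan like this one. The thresholds and field names are assumptions, not the real tool's schema: flag domains that hold high-importance unresolved questions and haven't been visited within a cutoff window.

```python
from datetime import datetime, timedelta

def neglected(domains, now, stale_after=timedelta(days=30)):
    """Return names of stale domains that still have high-importance open questions."""
    return [d["name"] for d in domains
            if d["high_importance_open"] > 0
            and now - d["last_visit"] > stale_after]

now = datetime(2025, 6, 1)
domains = [
    {"name": "torah-study", "high_importance_open": 3,
     "last_visit": datetime(2025, 3, 1)},   # stale AND has open questions -> flagged
    {"name": "frontend", "high_importance_open": 0,
     "last_visit": datetime(2025, 2, 1)},   # stale but nothing open -> ignored
]
print(neglected(domains, now))  # ['torah-study']
```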
The Numbers
Every metric measured. Every claim verifiable.
Technical Stack
Purpose-built. Every layer deliberate.
Semantic Intelligence
Not keyword search — conceptual similarity across 387K messages
-- "What do I think about agency?"
-- Doesn't search for the word "agency"
-- Finds conversations about autonomy, sovereignty, control, self-determination

semantic_search(query="agency and self-determination")
  → 82,000 vectors compared in 12ms
  → Results from Torah study, SHELET framework, AI architecture discussions
  → Cross-domain connections surfaced automatically

-- "Have I decided this before?"

search_summaries(query="build vs become visible", extract="decisions")
  → 36,743 decisions searched
  → Returns: dates, contexts, what I decided, and why

-- "How has my thinking evolved?"

thinking_trajectory(topic="human-ai collaboration")
  → Timeline of belief changes
  → First mention → current position
  → Velocity of conceptual shift
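A toy illustration of the ranking step behind conceptual search: messages are compared by the cosine similarity of their embedding vectors, not by keyword overlap. Real embeddings come from a model and live in LanceDB; the 3-dimensional vectors and entry names below are stand-ins.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Stand-in vector store: real vectors are high-dimensional model embeddings.
store = {
    "autonomy thread":   [0.9, 0.1, 0.2],
    "sovereignty notes": [0.8, 0.2, 0.1],
    "recipe ideas":      [0.0, 0.9, 0.1],
}
query = [0.85, 0.15, 0.15]  # stand-in embedding for "agency and self-determination"

ranked = sorted(store, key=lambda name: cosine(query, store[name]), reverse=True)
print(ranked[0])  # autonomy thread
```

Note that "autonomy thread" wins without containing the query words at all; that is the whole point of vector search over keyword search.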
Why This Matters Beyond One Person
This isn't just personal infrastructure. It's a model.
Cognitive Accessibility
Most “productivity tools” assume neurotypical working memory. They assume you can hold 7±2 items, maintain background threads, and context-switch without loss. For monotropic thinkers, that assumption fails catastrophically. Brain MCP is what happens when you design for how the brain actually works instead of how it “should” work.
The MCP Pattern
Model Context Protocol means any AI client can access these tools. Claude Desktop, Clawdbot, Claude Code — they all speak MCP. The prosthetic isn't locked to one interface. It's a protocol-level capability that any AI assistant can leverage. Build the tools once, use them everywhere.
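Registering the server with a client is a one-time config entry. A sketch of what that looks like in Claude Desktop's claude_desktop_config.json, where the server name and module path here are illustrative, not the project's actual values:

```json
{
  "mcpServers": {
    "brain": {
      "command": "python",
      "args": ["-m", "brain_mcp.server"]
    }
  }
}
```

Any other MCP-speaking client points at the same server the same way, which is what "build the tools once, use them everywhere" means in practice.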
Institutional Memory for Individuals
Companies have knowledge management systems. Individuals don't. Brain MCP demonstrates that one person can build institutional-grade memory infrastructure — semantic search, structured summaries, decision tracking, pattern analysis — using open-source tools and local compute. No cloud dependency. No subscription. Your thoughts, your infrastructure, your sovereignty.
The Bottleneck IS the Amplifier
Monotropic attention isn't a deficiency to compensate for — it's the engine that enables 18 months of solo deep work building this system in the first place. The constraint that makes context die is the same constraint that enables total immersion. Brain MCP doesn't fix the bottleneck. It builds a safety net around it, so the deep work can continue without permanent context loss.
Structured Intelligence
Every conversation distilled into queryable components
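One way to picture a distilled summary record. The field names below are guesses at the shape for illustration, not the actual v6 schema:

```python
# Hypothetical shape of one distilled summary document (not the real v6 schema).
summary = {
    "domain": "torah-study",
    "thinking_stage": "synthesis",
    "open_questions": [
        {"text": "How does this map to SHELET?", "importance": "high"},
    ],
    "decisions": [
        {"text": "Study Rambam before Ramban", "why": "chronological grounding"},
    ],
    "key_concepts": ["agency", "covenant", "sovereignty"],
}

# "Queryable components" means filters like this run across thousands of docs.
high_importance = [q["text"] for q in summary["open_questions"]
                   if q["importance"] == "high"]
print(len(high_importance))  # 1
```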
“The industry builds AI to replace human memory. I built AI to extend it — because the human is the point.”
387K messages. 85K embeddings. 25 tools. 12ms to recover any context.
Built in Beit Shemesh, Israel. Solo. 18 months.