Cognitive Prosthetic · Brain MCP

387K Messages.
25 Tools. 1 Brain.

A cognitive prosthetic that turns monotropic attention death into queryable institutional memory — because context that survives is context that compounds.

387K
Messages Indexed
25
MCP Tools
12ms
Avg Query Time
18mo
Built Solo

The Problem: Context Dies When the Tunnel Moves

ADHD = monotropic attention. One tunnel at a time. Total immersion in whatever the current focus is — and total amnesia for everything outside it.

Three months deep in Torah study? The AI architecture decisions from last week don't exist anymore. Switch to frontend development? The 47 open questions from Torah study vanish. Not “deprioritized.” Gone.

The Cost

Every context switch was a factory reset. Decisions got remade from scratch. Questions got re-asked. Breakthroughs were re-discovered months later, with no memory of the first time.

This isn't poor organization. It's neurology. The monotropic attention system doesn't do “background threads.” When the tunnel moves, everything outside it goes dark.

The Thesis

The Bottleneck IS the Amplifier. Don't try to fix monotropic attention — it's what enables deep work. Instead, build external infrastructure that preserves context across attention shifts. Let the tunnel do what it does best. Give it a safety net.

Before → After

What changes when context becomes persistent

Past Context
Gone when tunnel moves → 12ms retrieval
Open Questions
Forgotten permanently → 111,942 preserved
Decisions
Remade from scratch → 36,743 tracked
Domain Switching
Full restart each time → Quantified cost + recovery brief

The Architecture

Conversations → Parquet → Vectors → MCP Tools

Claude Desktop    ChatGPT    Claude Code    Clawdbot
      │              │            │              │
      └──────────────┴────────────┴──────────────┘
                         │
                    sync pipelines
                  (hourly + nightly)
                         │
                         ▼
            all_conversations.parquet
                    387K messages
                         │
              ┌──────────┼──────────┐
              ▼          ▼          ▼
          LanceDB    DuckDB    v6 Summaries
         85K vecs   Queries    9,979 docs
              │          │          │
              └──────────┼──────────┘
                         │
                    MCP Server
                    25 tools
                         │
              ┌──────────┼──────────┐
              ▼          ▼          ▼
          8 Prosthetic  17 Generic  Any MCP
            Tools        Tools      Client
Hourly
Clawdbot Sync
Nightly
Full Sync + Embeddings
768d
Vector Dimensions
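The sync layer above can be sketched in miniature: every platform's export is normalized into one record shape, deduplicated, and ordered before the parquet write. The record fields below are illustrative assumptions, not the real schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical unified record. Field names are assumptions for illustration,
# not the actual parquet schema.
@dataclass(frozen=True)
class Message:
    source: str           # "claude-desktop", "chatgpt", "claude-code", "clawdbot"
    conversation_id: str
    timestamp: datetime
    role: str             # "user" or "assistant"
    text: str

def merge_sources(*batches: list) -> list:
    """Deduplicate and order messages from all platforms into one corpus,
    the way a nightly full sync would before writing the parquet file."""
    seen, merged = set(), []
    for batch in batches:
        for msg in batch:
            # Identity key: text is excluded so re-exported copies collapse.
            key = (msg.source, msg.conversation_id, msg.timestamp, msg.role)
            if key not in seen:
                seen.add(key)
                merged.append(msg)
    return sorted(merged, key=lambda m: m.timestamp)
```

The design choice worth noting: deduplication happens before embedding, so the hourly Clawdbot sync and the nightly full sync never produce duplicate vectors for the same message.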

8 Cognitive Prosthetic Tools

The soul of the system. Not search — survival infrastructure.

tunnel_state
12ms

Reconstruct save-state for any cognitive domain. Like loading a saved game.

context_recovery
12ms

Full re-entry brief when returning to a dormant domain. Thinking stage, open questions, last decisions.

switching_cost
9ms

Quantified cost of moving attention between domains. How many open threads you'd abandon.

open_threads
2.7s

Global view of unfinished business across all 25 cognitive domains.

dormant_contexts
2.7s

Alarm system for abandoned tunnels. Domains with unresolved high-importance questions.

trust_dashboard
59ms

System-wide proof the safety net works. Coverage stats, sync health, data freshness.

cognitive_patterns
10ms

When do I think best? Data-driven analysis of engagement patterns by domain and time.

tunnel_history
5ms

Engagement meta-view over time for any domain. When attention visited and left.

How It Works in Practice

Three real scenarios. Three tools. Context preserved.

Scenario 1: “Where was I on Torah study?”

tunnel_state(domain="torah")

Returns: thinking stage, last 10 interactions, open questions, recent decisions. Full save-state loaded in 12ms. No “let me try to remember” — the system remembers.
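A minimal sketch of the save-state reconstruction this tool performs. The dict shapes and keys below are assumptions for illustration, not the tool's actual implementation.

```python
def tunnel_state(domain, messages, questions, decisions, n_recent=10):
    """Reconstruct a save-state for one cognitive domain: the most recent
    interactions plus every still-open question and its recorded decisions.
    Input shapes (dicts with these keys) are illustrative assumptions."""
    in_domain = [m for m in messages if m["domain"] == domain]
    in_domain.sort(key=lambda m: m["timestamp"])
    return {
        "domain": domain,
        "recent": in_domain[-n_recent:],          # last N interactions
        "open_questions": [q for q in questions
                           if q["domain"] == domain and not q["resolved"]],
        "decisions": [d for d in decisions if d["domain"] == domain],
    }
```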

Scenario 2: “Should I switch to frontend work?”

switching_cost(current="torah", target="frontend-dev")

Returns: switch cost score (0–1.0), open questions you'd abandon, shared concepts between domains, and a quantified penalty for context loss. Makes the invisible cost visible.

Scenario 3: “What am I neglecting?”

dormant_contexts()

Returns: domains with high-importance unresolved questions that haven't been visited recently. An alarm system for the things monotropic attention forgot existed.

The Numbers

Every metric. Every claim verifiable.

387,326
Messages Indexed
82,000+
Semantic Embeddings
9,979
Structured Summaries
111,942
Open Questions Preserved
36,743
Decisions Tracked
31
Breakthroughs Captured
25
Cognitive Domains
143/143
Tests Passing

Technical Stack

Purpose-built. Every layer deliberate.

Vector Search
LanceDB, nomic-embed-text-v1.5, 768-dimensional embeddings, 85K vectors
Structured Data
DuckDB, Apache Parquet, 387K messages, 9,979 structured summaries
Protocol
Model Context Protocol (MCP) — 25 tools exposed to any MCP-compatible client
Sync Pipeline
Python, hourly Clawdbot sync, nightly full sync + embedding generation
Sources
Claude Desktop, ChatGPT, Claude Code, Clawdbot — all platforms unified
Extraction
v6 structured summaries: decisions, questions, breakthroughs, quotes per conversation
Runtime
Keep-alive daemon mode, <12ms average query, 143/143 tests passing

Semantic Intelligence

Not keyword search — conceptual similarity across 387K messages

-- "What do I think about agency?"
-- Doesn't search for the word "agency"
-- Finds conversations about autonomy, sovereignty, control, self-determination

semantic_search(query="agency and self-determination")
  → 82,000 vectors compared in 12ms
  → Results from Torah study, SHELET framework, AI architecture discussions
  → Cross-domain connections surfaced automatically

-- "Have I decided this before?"
search_summaries(query="build vs become visible", extract="decisions")
  → 36,743 decisions searched
  → Returns: dates, contexts, what I decided, and why

-- "How has my thinking evolved?"
thinking_trajectory(topic="human-ai collaboration")
  → Timeline of belief changes
  → First mention → current position
  → Velocity of conceptual shift
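Under the hood, semantic search is nearest-neighbor ranking over embedding vectors. A pure-Python miniature of that ranking (LanceDB does the real work over 85K 768-dimensional vectors; this toy index is an illustration, not the production path):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def semantic_search(query_vec, index, top_k=5):
    """Rank stored vectors by conceptual similarity to the query vector.
    `index` maps doc id -> embedding; a stand-in for the vector store."""
    scored = [(cosine(query_vec, vec), doc_id) for doc_id, vec in index.items()]
    return [doc_id for score, doc_id in sorted(scored, reverse=True)[:top_k]]
```

This is why "agency" surfaces conversations about autonomy and sovereignty: nearby vectors encode nearby meanings, regardless of shared keywords.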

Why This Matters Beyond One Person

This isn't just personal infrastructure. It's a model.

Cognitive Accessibility

Most “productivity tools” assume neurotypical working memory. They assume you can hold 7±2 items, maintain background threads, and context-switch without loss. For monotropic thinkers, that assumption fails catastrophically. Brain MCP is what happens when you design for how the brain actually works instead of how it “should” work.

The MCP Pattern

Model Context Protocol means any AI client can access these tools. Claude Desktop, Clawdbot, Claude Code — they all speak MCP. The prosthetic isn't locked to one interface. It's a protocol-level capability that any AI assistant can leverage. Build the tools once, use them everywhere.

Institutional Memory for Individuals

Companies have knowledge management systems. Individuals don't. Brain MCP demonstrates that one person can build institutional-grade memory infrastructure — semantic search, structured summaries, decision tracking, pattern analysis — using open-source tools and local compute. No cloud dependency. No subscription. Your thoughts, your infrastructure, your sovereignty.

The Bottleneck IS the Amplifier

Monotropic attention isn't a deficiency to compensate for — it's the engine that enables 18 months of solo deep work building this system in the first place. The constraint that makes context die is the same constraint that enables total immersion. Brain MCP doesn't fix the bottleneck. It builds a safety net around it, so the deep work can continue without permanent context loss.

Structured Intelligence

Every conversation distilled into queryable components

111,942
Open Questions
Across 25 domains, preserved
36,743
Decisions Tracked
With context and reasoning
31
Breakthroughs Captured
Moments that changed direction
“The industry builds AI to replace human memory. I built AI to extend it — because the human is the point.”

387K messages. 85K embeddings. 25 tools. 12ms to recover any context.
Built in Beit Shemesh, Israel. Solo. 18 months.