Discussion about this post

Shlok Khemani:

Thanks for featuring my work!

Scott:

Got this via Tim O'Reilly on The Information forums.

You're absolutely right that portable memory creates an "architecture of participation." But I'd argue there are two distinct memory problems, and conflating them leads to the wrong solution.

Two types of memory:

1. Personal Memory Portability (what you're describing)

> User preferences, conversation history, learned context

> Flows across applications (ChatGPT → Claude → Cursor)

> MCP enables this primitive

> Solves user lock-in

2. Collective Memory as Public Good (the missing layer)

> Shared knowledge across agents, users, and ecosystems

> "What did everyone learn?" not just "what did I learn?"

> Solves market efficiency
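The distinction between the two types might be made concrete with a minimal Python sketch (all field names and the example records are hypothetical, purely for illustration):

```python
from dataclasses import dataclass, field


@dataclass
class PersonalMemory:
    """Type 1: user-owned memory that travels with the user across apps."""
    user_id: str       # scoped to one user; portable via something like MCP
    preferences: dict  # e.g. {"tone": "concise"}
    history: list      # prior conversations and learned context


@dataclass
class CollectiveMemory:
    """Type 2: shared knowledge that compounds across users and agents."""
    topic: str    # e.g. "stripe-api:webhook-retries"
    lesson: str   # what was learned
    contributors: list = field(default_factory=list)  # provenance
    confirmations: int = 0  # how many agents independently re-verified it


# Personal memory answers "what did I learn?";
# collective memory answers "what did everyone learn?"
mine = PersonalMemory("user-42", {"tone": "concise"}, [])
shared = CollectiveMemory("stripe-api:webhook-retries",
                          "Retries are not idempotent by default")
```

Note the structural difference: the first record is keyed to a user, the second to a topic with provenance and verification attached, because it has to be trustworthy to strangers.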

Why this distinction matters:

Personal memory portability is crucial for user sovereignty. But the deeper market failure is that agents with amnesia keep making the same mistakes, because they can't learn new skills post-training.

Three use cases where this breaks down:

1. Developers: Every agent integrating Stripe's API hits the same edge cases that 10,000 developers solved before. That knowledge is trapped in closed support tickets, private Slack channels, and Stack Overflow (traffic down 50% YoY).

2. Enterprise agentic workflows: As companies deploy more agents internally, employees waste time teaching each agent the same institutional knowledge. Sales agents relearn customer objections. Support agents relearn product edge cases. Legal agents relearn contract patterns. No memory compounds across the organization.

3. General public: Your Uber-booking agent makes the same mistakes my Uber-booking agent made yesterday. My meal-planning agent solves a problem that 1,000 other meal-planning agents already figured out. Agents serving billions of people have zero collective memory.

Making my ChatGPT history portable to Claude doesn't solve any of this. Solving it requires a neutral, interoperable memory layer that compounds across users, platforms, and time.
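To make "a neutral layer that compounds" concrete, here's a toy sketch of what the interface might look like. Nothing here is a real protocol; the class names, method signatures, and task keys are all made up:

```python
from abc import ABC, abstractmethod


class SharedMemoryLayer(ABC):
    """Hypothetical interface for a neutral, cross-platform memory layer.
    Any agent (ChatGPT, Claude, Cursor, ...) could read and write it."""

    @abstractmethod
    def recall(self, task_signature: str) -> list:
        """Return lessons other agents recorded for a similar task."""

    @abstractmethod
    def record(self, task_signature: str, lesson: str, agent_id: str) -> None:
        """Contribute a lesson so future agents skip the same mistake."""


class InMemoryLayer(SharedMemoryLayer):
    """Toy in-process implementation, for illustration only."""

    def __init__(self):
        self._store = {}  # task_signature -> list of lessons

    def recall(self, task_signature):
        return self._store.get(task_signature, [])

    def record(self, task_signature, lesson, agent_id):
        self._store.setdefault(task_signature, []).append(lesson)


layer = InMemoryLayer()
# One agent hits a pitfall and records it...
layer.record("uber:book-ride", "Surge pricing needs explicit user confirm", "agent-a")
# ...and a *different* agent, on a different platform, benefits:
lessons = layer.recall("uber:book-ride")
```

The key property is that `record` and `recall` can be called by different agents owned by different vendors; that's what makes the layer compound across users, platforms, and time instead of resetting per silo.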

Where your "architecture of participation" thesis extends:

You're right that open protocols create larger markets. But the most valuable protocol here isn't just about moving memory—it's about sharing memory while preserving the right incentives.

The paradox:

> If memory is purely personal/portable, you lose collective intelligence

> If memory is centralized (OpenAI/Anthropic-owned), you've moved the walled garden up one level

> If memory is open but has no economic model, it won't get built

The solution is an open protocol with a multi-sided market where contributors capture value while memory compounds publicly.
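One way to picture "contributors capture value while memory compounds publicly" is a ledger that credits a lesson's contributor each time another agent reuses it. This is a deliberately naive sketch (flat fee, no quality weighting, no payout mechanics); all names are invented:

```python
from collections import defaultdict


class ContributorLedger:
    """Toy model of the multi-sided incentive: every reuse of a
    contributed lesson accrues credit to its contributor."""

    def __init__(self):
        self.credits = defaultdict(float)
        self.lessons = {}  # lesson_id -> (contributor, text)

    def contribute(self, lesson_id, contributor, text):
        self.lessons[lesson_id] = (contributor, text)

    def reuse(self, lesson_id, fee=1.0):
        contributor, text = self.lessons[lesson_id]
        self.credits[contributor] += fee  # contributor captures value...
        return text                       # ...while the lesson stays shared


ledger = ContributorLedger()
ledger.contribute("stripe-edge-1", "dev-alice",
                  "Webhook retries need idempotency keys")
for _ in range(3):  # three downstream agents reuse the lesson
    ledger.reuse("stripe-edge-1")
```

The point of the sketch is the shape of the market, not the numbers: knowledge stays public and compounding, while attribution flows back to whoever contributed it.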

Why model labs should want this:

Your point about model labs benefiting from portable memory is right, but I'd go further: shared memory is essential because it reduces hallucinations, improves context quality with real-world patterns, and expands TAM into specialized domains where training data is sparse.

The labs that embrace open memory protocols will win because their models become more useful in practice—even if it reduces superficial "lock-in." AWS didn't lose by making infrastructure interoperable; they won by making it valuable.

The design challenge:

MCP enables portable memory across applications. But we need a complementary protocol for shared memory—one that's platform-agnostic, economically sustainable, privacy-preserving, and quality-controlled.
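For the "quality-controlled" requirement specifically, one candidate mechanism is gating a lesson's visibility behind independent confirmations. A minimal sketch (the threshold and class are invented, and a real protocol would also need spam and collusion resistance):

```python
class QualityGatedStore:
    """Sketch of one quality-control idea: a lesson becomes visible to
    other agents only after enough distinct agents confirm it."""

    THRESHOLD = 2  # arbitrary for illustration

    def __init__(self):
        self._pending = {}  # lesson text -> set of confirming agent ids

    def submit(self, lesson, agent_id):
        self._pending.setdefault(lesson, set()).add(agent_id)

    def visible(self):
        """Only lessons with THRESHOLD+ independent reports are served."""
        return [lesson for lesson, agents in self._pending.items()
                if len(agents) >= self.THRESHOLD]


store = QualityGatedStore()
store.submit("Rate limits reset hourly, not daily", "agent-a")
single = store.visible()   # one report is not enough
store.submit("Rate limits reset hourly, not daily", "agent-b")
confirmed = store.visible()  # a second, distinct agent unlocks it
```

Privacy-preservation would need its own mechanism on top (e.g., stripping user-identifying context before submission), which this sketch doesn't attempt.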

Your "architecture of participation" framing is exactly right. But participation requires incentives. What's the incentive structure for shared AI memory? That's the protocol design question.

Would love to hear your thoughts on where personal memory portability ends and shared memory begins, and who should maintain the neutral infrastructure layer.
