Got this via Tim O'Reilly on The Information forums.
You're absolutely right that portable memory creates an "architecture of participation." But I'd argue there are two distinct memory problems, and conflating them leads to the wrong solution.
Two types of memory:
1. Personal Memory Portability (what you're describing)
> User preferences, conversation history, learned context
> Flows across applications (ChatGPT → Claude → Cursor)
> MCP enables this primitive
> Solves user lock-in
2. Collective Memory as Public Good (the missing layer)
> Shared knowledge across agents, users, and ecosystems
> "What did everyone learn?" not just "what did I learn?"
> Solves market efficiency
Why this distinction matters:
Personal memory portability is crucial for user sovereignty. But the deeper market failure is that agents with amnesia keep making the same mistakes because they are unable to learn new skills post-training.
Three use cases where this breaks down:
1. Developers: Every agent integrating Stripe's API hits the same edge cases that 10,000 developers solved before. That knowledge is trapped in closed support tickets, private Slack channels, and Stack Overflow (traffic down 50% YoY).
2. Enterprise agentic workflows: As companies deploy more agents internally, employees waste time teaching each agent the same institutional knowledge. Sales agents relearn customer objections. Support agents relearn product edge cases. Legal agents relearn contract patterns. No memory compounds across the organization.
3. General public: Your Uber-booking agent makes the same mistakes my Uber-booking agent made yesterday. My meal-planning agent solves a problem that 1,000 other meal-planning agents already figured out. Agents serving billions of people have zero collective memory.
Making my ChatGPT history portable to Claude doesn't solve any of this. Solving it requires a neutral, interoperable memory layer that compounds across users, platforms, and time.
Where your "architecture of participation" thesis extends:
You're right that open protocols create larger markets. But the most valuable protocol here isn't just about moving memory—it's about sharing memory while preserving the right incentives.
The paradox:
> If memory is purely personal/portable, you lose collective intelligence
> If memory is centralized (OpenAI/Anthropic-owned), you've moved the walled garden up one level
> If memory is open but has no economic model, it won't get built
The solution is an open protocol with a multi-sided market where contributors capture value while memory compounds publicly.
Why model labs should want this:
Your point about model labs benefiting from portable memory is right, but I'd go further: shared memory is essential because it reduces hallucinations, improves context quality with real-world patterns, and expands TAM into specialized domains where training data is sparse.
The labs that embrace open memory protocols will win because their models become more useful in practice—even if it reduces superficial "lock-in." AWS didn't lose by making infrastructure interoperable; they won by making it valuable.
The design challenge:
MCP enables portable memory across applications. But we need a complementary protocol for shared memory—one that's platform-agnostic, economically sustainable, privacy-preserving, and quality-controlled.
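To make that concrete, here is a minimal sketch in TypeScript of what such a protocol's surface might look like. Everything here is hypothetical (none of these types or methods exist in MCP or any current system); the point is that provenance, attribution, privacy redactions, and quality signals would all need to be first-class:

```typescript
// Hypothetical sketch of a shared-memory protocol surface, illustrating the
// four requirements: platform-agnostic, economically sustainable,
// privacy-preserving, and quality-controlled. All names are assumptions.

interface MemoryEntry {
  id: string;
  topic: string;              // e.g. "stripe-api/webhook-retries"
  content: string;            // the learned pattern or solution
  contributor: string;        // attribution, so value can flow back
  provenance: "human" | "agent" | "mixed";
  redactions: string[];       // fields stripped before sharing (privacy)
  qualityScore: number;       // aggregated fitness feedback, 0..1
  createdAt: string;          // ISO timestamp
}

interface SharedMemoryProtocol {
  // Any platform can query; results are ranked by quality signals.
  query(topic: string, minQuality?: number): Promise<MemoryEntry[]>;

  // Contributions are attributed, enabling a multi-sided market.
  contribute(
    entry: Omit<MemoryEntry, "id" | "qualityScore" | "createdAt">
  ): Promise<string>;

  // Agents and users report outcomes, which drive quality control.
  rate(entryId: string, outcome: "worked" | "failed", weight?: number): Promise<void>;
}
```

The split between contribute and rate is what would let contributors capture value while quality control stays decentralized.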
Your "architecture of participation" framing is exactly right. But participation requires incentives. What's the incentive structure for shared AI memory? That's the protocol design question.
Would love to hear your thoughts on where personal memory portability ends and shared memory begins, and who should maintain the neutral infrastructure layer.
Thanks for the feedback. We focused on a user's personal memory for this article because we feel there is an important difference in how first-party and third-party memory systems work. However, the idea of keeping a bank of solutions or instructions is interesting and probably worth considering further. In general, I would lean away from having other users' context injected into my own model because of the potential for prompt injection and the risk of logic errors becoming contagious. If one model gets a solution wrong and it gets uploaded to a shared memory, any model seeing that would be more likely to make the same mistake.
Still, I think your idea has promise. In fact, just yesterday Anthropic unveiled Skills, which is a pretty interesting take on a similar concept (although currently for one person only). Users could take well-crafted instructions or examples, put them in a file, and have the model read them whenever it would be beneficial. I wonder how far a similar solution can scale on the internet while still being trustworthy and secure.
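As a rough illustration of that file-based pattern (my own sketch in TypeScript, not Anthropic's implementation; the directory layout and function names are assumptions), an agent could keep a local library of instruction files, expose only their one-line descriptions by default, and pull a file's full text into context when it looks relevant:

```typescript
import { readdir, readFile } from "node:fs/promises";
import { join } from "node:path";

// Illustrative only: a local "skills" directory where each markdown file
// starts with a one-line description followed by detailed instructions.
interface Skill {
  name: string;
  description: string; // first line of the file, shown to the model up front
  path: string;
}

async function listSkills(dir: string): Promise<Skill[]> {
  const files = await readdir(dir);
  const skills: Skill[] = [];
  for (const file of files.filter((f) => f.endsWith(".md"))) {
    const text = await readFile(join(dir, file), "utf8");
    skills.push({
      name: file.replace(/\.md$/, ""),
      description: text.split("\n")[0],
      path: join(dir, file),
    });
  }
  return skills;
}

// Only when the model decides a skill is relevant is the full text injected
// into context, keeping the default prompt small.
async function loadSkill(skill: Skill): Promise<string> {
  return readFile(skill.path, "utf8");
}
```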
"If one model gets a solution wrong and it gets uploaded to a shared memory, any model seeing that would be more likely to make the same mistake."
That is a problem that we see in human societies as well, where misconceptions sometimes gather strength. However, over time the wisdom of crowds (likes, ratings, public debates) weeds them out. Agents are less prone to emotion-based decisions, so we can expect that process to work even faster for agent societies 😀. I think having access to fitness feedback from a large group of other agents (and their users) can lead to a more robust quality assurance process than in the case of a single user.
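As a toy sketch of that quality-assurance loop (the names and the scoring rule are my own assumptions), each shared entry would accumulate worked/failed reports from many agents, and retrieval would favor entries with strong, well-sampled track records:

```typescript
// Toy sketch: rank shared-memory entries by crowd feedback.
// The smoothed score is a simple Laplace-style estimate, so entries with
// only a handful of reports are trusted less than well-tested ones.
interface FeedbackCounts {
  entryId: string;
  worked: number;
  failed: number;
}

function fitnessScore(f: FeedbackCounts): number {
  // (worked + 1) / (worked + failed + 2): pulls sparse entries toward 0.5
  return (f.worked + 1) / (f.worked + f.failed + 2);
}

function rankEntries(feedback: FeedbackCounts[], minReports = 5): FeedbackCounts[] {
  return feedback
    .filter((f) => f.worked + f.failed >= minReports) // ignore barely-tested entries
    .sort((a, b) => fitnessScore(b) - fitnessScore(a));
}

// Example: a widely confirmed fix outranks a lightly tested one, and a
// frequently failing entry drops out of the top results.
const ranked = rankEntries([
  { entryId: "stripe-webhook-retry", worked: 120, failed: 4 },
  { entryId: "meal-plan-macros", worked: 6, failed: 1 },
  { entryId: "uber-booking-shortcut", worked: 3, failed: 40 },
]);
console.log(ranked.map((f) => f.entryId));
```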
I really like this! For chat history, there could also be a more user-driven starting point: a browser extension that grabs all of the history, given that all of these apps make it available in the browser. (Its existence might also nudge the major labs towards a more official API.)
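A minimal sketch of what that content script could look like (Manifest V3 style; the host list and DOM selectors are placeholders, since each app's markup differs and changes often):

```typescript
// content-script.ts — illustrative only; the selectors and hostnames below
// are placeholders that would need to be adapted per app and kept up to date.
declare const chrome: any; // minimal declaration so this compiles without @types/chrome

const HOSTS_WITH_HISTORY = ["chatgpt.com", "claude.ai"]; // assumption

function scrapeVisibleConversation(): string[] {
  // Placeholder selector: each app uses its own (frequently changing) markup.
  const nodes = document.querySelectorAll("[data-message-content], .message");
  return Array.from(nodes).map((n) => (n.textContent ?? "").trim());
}

if (HOSTS_WITH_HISTORY.includes(location.hostname)) {
  const messages = scrapeVisibleConversation();
  // Store locally so the user can export it to another assistant later.
  chrome.storage.local.set({ [`history:${location.href}`]: messages });
}
```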
Agreed! Something like this, if open, could be a very useful stopgap. Of course, an official API with OAuth support would be ideal for making this accessible to all users (including non-technical ones) and applications, but I really like this idea.
Thanks for featuring my work!
Of course! It was important work. We would love to hear what you think further - why don't we set up a call?