Yes, an LLM as a product must evolve, because in its current form it's not financially viable without external funding; it's essentially a public service. But commercialising it doesn't necessarily translate to bad UX. As valuable to society as it is, honest consumer advice in the candle-shopping department is hardly the pinnacle of value that LLMs might provide to users. To cite the Amazon recommender example above: when aptly embedded into an existing service, an LLM can deliver a lot of valuable user experience ($10 billion of it, apparently). The thing is, LLMs are not e-commerce or social media platforms, or even a search engine; in fact, it's hard to tell what economic role they are supposed to fill. Perhaps too much money is chasing the development of their capabilities in the hope of harvesting AGI spoils. Or maybe, because the technology is still relatively young, sensible applications have not yet made it into public consciousness. I strongly agree with the transparency and accountability points, though. As a consumer of its advice and recommendations, I'd like to be able to tell what rules an LLM followed and which guardrails it obeyed in generating them. Finally, on the users themselves: I think complacency is a default setting, so the best you can do is actively try to remove it from their decision making. I'd like to see some reasonable proposals in that area.
Hi Qbson, thanks so much for your comment. I agree that commercialization can be compatible with good outcomes for users. My point (sensationalist title aside) is that it's not a given. The candle case, and more broadly the shopping-assistant example, is just illustrative of one way in which value creation for users could fall out of alignment with value creation for AI companies. You're also right that no one knows what the long-term economic role of LLMs will be, and that a lot of investment is essentially a gamble on the potential of future A(G)I. I'm more focused on the present, though: given today's models and today's financial pressures, how do we ensure that LLMs don't slide into the same rent-extractive patterns we see elsewhere on the web (e.g., attention economics in search, engagement bait and slopification in entertainment)? Lastly, on user complacency, I agree that it might be the default. However, I think that is partly because the dominant narrative tells people that tech evolution is inevitable and uncontrollable, so why would they bother to fight it? I don't buy that. These companies are, at least in principle, accountable to democratic and economic institutions. Users can choose what they adopt and can push their local politicians for better regulation.
Thanks for sharing this