4 Comments
Fred Malherbe

They want machines that "think", so they build machines that track and replicate language. And for a few minutes, it does seem that you can chat with the robot and that it "understands" you. But there's much more to thinking than responding to language prompts with autocompleted strings of language.

The more compute they throw at this problem, the more dimensions they add, the more complex and tangled the whole process becomes, and the more the machines will get caught up in their own routines and start hallucinating.

I don't hear anyone talk about the "curse of dimensionality", but this is a brick wall that AI is already hitting. When you add a dimension to a problem, the complexity of the calculations goes up factorially. If you have four dimensions, the complexity scales as 4! = 4 x 3 x 2 x 1 = 24. Add one dimension and it scales as 5! = 5 x 4 x 3 x 2 x 1 = 120.

Computing power scales exponentially (Moore's Law).

Check it out: after a certain point, factorial expressions *always* increase faster than exponential ones, way faster. As they add dimensions, they are absolutely crippling their ability to compute within them.
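To make the growth-rate comparison concrete, here's a quick sketch. The base 2 for the exponential is an illustrative assumption standing in for "compute doubles per generation"; the point is only that n! eventually overtakes any fixed-base exponential:

```python
import math

# Compare factorial growth (the claimed cost of adding dimensions)
# against exponential growth (Moore's-Law-style compute scaling).
# Base 2 is an illustrative assumption, not a modeling claim.
for n in range(1, 11):
    fact = math.factorial(n)
    expo = 2 ** n
    print(f"n={n:2d}  n! = {fact:8d}  2^n = {expo:5d}  ratio = {fact / expo:9.2f}")
```

Already at n = 4 the factorial (24) passes 2^4 (16), and the ratio between them keeps growing from there, which is the crossover the comment is gesturing at.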

You're right, they have to monetize their chatbots and it's going to be very ugly. But there are more fundamental issues they're going to run up against.

I wrote a very strange article describing the hardware and software you would require to create a machine that truly thinks. It's mostly written as a joke to show just how far they are from even beginning to understand what's involved. But the platform I outline is deadly serious. Take a look.

You'll see that the thoughts in our heads are actually just a shadow of those Platonic forms that exist in the plane of pure ideas, the astral realm.

So when they model language to get at thoughts in people's heads, they're actually modelling a proxy of a proxy. They have absolutely no way of getting directly at the kind of ideas that bubble up in the human mind. They think they can duplicate this process with electrical circuitry. They are spectacularly wrong. And it's going to take a whole-of-society collapse to prove this to them.

This bubble cannot burst soon enough.

https://systemshaywire.substack.com/p/the-platform-needed-for-artificial

Dhruv Ghulati

Thanks for sharing this

Qbson

Yes, an LLM as a product must evolve: in its current form it's not financially viable without external funding, essentially a public service. But commercialising it doesn't necessarily translate to bad UX. As valuable to society as it is, honest consumer advice in the candle-shopping department is hardly the pinnacle of value that LLMs might provide to the user. To cite the Amazon recommender example above, when aptly embedded into an existing service, an LLM can provide a lot of valuable user experience ($10 billion of it, apparently).

The thing is that LLMs are not e-commerce or social media platforms, or even a search engine; it's actually hard to tell what economic role they are supposed to fill. Perhaps too much money is chasing the development of their capabilities in the hope of harvesting AGI spoils. Or maybe, because the technology is still relatively young, the sensible applications have not yet made it into public consciousness.

I strongly agree with the transparency and accountability points, though. As a consumer of its advice and recommendations, I'd like to be able to tell what rules an LLM followed and which guardrails it obeyed in generating them.

Finally, on the users themselves: I think complacency is a default setting, and the best you can do is actively try to remove it from their decision-making. I'd like to see some reasonable proposals in that area.

Rufus Rock

Hi Qbson, thanks so much for your comment. I agree that commercialization can be compatible with good outcomes for users; my point (sensationalist title aside) is that it's not a given. The candle case, and more broadly the shopping-assistant example, is just one illustration of how value creation for users could fall out of alignment with value creation for AI companies.

You're also right that no one knows what the long-term economic role of LLMs will be, and that a lot of investment is essentially a gamble on the potential of future A(G)I. I'm more focused on the present, though: given today's models and today's financial pressures, how do we ensure that LLMs don't slide into the same rent-extractive patterns we see elsewhere on the web (e.g., attention economics in search, engagement bait and slopification in entertainment)?

Lastly, on user complacency: I agree that it might be the default. But I think that's partly because the dominant narrative tells people that tech evolution is inevitable and uncontrollable, so why bother fighting it? I don't buy that. These companies are, at least in principle, accountable to democratic and economic institutions. Users can choose what they adopt and can push their local politicians for better regulation.