Discussion about this post

Fred Malherbe

They want machines that "think", so they create machines that track and replicate language. And for a few minutes, it does seem that you can chat with the robot and it "understands" you. But there's much more to thinking than just responding to language prompts with autocomplete language strings.

The more compute they throw at this problem, the more dimensions they add, the more complex and tangled the whole process becomes, and the more the machines will get caught up in their own routines and start hallucinating.

I don't hear anyone talk about the "curse of dimensionality", but this is a brick wall that AI is already hitting. When you add a dimension to a problem, the complexity of the calculations goes up factorially. If you have four dimensions, the complexity scales as 4! = 4 x 3 x 2 x 1 = 24. Add one dimension and it scales as 5! = 5 x 4 x 3 x 2 x 1 = 120.

Computing power scales exponentially (Moore's Law).

Check it out: after a certain point, factorial expressions *always* increase faster than exponential ones, way faster. As they add dimensions, they are absolutely crippling their ability to compute within them.
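The factorial-versus-exponential comparison above can be checked in a few lines of Python. This is an illustrative sketch only: using 2^n as the stand-in for exponentially scaling compute is my assumption, not anything precise about Moore's Law.

```python
import math

# Compare factorial growth (the comment's model of dimensional complexity)
# against exponential growth (2**n as a stand-in for compute scaling).
for n in range(1, 11):
    print(n, math.factorial(n), 2 ** n)

# Find the first n where n! overtakes 2**n and stays ahead.
crossover = next(n for n in range(1, 20) if math.factorial(n) > 2 ** n)
print("n! overtakes 2**n at n =", crossover)  # → 4 (4! = 24 > 2**4 = 16)
```

From n = 4 onward the factorial column pulls away from the exponential one and never looks back, which is the "after a certain point" in the comment.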

You're right, they have to monetize their chatbots and it's going to be very ugly. But there are more fundamental issues they are going to hit up against.

I wrote a very strange article describing the hardware and software you would require to create a machine that truly thinks. It's mostly written as a joke to show just how far they are from even beginning to understand what's involved. But the platform I outline is deadly serious. Take a look.

You'll see that the thoughts in our heads are actually just a shadow of those Platonic forms that exist in the plane of pure ideas, the astral realm.

So when they model language to get at thoughts in people's heads, they're actually modelling a proxy of a proxy. They have absolutely no way of getting directly at the kind of ideas that bubble up in the human mind. They think they can duplicate this process with electrical circuitry. They are spectacularly wrong. And it's going to take a whole-of-society collapse to prove this to them.

This bubble cannot burst soon enough.

https://systemshaywire.substack.com/p/the-platform-needed-for-artificial

Dhruv Ghulati

Thanks for sharing this

