Like any new technology, AI (in particular, LLMs) is riddled with too much hype and too little information. On social media, you often see polarizing opinions: one person claims that Skynet is here, while another demonstrates that AI is still as dumb as bricks.
Instead of gaining understanding, you're forced to be a spectator to a debate. Neither side makes much sense; each is concerned only with proving the other wrong.
Moreover, most of the information on AI is highly mathematical in nature. It's not readily interpretable without the proper background knowledge, and it's far more nuanced than the gloom-and-doom picture painted every day.
Occam's razor says that, all things being equal, the explanation that requires the fewest assumptions is the best. However, most people miss the phrase "all things being equal": only when two options are equally valid should you lean toward the simpler one. Instead, most pick the easy option, the one that doesn't require much thinking, and assume it is also the simple one.
Well, easy and simple are not the same thing.
And the easy version, the one that's easily digestible, is almost invariably wrong. This surface-level understanding is why most predictions about AI miss the mark: a faulty mental model leads to faulty predictions, and those predictions tend toward the extreme rather than the nuanced.
The real problem with AI, then, is not a lack of opinions or predictions. It's the lack of a mental model that gives everyone enough understanding to make up their own minds. And if you don't make up your own mind, someone else will make it up for you.


