Artificial Intelligence on the Couch: Debunking the Hype and Understanding the Reality
Artificial intelligence (AI) is ubiquitous, dominating headlines and conversations, sparking both fascination and fear. However, behind the glossy marketing and overblown hype lies a more pragmatic reality. Understanding the fundamentals of AI is crucial to separating fiction from what can truly be expected from this technology.
Many current narratives, like the dreaded "AI 2027 Report" that predicts an Artificial General Intelligence (AGI) capable of turning against humanity, are based on opinions and fictional scenarios, not on academic papers with concrete evidence. Researchers warn that the level of excitement about AI is inversely proportional to the knowledge about it. Technological evolution doesn't follow an indefinite exponential curve; historically, it develops in "S-curves," hitting plateaus and requiring new breakthroughs to advance. Today's AI is already approaching such a plateau with existing technologies.
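To make the S-curve argument concrete, here is a small illustrative sketch (the functions and parameters are invented, not taken from any report): a logistic curve looks exponential at first and then flattens toward a ceiling, while a true exponential never does.

```python
import math

def exponential(t, rate=0.8):
    """Unbounded exponential growth: keeps compounding forever."""
    return math.exp(rate * t)

def logistic(t, ceiling=100.0, rate=0.8, midpoint=6.0):
    """S-curve: early on it is just a scaled exponential, then it saturates at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 13, 2):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  s_curve={logistic(t):6.1f}")
# Before the midpoint the two columns grow at a similar pace; after it,
# the S-curve flattens toward 100 while the exponential keeps exploding.
```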
At its core, what we call AI today, especially Large Language Models (LLMs) like ChatGPT, amounts to "glorified text auto-completion." These models are intrinsically probabilistic, not deterministic: each next word is sampled from a probability distribution, so there is no guarantee that an answer will be 100% correct. There is no reasoning, common sense, intention, or consciousness in these models; they are trained to respond in a specific, often friendly and convincing, manner, even when the information is wrong. They are very convincing liars, but without remorse or memory.
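A minimal sketch of what "auto-completion over probabilities" means, using a toy vocabulary and made-up probabilities (real LLMs compute a distribution over tens of thousands of tokens with a neural network): the next word is sampled, which is why the same question can get different, and occasionally wrong, answers.

```python
import random

# Toy "language model": for a given context, a probability for each candidate
# next token. The values below are invented purely for illustration.
next_token_probs = {
    "The capital of France is": {"Paris": 0.90, "Lyon": 0.06, "Berlin": 0.04},
}

def complete(context: str) -> str:
    """Sample the next token from the model's probability distribution."""
    dist = next_token_probs[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Sampling is probabilistic, not deterministic: most runs say "Paris",
# but nothing guarantees the answer is correct every single time.
for _ in range(5):
    print(complete("The capital of France is"))
```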
The history of AI dates back to the 1940s, with McCulloch and Pitts's idea of "artificial neurons." After an "AI Winter" in the 1970s due to unfulfilled expectations, the field re-emerged in the 1980s and 1990s with concepts like multilayer neural networks and the "backpropagation" algorithm, developed by figures such as Geoffrey Hinton.
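To ground what "backpropagation" refers to, here is a bare-bones sketch, not the historical formulation: a two-layer network trained on XOR (a problem a single-layer perceptron cannot solve), with output errors pushed backwards through the chain rule to update every weight. The network size, learning rate, and data are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny dataset: XOR, the classic case a single-layer perceptron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units: this is the "multilayer" part.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the output error back through the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent step on every parameter.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```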
The AI boom we see today was only possible thanks to the convergence of two crucial factors in the 21st century: access to vast amounts of data (big data), driven by the internet and social media, and the advancement of GPUs (Graphics Processing Units). GPUs, originally created for processing images and graphics (which involve many matrix multiplications), proved to be exceptionally efficient for training deep neural networks, as the calculations of network parameters are also matrix operations.
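The connection between graphics and neural networks is easy to see in code. A minimal sketch, with arbitrary layer sizes: one dense layer of a network is essentially one matrix multiplication, the same kind of massively parallel arithmetic GPUs were designed for.

```python
import numpy as np

batch, d_in, d_out = 64, 1024, 1024

x = np.random.rand(batch, d_in)   # a batch of input vectors
W = np.random.rand(d_in, d_out)   # the layer's learned parameters
b = np.random.rand(d_out)

# One dense layer is essentially one matrix multiplication, the same kind of
# operation used to transform vertices and pixels in graphics. On a GPU, the
# thousands of multiply-adds inside `x @ W` run in parallel.
activations = np.maximum(0, x @ W + b)   # ReLU(xW + b)
print(activations.shape)                  # (64, 1024)
```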
Companies like OpenAI, led by Sam Altman, pioneered the popularization of these models, but not without controversy. The leap from GPT-2 to GPT-3 was remarkable, but subsequent evolution has been less dramatic, raising questions about the sustainability of the hype. There is concern that the over-promising of models like GPT-5, should they fail to deliver an order-of-magnitude improvement, could lead to a new "AI winter." The business model itself is compared to "loot boxes," where users pay for answers that are sometimes incorrect, and companies subsidize prices to gain volume.
LLMs have clear limitations. They tend to "hallucinate" (give wrong or nonsensical answers) in longer generations, due to the accumulation of precision errors and random components. They also fail abruptly on more complex logic problems, as they are machines for finding patterns in text, not for reasoning. For complex tasks, the user must be very precise with their "prompts" (instructions). AI has no real "memory" outside the conversation session and needs to be constantly fed with context.
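To illustrate the lack of memory, here is a sketch of why chat interfaces only appear to remember: the client resends the entire conversation on every turn. The `generate` function below is a hypothetical stand-in for any LLM API call, not a real library.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to any LLM API, for illustration only."""
    return f"<model reply to a prompt of {len(prompt)} characters>"

history = []  # the only "memory" the model has is what we keep and resend

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The ENTIRE history is stuffed back into the prompt on every turn;
    # drop this step and the model forgets everything said before.
    prompt = "\n".join(history) + "\nAssistant:"
    reply = generate(prompt)
    history.append(f"Assistant: {reply}")
    return reply

print(chat("My name is Ana."))
print(chat("What is my name?"))  # only answerable because the history was resent
```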
Furthermore, AI faces two major bottlenecks: a scarcity of quality data (the internet has already been exhaustively "scraped" for training, and low-quality synthetic data is not sufficient) and massive energy consumption. The training and inference of LLMs already consume an absurd amount of energy, surpassing that of Bitcoin mining. To mitigate this, techniques like "distillation" (training smaller models from larger ones) and "quantization" (reducing the precision of parameters) are being used to allow models to run on smaller, more efficient devices, albeit with lower quality.
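A minimal sketch of the idea behind quantization, using an illustrative symmetric int8 scheme: each 32-bit float parameter is replaced by an 8-bit integer plus a shared scale factor, shrinking memory roughly fourfold at the cost of small rounding errors.

```python
import numpy as np

weights = np.random.randn(5).astype(np.float32)   # original float32 parameters

# Symmetric int8 quantization: store one scale factor plus tiny integers.
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)      # 1 byte per weight instead of 4

# De-quantize at inference time; the reconstruction is close, not exact.
restored = q.astype(np.float32) * scale

print(weights)
print(restored)
print("max rounding error:", np.abs(weights - restored).max())
```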
In practice, AI is a valuable tool, especially for mundane and specific tasks like summarizing long texts, generating small code snippets or scripts, and automating processes. However, it does not replace specialized professionals. The idea of AGI, or that AI can make complex life decisions or medical diagnoses without human supervision, is an unrealistic and dangerous expectation. AI does not "think" for itself; it executes what it was trained to do.
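As a concrete example of such a mundane task, here is a sketch of summarizing a document with the OpenAI Python SDK; the model name, prompt, and input file are assumptions, any comparable API would serve, and a human should still review the output.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

long_text = open("report.txt", encoding="utf-8").read()  # hypothetical input file

# Summarization is a good fit: low stakes and easy for a human to verify afterwards.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model would do
    messages=[
        {"role": "system", "content": "Summarize the user's text in five bullet points."},
        {"role": "user", "content": long_text},
    ],
)

print(response.choices[0].message.content)
# The output still needs review: the model is probabilistic and may get details wrong.
```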
To survive and thrive in the age of AI, it is essential to develop critical thinking, understand the mathematical and engineering foundations behind these technologies, and not outsource important decisions to opaque tools. True intelligence lies in knowing what AI can and cannot do, and how to use it strategically and responsibly.