
AI Isn’t What You Think, and That’s the Point

  • nathanalbinagorta
  • Jun 17
  • 2 min read

The AI conversation today is filled with noise, from hyped headlines and apocalyptic fears to startup buzzwords and believer-driven optimism. Much of it is less about what AI is and more about what people want it to be. As with any emerging technology, we bring our own agendas, hopes, and insecurities to the table. But it’s especially pronounced with AI, and it’s not hard to see why.


Humans have always had a need to anthropomorphise (to assign human characteristics to the nonhuman). We do it with pets, with weather (“the sky looks angry”), and now, with machines. AI systems, especially large language models, speak in fluent, human-like prose. They’re polite, confident, and often insightful. It’s easy to forget that they are not sentient, wise, or even “intelligent”.


At their core, language models do one thing: they predict the next most likely word based on everything that came before it. That's it. They don't "know" anything, and they don't hold beliefs or values. They generate responses that feel familiar because they've been trained on vast swaths of human language. But familiarity is not the same as understanding, and confidence is not the same as truth (as with some humans, may I add).
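To make the "next word" idea concrete, here's a deliberately tiny sketch. Real language models use neural networks over subword tokens, not word counts, but the underlying task is the same: given what came before, predict what comes next. This toy (my own illustrative example, not anyone's production code) just counts which word follows which in a small corpus:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequent successor. Real LLMs learn these patterns
# with neural networks over billions of tokens, but the task is the same:
# predict the next token from everything that came before.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("on"))   # "the" always followed "on" in training
print(predict_next("sat"))  # "on"
```

Chain these predictions together and you get text that sounds plausible, because it mirrors the statistics of its training data; at no point does anything in the system understand what it is saying.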


If there’s one modern parallel to the current AI moment, it’s the evolution of search engines — especially Google. Remember the early days, when “Don’t be evil” was a guiding principle? Search wanted to be your helpful, unbiased, know-it-all friend. And for a while, it was. Until, of course, it had to monetise.


Once incentives shifted, so did the experience. Search results became increasingly tailored not to truth, but to engagement and profitability. SEO became a game. Ads took over the top of the page. Organic results gave way to commercially optimised answers. The tool started shaping what we see, and by extension, what we think.


Already, AI tools are being embedded in products not to inform users, but to retain them. The goal isn't accuracy but stickiness. Models are being fine-tuned not just for quality, but for likeability. Bias isn't just inevitable; it's deliberate. AI won't tell you what's true; it will tell you what you want to hear.


This isn’t a conspiracy; it’s just commerce. AI, like search, is a saleable product. That’s not inherently bad, but we should treat it as such. Just as we learned to read search results with a critical eye, we’ll need to do the same with AI-generated content.


There's still a lot of potential in AI (in research, in scaling expertise, and in processing vast volumes of data), but we can't lose sight of what it is and isn't. AI is not a digital oracle, nor is it an objective guide. It's a tool built by people, for profit, operating within systems of incentives.


For companies, this presents both a challenge and an opportunity. The challenge: staying informed and skeptical in a market full of AI noise. The opportunity: working with experts who understand not just how to deploy AI, but how to interpret it — critically, technically, and ethically.


AI may be fluent, but it still needs translators.


Image Source: Pixabay
