AI tech news is a constant back and forth between announcements of exciting, innovative potential and warnings about concerning capabilities straight out of sci-fi thrillers featuring killer robots – like a tug of war between the devil on one shoulder and the angel whispering in the other ear.
And because AI tech is changing and progressing at a rate of knots, unlike almost anything we’ve ever experienced before, even prominent, knowledgeable figures from deep within the industry seem unable to commit to a single narrative.
Do we like AI or not? Is it changing our lives for the better, or will it lead to the eventual demise of life as we know it and the human race?
A little dramatic, sure, but the point is, there’s no real consensus or long-lasting agreement on the AI issue.
It’s healthy (if not necessary), of course, to have experts change their minds after having been presented with new information and evidence – nobody should be resolutely stuck to a single plan just for the sake of it. However, too much back and forth and contradiction is unsettling, and it feels as though this is becoming the norm in the AI industry.
The latest switch-back in the AI universe is centred on reasoning. Up until recently, the primary objective of AI experts and tech developers was to replicate human reasoning – that would put us one (big) step closer to achieving artificial general intelligence (AGI).
Or so we’ve been told.
However, hold on to your britches (or, hold on to your, uh, cargo pants? What are the kids wearing these days?), because seemingly overnight, we’re no longer all about deep-thinking AI. In fact, we’re being told that having AI do too much reasoning is actually bad news.
The main headline? It’s expensive.
Not just a few extra pennies expensive – we’re talking AI companies not only bleeding money, but also consuming far more energy than necessary, turning an industry that was already an environmental concern into a greedy, wasteful monstrosity.
What Does “Too Much Reasoning” Mean?
In the past, we’ve been told that “intelligent” processing, so to speak, is the aim of the game for AI models – that we’re trying our best to make them as sophisticated as possible. Naturally, a big component of that has been trying to get these models to “think” and “reason” in the same way that humans can and do.
So, it’s no surprise that the most recent revelation that models are doing “too much reasoning” feels like it’s come out of left field. Why is it bad for AI models to “think” too much? Surely we want them to become as sophisticated and “smart” as possible?
Well, yes and no.
Yes, we want them to keep progressing, becoming more sophisticated and capable than ever before.
But what experts are quickly realising is that while all AI processing is expensive and energy-hungry, advanced “reasoning” requires far more processing than normal – reasoning models generate long chains of intermediate “thinking” tokens before they answer, and every one of those tokens costs compute – making them dramatically more expensive to run and far more power-hungry.
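To put rough numbers on it, here’s a minimal back-of-envelope sketch in Python. The price and the token counts are illustrative assumptions, not real figures from any provider – the point is simply that hidden “thinking” tokens multiply the bill for the exact same answer:

```python
# Back-of-envelope sketch: how hidden "thinking" tokens inflate cost.
# All numbers below are illustrative assumptions, not real provider prices.

PRICE_PER_1K_OUTPUT_TOKENS = 0.01  # assumed price, in dollars

def query_cost(answer_tokens: int, thinking_tokens: int = 0) -> float:
    """Cost of one response; reasoning models bill 'thinking' tokens as output."""
    total_tokens = answer_tokens + thinking_tokens
    return total_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS

# A simple query: a 50-token answer with no hidden reasoning.
plain = query_cost(answer_tokens=50)

# The same query on a reasoning model that "thinks" for 2,000 tokens first.
reasoning = query_cost(answer_tokens=50, thinking_tokens=2000)

print(f"plain: ${plain:.4f}  reasoning: ${reasoning:.4f}  "
      f"ratio: {reasoning / plain:.0f}x")  # -> 41x the cost for the same answer
```

Multiply that gap across millions of queries a day and the money – and the energy behind it – adds up fast.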
Deep thinking isn’t just a drain on resources, however – the problem is that it’s also often being done unnecessarily.
For instance, Sam Altman of OpenAI recently made headlines by explaining that the simple act of saying “please” and “thank you” to ChatGPT (or any other AI model, for that matter) costs the company tens of millions of dollars and consumes a shocking amount of energy.
The Solution, According To Industry Leaders
Much remains to be seen, but Google DeepMind has introduced its own solution to this new “problem” – a dial that allows developers to change how much the model reasons. The intention isn’t to stunt Gemini’s capabilities or stop it from producing high-quality responses – it’s to stop the model from thinking more than it needs to in specific contexts.
For instance, an issue that most experts haven’t yet been able to solve is the fact that AI models tend to think more deeply than necessary about simple queries, costing far more money and using far more power than they need to.
In fact, in conversation with MIT Technology Review, Nathan Habib, an engineer at Hugging Face, asserted that this is not the exception we might assume it to be – rather, overthinking is more like the rule.
Thus, this new reasoning “dial” has been introduced to allow developers (not yet end users) to decide how much they’re willing to spend on reasoning, which then dictates how much reasoning can take place. The result is that models can’t just “think” and reason endlessly over basic prompts – they’re limited.
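In practice, the dial is a token budget on the model’s hidden “thinking”. Here’s a minimal sketch of what using it looks like, based on Google’s google-genai Python SDK and its ThinkingConfig option – the model name, budget values, and prompts are assumptions for illustration, not a definitive recipe:

```python
# Minimal sketch: capping Gemini's reasoning with a "thinking budget".
# Assumes Google's google-genai SDK (pip install google-genai); the model
# name and budget values below are illustrative.
from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

def ask(prompt: str, thinking_budget: int) -> str:
    """Ask Gemini, capping its hidden reasoning at `thinking_budget` tokens."""
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=prompt,
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(thinking_budget=thinking_budget)
        ),
    )
    return response.text

# A trivial question gets no reasoning budget at all...
print(ask("What is the capital of France?", thinking_budget=0))

# ...while a genuinely hard one is allowed to think at length.
print(ask("Prove that the square root of 2 is irrational.", thinking_budget=4096))
```

The design choice is telling: the budget caps spend per request, rather than trusting the model to judge for itself how hard a question really is.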
The problem that still exists, however, is that it’s not clear just how much reasoning is the right amount for a given task. Perhaps in the future, experts will be able to set out specific parameters dictating how much reasoning different kinds of prompts deserve, but for now, this is a fairly new issue for AI companies, and one with potentially serious effects on the environment.
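One can imagine what such parameters might look like. The sketch below is purely hypothetical – a crude router that assigns a thinking budget based on surface features of the prompt – meant only to illustrate the idea, not to represent any company’s actual approach:

```python
# Hypothetical sketch: routing prompts to reasoning budgets.
# The keyword list and thresholds are invented purely for illustration.

HARD_KEYWORDS = ("prove", "derive", "debug", "optimise", "step by step")

def pick_thinking_budget(prompt: str) -> int:
    """Crudely guess how many reasoning tokens a prompt deserves."""
    text = prompt.lower()
    if any(keyword in text for keyword in HARD_KEYWORDS):
        return 4096  # likely multi-step work: allow deep reasoning
    if len(text.split()) > 50:
        return 1024  # long, involved prompt: allow a moderate amount
    return 0         # short, simple query: no hidden "thinking" at all

print(pick_thinking_budget("What is the capital of France?"))               # 0
print(pick_thinking_budget("Prove that there are infinitely many primes.")) # 4096
```

Until something like this is worked out properly, though, the dial remains a blunt instrument in the hands of developers.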