Sam Altman Claims We’ve Passed the AI Event Horizon: What Does That Mean?

In a recent and rather provocative statement that’s got the AI community fired up, Sam Altman, CEO of OpenAI, made what might be his boldest claim to date: when it comes to artificial intelligence (AI), we’ve “already gone past the event horizon”.

Whether intentional or not (I’ll hazard a guess and assume the former), this is the kind of phrase that’s bound to attract attention. It sounds simultaneously exciting and ominous, but what does it actually mean?

Well, if you’re not familiar with the term “event horizon”, don’t worry, you’re definitely not alone. It’s actually a phrase borrowed from astrophysics – an event horizon marks the point around a black hole where nothing, not even light, can escape. In other words, it’s a tipping point, the point of no return, so to speak.

Now, the idea of a “tipping point” or “point of no return” is a little alarming no matter the context, and there’s no denying that hearing the head of one of the world’s most influential AI companies suggest that we’ve not only reached but actually passed this point with technology is pretty darn scary.

He didn’t say it outright, but Altman’s comment suggests that artificial intelligence has reached a stage of development so significant that its trajectory is no longer fully within our control. He doesn’t pose this as a dire warning for humanity, striking fear into the hearts of people around the world (who may or may not have watched “I, Robot” one too many times). Rather, he asserts that he personally expects the outcome to be a “gentle singularity”. But still, it’s kind of concerning and definitely something to ponder.

So, given Sam Altman’s expertise, position and experience, what are we supposed to do with this information? Is Altman correct in these assertions? Or is the OpenAI CEO making grandiose claims to add a little fuel to the so-called “AI fire”?

But, first things first, let’s demystify some of this verbose tech talk and explain a few concepts, so that we can all be equally terrified or calm, depending on our conclusions.

What Are AGI and ASI?

Alright, so ordinary artificial intelligence as we know it – language models, image generators, recommendation algorithms and so on – is very different from what we’re referring to when the future and potential of AI become concerning. In those contexts, what people are actually talking about is AGI and, in extreme cases, ASI.

AGI is Artificial General Intelligence and ASI is Artificial Superintelligence. The former goes beyond narrow AI systems: it would be capable of learning, understanding and even reasoning across a broad range of tasks at a level equal to that of humans in the same contexts. It wouldn’t just do things like write code or play chess – it could learn a new language, invent a whole new game, debate philosophical concepts or even manage a business, all without needing to be retrained or fed task-specific data.

Now, this is an incredible idea in itself, something we haven’t yet achieved – unless you’re Sam Altman, in which case you may think differently.

ASI, on the other hand, goes even further than AGI. At this point, the AI doesn’t just become as intelligent as humans, it actually becomes way smarter than humans in every way imaginable (as well as in ways we can’t even imagine). ASI would have the ability to solve problems we haven’t even thought of yet, optimise entire economies and innovate at a pace far beyond human capacity.

So, when we think of sci-fi movies in which robots take over the world, these are the two types of AI actually being referenced – most commonly ASI. Unsurprisingly, both concepts are intrinsically terrifying.

Why Altman’s Making Headlines Once Again

That’s why Altman’s assertion that we’ve already crossed the threshold – already moved into a realm in which achieving AGI and even ASI is possible – has caused a fair bit of commotion. He’s not saying we’ve reached AGI just yet, but he believes we’re well on our way.

According to Altman, developments in machine learning and neural networks are happening faster than anyone predicted a decade ago. Tools like ChatGPT, voice synthesis, autonomous agents and multimodal systems have rapidly advanced our expectations of what AI can do, and with every breakthrough we’re inching closer to systems that resemble AGI in both behaviour and potential. While we’re not there yet, Altman’s comment suggests we may have entered a momentum phase, where progress builds on itself while humanity’s ability to halt or fully steer it quickly diminishes.

A pretty scary prospect, if you ask me.

So, why is Altman making these claims? Are they worth worrying about or is he simply making a strategic PR move to spark conversation? He is the CEO of one of the world’s most successful AI companies, after all – it’s his job to keep OpenAI relevant, and I’d argue that this is one hell of a way to do that. All publicity is good publicity! And all those other age-old mantras we hear so often in the world of marketing.

There’s no denying that, as the leader of OpenAI, someone like Altman has serious insight into AI tech, far beyond that of the ordinary person – his professional thoughts and opinions are by no means irrelevant. Having said that, many think this particular statement is mostly about stirring the pot, for lack of a better term.

If you consider the thoughts of other highly experienced and knowledgeable experts in the field, opinions vary: some believe we’re still many decades away from AGI, others are adamant it’ll be upon us in the early 2030s, and a whole other camp is steadfast in the view that AGI, never mind ASI, isn’t possible at all. So, if nobody else can agree on this, how is Altman so confident in his opinion? More importantly, what actually is his opinion?

Well, the answer, some believe, is that he’s not really saying as much as we may think he is. Altman’s statement about us having “already passed the event horizon” was intentionally as vague as it was ominous. It’s an assertion bold enough to make both experts and laypeople stop and think, yet blurry and imprecise enough to slip under the radar and spare him the label of “unrealistic” or “fear-mongering”. Is he really saying anything particularly novel?

Because, while his statement seems pretty terrifying and sci-fi-esque at first glance, what it really means, if we take it to its most extreme extent (for the sake of argument), is that we’re kind of, probably, heading down what might, potentially, be the path that could, one day in the future, maybe lead to AGI, or something vaguely similar to it. But isn’t this something we’re already somewhat aware of?

As Dr. Lance Eliot, a world-renowned AI scientist, puts it in a recent Forbes article, Altman has consistently shifted his definitions of and timelines for AGI and ASI, making his predictions for the future difficult to follow and his view of what these prospects mean for humanity – on both the positive and negative sides of the coin – somewhat tough to interpret.

And, no matter your title, experience or expertise, who’s really to say what the future holds for humanity with regard to the development of AI, or anything else for that matter? Dr. Eliot references a fitting quote from Franklin D. Roosevelt, who said, “There are as many opinions as there are experts,” and in this case, it’s probably wise to bear this sentiment in mind.

Ultimately, while the notion of humanity passing the “event horizon” may have seemed like a big, bold statement at first, many are leaning towards the view that, in reality, Sam Altman has used a grandiose, poignant phrase to say a whole lot of nothing we didn’t already know. Why? To get us talking.

And hey, hats off to Altman. After all, we’re talking, aren’t we?