Experts Share: What Risks Do Investors Face If Superintelligence Never Arrives?

Meta has set aside nearly $15 billion for a 49% stake in data specialist Scale AI, The Information said on Tuesday. The transaction also brings Scale AI’s founder Alexandr Wang to Menlo Park, where he will run a new research unit.

Wang built Scale into a platform that feeds data to OpenAI and the US military.

Meta chief Mark Zuckerberg plans to seat the lab beside the company’s VR and social media teams, and people close to the deal say he views superintelligence as the next frontier.
 

How Might Scale AI Change Meta’s Data Power?

 
Scale AI built the Scale Data Engine, a tool that gathers and labels information for machine learning teams at OpenAI, Nuro and Harvard University.

Its engine can customise and test specialised agents for defense clients such as the US Army. The same tooling also labels edge-case images such as nighttime traffic scenes, giving Meta richer material for video models.

Meta already trains large language models at record speed, and access to Scale AI’s annotated streams could speed that work even further.

Bringing Wang’s platform in-house also spares Meta from relying on outside vendors during the current rush for data pipelines.
 

What Is “Superintelligence”?

 
IBM describes superintelligence as software whose intellectual abilities surpass those of any human.

IBM’s guide on superintelligence adds that such a system would carry “cutting-edge cognitive functions” past every human limit.

Today’s relatively narrow AI needs human input for each new skill, while a superintelligent system would teach itself across fields. IBM lists language models, multisensory input and neuromorphic chips as milestones along that path, and the Scale AI deal brings Meta closer to those parts.

Some researchers doubt that it is possible at all. Meta says rising compute and data make the attempt worthwhile. The lab will start with language work, then add vision and audio.
 

What If Superintelligence Never Happens? What Are The Risks For Investors?

 
I asked a few experts what risks they see in investing in superintelligence. Here’s what they said…
 

Toju Duke, Founder and CEO, Bedrock AI

“Artificial superintelligence (ASI) as a technology is still highly speculative, and is yet to be proven. It’s predicted to arrive after Artificial General Intelligence (AGI), which hasn’t arrived yet. Predictions for AGI’s arrival have ranged from a couple of years to decades.

“While the focus of superintelligence is the ability for an AI to exceed human cognitive abilities across all domains, it fails to address human emotional abilities such as emotional intelligence or self-awareness. As it stands, there’s still no universally agreed-upon definition of superintelligence, and despite the impressive capabilities of LLMs, they’re still far from achieving the capabilities of AGI and still struggle with reasoning, planning and true abstraction (the ability to manage complexity by reducing a problem to simpler, manageable parts).

“There are also several critical issues to consider when thinking of superintelligent systems. The emergent risks and unpredictability of current AI will prove much more difficult to address, including the risk of the systems overriding human controls, or falling into the hands of bad actors, posing a real threat to human existence and national security.

“The alignment problem, ensuring that ASI’s goals and priorities match human values, remains unresolved, as does the debate over whether such systems will achieve a real form of superintelligence or merely higher levels of automation. There are also concerns about the computational requirements of ASI, where the true processing power needed might exceed current capabilities, even when combined with advanced emerging technologies such as quantum computing.

“While there’s ongoing research and safety efforts on ASI such as value alignment, reward engineering, and continuous monitoring of these systems, mitigations are yet to be proven. Heavy investments in a technology that’s still highly speculative, unpredictable, and probably unachievable are not advised.”

Cahyo Subroto, Founder, MrScraper

“If superintelligence doesn’t arrive soon, or doesn’t arrive at all, I think the risk to investors isn’t just the loss of a moonshot, but also the cascading effect on the entire capital stack built around that promise.

“Let me explain what I mean.

“Many startups today aren’t just betting on AI progress, they’re pricing in future breakthroughs as if they’re guaranteed. If superintelligence stalls, valuations tied to that horizon will be the first to fall. But it won’t stop there. Those teams built to chase that vision may become over-resourced and under-leveraged.

“The product roadmaps may be misaligned with what’s actually feasible. And the timelines for monetisation may be pushed back so far that early investors are forced to exit at a loss or face a liquidity drought.

“This kind of risk compounds quietly, because it’s not about a single failed product but portfolios shaped by an assumption that the technology curve will bend fast enough to justify the burn. If that curve flattens, a lot of high-conviction bets will turn into long hauls with no clear exit. And that’s where investors get stuck: not because they were wrong about the potential, but because they misjudged the pace.”

 

David Nicholson, Chief Technology Advisor, The Futurum Group

How reliable is investing in this tech?
“Picking a winner is a gamble. Creating a basket of companies in AI as an investment is a safer bet. Artificial General Intelligence or Super Intelligence seems to be the answer to “How can I make money or save money TODAY with all of these magical things that have come out this year?” That answer? “Just wait for Super Intelligence!”

What risks might investors face should it not arrive soon enough?
“This is a question of timing. The narrative that is being spun is that Meta and Zuck are hunkering down in war rooms with the best human minds that unlimited money can buy. The details are exciting. Rumors of 9-figure pay packages for top engineers. People living in Zuckerberg’s homes. It is reminiscent of Elon Musk famously sleeping at the Tesla factory.

“The question is how long an investor will continue to buy into “the dream”. Tesla shareholders seem to believe in Tesla’s robotics and AI vision. They believe. Will Zuckerberg be able to sustain irrational beliefs without breakthroughs in line with the massive spending being reported? We will know within a year. I would expect a roller coaster ride.”