Will Humanity Struggle to Keep Pace with Generative AI Models?
Since the launch of ChatGPT two years ago, significant advancements in technology have sparked hopes of near-human intelligence in machines…
But doubts about generative AI models are mounting.
Leading companies in the sector promise significant and rapid performance gains, with “general AI” expected to emerge soon, as OpenAI’s Sam Altman has put it.
These companies believe in scaling principles: feeding generative AI models ever more data and computational power, they think, will make the models more capable.
This strategy has worked well so far, but many experts worry that the technology may advance too quickly, leaving humanity struggling to keep pace.
Technology that Captures the Attention of Millions
Microsoft (the primary investor in OpenAI), Google, Amazon, Meta, and other companies have spent billions of dollars.
They have launched tools that easily produce high-quality text, images, and video, capturing the attention of millions.
Elon Musk’s AI company, xAI, is reportedly raising $6 billion, according to CNBC.
The amount is earmarked for buying 100,000 Nvidia chips, the advanced electronic components used to run large models.
OpenAI raised $6.6 billion in early October, valuing the company at $157 billion.
Industry expert Gary Marcus said:
“Sky-high valuations of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence.”
He added: “As I have always warned, that’s just a fantasy.”
The Limits of Generative AI Models
Recent reports in the American press suggest that the generative AI models under development seem to be reaching their limits, particularly at Google, Anthropic (Claude), and OpenAI.
Ben Horowitz, co-founder of a16z, a venture capital firm involved with OpenAI and competing companies like Mistral, said:
“We’re increasing (computational power) at the same rate, but we’re not getting smarter improvements from it.”
OpenAI’s latest model, Orion, which has not yet been announced, outperforms its predecessors.
But “the quality improvement was much less compared to the leap from GPT-3 to GPT-4,” the company’s last two major models, according to sources cited by The Information.
The Issue is Not Just Knowledge
Many experts believe that scaling laws have reached their limits.
Scott Stephenson, head of Spellbook, a company specializing in generative legal AI, says:
“Some of the labs out there were way too focused on just feeding in more language, thinking it’s just going to keep getting smarter.”
By training on vast amounts of data collected from the internet, models can predict, very convincingly, sequences of words or arrangements of pixels.
However, companies are starting to run out of new material to train on.
The issue is not just knowledge: for progress to occur, machines will need to somehow understand the meaning of their sentences or images.
Industry leaders deny any slowdown in AI progress.
Dario Amodei, head of Anthropic, said on Lex Fridman’s computer science podcast:
“If you just eyeball the rate at which these capabilities are increasing, it does make you think that we’ll get there by 2026 or 2027.”
Sam Altman wrote on X on Thursday: “There’s no dead end.”
However, OpenAI has delayed the release of the system that will follow GPT-4.
In September, the Silicon Valley startup changed its strategy by presenting o1, a model designed to answer more complex questions, especially in math, thanks to training that relies less on accumulating data and more on enhancing reasoning ability.
According to Scott Stephenson, having o1 “spend more time thinking rather than responding” has led to “radical improvements.”
Stephenson compares technology development to discovering fire:
“Instead of adding fuel in the form of data and computing power, it’s time to develop the equivalent of a lantern or a steam engine.
Humans will be able to delegate tasks online to these AI tools.”