Google CEO Says AI’s ‘Low-Hanging Fruit’ Era Is Over

Google’s CEO warned Wednesday that scaling AI models has become harder and that the era of rapid, easy gains is coming to an end.

Google CEO Sundar Pichai warned at the New York Times’ annual Dealbook Summit on Wednesday that AI developers may no longer be able to rely on ever-larger troves of internet data and that the field’s “low-hanging fruit” may be gone.

“In the current generation of LLM models, a few businesses have converged at the top, but I believe we’re all working on our future iterations,” Pichai remarked. “I believe advancement will grow harder.”

Pichai’s remarks coincide with reports suggesting that improvements in AI model performance have slowed compared with the rapid gains seen in the two years since ChatGPT’s public release.

Ethereum co-founder Vitalik Buterin, a16z’s Marc Andreessen and Ben Horowitz, and OpenAI co-founder and former Chief Scientist Ilya Sutskever have suggested that the gains from scaling AI models trained on massive amounts of unlabeled data are plateauing.

“When I look into 2025, the low-hanging fruit is gone; the curve, the hill, is steeper,” Pichai said. His comments reflect a growing view in the AI industry that building larger models and feeding them more data may no longer be enough to improve results.

The prospect of an “AI ouroboros” effect, in which models train on data generated by other AIs rather than on human-created content, adds to these worries.

The ouroboros effect in AI occurs when one AI trains on data produced by another AI, creating a feedback loop. As the system comes to rely on AI-generated rather than human-generated data, quality hits a wall and the output grows repetitive or distorted.
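As a rough intuition for why such a loop degrades output, consider the toy simulation below. It is not from the article and is far simpler than a real LLM pipeline: a “model” that only learns word frequencies is repeatedly retrained on text sampled from its own previous version. Any word that happens not to be sampled in a generation disappears for good, so the vocabulary steadily shrinks.

```python
import random
from collections import Counter

# Toy illustration (hypothetical, not from the article): a tiny "language model"
# that just counts word frequencies, retrained each generation on text sampled
# from its own previous version instead of the original human-written corpus.

random.seed(0)

# 200 distinct words, each appearing twice in the original "human" corpus.
HUMAN_CORPUS = [f"word{i}" for i in range(200)] * 2

def train(corpus):
    """'Train' the model by counting word frequencies."""
    return Counter(corpus)

def generate(model, n_words):
    """Sample n_words from the model's learned frequency distribution."""
    words = list(model.keys())
    weights = list(model.values())
    return random.choices(words, weights=weights, k=n_words)

# Generation 0 trains on human data; later generations train only on AI output.
model = train(HUMAN_CORPUS)
for gen in range(1, 11):
    synthetic_corpus = generate(model, len(HUMAN_CORPUS))
    model = train(synthetic_corpus)
    print(f"generation {gen:2d}: vocabulary size = {len(model)}")
# Words that are never sampled in a generation can never reappear, so the
# vocabulary only shrinks: a simplified picture of the feedback loop.
```

Real models are far more complex, but research on “model collapse” describes a similar loss of diversity when training data becomes dominated by AI-generated text.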

Some developers worry about AI models stagnating, but Pichai still expects substantial progress in the coming year. “I foresee a lot of advancement in 2025; therefore I don’t totally agree with the wall notion,” Pichai stated.
