The AI Revolution: A Human Centipede Effect?
As the world of artificial intelligence continues to evolve, we’re seeing more and more instances of large language models (LLMs) being trained on one another’s output. But what does this mean for the future of AI, and is it a good thing?
Aidan Gomez, CEO of Cohere, likens this phenomenon to a ‘human centipede effect’
Recently, Elon Musk weighed in on the topic, agreeing with Gomez that models training on each other’s data is like a “human centipede effect.” This raises important questions about the role of human oversight in AI development.
“Sadly quite true. It takes a lot of work to purge LLMs from the internet training data. Grok 2, which comes out in August, will be a giant improvement in this regard” - Elon Musk
The Rise of Grok
Musk’s xAI startup is betting big on its Grok family of LLMs. The next major iteration, Grok 3, will reportedly be trained on 100,000 NVIDIA H100 AI GPUs. This is a significant investment, and one that could have major implications for the future of AI.
NVIDIA H100 AI GPUs will power the next generation of Grok LLM models
The AI Arms Race
As demand for AI GPUs continues to surge, companies like NVIDIA and AMD are reaping the benefits. Tesla, in particular, is investing heavily in AI silicon, with plans to spend billions of dollars on NVIDIA AI GPUs this year alone.
NVIDIA commands an estimated 90% of the AI GPU market, with competitors years from catching up
The Future of AI
As we move forward in this AI-driven world, it’s essential that we consider the implications of LLMs training each other. Will we see a future where AI models are entirely autonomous, or will human oversight remain a crucial component of AI development?
The future of AI is uncertain, but one thing is clear: it’s an exciting time to be alive