The Managing Director of NoobLabs AI at the NextTech Summit
August 30, 2023
In 1993, science fiction writer and computer scientist Vernor Vinge made a bold prediction: within 30 years, advances in technology would make it possible to create artificial intelligence that surpasses human intelligence, bringing about the “end of the human era.”
Vinge theorized that once AI becomes capable of self-improvement, it would trigger a feedback loop of rapid, exponential improvement in AI systems. The hypothetical point at which AI surpasses human intelligence has become known as the “Singularity.”
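To make the feedback-loop intuition concrete, here is a minimal mathematical sketch; this is my own illustration, not a model Vinge proposed. Suppose, purely for argument’s sake, that an AI’s capability x(t) improves at a rate proportional to the square of its current level, on the assumption that a more capable system is also a better improver of itself:

\[
\frac{dx}{dt} = k x^{2} \quad\Longrightarrow\quad x(t) = \frac{x_{0}}{1 - k x_{0} t}
\]

This toy solution diverges at the finite time t* = 1/(k·x₀), a literal mathematical singularity. Ordinary exponential growth (dx/dt = kx) climbs quickly but never reaches such a point; it is the self-referential assumption that produces one. The sketch proves nothing about real AI systems, but it shows why “singularity” became the metaphor of choice.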
While predictions of superhuman AI might have sounded far-fetched in 1993, today they are taken seriously by many AI experts and tech investors aiming to develop “artificial general intelligence,” or AGI: AI capable of matching human performance on any meaningful task.
AI safety researcher Roman Yampolskiy explains: “The whole point is that once machines take over the process of conducting science and engineering, progress is so fast that you can’t keep up with it.”
Yampolskiy sees a glimpse of this in his own field of AI research, where new results are published faster than experts can absorb them. He and others believe that AGI could trigger a recursive cycle of improvement, enabling machines to accelerate scientific understanding and technological innovation beyond human comprehension.
Once developed, an AGI system could take on the design of even more capable AI systems, advancing at a rate no human researcher could compete with. This scenario worries researchers like Yampolskiy, who argues that because humans cannot reliably predict or understand the capabilities of AGI systems, we will not be able to control or contain them. For Yampolskiy, the only way to avoid catastrophic consequences is to prevent the development of AGI from the start.
However, expert opinion remains divided on the feasibility and risks of AGI. In a 2022 survey of over 700 AI researchers by the think tank AI Impacts, only 33% considered an uncontrolled AGI scenario “likely” or “very likely,” while 47% deemed it “unlikely” or “very unlikely.”
Critics like Sameer Singh, an AI researcher at the University of California, Irvine, argue that speculation about AGI and the Singularity distracts from the urgent real-world problems raised by current AI systems, including bias, job loss, and legal concerns around content creation. Singh believes that an excessive focus on speculative futures diverts attention from pressing issues that need to be addressed now. (Personally, I strongly agree with this viewpoint.)
“It’s much more exciting to talk about achieving this science fiction goal than the actual realities of things,” he says. Singh supports calls for a pause on developing AI more powerful than models like GPT-4 in order to give researchers time to study the risks and ethics involved.
The debate over AGI highlights a growing rift in the AI community. Pioneers like Geoffrey Hinton and Yoshua Bengio have expressed doubts about the field’s direction, urging caution in developing increasingly capable AI systems.
Yampolskiy advocates for a halt, arguing that “the only way to win is not to play.” However, many leading AI labs are investing heavily in the race to build ever more powerful models, trusting that society will benefit from continued progress. With billions of dollars in funding available, the pressure to push AI capabilities forward remains intense.
Starkly different perspectives have emerged: some fear that AI may end the human era within our lifetimes, while others see these concerns as exaggerated distractions from practical issues. Nevertheless, both sides agree that as AI systems advance, researchers have a profound responsibility to promote progress safely and ethically.
How can we guide a field accelerating rapidly toward an uncertain future? For now, the tension between dramatic speculation and the practical challenges of oversight remains unresolved. What is clear, however, is that the choices researchers make today could set the course for generations to come.
Copyright © NoobLabs AI. All Rights Reserved.