Elon Musk, the CEO of Twitter, and Steve Wozniak, co-founder of Apple, are among the thousands who have signed an open letter urging a six-month pause in the training of artificial intelligence systems more powerful than GPT-4.
Interestingly, even researchers at companies building their own AI systems, such as DeepMind, a UK-based lab owned by Alphabet, have signed the petition. Musk, who co-founded OpenAI and backed its launch in 2015, has recently criticized the company for prioritizing new systems and profit over ethical considerations.
What the petition seeks to achieve
The petition highlights the risks posed by AI with “human-competitive intelligence,” including the spread of disinformation, the automation of jobs, and other catastrophic outcomes. It also cautions against an “out-of-control race” among AI labs to build ever more powerful systems that no one can understand or reliably control. The signatories urge AI labs to pause the training of such systems for at least six months; if labs will not act, they argue, governments should step in and impose a moratorium. Some countries are already drafting AI regulations to rein in high-risk tools.
Who created the petition?
The nonprofit Future of Life Institute initiated the petition, whose notable signatories include Yoshua Bengio, a Turing Award-winning AI pioneer, and prominent AI researchers such as Stuart Russell and Gary Marcus. Other signatories include Steve Wozniak, former U.S. presidential candidate Andrew Yang, and Rachel Bronson, president of the Bulletin of the Atomic Scientists, a science-oriented advocacy group long known for its warnings about the existential threat of nuclear war.
Has OpenAI agreed to this petition?
OpenAI, Microsoft, and Google did not respond to requests for comment on the matter. The letter has already drawn criticism from some who argue that it is vague and fails to engage with the actual regulatory issues at hand.
One expert, James Grimmelmann, a professor of digital and information law at Cornell University, criticized Elon Musk for signing the letter, calling it “deeply hypocritical” given Tesla’s past fights against accountability for the defective AI in its self-driving cars.
Should we be afraid of artificial intelligence?
Many people continue to feel anxious or fearful about the rapid development of AI, and with good reason. The behavior of AI systems is determined not by any innate intellect but by their creators and the data they are trained on, which leaves us at the mercy of large corporations and their profit-driven motives.
These companies have the power to shape the direction of AI development and its impact on society, and there is always a risk that their decisions may prioritize financial gain over the well-being of people and the planet. Therefore, it is important to remain vigilant and critical of the actions of these corporations, and to demand transparency and accountability in their AI development processes.
The potential risks of AI systems with human-competitive intelligence, as highlighted by the recent petition calling for a six-month pause on their development, are significant and cannot be ignored. We should push for responsible and ethical development practices that put the common good ahead of corporate profit.