News

800+ Public Figures Demand Ban on Superintelligent AI: What's Next?

Article Highlights:
  • 800+ global leaders, including Geoffrey Hinton and Steve Wozniak, demand a ban on superintelligent AI development
  • 73% of Americans want robust advanced AI regulation; 60% demand safety proof before development proceeds
  • Sam Altman predicts superintelligence by 2030, but only 5% of the public supports the "move fast and break things" approach
  • Perceived risks include human economic obsolescence, loss of civil liberties, and potential human extinction
  • A similar 2023 letter signed by Elon Musk had little effect; the current petition's success remains uncertain

Introduction

Over 800 global figures, including two of the "Godfathers of AI", Apple co-founder Steve Wozniak, and Prince Harry, have signed an open letter calling for a ban on superintelligent AI development until specific safety conditions are met. This movement represents one of the most significant collective statements against the breakneck pace of technological advancement characterizing the AI industry.

Who Signed the Letter Calling for a Ban on Superintelligent AI?

Superintelligent AI is an artificial intelligence system that significantly surpasses human capabilities in essentially all cognitive tasks. The petition has garnered signatures from a broad coalition: two of the three "Godfathers of AI", Geoffrey Hinton and Yoshua Bengio; tech figures such as Apple co-founder Steve Wozniak and Virgin Group founder Richard Branson; leading political figures; academics; religious leaders; and even members of the British royal family.

Other notable signatories include public figures such as Glenn Beck, former Trump strategist Steve Bannon, former Joint Chiefs of Staff Chairman Mike Mullen, actor Joseph Gordon-Levitt, musicians will.i.am and Grimes, and the Duke and Duchess of Sussex, Prince Harry and Meghan.

What Does the Open Letter Request?

The letter, organized by the AI safety group Future of Life Institute (FLI), calls for a moratorium on superintelligent AI development until:

  • Broad scientific consensus exists that such technology can be developed safely and controllably
  • Strong public support materializes for proceeding with superintelligent AI development

The petition acknowledges potential AI benefits, such as unprecedented improvements in human health and prosperity, but emphasizes that major tech companies' stated goal of achieving superintelligence within the coming decade has raised significant concerns about safety and control.

Perceived Risks of Superintelligent AI

Letter signatories cite multiple risks associated with superintelligence:

  • Human economic obsolescence: massive job loss and shift of economic control to machines
  • Loss of freedom and dignity: risks of disempowerment, civil liberties violations, and loss of human autonomy
  • National security threats: potential geopolitical conflicts arising from technological disparities
  • Extreme scenario: the potential for total human extinction, though considered less likely, is explicitly cited

What Does the Public Think About the Race Toward Superintelligent AI?

A survey of 2,000 American adults offers a clear picture of public sentiment. Only 5% of Americans support the "move fast and break things" mantra that characterizes superintelligent AI development at major tech companies.

More striking still: 73% of Americans demand robust regulation of advanced AI, and 60% agree that AI should not be developed until it is proven safe and controllable. This stark misalignment between the private sector's research pace and public expectations fuels the movement for a ban on superintelligent AI.

Industry Leaders' Timeline for Superintelligence

Despite regulatory pressure, AI company executives continue to predict that superintelligence will arrive soon. Sam Altman, CEO of OpenAI, recently stated that superintelligence will arrive by 2030 and that up to 40% of today's economic tasks will be automated by AI in the near future.

Mark Zuckerberg, CEO of Meta, has argued that superintelligence is close and will "empower" individuals. However, Meta's recent split of Meta Superintelligence Labs into four smaller groups suggests the target may be farther off than initially predicted.

The 2023 Precedent and Effectiveness of Mobilization

In 2023, a similar letter, signed by Elon Musk among others, called for a six-month pause on training AI systems more powerful than GPT-4. That petition had little to no tangible effect on commercial research and development. Experts believe the current letter, while involving more signatories and covering a broader range of sectors, may face similar resistance from the tech industry.

Future Prospects and Regulation

The call for a ban on superintelligent AI represents a critical moment in the AI governance debate. While the scientific community and the general public express concrete concerns, tech companies will likely continue their race toward superintelligence. The real challenge will be balancing technological innovation with adequate safety measures.

The question remains open: will the collective pressure from over 800 global figures succeed where the 2023 attempt failed, or will the AI market continue unimpeded toward superintelligent scenarios?

Frequently Asked Questions About Superintelligent AI and the Ban

Here are answers to the most common questions regarding the movement to ban superintelligent AI and related safety concerns.

What Exactly Does "Superintelligent AI" Mean?

Superintelligent AI is a system that significantly surpasses human capabilities in almost all intellectual tasks, from science to art to engineering. Unlike narrow AI, which is specialized in a single domain, superintelligence would be a general artificial intelligence operating far beyond human limits across virtually every field.

Why Are 800+ Figures Calling for a Ban on Superintelligent AI?

Signatories, including AI "godfathers" Geoffrey Hinton and Yoshua Bengio, fear concrete risks: massive job loss, loss of human control, civil rights violations, and, in worst-case scenarios, human extinction. They believe safety should precede development speed.

What Percentage of Americans Support Banning Superintelligent AI?

According to a survey of 2,000 American adults, 73% demand robust advanced AI regulation, and 60% agree superintelligent AI should not be developed until proven safe and controllable. Only 5% support the "move fast and break things" approach.

When Will Superintelligence Arrive According to Experts?

Sam Altman of OpenAI predicts superintelligence by 2030, while Mark Zuckerberg of Meta claims it is imminent. However, exact timelines remain highly uncertain and depend on technical breakthroughs that are by no means guaranteed.

Who Signed the Open Letter to Ban Superintelligent AI?

Among the 800+ signatories are Geoffrey Hinton and Yoshua Bengio (AI pioneers), Steve Wozniak (Apple), Richard Branson (Virgin Group), Prince Harry, Meghan, Glenn Beck, will.i.am, Grimes, and numerous global politicians, academics, and religious leaders.

Did the 2023 Letter Have Any Effect on AI Development?

The 2023 petition, also signed by Elon Musk, had little to no tangible effect on commercial AI research and development. Experts believe the current letter may face similar industry resistance.

What Are the Main Risks of Superintelligent AI According to Signatories?

Cited risks include: human economic obsolescence (massive job loss), loss of freedom and dignity, national security threats, human disempowerment, and in worst-case scenarios, total human extinction.
