Leaders from OpenAI, Google DeepMind, Anthropic, and others warn that advanced A.I. could eventually pose an existential threat to humanity.
More than 350 executives, researchers, and engineers have signed an open letter released by the Center for AI Safety that compares the risk from A.I. to that of pandemics and nuclear war.
Signatories include Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, and Dario Amodei of Anthropic, as well as prominent A.I. researchers.
Concerns include the potential for A.I. to spread misinformation, eliminate jobs, and disrupt society.
Some skeptics argue that A.I. is still too immature to pose an existential threat, while others counter that it is improving quickly enough that it may soon surpass human-level performance.
Recommendations include closer cooperation among leading A.I. makers, the creation of an international A.I. safety organization, and government licensing requirements for large A.I. models.