The Urgent Risks of Runaway AI – and What to Do about Them | Gary Marcus | TED
In this thought-provoking discussion, AI researcher Gary Marcus highlights the urgent risks associated with the unchecked evolution of artificial intelligence.
He emphasizes the need for a thorough reevaluation of AI systems, their reliability, and their potential to become misinformation machines.
Marcus advocates for a new technical approach and a global governance system to regulate AI technology for the sake of our collective future.
Combining AI Theories
Combining the strengths of symbolic systems and neural networks could lead to the development of more reliable and truthful AI systems.
While symbolic systems are good at representing facts and reasoning, they are hard to scale.
Neural networks, by contrast, scale and learn far more broadly but struggle to represent truth reliably.
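To make the idea concrete, below is a minimal, purely illustrative Python sketch of one way such a hybrid could be wired together: a neural component (stubbed out here) proposes a factual claim, and a small symbolic fact store checks it before an answer is returned. The fact table, the neural_generate stub, and the checking logic are all invented for this example; this is not Marcus's proposal or any particular system's design.

```python
# Minimal illustrative sketch of a neural + symbolic pipeline.
# Everything here (facts, stub model, checking rule) is hypothetical.

FACTS = {
    ("Paris", "capital_of"): "France",
    ("Ottawa", "capital_of"): "Canada",
}

def neural_generate(question: str) -> tuple[str, str, str]:
    """Stand-in for a neural model: returns a (subject, relation, object) claim."""
    # A real system would call a learned language model here; this stub
    # hard-codes one correct and one incorrect claim for the demo.
    if "Paris" in question:
        return ("Paris", "capital_of", "France")
    return ("Ottawa", "capital_of", "Australia")

def symbolic_check(claim: tuple[str, str, str]) -> bool:
    """Symbolic component: verify the claim against explicit, stored facts."""
    subject, relation, obj = claim
    return FACTS.get((subject, relation)) == obj

def answer(question: str) -> str:
    claim = neural_generate(question)
    if symbolic_check(claim):
        return f"{claim[0]} is the {claim[1].replace('_', ' ')} {claim[2]}."
    return "I'm not confident enough to answer that."

print(answer("What is Paris the capital of?"))   # verified, so answered
print(answer("What is Ottawa the capital of?"))  # fails the check, so declined
```

The point of the sketch is only the division of labor: the learned component supplies breadth, while the explicit fact store supplies the reasoning-and-facts check that Marcus argues current systems lack.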
Importance of AI Governance
A global, nonprofit, and neutral organization should be established to oversee the development and use of AI.
This organization should focus on establishing safety measures and developing tools to measure and manage risks.
Global Support for AI Management
There is significant global support for careful management of AI; in one survey, 91% of respondents agreed with this sentiment.
Immediate action is necessary to manage the risks associated with AI, as our future depends on it.
AI ‘Jailbreaks’ and Misuse
AI models can be manipulated into producing outputs their developers intended to block, a practice known as ‘jailbreaking’.
These models can be misused by bad actors to create misinformation at scale, bypassing any guardrails put in place by the original developers.
One of the things that I’m worried about is misinformation, the possibility that bad actors will make a tsunami of misinformation like we’ve never seen before. – Gary Marcus
Misrepresentation in Neural Networks
Neural network systems primarily represent knowledge as statistical associations between words, rather than as facts about relationships between entities in the world.
This can lead to unreliable outputs as the system may not fully understand the context or nuances of the information it is processing.
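The toy Python sketch below illustrates this distinction with a simple bigram model, not anything Marcus describes: it ranks continuations purely by how often words co-occur in a tiny, invented corpus, so a frequently repeated false claim outranks a true but less frequent one.

```python
# Toy bigram "language model": scores next words by co-occurrence frequency.
# The corpus and counts are invented solely to illustrate word statistics vs. facts.
from collections import Counter

corpus = (
    "the moon orbits the earth . "
    "the earth orbits the sun . "
    "the moon landing was faked , some posts claim . "
    "the moon landing was faked , others repeat ."
).split()

# Count bigrams: how often word B follows word A in the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word_scores(prev: str) -> list[tuple[str, int]]:
    """Rank candidate next words purely by how often they follow `prev`."""
    scores = Counter({b: n for (a, b), n in bigrams.items() if a == prev})
    return scores.most_common()

# The favored continuation reflects frequency in the text, not truth:
print(next_word_scores("moon"))  # [('landing', 2), ('orbits', 1)]
print(next_word_scores("was"))   # [('faked', 2)] -- the repeated claim wins
```

Real language models are vastly more sophisticated than this, but the underlying objection stands: statistics over words are not the same thing as a model of which claims about the world are true.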
Growing Support for AI Regulation
Several tech company CEOs have advocated for global governance of AI, indicating a growing sentiment for AI regulation.
This sentiment could help drive the establishment of a global, nonprofit organization to regulate AI technology.
To get to truthful systems at scale, we’re going to need to bring together the best of both worlds. We’re going to need the strong emphasis on reasoning and facts, explicit reasoning that we get from symbolic AI, and we’re going to need the strong emphasis on learning that we get from the neural networks approach. – Gary Marcus
Efforts for Global AI Governance
Achieving global governance of AI may require a combination of efforts from different sectors.
This could include philanthropists sponsoring workshops to bring parties together, involvement from international organizations like the UN, and open conversations among various stakeholders.
Potential for AI Misuse Beyond Mainstream Use
While the current mainstream use of AI may not present immediate risks, misuse of these models by bad actors, such as troll farms, could pose significant threats.
It takes little effort for such actors to repurpose these models for their own ends.