1/The call for a 6-month moratorium on making AI progress beyond GPT-4 is a terrible idea.
I'm seeing many new applications in education, healthcare, food, … that'll help many people. Improving GPT-4 will help. Let's balance the huge value AI is creating vs. realistic risks.
2/There is no realistic way to implement a moratorium and stop all teams from scaling up LLMs, unless governments step in. Having governments pause emerging technologies they don’t understand is anti-competitive, sets a terrible precedent, and is awful innovation policy.
3/Responsible AI is important, and AI has risks. The popular press narrative that AI companies are running amok shipping unsafe code is just not true. The vast majority (sadly, not all) of AI teams take responsible AI and safety seriously.
4/A 6-month moratorium is not a practical proposal. To advance AI safety, regulations around transparency and auditing would be more practical and make a bigger difference. Let's also invest more in safety while we advance the technology, rather than stifle progress.
Originally tweeted by Andrew Ng (@AndrewYNg) on March 29, 2023.