- OpenAI launches the OpenAI Red Teaming Network, a group of experts who will assist in assessing and mitigating risks in its AI models.
- Red teaming helps uncover biases and other failure modes in models such as DALL-E 2, ChatGPT, and GPT-4, adding an extra layer of safety evaluation.
- The initiative will draw on experts from a range of domains and aims to deepen OpenAI’s collaboration with scientists, research institutions, and civil society organizations.