technology

To address AI safety, OpenAI is establishing a red teaming network.

To address AI safety, OpenAI is establishing a red teaming network.
53views

To combat AI security, OpenAI is constructing a red teaming network.Members of the OpenAI Red Teaming Network will get compensation for their time, and prior knowledge of language models is not necessary.ety

OpenAI Red Teaming Network

Over 100 million people have downloaded OpenAI’s ChatGPT internationally, demonstrating both the useful use of AI and the need for tighter regulation. In order to create models that are more reliable and secure, OpenAI is currently assembling a team.

The OpenAI Red Teaming Network, which is made up of specialists who can aid the firm with information to improve its risk assessment and mitigation methods to deploy safer models, was officially launched by OpenAI on Tuesday.

This network will transform how OpenAI conducts its risk assessments into a more formal process involving various stages of the model and product development cycle, as opposed to “one-off engagements and selection processes before major model deployments,” according to OpenAI. 

To assemble the team, OpenAI is looking for specialists from a variety of backgrounds, including those with domain knowledge in psychology, economics, law, languages, and politics, to mention just a few.
However, according to OpenAI, prior knowledge of AI systems or linguistic models is not necessary.

The participants will get payment for their time and be governed by NDAs. Being a member of the red team could just need a five-hour time commitment each year because they won’t be engaged with every new model or project. Through the OpenAI website, you may submit an application to join the network.

“This network offers a unique opportunity to shape the development of safer AI technologies and policies, and the impact AI can have on the way we live, work, and interact,” says OpenAI. 

The experts can discuss general “red teaming practices and findings” with one another in addition to OpenAI’s red teaming efforts, according to the blog post.

Red teaming is a crucial procedure for evaluating the efficiency and guaranteeing the security of more modern technologies. Google and Microsoft, two other tech behemoths, have red teams just for their AI models.

Read More Story Like this

Best mobile VPNs keep your digital data secure Amazon AI announcement today Apple Watch Series 9 review : Your hand becomes a button as a result. Apple Surprise with iPhone 15 Pro Max’s Tetraprism Camera