OpenAI outlines AI safety plan, allowing board to reverse decisions

Artificial intelligence company OpenAI laid out a framework to address safety in its most advanced models, including allowing the board to reverse safety decisions, according to a plan published on its website Monday.

Microsoft-backed OpenAI will deploy its latest technology only if it is deemed safe in specific areas such as cybersecurity and nuclear threats. The company is also creating an advisory group to review safety reports and send them to the company's executives and board. While executives will make the decisions, the board can reverse them.
