US announces 'strongest global action yet' on AI safety

On October 30, 2023, the United States government announced a new executive order on AI safety, which the White House described as comprising "the most significant actions ever taken by any government to advance the field of AI safety." The order requires AI developers to share safety test results with the US government and to adopt certain safety practices, such as conducting risk assessments and developing mitigation strategies.

The order also establishes a new National AI Safety Board, which will be responsible for overseeing AI safety research and development and for making recommendations to the government. The board will be composed of experts from a variety of fields, including AI safety, ethics, and law.

The Biden administration said that the order is necessary to address the "serious and growing risks" posed by AI. The administration cited a number of potential risks, including AI systems being used for malicious purposes, becoming uncontrollable, or causing widespread economic and social disruption.

The order has been welcomed by many AI safety experts, who say that it is a significant step forward in addressing the risks of AI. However, some critics have argued that the order is too broad and that it could stifle innovation in the AI field.

It remains to be seen how the order will be implemented and what its long-term impact will be. However, the order is a clear signal that the US government is taking AI safety seriously.

Here are some of the key provisions of the executive order:

  • Requires AI developers to share safety test results with the US government.
  • Establishes a new National AI Safety Board to oversee AI safety research and development.
  • Requires AI developers to adopt certain safety practices, such as conducting risk assessments and developing mitigation strategies.
  • Directs the National Institute of Standards and Technology to develop new AI safety standards.
  • Establishes a new AI safety research program at the National Science Foundation.
  • Creates a new interagency AI safety working group.

The order also includes a number of other provisions related to AI ethics, transparency, and accountability.
