US issues executive order on safe, secure, and trustworthy artificial intelligence

In a landmark move, US President Joe Biden has issued an Executive Order for America to lead the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order aims to establish new standards for AI safety and security while protecting Americans’ privacy. The idea behind the order is to advance equity and civil rights, stand up for consumers and workers, and promote innovation and competition. The Executive Order builds on previous actions President Biden has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.

What does the Executive Order say? 

America acknowledges that as AI’s capabilities grow, so do its implications for Americans’ safety and security. The Executive Order directs developers of the most powerful AI systems to share their safety test results and other critical information with the US government. Under the Defense Production Act, the Order requires that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety notify the federal government when training the model and share the results of all red-team safety tests. These measures are intended to ensure AI systems are safe, secure, and trustworthy before companies make them public.

The US National Institute of Standards and Technology will set rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. The US government claims that these are the most significant actions ever taken by any government to advance the field of AI safety.

There is also a provision to protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI. 

To protect Americans from AI-enabled fraud and deception, the US Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content, establishing standards and best practices for detecting AI-generated content and authenticating official content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and to set an example for the private sector and governments around the world.

In addition to the above, the US will establish an advanced cybersecurity program to develop AI tools that find and fix vulnerabilities in critical software.

The White House believes that without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems. To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids.
