G-7 Plans to Ask AI Companies to Agree to Watermarks, Audits

The Group of Seven nations are preparing to ask tech companies to agree to a set of rules to mitigate the risks of artificial intelligence systems, part of a proposal aimed at uniting the divided approaches of Europe and the US.

The 11 draft guidelines, which will be voluntary, include external testing of AI products before deployment, public reporting on security measures, and controls to protect intellectual property, according to a copy seen by Bloomberg News. The guidelines may be agreed to next week in Japan, though the document is still being discussed and its contents and the timing of an announcement may change.

Still, the countries — Canada, France, Germany, Italy, Japan, the UK and US — are divided over whether companies' progress should be monitored, people familiar with the matter said. While the US is opposed to any oversight, the European Union is pushing for a mechanism that would check compliance and publicly name companies that run afoul of the code, said the people, who asked not to be identified because the negotiations are private.

After OpenAI’s ChatGPT service set off a race among tech companies to develop their own artificial intelligence systems and applications, governments around the world began grappling with how to enforce guardrails on the disruptive technology while still taking advantage of the benefits.

The EU will likely be the first Western government to establish mandatory rules for AI developers. Its proposed AI Act is in final negotiations with the aim of reaching a deal by the end of the year.

The US has been pushing for the other G-7 countries to adopt the voluntary commitments it reached in July with companies including OpenAI, Microsoft Corp. and Alphabet Inc.'s Google. President Joe Biden's administration has also pushed for regulation of AI in the US; however, the government is limited in what it can do without action from Congress.

The proposed guidelines include requirements to:

  • Run internal and external tests before and after deploying products to check for security vulnerabilities, including “red teaming” that emulates attacks
  • Make reports on safety and security evaluations public and share information with organizations including governments and academia
  • Disclose privacy and risk management policies, and implement controls for physical security and cybersecurity
  • Identify AI-generated content with watermarking or other methods
  • Invest in research on AI safety
  • Prioritize developing AI systems that address global challenges including the climate crisis, health and education
  • Adopt international standards for testing and content authentication
  • Control the data going into the systems to protect intellectual property and personal information.
