Kamala Harris Will Lay Out AI Strategy in London Speech

Vice President Kamala Harris, in a speech in London, will lay out the burgeoning risks related to artificial intelligence, calling for international cooperation and stricter standards to protect consumers from the technology.

“As history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the security of our communities, and the stability of our democracies,” Harris will say, according to prepared remarks for the speech she is scheduled to deliver at the US Embassy in the UK capital on Wednesday.

The speech is part of a broad effort by the White House to put restrictions on new artificial intelligence tools, which are rapidly coming to market, often with little to no oversight from regulators. Harris is in London alongside other world leaders to take part in the AI Safety Summit convened by UK Prime Minister Rishi Sunak at Bletchley Park.


Harris will unveil a series of steps the White House is taking to address those risks. Among them is a new US AI Safety Institute inside the Commerce Department, which will create guidelines and tools for mitigating the dangers posed by AI. The Office of Management and Budget is also planning to release draft policy guidance on how AI should be used by the US government.

The vice president will also announce that the US government is working with major foundations, including the David and Lucile Packard Foundation, the Ford Foundation, and the Heising-Simons Foundation, which have committed $200 million to finance AI security efforts. In addition, Harris will point out that the US has joined other countries to help establish norms for the military use of AI.

The speech comes after President Joe Biden on Monday signed an executive order that empowers the federal government to enact security standards and privacy protections on new AI tools. The order will have broad effects on companies including Microsoft Corp., Amazon.com Inc. and Alphabet Inc.’s Google. The companies will have to submit test results on their new models to the government before releasing them to the public. The directive also calls for AI-generated content to be labeled.

The use of AI tools has soared in recent months with the release of platforms, including OpenAI’s ChatGPT app, that are readily accessible to the average consumer. The increased use of the technology has also spurred concerns that the platforms could be used to spread misinformation or that the underlying algorithms are perpetuating bias.

Several governing bodies, including the United Nations and the Group of Seven, are actively seeking to establish rules-of-the-road for artificial intelligence. The European Union is arguably the farthest along, with its AI Act expected to become law by the end of the year.

The Biden administration’s swift move to rein in AI contrasts with how Washington has generally approached emerging technologies. Efforts to oversee social media platforms have languished for years, leaving many disputes to be settled in court, including a landmark federal antitrust case the Justice Department is pursuing against Google.

Still, the White House order relies on federal agencies, most of which have little in-house AI expertise, to take internal steps to bolster oversight. More comprehensive oversight would require Congress to act. Senate Majority Leader Chuck Schumer has begun discussions about AI legislation, but it is unclear whether a bill could pass a bitterly divided Congress.
