5 things about AI you may have missed today: Meta says no downside to sharing AI, artists file amended lawsuit, and more

It’s the first day of the last month of 2023, and we may be in for an exciting period in the artificial intelligence space as companies continue developing the technology while protest groups keep highlighting its dangers. First, Meta executives said they have found no downside to openly sharing the company’s AI technology. The statement comes as the tech company keeps releasing open-source versions of its large language models. In other news, a group of visual artists has filed an amended lawsuit against the companies behind text-to-image AI models after a US district judge dismissed parts of the original suit last month. This and more in today’s AI roundup. Let us take a closer look.

Meta says no downside to sharing AI technology

Meta executives stated at an event that they have not encountered any issues from openly sharing the company’s AI technology, reports Bloomberg. The statement comes at a time when competitors OpenAI, Microsoft, and Google have taken a different approach. In recent months, Meta has released open-source versions of its large language models, which are similar to the technology behind AI chatbots like ChatGPT. The company’s strategy is to make these models freely available and then gain a competitive edge by building products and services on top of them.

“There is really no commercial downside to also making it available to other people,” said Yann LeCun, Meta’s chief AI scientist.

Artists file amended lawsuit against image-generating AI platforms

Visual artists have filed an amended copyright lawsuit against Stability AI, Midjourney, DeviantArt, and Runway AI, alleging unauthorized use of their artwork to train AI systems, as per a report by Reuters. US District Judge William Orrick previously dismissed parts of the lawsuit but permitted the plaintiffs to refile. The updated complaint adds more artists and further details on the alleged infringement, underscoring the ongoing legal fight over intellectual property rights in AI development.

US-sanctioned Chinese AI chipmaker gets massive funding

Shanghai Biren Intelligent Technology Co., a Chinese AI chip firm blacklisted by Washington in October, has reportedly received a 2 billion yuan (about $280 million) funding pledge from Guangzhou government-backed investors. The startup is also in discussions with officials in Hong Kong about additional funding and is considering establishing operations in the region. However, the outcome of the talks with the Hong Kong government remains uncertain, according to Bloomberg, which cited anonymous sources familiar with the matter.

US govt forces Saudi fund to exit Sam Altman-backed AI startup

According to a Reuters report, the Biden administration has forced a venture capital firm backed by Saudi Aramco to offload its shares in Rain Neuromorphics. The Sam Altman-backed Silicon Valley AI chip startup is known for its brain-inspired chip design and raised $25 million in 2022, with Aramco’s Prosperity7 serving as a key investor. The divestment follows a review by the Committee on Foreign Investment in the United States (CFIUS), the regulatory body that oversees deals with potential national security implications, and underscores the heightened scrutiny of foreign technology investments amid evolving geopolitical concerns.

EU’s AI Act at risk as lawmakers disagree over foundation models

EU lawmakers are struggling to agree on how to regulate systems like ChatGPT, posing a threat to the proposed AI Act, reports Reuters. The main hurdle heading into talks scheduled for December 6 is ‘foundation models,’ particularly generative AI. Disagreements, especially from France, Germany, and Italy, which favour self-regulation for makers of generative AI, risk the act being shelved before the upcoming European parliamentary elections. Negotiations had progressed smoothly until disputes arose over the regulation of foundation models, with differing opinions on risk levels and potential tiered approaches.
