Centre sets deepfake crackdown in motion

New Delhi: The Union government will bring in a new regulation to deal with deepfakes and so-called synthetic content online, IT minister Ashwini Vaishnaw said on Thursday after a meeting with social media and technology companies, an industry body, and academics. The form of the regulation is yet to be determined and it could be an act, new rules, or an amendment to existing rules, he added.

Union Minister for Electronics & Information Technology Ashwini Vaishnaw (PTI)

“We will start drafting the regulations today (Thursday) itself. And within a very short timeframe, we will have a new set of regulation on deepfakes,” the minister said.

Responding to a question, Vaishnaw said that there will be “extensive public consultation on the regulation” but did not clarify if the draft regulation would be put in the public domain for discussion.

Vaishnaw said that a nodal officer will be notified to receive feedback on the regulation.


Calling deepfakes a “new threat to democracy”, the minister said: “Deepfakes weaken trust in society and in its institutions. The use of social media is ensuring that deepfakes can spread significantly more rapidly without any checks, and they are getting viral within a few minutes of their uploading.”

The move comes days after Prime Minister Narendra Modi flagged concerns over deepfakes at an interaction with journalists in the Capital.

The proposed regulation will have four pillars, Vaishnaw explained: detection, prevention, reporting, and awareness. Prevention, he said, is both about preventing deepfakes from being posted and preventing them from going viral. He added that reporting mechanisms need to be more proactive and time-sensitive to mitigate damage. It is understood that his meeting with the companies also focused on these four pillars.


Social media companies have been instructed to submit plans about dealing with deepfakes and give suggestions for the regulation, he said. The next meeting will happen in the first week of December.

The meeting was attended by representatives from Meta, Google, YouTube, X (formerly Twitter), Microsoft, Amazon Web Services, Snap, Sharechat, Koo, Telegram, and industry body NASSCOM. It was also attended by IIT Jodhpur computer science professor Mayank Vatsa and IIT Ropar data science professor Abhinav Dhall. MeitY officials, apart from the IT minister and secretary, also attended the meeting.

Multiple people aware of the proceedings said that the discussion was directed more towards assessing the problem and coming up with necessary remedial steps, which could include new regulation or strengthening existing laws. There was no discussion about what the regulation could look like, they added.

The meeting was a brainstorming and collaborative consultation over the issue of deepfakes and not a confrontation between social media companies and the government, they said.

There was a general consensus that the Information Technology Act has enough provisions to deal with the issue of deepfakes, impersonation, and other allied problems, but everybody acknowledged that lapses in the detection of and response to deepfakes were a recurrent problem.


“Free speech and privacy are very important … but they are being undermined by deepfakes. That is why the new regulation will come — so that deepfakes and AI generated synthetic content are not harmful to society and democracy,” Vaishnaw said.

What will the regulation look like?

Vaishnaw specified that if deepfakes were shown in India, the new regulation would apply to them, irrespective of their point of origin.

“When we draft the regulation, we will also look at the penalty, both for the person who has uploaded or created as well as the platform,” he said. He said that in the interim period, while the regulation was being drafted, social media platforms would continue to implement their current policies.

“Labelling and watermarking [deepfakes] were discussed in detail as the bare minimum that will have to be implemented. There was discussion about how people can sidestep these mechanisms too,” Vaishnaw said. It is understood that during the meeting, one of the academics, Dhall, emphasised that labelling synthetic content is important. Other participants agreed with him and said that labels need to be more prominent on social media platforms.

What about good synthetic content?

Vaishnaw acknowledged that synthetic content could also be generated for useful purposes such as enhancing photographs. The problem arises with harmful and abusive content, he clarified. “For example, in the recent elections in Madhya Pradesh, a video surfaced in which the chief minister was kind of saying that you vote for the opposite party. That is deepfake, absolute, deep misinformation,” he said.


During the meeting, a similar concern was raised. Participants pointed out that AI-generated synthetic content could be used to help those with speech impairments. Other use cases cited included commercial applications and political campaigns seeking to reach voters who may not speak the candidate’s language. In the meeting too, Vaishnaw acknowledged that while the technology itself is not the problem, the bad actors who abuse it are.

There was some discussion about how the fact-checking system would also need to evolve to deal with deepfakes if certain kinds of synthetic media were to be classified as harmful or not. Vaishnaw said, “Users have a right to know what is natural and what is synthetic. We will structure the entire regulatory mechanism in a way to prevent harm and to give an option to the user to see whether something [is] synthetic or natural.”

Detection remains an issue

During the meeting, Vatsa said that existing deepfakes can be detected with 98% accuracy using a specific tool he uses. However, he warned that deepfake detection in general poses many challenges: identifying deepfakes, especially manually, is a time-consuming task; accuracy remains a problem with many tools; and the Indian context, in terms of language, faces, and culture, needs to be built into the systems.
