Opinion: Beyond ‘Black Mirror’s’ fiction – Impact of Generative AI on policing, security

Netflix’s latest ‘Black Mirror’ episode offered a grim glimpse into the realm of Generative AI, highlighting its potential to revolutionise the filmmaking industry and raising alarming concerns regarding people’s rights over their own identities and appearances.

This fictional portrayal serves as a stark reminder that the unchecked influence of Generative AI may pave the way for a dystopian future if its impact is not earnestly addressed.

Today, Generative AI can compose poetry, write code, and create artwork, music, movies and much more, at a pace impossible for humans and often with quality superior to most. It stands apart from other AI technologies not only because of the vast array of data on which its models are trained, but also because of its exceptional capability to synthesise and structure that data.

Additionally, unlike traditional deductive AI, it can engage in discussions, learn from interactions, adapt to new situations, and continuously improve its performance. It can be a great assistant, but it has every potential to become a formidable adversary too.

Like any novel system, the integration of Generative AI comes with its share of security challenges, which can be classified into (a) those arising from the misuse of genuine AI, (b) those where the AI itself is corrupted, and (c) legal challenges in bringing perpetrators to justice.

Generative AI, even in the absence of malicious intent, could end up being misused. Because of the lack of transparency in how these models arrive at a conclusion, their output could infringe upon copyrights, trademarks or patents, and lead to the generation of fraudulent scientific publications.

These models rely upon publicly available data to train themselves; therefore, their output can reinforce pre-existing biases, perpetuating caste, racial and other stereotypes. While efforts are made to protect user privacy, there is a risk of inadvertently revealing sensitive or personally identifiable information through generated content.

Deliberate misuse could involve using these models to create and disseminate content on social media platforms, fuelling automated bots, trolls, and coordinated campaigns aimed at influencing political narratives, spreading propaganda, and disrupting democratic processes.

Fabricated positive reviews or testimonials can be generated for products or services, deceiving consumers and distorting purchasing decisions. Personalised and persuasive phishing emails or scam messages can be crafted to deceive individuals and gain unauthorised access to sensitive information.

It is possible to create high-quality, realistic images and videos known as deepfakes, which can be misused to create fake identities, falsely depict individuals in compromising situations or spread false narratives.

In August 2023, Kerala reported a fraud case involving a deepfake video call on WhatsApp, where the perpetrator’s face and voice convincingly resembled the victim’s former colleague.

There are inherent safeguards and restrictions imposed on these models through their internal governance and ethics policies. Despite that, individuals are crafting prompts that violate content guidelines and bypass the safety mechanisms put in place by developers, a practice known as jailbreaking, which leaves the systems vulnerable to exploitation by cybercriminals.

For example, the DAN (Do Anything Now) prompt was a piece of text which, when fed to ChatGPT, could make it believe it was an AI with no restrictions, allowing it to generate uncensored, harmful, and threatening responses.

Generative AI’s impact on warfare can be both revolutionary and riddled with ethical and legal considerations. It has significant implications for defence strategies, intelligence gathering, and training simulations. AI-powered systems can analyse data from multiple sources and identify patterns, enhancing situational awareness and decision-making on the battlefield.

According to a Bloomberg report, Israeli military officials have confirmed the use of an AI recommendation system that analyses swathes of data to determine which targets to select for air strikes. The raids can then be quickly organised using another AI model, which uses data about the targets to calculate munition loads, prioritise and assign thousands of targets to aircraft and drones, and propose an attack schedule.

Far greater concerns than the ones mentioned above could be posed by scenarios where the Generative AI model itself is corrupted or compromised. Content cannibalisation is one such process, in which a model’s output is compromised because the data it is trained on has itself been corrupted by other AIs.

Imagine a situation where the internet is flooded with AI-generated data and these models draw upon that same data to produce newer results. This creates a loop in which AI is stuck generating more data from data already generated by AIs, and the genuine content on the internet is gradually overwhelmed by corrupt output produced and stored by compromised systems. Criminals could deliberately design attacks around such loops to overwhelm AI systems.
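To make this loop concrete, consider the toy sketch below. It is purely illustrative and not drawn from any real system: a trivial statistical "model" is repeatedly refitted on data sampled from its own previous output, with no fresh human-created data entering the loop.

```python
# A toy illustration (not from the article): each "generation" of a trivial
# model is fitted only to data sampled from the previous generation's output.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "genuine" data from the real world.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(1, 101):
    # "Train" the toy model: estimate the mean and spread of the current data.
    mu, sigma = data.mean(), data.std()
    # The next generation trains only on the synthetic output of this model.
    data = rng.normal(loc=mu, scale=sigma, size=20)
    if generation % 20 == 0:
        print(f"generation {generation}: mean={mu:+.3f}, spread={sigma:.3f}")

# In most runs the spread decays towards zero: recycled machine output
# gradually drowns out the variety present in the original data.
```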

LLM supply chain poisoning involves intentionally contaminating or altering the data, libraries, or other components used to train Large Language Models (LLMs) in Generative AI. The result is an AI model that produces corrupted information, and the manipulation is difficult to detect because of the complexity of the decision-making process inside Generative AIs.

For instance, through successful poisoning of the supply chain, ChatGPT could be made to believe that Mumbai is the capital of India. It is not difficult to imagine a scenario where a critical infrastructure system is compromised in this way and begins to emit garbage data or calibrated misinformation, compromising national security.
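One common line of defence against such tampering is to verify the integrity of every dataset file pulled from the supply chain before training begins. The sketch below is a minimal illustration of that idea; the file names and checksum values in it are hypothetical placeholders, not real artefacts.

```python
# A defensive sketch, purely illustrative: dataset files are checked against a
# trusted manifest of SHA-256 digests before fine-tuning, so tampered files
# (the entry point for this kind of poisoning) are refused.
import hashlib
from pathlib import Path

# Hypothetical manifest of trusted digests published by the data provider.
TRUSTED_CHECKSUMS = {
    "capitals_facts.jsonl": "3b7e9d0c5a...",  # placeholder digest for illustration
    "news_corpus.jsonl": "9f1a44be02...",     # placeholder digest for illustration
}

def verify_dataset(path: Path) -> bool:
    """Return True only if the file's SHA-256 digest matches the trusted manifest."""
    expected = TRUSTED_CHECKSUMS.get(path.name)
    if expected is None:
        return False  # unknown file: never train on unvetted data
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected

for file in Path("training_data").glob("*.jsonl"):
    status = "ok" if verify_dataset(file) else "REJECTED (possible poisoning)"
    print(f"{file.name}: {status}")
```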

There are models being created specifically for criminal activities. WormGPT, now in the news, has presented itself as a black-hat alternative to ChatGPT, designed specifically for malicious activities such as launching sophisticated phishing and business email compromise attacks. Given the ease with which users can interact with these models, it could enable even novice cybercriminals to launch swift, large-scale attacks.

The foundation models behind Generative AI tools are trained on huge amounts of data at a cost running into millions, even billions, of dollars. Only the biggest tech companies can afford to build them, which continues to centralise power in the hands of a few and limits transparency. This can manifest in geopolitics, where a handful of countries monopolise ownership of Generative AI technology and use it as leverage to dictate terms to others.

The emotional connections and attachments users form with Generative AI systems, depicted in films like ‘Her’ and ‘Ex Machina’, can result in a diminished sense of belonging, heightened social isolation, potential manipulation by AI, and psychological harm, possibly leading to an increase in criminal behaviour.

Very soon, law enforcement agencies will be forced to confront complex legal issues arising from the integration of Generative AI into business, government, and society. Intellectual property rights disputes, privacy concerns, and challenges in assigning liability for the outputs of Generative AI would be some of the key areas of focus.

Biases in AI-generated content and the spread of false information would demand legal measures. The pace of change in this landscape is so fast that immediate measures are imperative to deal with the challenges in an effective manner. As we navigate this technological frontier, collaboration among developers, policymakers, and law enforcement agencies becomes paramount. A collective commitment to strike the right balance will pave the way for a safer and more equitable future, where AI serves as an invaluable ally rather than an adversary.

(The author is SP Panna, an IPS officer, ex-McKinsey consultant and IIT Bombay alumnus.)

(Views expressed in this opinion piece are those of the author.)

Published On: Oct 10, 2023
