
AI and Gender-based Violence

       Sahar Bandial

It is said that we stand at the cusp of the “fourth industrial revolution”. The emergence and proliferation of artificial intelligence (AI) and machine learning are set to usher in a new era of productivity, efficiency and governance across public and private sectors.

In recognition of such potential, the Ministry of Information Technology and Telecommunication laid out a draft National Artificial Intelligence Policy last year, which reportedly was to be presented before the cabinet this month. The policy sets out ambitious targets for the use, scaling and proliferation of AI, while recognising — though almost in passing — the need to ensure its ethical and responsible use, which upholds the fundamental rights and privacy of users. Given the state’s existing performance on cyber safety, particularly of women, these assurances appear unrealistic.

AI is a broad field that encompasses the development of computer systems capable of simulating human learning, comprehension, problem-solving, decision-making and creativity. It employs tools such as machine learning and deep learning to analyse data (including text and images) to identify patterns, make predictions and generate human-like speech.

ChatGPT may be the most commonly encountered generative AI tool. Generative AI produces text and images in response to commands or prompts submitted by a user. The technology has been put to beneficial use in business settings to, for instance, improve efficiency through the automation of simple tasks, synthesise and analyse data to enhance business decision-making, and assist with creative undertakings.

However, there is a flip side. Deepfake technology — a form of generative AI — can create realistic but entirely fake images and videos of persons by analysing existing audiovisual data.

Many of us have come across doctored video clips of prominent personalities online — some comical and others more pernicious in intent and nature. In fact, the latter are predominant on the web: 98 per cent of deepfake videos are pornographic in nature; 99pc target women or girls. These statistics are not surprising. Cases involving deepfake videos (often sexual in nature) targeting women journalists and women politicians have made headlines in Pakistan in the recent past.

Technology-facilitated gender-based violence (TFGBV), such as deepfake content, also manifests as image-based abuse and blackmail, misinformation/defamation, impersonation, cyberstalking and violent threats. While men, too, are subject to cyber violence, it is a gendered phenomenon worldwide. Women are the primary victims of online harassment in Pakistan. According to a Pew Research Center study, 26pc of women aged 18-24 years experience cyberstalking, compared with only 7pc of men in the same age range. Across the world, cyberspace is a less safe place for women.

AI has now altered the arena where TFGBV plays out, greatly enhancing the apparent authenticity and believability of misinformation and fake news propagated on the internet. This is not just on account of deepfake technology. Generative AI can be used not only to create cyber-harassment templates, but also to generate and modify false, yet convincing personal histories of women, perpetuating the cycle of misinformation and fake news.

TFGBV violates women’s right to dignity, privacy and non-discrimination. Many times, it culminates in physical violence. Issues of consent and intellectual property rights also arise with the (often) non-consensual use of copyrighted data by AI technologies — a matter that gained prominence last year in the Hollywood strike against the unlicensed use of actors’ AI replicas by motion picture studios.

AI regulation is an evolving field. Given the risks that have arisen with the increased deployment of and access to AI technology, there have been efforts worldwide to regulate its use.

Taking the lead in a human rights-centric approach, the EU Artificial Intelligence Act, 2024, positions safety and compatibility with fundamental rights and freedoms as the guiding principles of AI regulation. It bars the use of AI for biometric surveillance and the compilation of facial recognition databases (Article 5) and provides that where video, audio or image content is created with deepfake technologies, disclosure of such artificial generation/manipulation must be provided (Article 52).

Unesco’s Recommendation on the Ethics of Artificial Intelligence (2021) also stipulates the protection of human rights and freedoms as the first guiding “value” in AI regulation. The US Blueprint for an AI Bill of Rights, a white paper published by the White House, articulates certain principles for the protection of civil rights and democratic values in the building, deployment and governance of automated systems.

Pakistan’s AI policy also recognises the particular danger that AI may be used to create “fake content such as text, images and videos”, and envisions that the AI Regulatory Directorate (ARD) will issue guidelines to address the “possible spread of disinformation, data privacy breaches and fake news”. The exact mechanism of such regulation may be spelt out more minutely in any AI legislation that is eventually passed.

For now, the existing mechanism under the Cybercrime Wing of the FIA has largely been ineffective in addressing the countless complaints of TFGBV made by women under the Prevention of Electronic Crimes Act, 2016, which criminalises the transmission of false and defamatory information through an electronic system, the distortion of a person’s pictures to show her/him in a sexually explicit position, and cyberstalking. Indeed, as the Sindh High Court scathingly observed last year, the Cybercrime Wing does not have the “competency to effectively investigate cybercrime, let alone combating th[ese] [offences]”.

Blocking the flow of information and traffic on the internet will not serve as a solution. The state must ensure that any future regulation of AI-led TFGBV — to be laid out by the ARD or enforced by the newly formed National Cybercrime Investigation Agency — is effective and upholds the standards of ethics and human rights that its AI policy espouses.

The writer is a lawyer.
