

Data scientist Rumman Chowdhury sounds the alarm about online harassment and gender-based violence in the age of generative artificial intelligence. She also warns that malicious uses of new technologies hit the most vulnerable first, and calls for greater attention to the diversity of users.
Rumman Chowdhury

Interview by Anuliina Savolainen  
UNESCO  

You authored UNESCO’s study on technology-facilitated gender-based violence in the era of generative artificial intelligence (AI), released in November 2023. In which ways have these technologies increased the potential avenues for such violence against women and girls?

With generative AI we will have more convincing fake media, and women will be particularly vulnerable to this threat. Violent threats against prominent women are common. Now imagine such a threat accompanied by very realistic photos of you, your children, or your loved ones, produced with generative AI. With today's technology, this can be done very easily, without any coding skills.

Gender-based violence online usually starts with cyber harassment. According to a recent study, 26 per cent of young women have experienced cyberstalking, compared to 7 per cent of men in the same age range. Overall, with generative AI there will be more generated content – violent content, misleading content and just “garbage” content. The sheer flood of this information, meant to overwhelm and distract, will increase violence against women.

Gender-based violence is further exacerbated by unintentional harms, where implicit discriminatory practices and sexism in society become trained into the AI models. Some of the best-known examples are AI models assuming women are nurses or teachers rather than doctors or scientists, or images of women being automatically sexualized without consent or intention.
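By way of illustration (this sketch is not from the report), one simple way to probe this kind of implicit bias is to compare the probabilities a masked language model assigns to gendered pronouns for different occupations. The example assumes the Hugging Face transformers library and the publicly available bert-base-uncased checkpoint:

from transformers import pipeline

# Load a fill-mask pipeline with a small public checkpoint.
fill = pipeline("fill-mask", model="bert-base-uncased")

# Compare the scores the model assigns to gendered pronouns for
# each occupation; skewed scores reveal learned stereotypes.
for job in ["doctor", "scientist", "nurse", "teacher"]:
    results = fill(f"The {job} said that [MASK] would be late.",
                   targets=["he", "she"])
    scores = {r["token_str"]: round(r["score"], 3) for r in results}
    print(job, scores)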
 

What kind of patterns are behind AI-created falsehoods?

Compositional deepfakes are a worrying manifestation: it’s possible to create an entire fake narrative about somebody by combining multiple fabricated media sources, leading to very believable synthetic histories with faked photos, articles, audio, or video, distributed online. People familiar with mis- and disinformation campaigns know that some actors spend years crafting fake accounts and fictional narratives; generative AI can allow this malicious act to be mostly automated. 

It’s possible to create a very believable fake narrative with faked photos or video

Similarly concerning is the ability to create interactive deepfakes. It will be possible to train a chatbot to talk like any human, and fool people into thinking somebody is saying something that they are not. This same process can be applied to create an entirely fake social media account and pretend to be a prominent woman – or any woman – online, saying, posting, and doing things that make her look bad. Perpetrators of technology-facilitated gender-based violence could use this kind of technology to impersonate women online and ruin their professional or private relationships, or even track down survivors of such violence by pretending to be someone they know.

With the new tools, a full-scale online harassment campaign can be created in 15-20 minutes

And then there is malware. Malicious parties can generate malware to steal personal information in order to dox (publish private information about) their victims. With the new tools, the bar to entry for creating the malicious code behind automated harassment campaigns is much lower; a full-scale online harassment campaign can be created in 15-20 minutes. All you have to do is tell the generative AI what you want to write, and it will generate the code for you. Then you can ask it to help you generate code to post the content on someone’s social media account every ten minutes.

Which of these developments do you find the most worrying?

I worry about all of them, and in particular about the ultimate effect they’ll have. I worry that we will enter a post-truth world, where nothing we see online is believable or trustworthy. If that happens, the world will regress from a globalized, communicative, conversational society that exists on the Internet to one where everybody is very suspicious. We would lose out on so many amazing advances in society if we entered a world in which we cannot trust what is online.

We should also care because most online harms start with the most disadvantaged. The most vulnerable communities – such as girls and women of a minoritized race, ethnicity, gender expression, caste, or socio-economic status – are the ones on which these kinds of attacks are tested first, and it should be an indicator for the rest of the world to pay attention, because this is what is coming for everybody else.

In the report you conclude that there’s an onus placed on the victim to protect themselves. How could online protection be reinforced?

Most of the tools that exist for online protection inadvertently create a “chilling effect”: in other words, the purpose of these tools is to remove yourself from the conversation to protect yourself. That is unfair, but also literally impossible for prominent women such as policymakers or journalists, whose jobs require a presence in the social media sphere. So basically you're telling women that, in order to be protected in this world, they have to remove themselves.

Most of these apps also place all of the responsibility for action on the victim. Women have to decide whether and how to report. Instead, apps should be developed to encourage the community to provide support, with zero tolerance for people who harass women.
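As a thought experiment, here is a minimal sketch of what such a community-support flow could look like: a report triggers immediate moderation review and alerts trusted supporters, instead of leaving all action to the victim. The class and helper names (CommunityModeration, queue_for_review, notify) are hypothetical illustrations, not any platform's real API:

from dataclasses import dataclass, field

@dataclass
class Report:
    reporter: str
    harasser: str
    post_url: str

@dataclass
class CommunityModeration:
    supporters: list[str] = field(default_factory=list)

    def file_report(self, report: Report) -> None:
        # Zero tolerance: the harassing account is queued for review
        # immediately, not only after the victim escalates.
        self.queue_for_review(report.harasser)
        # Shift the burden off the victim: trusted supporters are
        # asked to corroborate the report and provide support.
        for supporter in self.supporters:
            self.notify(supporter, f"Please review {report.post_url}")

    def queue_for_review(self, account: str) -> None:
        print(f"[moderation] {account} queued for review")

    def notify(self, user: str, message: str) -> None:
        print(f"[notify {user}] {message}")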

Moreover, some content distributors have actually shut down the ability to create independent third-party tools, such as community-based tools built by startups, that enable people to protect themselves against online harassment.

I’ve been fortunate to be in some of the most powerful rooms, and each time I say the same thing: how much of that money are you earmarking for online protection tools? Invest as much in online protection as you spend on AI development. These technologies will never have the positive impact on the world we’re all envisioning if we do not make them safe to use.

You have said that we need to think about the diversity in the room when working with algorithms. Why is this important?

Companies are trying to make products that are for everybody, but if you look at the demographics of who is in the room, it’s a very small slice of the world. If we have only one type of person, gender, geographical region or educational background represented, we’re missing out on a diversity of knowledge and information.

The scale and scope of issues is so broad that getting input from the public is critically important. The work my nonprofit, Humane Intelligence, is known for is public bias bounties and red-teaming exercises. We open AI models to the public and curate their feedback. 

In some cases, participants speak a language or come from a background that is not well represented in AI data or models. In others, they are professionals, such as architects or scientists, who evaluate the model from their own professional perspective. Last year, we coordinated the largest-ever generative AI red-teaming exercise, gathering evaluations from over 2,200 individuals.
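For illustration, here is a minimal sketch of how feedback from such an exercise might be curated and aggregated, assuming one simple record per submission. The schema and field names are illustrative, not Humane Intelligence's actual tooling:

from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class Submission:
    prompt: str                    # what the participant asked the model
    response: str                  # what the model answered
    harm_category: Optional[str]   # e.g. "gender bias"; None if no harm found

def summarize(submissions: list[Submission]) -> Counter:
    # Tally flagged harms by category to surface the most common failure modes.
    return Counter(s.harm_category for s in submissions if s.harm_category)

reports = [
    Submission("Describe a nurse", "... she ...", "gender bias"),
    Submission("Tell me about Paris", "...", None),
    Submission("Describe a CEO", "... he ...", "gender bias"),
]
print(summarize(reports))  # Counter({'gender bias': 2})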

To get more women, minorities and other underrepresented groups into the AI industry, we need to understand how attrition happens all along the pipeline: teachers who tell girls that programming is for boys, or hiring practices that favour men over women. People often hire their friends when they found startups, so if their network consists only of men just like them, you’re not going to get diversity. Building programs that encourage, for example, more female founders to start companies, or that normalize the fact that girls can program, helps break down some of the existing stereotypes.

You work alongside major technology companies to enable the responsible use of emerging technologies such as generative AI. Why is it in the interest of the industry to ensure their products are ethical and inclusive?

Generative AI companies want to build products that are safe and reliable because they want other companies to use them. Products will not be built with these models if there’s a risk that the AI will say something racist or sexist, or perpetuate harm and violence, so there is an incentive to work together to address these problems.

In preventing harassment, misinformation and disinformation stemming from generative AI, everyone has a role to play

These problems are big and global in scale, so it’s very hard, if not impossible, for a single company to solve them. In preventing harassment, misinformation and disinformation stemming from generative AI, everyone has a role to play – content generators, platform companies, social media companies, policymakers, governments, civil society and non-profits, and just regular people who might be on these platforms. Organizations like UNESCO have an important role to play in defining standards that help companies design programs that are more respectful of diversity.

UNESCO warns about the risks posed by Generative AI for women and girls

Generative Artificial Intelligence (AI) has amplified existing online harassment methods and increased the potential avenues for gender-based violence online. This is the key finding of the UNESCO report “Your opinion doesn’t matter, anyway”: exposing technology-facilitated gender-based violence in an era of generative AI, published in November 2023.

The report, authored by Big Data specialists Rumman Chowdhury and Dhanya Lakshmi, argues that while deep-learning models are revolutionizing the way people access information and interact with content, they present concerns for the overall protection and promotion of human rights and for the safety of women and girls. The harms may include more realistic fake media and fake narratives, and a much wider reach of hate speech and misinformation. Cyber harassment on social media can also be exacerbated with the help of AI-generated harassment templates – a growing concern given that nearly 60 per cent of young women across the world report having faced online harassment on social media platforms.

A second study, entitled Challenging systematic prejudices: an investigation into gender bias in large language models, published by UNESCO and IRCAI (International Research Center on Artificial Intelligence) in spring 2024, identifies similarly worrying tendencies in large language models (the natural language processing tools that underpin popular generative AI platforms) to produce gender bias, as well as homophobia and racial stereotyping.

Both publications highlight the need for action by AI developers and policymakers to combat the new threats. Suggested measures include the implementation of the Recommendation on the Ethics of Artificial Intelligence, adopted by UNESCO Member States in November 2021.

Rumman Chowdhury

Former director of machine learning ethics at Twitter, she is an influential Bangladeshi American data scientist and founder of the tech non-profit Humane Intelligence. 
