30 million Nigerian women face AI-generated deepfake attacks by 2030, warns Gatefield
By 2030, 30 to 40 million Nigerian women and girls could be directly targeted by AI-generated deepfakes, impersonation, and coordinated harassment, a new analysis by Gatefield has revealed.
The report warns that up to 70 million women and girls may be exposed to AI-facilitated online harm annually.
In response, Gatefield called for urgent measures to protect women and vulnerable groups, including the introduction of clear guidelines on AI-generated violence and harassment.
The leading Sub-Saharan African public strategy and media organization warned that without immediate legal frameworks, Nigeria risks the structural exclusion of women from online spaces, silencing victims and restricting their participation in society.
The group emphasised the need for strict definitions of deepfakes and synthetic media, robust platform accountability, and effective reporting and redress mechanisms to ensure the timely removal of harmful content. It also highlighted the importance of safeguards for at-risk groups, such as women and children, drawing on international frameworks from other countries.
According to a statement by Farida Adamu, the Insights and Analytics Lead at the organization, the projections, outlined in Gatefield’s State of Online Harms 2025, draw on public data and predictive modelling.
Based on Nigeria's projected 200 million internet users by 2030, she said, the analysis shows that nearly half of all internet users in the country could experience online harm, with women comprising 58 per cent of victims.
Generative artificial intelligence, she said, is rapidly amplifying harassment campaigns, and the lack of regulatory oversight in Nigeria leaves women especially vulnerable.
Gatefield called on the National Information Technology Development Agency (NITDA), the Nigerian Communications Commission (NCC), and the National Agency for the Prohibition of Trafficking in Persons (NAPTIP) to implement AI-facilitated violence guidelines and enforce accountability on platforms.
Adamu identified unsafe product design as a key factor. "This is unsafe product design, not just bad actors," she stressed.
The report highlights real-world cases of AI abuse targeting Nigerian women. Ayra Starr, the Afrobeats singer, had AI-generated nude images circulated on Instagram in 2025, with delayed platform responses.
Senator Natasha Akpoti-Uduaghan of Kogi Central was targeted with multiple deepfake audio and video recordings that sought to undermine her political credibility following sexual harassment allegations.
Nollywood actress Kehinde Bankole was subjected to coordinated AI-generated harassment that digitally "undressed" her online, Gatefield explained.
The report sets out these recommended measures in detail.
The group called for the introduction of AI-facilitated violence guidelines to define and address cyber harassment and gendered harm.
It urged clear definitions of deepfakes and synthetic media, covering AI-generated images, video, audio, and text designed to mislead, manipulate, or impersonate individuals.
Gatefield also called for mandatory platform accountability, including algorithmic risk assessments and robust content moderation, as well as the establishment of reporting, redress, and enforcement mechanisms with centralized channels, time-bound takedowns, and clear appeals procedures.
The report further emphasized the need for safeguards for vulnerable groups, including women and children, with transparent tracking and prompt removal of harmful content.
Comparative frameworks cited include the EU AI Act, GDPR, France’s Penal Code, and the UK Online Safety Act, all of which provide enforcement, criminalization, and transparency mechanisms for non-consensual AI content.
"The clock is ticking. Generative AI is scaling abuse faster than our laws, politics, media, and culture,” said Shirley Ewang, Advocacy Lead at Gatefield.