Why global news coverage of violence against women is shrinking — even as abuse moves online and into AI
Content note: This piece discusses sexual and gender-based violence, online harassment and AI-enabled abuse.
Quick take
- Dataset: 1.14 billion online news stories, 2017–2025 (regional and local outlets across multiple languages).
- Coverage fell from a #MeToo peak of about 2.2% to roughly 1.3% of online news in 2025.
- AI-assisted harassment is scaling and changing tactics, but mainstream coverage of these dynamics is patchy.
The numbers that don’t lie
Global reporting on violence against women has declined sharply even as the problem becomes more prevalent and more digital. The analysis covers 1.14 billion online news stories from 2017–2025 — an unusually large dataset that tracks how reporting changed after the #MeToo surge.
At the #MeToo peak (around 2018), terms tied to misogyny appeared in roughly 2.2% of online news stories. By 2025 that share had dropped to about 1.3% — the lowest point across the period studied. The decline is broad: Africa, for example, hit a nine-year low of 1.18% in 2024, despite active conflicts in which sexual violence remains alarmingly common.
“The drop in coverage is alarming given the scale of the problem and shows a failure by the press to make meaningful progress.”
How framing changes the story — the Epstein case study
High-profile scandals demonstrate how coverage can spotlight elites and sensational details while missing the gendered nature and systemic drivers of abuse. The Jeffrey Epstein corpus — roughly 1 million related articles between 2017 and February 2026 — is revealing: the phrase “violence against women” appeared in only 0.1% of those pieces. Around 25% referenced “victims” and 26% referenced “power,” “money,” “elites” or “corruption.”
The takeaway is stark: when reporting prioritizes scandal, gossip and elite networks over a gender-inequality lens, audiences rarely see the architecture of harm — patriarchy, power imbalances and institutional failures that enable abuse to persist.
“Coverage of high-profile cases fails to apply a gender-inequality lens, so reporting often misses the root causes of abuse.”
What “gendered abuse” looks like now
Gendered abuse is targeted harassment that uses a person’s gender to intimidate, shame or silence them. Offline, the World Health Organization estimates that 1 in 3 women have experienced physical or sexual violence in their lifetime, and that 1 in 9 were assaulted by a man in the previous 12 months. Online, research cited in the report suggests as many as 60% of women worldwide have experienced gendered abuse in digital spaces.
When newsrooms cover these stories, sources are skewed: roughly 1.5 men are quoted for every woman in stories about misogyny, and the Global Media Monitoring Project found men outnumber women among expert sources (24% men vs 17% women). The result is often objectifying narratives and technical descriptions that fail to center survivors’ perspectives or explain systemic drivers.
“Stories about violence against women are rare, and when they appear male voices and narratives that objectify survivors continue to dominate.”
AI as an accelerant
Artificial intelligence is shifting the shape and scale of online gendered abuse. AI lowers the cost of producing harmful content and automates harassment tactics:
- Deepfake videos and synthetic images are used to humiliate or intimidate survivors and public figures.
- Automated bots and coordinated accounts amplify smear campaigns and drown out supportive voices.
- AI-enabled doxxing and identity takeover make targeted threats faster and more precise.
Platforms struggle to keep pace. Content moderation remains reactive, often relying on brittle keyword filters or overwhelmed human teams. The growing complexity of synthetic media complicates verification, and algorithmic amplification can turn a fringe campaign into mainstream visibility within hours.
“There are isolated examples of good coverage, but broad-based change is required — mainstream media must be willing and equipped to shift norms or nothing will change.”
Why the media retreat matters to leaders
This decline in gender-aware reporting has real consequences for organizations, brands and public institutions:
- Reputational risk: Misframing or ignoring gendered abuse can leave companies exposed during scandals, or complicit when workplace cultures reinforce harm.
- Operational risk: AI-assisted harassment campaigns targeting employees and customers can disrupt operations and erode trust.
- Policy and regulatory risk: Weak public scrutiny allows exploitation of platform loopholes, increasing the likelihood of stricter regulations that may be imposed suddenly and without input from business stakeholders.
- Civic risk: When systemic causes are obscured, interventions become one-off responses rather than structural fixes.
Practical steps for newsrooms and business leaders
Media organizations and private-sector leaders both have roles to play. The checklists below are designed to be immediately actionable.
For newsrooms
- Adopt survivor-centered reporting protocols: prioritize consent, anonymity options, trauma-aware interviewing and support resources for sourced individuals.
- Embed a gender lens in investigative beats: train reporters to connect incidents to power structures and social norms, not only to personalities.
- Increase diversity in sourcing and staffing: hire and promote women editors and journalists, and train all reporters to recognize and counter sourcing bias.
- Invest in tech literacy: equip reporters to detect and explain AI-enabled abuse and synthetic media.
For businesses and C-suite leaders
- Audit exposure: map how employees or brands might be targeted by coordinated online abuse or AI-enabled attacks.
- Create rapid-response protocols: include forensic verification for deepfakes, legal pathways, communications templates and wellbeing support for targeted staff.
- Fund independent investigative journalism: support survivor-centered reporting and independent audits of platform harms as part of brand safety programs.
- Press platforms for transparency: demand clearer rules and accountable moderation practices, and commission independent algorithmic impact assessments.
Key questions and clear answers
- How large was the dataset behind the findings?
1.14 billion online news stories published worldwide between 2017 and 2025 (including regional and local outlets scraped across multiple languages).
- How much has misogyny-related coverage fallen?
Terms tied to misogyny fell from around 2.2% of online news at the #MeToo peak to roughly 1.3% in 2025.
- How is high-profile coverage framed (example: Jeffrey Epstein)?
In about 1 million Epstein-related articles, “violence against women” appeared in 0.1% of pieces; coverage more often emphasized “power,” “money,” “elites” or “corruption” than gendered harm.
- How prevalent is online gendered abuse?
Studies cited suggest up to 60% of women worldwide have experienced online gendered abuse; WHO figures show 1 in 3 women have experienced physical or sexual violence in their lifetime.
- What newsroom changes are recommended?
Increase women in leadership and reporting roles, adopt survivor-centered practices, and consistently apply a gender-inequality lens that exposes power imbalances.
Methodology and limits
The findings come from large-scale text analysis across 1.14 billion online stories between 2017 and 2025. The dataset includes national and local outlets collected in multiple languages, and classification relied on keyword and phrase detection for misogyny-related terms and frames. Limitations include variable newsroom archiving, differences in regional vocabulary for gendered abuse, and the evolving lexicon around AI-enabled harms. Quantitative signals should be read alongside qualitative research and survivor testimony to form a complete picture.
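To make the keyword-detection approach described above concrete, here is a minimal sketch of how a "share of coverage" metric can be computed over a corpus. The term list and function name are illustrative assumptions — the report's actual lexicon, matching rules and pipeline are not specified:

```python
import re

# Hypothetical term list; the report's actual misogyny-related lexicon
# is not published, so these entries are illustrative only.
MISOGYNY_TERMS = [
    "violence against women",
    "gender-based violence",
    "sexual harassment",
    "misogyny",
]

# One case-insensitive pattern with word boundaries, so "misogyny"
# matches "Misogyny." but not substrings of unrelated words.
PATTERN = re.compile(
    r"\b(?:" + "|".join(re.escape(t) for t in MISOGYNY_TERMS) + r")\b",
    re.IGNORECASE,
)

def coverage_share(articles):
    """Return the fraction of article texts mentioning any tracked term."""
    if not articles:
        return 0.0
    hits = sum(1 for text in articles if PATTERN.search(text))
    return hits / len(articles)
```

A share of 2.2% would correspond to `coverage_share` returning 0.022 over a year's corpus. As the methodology note warns, such a metric is sensitive to the chosen lexicon and to regional and evolving vocabulary, which is why the quantitative signal should be read alongside qualitative research.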
Final strategic imperative
Public attention will ebb and flow, but institutions that build capacity now — media that institutionalize a gender lens, and businesses that prepare for AI-enabled harms — will reduce future harm and protect trust. Start with a simple audit: identify exposure to online gendered abuse, test your incident response against synthetic-media scenarios, and commit resources to survivor-centered reporting and independent platform scrutiny. That combination protects people, brands and the public conversation.