AI-generated violence against women sparks outrage and calls for stronger safeguards

by Jade Anderson
October 11, 2025
in UK Health and Safety Latest
Reading Time: 4 mins read

Story Highlight

– AI-generated violent videos of women shared online.
– YouTube channel removed after reporting, had 200,000 views.
– Experts call for stronger AI safeguards and regulations.
– AI-generated content poses mental health risks for youth.
– Government aims to combat online violence against women.

Full Story

Concerns are mounting regarding the use of artificial intelligence in generating graphic content that depicts violence against women. Recent reports have highlighted that videos featuring the torture and murder of women are being produced through Google’s AI technology and shared widely online, raising alarms about the ramifications of such misuse.

One prominent example is a YouTube account named WomanShotA.I, which featured a range of disturbing videos where women are shown begging for mercy before facing violent ends. Since its launch in June, the account accumulated approximately 200,000 views. It remained accessible until it was flagged and subsequently removed following an alert from tech journalism outlet 404 Media. These videos were created using Google’s AI tool, Veo 3, which processes human prompts to generate its content.

Some of the video titles, such as “captured girls shot in head” and “female reporter tragic end”, point to severe violence. Among those leading the discussion of the implications is Clare McGlynn, a professor of law at Durham University who specialises in gender-based violence and equality. On discovering the channel, she said: “It lit a flame inside of me that just struck me so immediately that this is exactly the kind of thing that is likely to happen when you don’t invest in proper trust and safety before you launch products.”

Professor McGlynn emphasised the urgent need for tech companies, including Google, to implement robust safety measures before product launches to prevent such troubling content from emerging. She criticised the tech industry’s prevailing tendency to prioritise rapid advancement over safety frameworks. “Google asserts that these types of materials violate their terms and conditions,” she noted, adding, “What that says to me is they simply didn’t care enough; they didn’t have enough guardrails in place to stop this being produced.”

YouTube, which is owned by Google, acknowledged the issue, stating that its generative AI operates on user instructions and confirming that the offending channel was terminated for breaching community guidelines. The problem is a recurring one: the channel had previously been removed for similar violations. Google’s generative AI policies explicitly prohibit creating or distributing sexually explicit or violent content, yet questions remain about the extent of its oversight, as the company did not answer inquiries about how many similar videos had been created with the tool.

The gravity of these developments has prompted responses from experts beyond the realm of law. Alexandra Deac, a researcher at the Child Online Harms Policy Think Tank, characterised the situation as a pressing public health concern. “The fact that AI-generated violent content of this nature can be created and shared so easily is deeply worrying,” she remarked, highlighting potential negative impacts on children and adolescents who come into contact with such material.

Deac added that the proliferation of violent and sexualised content online cannot be left to parental vigilance alone, a trend also observed by the UK’s Internet Watch Foundation. The organisation recently identified AI-generated child sexual abuse material being disseminated via chatbot technologies, including scenarios simulating graphic abuse of minors.

Olga Jurasz, a law professor and director of the Centre for Protecting Women Online, echoed concerns regarding the broader cultural implications of AI-generated violent content. Jurasz argued that such videos perpetuate a harmful culture of sexism and misogyny, reinforcing damaging stereotypes about women. She remarked, “It is a huge problem when we see videos or images that are AI-generated that portray sexualised violence against women and sexualised torture,” and posited that these representations contribute to an environment where acts of violence may be seen as normalised or acceptable.

In response to growing concerns about online safety, a spokesperson for the Department for Science, Innovation and Technology restated the government’s commitment to combating violence against women and girls, particularly in the digital domain. Under the Online Safety Act, whose duties came into force in March, social media platforms and AI services are required to safeguard users from harmful material, including content generated by AI.

Even as the government pushes for stronger protections online, Prime Minister Sir Keir Starmer has set out a vision of Britain as a leader in AI, backed by notable investment in the UK sector, including a recent £2 billion commitment from AI firm Nvidia. As artificial intelligence continues to advance, the pressing challenge remains to balance innovation with safety, ensuring that new technologies do not inadvertently perpetuate violence and abuse across digital spaces.

Key Takeaways

– Google’s AI generator, Veo 3, has been exploited to create graphic videos of women being tortured and murdered, raising concerns about misogynistic abuse.
– A YouTube account, WomanShotA.I, posted videos that garnered nearly 200,000 views before being removed, highlighting a lack of content safeguards.
– Experts criticise the tech industry’s rush to innovate without adequate safety measures, pointing to the need for stricter regulation.
– The UK’s Online Safety Act aims to protect users from illegal content and enforce responsibilities on social media platforms regarding harmful materials.


Comments (2)

  1. matthew ellis says:
    2 months ago

    This is deeply disturbing. Technology that creates realistic depictions of violence against women increases risk of harm, normalises abuse and can retraumatise survivors. Platforms must take stronger proactive steps to detect and remove such content and to prevent accounts that produce it from resurfacing. Tech companies should prioritise safety by improving content moderation, funding research into detection of synthetic media and cooperating closely with regulators and victim support organisations. The Online Safety Act is an important step, but it must be backed by clear enforcement, mandatory transparency reporting and resources for victims to report and get support quickly.

  2. Travis Nolan says:
    4 weeks ago

    This is deeply disturbing. Technology companies must treat safety as a core responsibility and invest quickly in effective moderation, clearer reporting paths and faster takedown processes. Regulators and platforms need to work together to close loopholes that allow harmful content to spread while preserving legitimate expression. Victims of online abuse require better support and legal remedies, and there should be stronger enforcement of the Online Safety Act to deter misuse. Ongoing transparency about how such content appears and what is being done to prevent it will be essential to rebuild trust.

Copyright © 2025
UK Safety News
