AI-generated violence against women sparks outrage and calls for stronger safeguards

by Jade Anderson
October 11, 2025
in UK Health and Safety Latest

Story Highlight

– AI-generated violent videos of women shared online.
– YouTube channel removed after reporting, had 200,000 views.
– Experts call for stronger AI safeguards and regulations.
– AI-generated content poses mental health risks for youth.
– Government aims to combat online violence against women.

Full Story

Concerns are mounting regarding the use of artificial intelligence in generating graphic content that depicts violence against women. Recent reports have highlighted that videos featuring the torture and murder of women are being produced through Google’s AI technology and shared widely online, raising alarms about the ramifications of such misuse.

One prominent example is a YouTube account named WomanShotA.I, which featured a range of disturbing videos where women are shown begging for mercy before facing violent ends. Since its launch in June, the account accumulated approximately 200,000 views. It remained accessible until it was flagged and subsequently removed following an alert from tech journalism outlet 404 Media. These videos were created using Google’s AI tool, Veo 3, which processes human prompts to generate its content.

The titles of some videos reflect a chilling nature, with phrases such as “captured girls shot in head” and “female reporter tragic end” indicating severe violence. Leading the discourse on the implications of these occurrences is Clare McGlynn, a professor of law at Durham University, who specializes in issues related to gender-based violence and equality. Upon discovering the content channel, she expressed her immediate concern, stating, “It lit a flame inside of me that just struck me so immediately that this is exactly the kind of thing that is likely to happen when you don’t invest in proper trust and safety before you launch products.”

Professor McGlynn emphasised the urgent need for tech companies, including Google, to implement robust safety measures prior to product launches to prevent such troubling content from emerging. She criticised the prevailing tendency within the tech industry to prioritise rapid technological advancement over safety frameworks. “Google asserts that these types of materials violate their terms and conditions,” she noted, adding, “What that says to me is they simply didn’t care enough; they didn’t have enough guardrails in place to stop this being produced.”

YouTube, which is under Google’s umbrella, acknowledged the issue by stating that its generative AI operates based on user instructions and confirmed that the offending channel was terminated for failing to comply with community guidelines. This marks a recurring problem, as the channel had previously been removed for similar violations. Google’s policies regarding generative AI explicitly prohibit the creation or distribution of sexually explicit or violent content, yet questions remain regarding the extent of their oversight, as inquiries into the volume of similar videos created with the AI were not addressed.

The gravity of these developments has prompted responses from experts beyond the realm of law. Alexandra Deac, a researcher at the Child Online Harms Policy Think Tank, characterised the situation as a pressing public health concern. “The fact that AI-generated violent content of this nature can be created and shared so easily is deeply worrying,” she remarked, highlighting potential negative impacts on children and adolescents who come into contact with such material.

Deac reiterated that the proliferation of violent and sexualised content online cannot be left solely to parental vigilance, a concern echoed by the UK’s Internet Watch Foundation. That organisation recently identified instances of AI-generated child sexual abuse material being disseminated via chatbot technologies, including scenarios that simulate graphic abuse of minors.

Olga Jurasz, a law professor and director of the Centre for Protecting Women Online, echoed concerns regarding the broader cultural implications of AI-generated violent content. Jurasz argued that such videos perpetuate a harmful culture of sexism and misogyny, reinforcing damaging stereotypes about women. She remarked, “It is a huge problem when we see videos or images that are AI-generated that portray sexualised violence against women and sexualised torture,” and posited that these representations contribute to an environment where acts of violence may be seen as normalised or acceptable.

In response to the growing concerns surrounding online safety, a spokesperson for the Department for Science, Innovation and Technology restated the government’s commitment to combating violence against women and girls, particularly in the digital domain. With Online Safety Act protections that came into force in March, social media platforms and AI services are now required to safeguard users from harmful material, including content generated by AI.

As the government pushes for enhanced protections online, Prime Minister Sir Keir Starmer has emphasised a vision to elevate Britain as a leader in AI technology, reinforcing this ambition through notable investments into the UK sector, such as a recent £2 billion commitment from AI firm Nvidia. However, as advancements in artificial intelligence continue to evolve, the pressing challenge remains to balance innovation with the imperative of safety, ensuring that new technologies do not inadvertently perpetuate violence and abuse across digital landscapes.

Key Takeaways

– Google’s AI generator, Veo 3, has been exploited to create graphic videos of women being tortured and murdered, raising concerns about misogynistic abuse.
– A YouTube account, WomanShotA.I, posted videos that garnered nearly 200,000 views before being removed, highlighting a lack of content safeguards.
– Experts criticise the tech industry’s rush to innovate without adequate safety measures, indicating a need for stricter regulations.
– The UK’s Online Safety Act aims to protect users from illegal content and enforce responsibilities on social media platforms regarding harmful materials.

Comments 2

  1. matthew ellis says:
    5 months ago

    This is deeply disturbing. Technology that creates realistic depictions of violence against women increases risk of harm, normalises abuse and can retraumatise survivors. Platforms must take stronger proactive steps to detect and remove such content and to prevent accounts that produce it from resurfacing. Tech companies should prioritise safety by improving content moderation, funding research into detection of synthetic media and cooperating closely with regulators and victim support organisations. The Online Safety Act is an important step, but it must be backed by clear enforcement, mandatory transparency reporting and resources for victims to report and get support quickly.

  2. Travis Nolan says:
    4 months ago

    This is deeply disturbing. Technology companies must treat safety as a core responsibility and invest quickly in effective moderation, clearer reporting paths and faster takedown processes. Regulators and platforms need to work together to close loopholes that allow harmful content to spread while preserving legitimate expression. Victims of online abuse require better support and legal remedies, and there should be stronger enforcement of the Online Safety Act to deter misuse. Ongoing transparency about how such content appears and what is being done to prevent it will be essential to rebuild trust.

Copyright © 2025 UK Safety News