Story Highlights
– UK bans “nudification” apps to combat online misogyny.
– New laws target the creation and distribution of AI nudification tools.
– Experts warn of potential harm from fake nude imagery.
– Government collaborates with tech companies for safety solutions.
– Child protection charities support banning nudification apps.
Full Story
The UK government has announced a decisive move to outlaw “nudification” applications as part of a broader initiative aimed at reducing violence against women and girls. This legislation, revealed on a recent Thursday, is designed to make it illegal to develop or distribute artificial intelligence (AI) tools that allow users to modify images to depict individuals without clothing.
The newly proposed offences are expected to strengthen existing regulations on the misuse of sexually explicit digital content, including deepfakes created without the individual’s consent. Technology Secretary Liz Kendall affirmed, “Women and girls deserve to be safe online as well as offline,” reinforcing the government’s zero-tolerance approach to the exploitation and humiliation caused by non-consensual sexually explicit images.
Creating sexually explicit deepfake images without consent is already a criminal offence under the Online Safety Act. Under the proposed legislation banning nudification applications, those who profit from or facilitate these technologies will face significant legal consequences. “Those who profit from them or enable their use will feel the full force of the law,” Ms Kendall stated, signalling a robust legal framework intended to safeguard individuals from technological abuse.
Nudification, often referred to as “de-clothing,” uses generative AI to create realistic images that make it appear as though an individual has been stripped of their clothes. Experts caution against the proliferation of such apps, highlighting the potentially devastating impact of fake nude images on victims, particularly when the technology is used to produce child sexual abuse material (CSAM).
Dame Rachel de Souza, the Children’s Commissioner for England, has been a prominent advocate for the complete ban on nudification applications. In her report from April, she remarked, “The act of making such an image is rightly illegal – the technology enabling it should also be,” underlining the urgent need to curtail the technology that facilitates such harmful practices.
In tackling this issue, the government has pledged to collaborate with technology companies to devise strategies aimed at preventing intimate image abuse. This alliance includes ongoing partnerships with UK-based safety technology firm SafeToNet, which has developed AI solutions to identify and obstruct sexual content, as well as technologies that can disable cameras when inappropriate content is detected.
These technological advancements build on existing filters implemented by platforms such as Meta, which aim to recognise and flag potential nudity in images, often as a means of protecting children from the risks of sharing intimate images themselves.
The announcement to ban nudification applications follows repeated calls from child protection organisations urging the government to address such technologies. The Internet Watch Foundation (IWF) operates a helpline called Report Remove, which enables individuals under 18 to confidentially report explicit images of themselves found online. Notably, the IWF reported that in 19% of confirmed cases the imagery had been manipulated in some way.
Kerry Smith, the IWF’s chief executive, expressed support for the government’s proactive approach. “We are also glad to see concrete steps to ban these so-called nudification apps which have no reason to exist as a product,” she stated. Smith emphasised the heightened risk such applications pose to children, noting that imagery produced through these tools is frequently circulated in some of the most sinister corners of the internet.
While the NSPCC, a children’s charity, welcomed the government’s intentions, its director of strategy, Dr Maria Neophytou, voiced disappointment at the absence of equivalent measures mandating device-level protections. The NSPCC has been urging tech firms to adopt more effective methods for identifying and curbing the dissemination of CSAM, particularly within private messaging services.
In its announcement, the government also committed to making it “impossible” for children to capture, share, or access nude images on their devices. This initiative aligns with the broader strategy to outlaw AI technologies aimed at creating or distributing CSAM.
The implications of these legislative changes extend beyond regulatory adjustment; they signal a growing recognition of the need for stronger protections against technologies that threaten the safety and dignity of individuals online, particularly the vulnerable. With increased scrutiny of the ethical use of AI, the government’s measures reflect a responsive approach to evolving digital threats, aiming to create a safer online environment for all, especially women and children.
As enforcement and regulatory frameworks are put in place, stakeholders await the tangible impacts of these measures on online safety and public wellbeing. Community organisations and child protection advocates remain vigilant, hopeful that such laws will not only deter the development and distribution of harmful applications but also foster an overall culture of respect and safety in digital spaces.
Our Thoughts
To prevent the misuse of nudification apps and similar technologies, several measures could have been implemented. Firstly, stricter regulation and enforcement of the development and distribution of such AI tools could have been established under the Online Safety Act, which already criminalises the creation of non-consensual explicit images. The government could also have promoted the responsible use of technology by collaborating with tech companies earlier to develop comprehensive filtering and monitoring systems capable of detecting and blocking harmful content, thereby reducing the potential for exploitation.
Key safety lessons include the importance of proactive regulation in the fast-evolving tech landscape and the need for educational initiatives to raise awareness of the moral implications of using such technology. Further, mandatory safeguards, as suggested by child protection charities, could prevent harmful content from being created in the first place.
Relevant regulations include the Online Safety Act, which addresses intimate image abuse but may not fully encompass emerging technologies like nudification apps. Future incidents could be mitigated by establishing a framework for continuous monitoring and updating of legislation to address new technologies swiftly before they can be exploited.