From Pixels to Peril: How AI Advancements Exacerbate the Scourge of Child Sexual Exploitation
By Oliver Brown | Oct. 31, 2025

Recent paradigm shifts across a host of industries have arguably been led by the aggressive surge of generative AI adoption. Remarkably versatile, the technology holds unprecedented potential to advance productivity and unleash creativity at once. Yet that same breadth of capability has come to further envenom a long-unresolved crime against humanity: the sexual exploitation of children, traded online under the pseudonym “Cheese Pizza.”
Child pornography websites have existed for decades, both on and off the dark web. Arguably the largest child sexual exploitation market was “Welcome to Video,” according to a press release by the US Department of Justice. Though organized and operated by a single South Korean man, the site drew customers from 38 countries, including Germany, Saudi Arabia, and Brazil. The 337 people charged with child pornography offenses in the takedown attest to how widespread the epidemic is, illustrating the often unacknowledged prevalence of child sexual abuse material (CSAM), enabled in part by the site’s minimal barriers to entry. With further technological advances, yet another problem has arisen as CSAM continues to spread.
Behind the glossy facade of digital art popularized by generative AI, this exploitation has continued to thrive under the furtive code word “Cheese Pizza,” slang that shares its initials with “child pornography” and helps users evade automatic filtering by search engines.
Innocence Corrupted: The Disturbing Proliferation of AI-Facilitated Child Exploitation
Concerns about CSAM have escalated with the progress of AI image generators, which produce realistic images and videos from simple prompts. The technology is increasingly used to create sexually abusive imagery of children, alarming law enforcement around the world. The National Center for Missing and Exploited Children (NCMEC) received approximately 4,700 reports of AI-generated sexual exploitation imagery in 2023. A 48-year-old Tasmanian man was recently arrested for downloading AI-generated child abuse material. Detective Superintendent Frank Rayner, who has investigated abusive AI-generated content since the first report in 2022, worried that AI-generated CSAM would grow into a problem for “law enforcement not only locally but throughout the world.” His concern has since been borne out: a man was arrested in South Korea for creating 360 sexually explicit images of minors with AI image generators.
The Guardian has also raised concerns about real children being harmed by AI, since deepfakes made from images of actual children can shatter the peace of real families. In Florida, for example, law enforcement arrested a man who took photos of a young girl in his neighborhood and used AI to create child pornography from them. An already abhorrent problem has now grown into a threat to real minors, not just to fictitious people synthesized by AI.
Countering AI: The Global Crusade Against the AI-Driven Scourge of Child Exploitation
With the CSAM epidemic getting out of hand, the international community has joined forces to combat it in this new era of technological advancement. NCMEC, a non-profit organization allied with law enforcement agencies including Interpol and Europol, operates the CyberTipline, which receives reports of the sexual exploitation of children; the Stanford Cyber Policy Center, for its part, has identified hundreds of AI-generated CSAM images. Despite the overwhelming influx of reports, NCMEC and its law enforcement partners remain committed to fighting the epidemic and ensuring children’s safety, calling on legislators and AI engineers to join the effort. In addition, the Internet Watch Foundation (IWF), Europe’s largest hotline for detecting and removing CSAM from the Internet, has responded swiftly to the threat, investigating more than 11,000 AI-generated CSAM images, and has urged European policymakers to pass legislation to prevent and address the problem. However, constraints on these organizations’ investigative powers slow the fight against CSAM, and further collective action is necessary.
NCMEC’s efforts to rally the world against this issue have been quite influential. US Congressman Nick Langworthy proposed the Child Exploitation & Artificial Intelligence Expert Commission Act in April 2024 to address AI’s misuse in creating CSAM. The Federal Bureau of Investigation also recently released a public service announcement emphasizing that realistic computer-generated CSAM is illegal. In the Philippines, the Council for the Welfare of Children (CWC) responded to the emergence of AI-generated CSAM with measures to train parents and teachers to prevent these crimes. The International Centre for Missing and Exploited Children, meanwhile, reports that creating CSAM of any kind, including with AI, is criminalized in 138 countries. In 34 countries, however, CSAM is not legally defined, meaning no law may exist under which AI-generated CSAM can be prosecuted. Now that AI has become an immense threat, countries must enforce stricter CSAM regulations under their laws.
As a result of initiatives by child safety non-profit organizations such as Thorn and All Tech is Human, leading artificial intelligence companies including Google, OpenAI, and Microsoft have pledged to prevent their AI tools from being abused to harm children and create CSAM. According to Thorn, the pledges “set a groundbreaking precedent for the industry and represent a significant leap in efforts to defend children from sexual abuse as a future with generative AI unfolds.” The Wall Street Journal also noted that many companies have begun separating child-related content from data sets containing adult content to keep potential perpetrators from abusing it. As AI companies impose stricter preventative measures, the problem of AI-generated CSAM should be further mitigated.
Furthermore, an unambiguous international standard is essential to prevent the proliferation of AI-generated CSAM. The “Draft Articles on Responsibility of States for Internationally Wrongful Acts,” developed by the International Law Commission and widely treated as reflecting customary international law, allow victim states of cyber-attacks to attribute the conduct in question to the perpetrating state. They also provide a detailed standard for determining whether a wrongful act is attributable to a state and whether that state should be held accountable for the consequences. While this demonstrates the value of codified international rules, an internationally recognized definition of abusive AI content is still needed to criminalize CSAM in all countries and to give governments a clear guideline for enacting laws to prevent it.
In conclusion, given the limits of current efforts to mitigate AI-generated CSAM, the international community must come together to establish legal standards for the potential abuse of AI-generated content by drafting an international agreement that draws a clearer line between creative freedom and human rights infringement. This would be the first step toward a unified solution, putting all states on the same page in the fight against CSAM. Although current collective efforts have been relatively successful, such unification is vital to creating a safe digital world and preventing further tangible harm from the abuse of AI.



