AI-Generated CSAM & the Laws Currently Influencing Prosecution
- Chloe Mills
- Oct 10
- 9 min read
Child pornography, now referred to as child sexual abuse material (CSAM), represents a major area of crime across the nation. According to the National Center for Missing & Exploited Children, 36.2 million reports related to child sexual exploitation were made to its CyberTipline in 2023.[[1]] This represents a 12% increase from the previous year and a more than 300% increase since 2021.[[2]]
Although CSAM has existed for decades, we are now facing a new era of CSAM: artificial intelligence (AI)-generated CSAM. Today, technology has blurred the line between what is real and what is AI-generated. With this vast progression in technological capacity, the legal system faces new and uncharted territory. Specifically, no Supreme Court ruling has directly addressed this issue, and attorneys general around the country have gone so far as to call on Congress to study and update the laws addressing CSAM.[[3]] Thus, with no clear statutes or parameters to guide them, prosecutors must learn to interpret past federal CSAM cases to fit AI-generated CSAM. The following discussion examines the key Supreme Court cases on CSAM and explores how the government may use them to punish creators of AI-generated CSAM.
Artificial Intelligence and the Creation of Child Sexual Abuse Material
Understanding how generative AI models are trained is vital to assessing both their capabilities and the risks they pose, including the creation of CSAM. Generative AI models are “trained” when the user inputs data into the model so that it learns desired patterns and relationships.[[4]] Training allows the user to create a personalized tool that performs tasks to achieve the user’s objectives, such as generating new content.[[5]] During training, the user identifies errors and corrects them so that the AI produces more desirable results.[[6]] The data used for training may include images, text, audio, or video.[[7]] This data may come from real sources, such as uploaded images, social media, or the internet, or may be synthetically generated.[[8]]
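The cycle the paragraph above describes (feed in data, measure error, correct) can be sketched in miniature. The toy example below is purely illustrative and not any real generative model: a hypothetical single-parameter “model” learns the pattern hidden in its training data by repeatedly correcting its own errors, which is the basic idea behind training at any scale.

```python
# Minimal sketch of the train-on-data, measure-error, correct loop.
# Hypothetical toy model with ONE adjustable weight; real generative
# models adjust billions of parameters the same basic way.

def train(pairs, steps=1000, lr=0.01):
    w = 0.0  # the model's "knowledge" starts empty
    for _ in range(steps):
        for x, target in pairs:       # user-supplied training data
            pred = w * x              # model produces an output
            error = pred - target     # compare output to desired result
            w -= lr * error * x       # nudge the weight to reduce the error
    return w

# The training data encodes the desired pattern (output = 2 * input);
# after training, the weight converges toward that pattern.
weight = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(round(weight, 2))  # → 2.0
```

The key point for the legal discussion is that the model’s behavior is entirely a product of its training data: change the examples fed in, and the learned pattern changes with them.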
CSAM can be created in three primary forms through the use of generative AI.[[9]] The first form is abuse-trained CSAM, where the model was trained on photographic CSAM.[[10]] The second is photorealistic CSAM that is indistinguishable from photographic CSAM, whether or not the model’s training data included photographic CSAM.[[11]] Finally, the third form is morphed-image CSAM, which involves an identifiable child, regardless of whether the training data included photographic CSAM or whether the morphed image is photorealistic.[[12]] The differences among these forms can vastly change prosecutors’ strategy, as seen through the pivotal Supreme Court precedents regarding CSAM.
History of CSAM in the Court
To better understand prosecutors’ strategies regarding CSAM, it is important to discuss the history of CSAM in the Supreme Court and how Supreme Court precedent may limit the prosecution of AI-generated material. CSAM was initially prosecuted under the obscenity doctrine. Under the First Amendment, obscenity is a category of unprotected speech, as recognized in Miller v. California.[[13]] In Miller, the Court set out a three-part test for obscenity, which considers: “(a) whether ‘the average person, applying contemporary community standards’ would find that the work, taken as a whole, appeals to the prurient interest; (b) whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law; and (c) whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.”[[14]] This approach proved flawed when applied to CSAM, however, because such material was not always obscene under the newly created test.[[15]]
After Miller, the Court decided New York v. Ferber, where it examined whether a New York statute that did not require an obscenity analysis was constitutionally sound.[[16]] The Court held that CSAM involving actual children is categorically unprotected, regardless of obscenity.[[17]] In its reasoning, the Court cited the harm to the child, the states’ strong interest in protecting their children, and the need to deter economic gain from CSAM.[[18]]
Next, in 1996, Congress enacted the Child Pornography Prevention Act, a CSAM statute that expanded the definition of prohibited material to include images that “appeared” to depict minors.[[19]] Critics of the definition argued that it prohibited an entirely new area of speech that was neither obscene nor CSAM under Ferber.[[20]] This issue subsequently came before the Supreme Court in Ashcroft v. Free Speech Coalition, and the Court held that “appeared” was far too broad, offending both the First Amendment and the Due Process Clause of the Fifth Amendment.[[21]] The Court drew this conclusion on the premise that no real children were directly harmed in the creation of this form of CSAM.[[22]] Under this analysis, the Court also noted that “virtual CSAM” was constitutionally protected, as it is “not intrinsically related to the sexual abuse of children” and “creates no victims by its production.”[[23]] The Court did mention morphed child pornography, stating that such images were likely not afforded First Amendment protection because a real child would be involved, but it declined to rule on the question.[[24]]
After the ruling in Ashcroft, advocates fought to add images “indistinguishable from” those of minors to the definition of CSAM.[[25]] This push coincided with the emergence of increasingly sophisticated computers and the growth of the internet. In response to this growing concern, Congress passed the Prosecutorial Remedies and Other Tools to End the Exploitation of Children Today (PROTECT) Act of 2003, which added a new “child obscenity” statute.[[26]] The new statute covered “a digital image, computer image, or computer-generated image that is indistinguishable from that of a minor engaging in explicit conduct.”[[27]] Further, Congress defined “indistinguishable from” to mean that an ordinary person viewing the material would conclude that it depicts an actual minor engaged in sexually explicit conduct.[[28]] This caused an uproar among critics, who argued that Congress was circumventing the holding in Ashcroft because the “indistinguishable from” language was no different from the “appears to be” clause the Court had struck down.[[29]]
The Supreme Court revisited the issue in United States v. Williams.[[30]] The Court reviewed the portion of the PROTECT Act that criminalizes anyone who “advertises, promotes, presents, distributes, or solicits through the mails, or in interstate or foreign commerce by any means . . . purported material in a manner that reflects the belief, or that is intended to cause another to believe, that the material or purported material is, or contains an obscene visual depiction of a minor engaging in sexually explicit conduct; or a visual depiction of an actual minor engaging in sexually explicit conduct.”[[31]] The Supreme Court ruled that the statute did not offend the Constitution, reasoning that if the speaker intends for the recipient to believe the material is real CSAM involving real children, then the speech is a crime.[[32]] Thus, the Court accepted the statute’s use of an intent element in punishing the solicitation or distribution of CSAM, regardless of whether the proffered material depicted actual children; the Court stated that a jury would need to determine the element of intent.[[33]] The Court also stated in its analysis that virtual CSAM is protected if no real children are involved and the material is not solicited or presented as though they were.[[34]] The Court did not directly invalidate the PROTECT Act’s “indistinguishable from that of a minor” statutory language; however, outside of the punishment of solicitation, the Court reaffirmed the real-child harm requirement set out in Ferber for CSAM-related crimes.[[35]]
Application to AI-generated CSAM
When analyzing a case for prosecution, the first question is: does this AI-generated CSAM contain an identifiable minor, or was it created using one? If yes, prosecutors begin with Ferber and Ashcroft, which held that CSAM involving actual children is categorically unprotected by the First Amendment, regardless of obscenity.[[36]] With morphed-image CSAM, the focus is on the output, because the child used to create the image is identifiable in that image.[[37]] Notably, states are already having success with this rationale in cases where actual children are superimposed into pornography.[[38]] For abuse-trained CSAM, the question is whether the input used to create the material involved a real child.[[39]] Because the Court has cited the importance of protecting children and the states’ responsibility for keeping children safe, it would likely deem this material CSAM and unprotected speech.[[40]] On the other hand, it could be argued that if the program was trained on CSAM but did not produce an identifiable minor, no direct harm to a child occurred.[[41]]
The next analysis falls under Williams, where the image is “indistinguishable from” that of a minor engaging in sexual conduct.[[42]] Photorealistic AI-generated CSAM that was not trained on images of actual child abuse would be evaluated under this framework.[[43]] A prohibition on this type of CSAM might be found unconstitutional because no child is harmed in its creation.[[44]] As stated in Williams, virtual CSAM of this kind is constitutionally protected speech unless it is proffered as CSAM involving real children.[[45]] Given this holding, the only viable way to determine whether this form of AI-generated CSAM is unprotected would be to conduct the obscenity analysis from Miller.[[46]] If the content satisfies the elements of the Miller test, as most CSAM would, then it is constitutionally unprotected; however, this leaves open a loophole for CSAM that cannot, under current precedent, be constitutionally prohibited.[[47]]
Conclusion
As seen, AI-generated CSAM is a complex and widely discussed topic. In the coming years, laws will inevitably be enacted either to control this phenomenon or, conversely, to protect it. Given the technological advances in AI-generated imagery, perhaps the Court will one day recognize the harm CSAM causes to children even where no actual children were involved in its production. After all, a culture flooded with realistic images of children being sexually assaulted (images that children themselves are likely to view, given the easy access to internet pornography) could be viewed as harmful to children as a group, as it could culturally legitimate or normalize the rape of children. It will be interesting to see how legislators, prosecutors, and courts handle this increasingly relevant issue.
[1] 2024 CyberTipline Report, Nat’l Ctr. for Missing & Exploited Children (2024), https://www.missingkids.org/gethelpnow/cybertipline/cybertiplinedata.
[2] Id.
[3] 54 State Attorneys General Call on Congress to Study AI and its Harmful Effects on Children, Nat’l Ass’n of Attorneys Gen. (Sept. 5, 2023),
https://www.naag.org/wp-content/uploads/2023/09/54-State-AGs-Urge-Study-of-AI-and-Harmful-Impacts-on-Children.pdf (discussing a letter urging Congress to study and update laws on AI child exploitation signed by 54 states and U.S. territories).
[4] Derek Brault, AI Model Training: What It Is and How it Works, Mendix (2025), https://www.mendix.com/blog/ai-model-training/.
[5] Id.
[6] Id.
[7] Id.
[8] Id.
[9] Riana Pfefferkorn, Addressing Computer-Generated Child Sex Abuse Imagery: Legal Framework and Policy Implications, The Digital Social Contract: A Lawfare Paper Series (Feb. 2024).
[10] Id.
[11] Id.
[12] Id.
[13] Miller v. California, 413 U.S. 15, 15 (1973).
[14] Id. at 24.
[15] See New York v. Ferber, 458 U.S. 747, 761 (1982) (explaining that under the Miller test, whether a work appeals to the prurient interest of the average person has no connection to whether a child has been physically or psychologically harmed during the production of the materials, and that it is irrelevant to the child victim whether the material has a literary, artistic, political, or social value).
[16] Id. at 749.
[17] Id. at 764.
[18] Id. at 761.
[19] Pfefferkorn, supra note 9, at 4.
[20] Id. at 5.
[21] Ashcroft v. Free Speech Coalition, 535 U.S. 234, 239 (2002).
[22] Id. at 242.
[23] Id. at 239.
[24] Id. at 242.
[25] Pfefferkorn, supra note 9, at 6; see id.
[26] Prosecutorial Remedies and Other Tools to End the Exploitation of Children Today Act of 2003, Pub. L. No. 108-21, 117 Stat. 650 (2003).
[27] Id.
[28] Pfefferkorn, supra note 9, at 6.
[29] Id. at 7; see Ashcroft, 535 U.S. 234.
[30] United States v. Williams, 553 U.S. 285, 288 (2008).
[31] Id. at 303.
[32] Id.
[33] Id.
[34] Id. at 297.
[35] See New York v. Ferber, 458 U.S. 747, 749 (1982).
[36] See id.; Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002).
[37] Id.
[38] William Arnold, AI and CSAM: A Look at Real Cases, Cellebrite (July 17, 2024), https://cellebrite.com/en/ai-and-csam-a-look-at-real-cases/ (discussing United States v. Mecham, 950 F.3d 257 (5th Cir. 2020)) (“Clifford Mecham superimposed the faces of actual children onto explicit photographs of adults, making it appear as if minors were engaged in sexual activity. . . . Drawing on Ashcroft v. Free Speech Coalition, the court stated, ‘no child is involved in the creation of virtual pornography,’ and questioned whether morphed CSAM was close enough to real CSAM to be considered unprotected speech. Ultimately, the state court ruled that since the pornography depicted an actual child, it fell outside First Amendment protections.”).
[39] Pfefferkorn, supra note 9, at 11.
[40] Ferber, 458 U.S. at 749.
[41] Pfefferkorn, supra note 9, at 11.
[42] United States v. Williams, 553 U.S. 285, 313 (2008).
[43] Pfefferkorn, supra note 9, at 12.
[44] Id.
[45] Williams, 553 U.S. at 313.
[46] Pfefferkorn, supra note 9, at 12.
[47] Id.