Connecticut Attorney General William Tong speaks on April 23, 2026 about legislation that would allow residents to sue companies for spreading AI-generated sexual images. Credit: Emilia Otte / CT Mirror

Connecticut Attorney General William Tong and members of the legislature’s Judiciary Committee want to make it possible for state residents to bring legal action against people and companies who facilitate the spread of nonconsensual AI-generated sexual images. 

Lawmakers are currently considering a bill, HB 5312, that would allow an individual who has been the subject of an AI-created sexual image to sue the person who publicizes it. It would also allow the state attorney general to bring lawsuits against companies that fail to promptly remove such images from their platforms when asked.

“I think everybody knows what we’re talking about because of the proliferation of images on websites, videos on websites — thousands, millions, probably billions of images that are on the internet without the consent of the subject,” Tong said during a press conference on Thursday.

Tong referenced a CNN exposé that revealed a network of websites and group chats on platforms like Telegram in which men uploaded videos of drugging and raping their wives and advised others on how to do the same.

State Rep. Craig Fishbein, R-Wallingford, who has worked with Tong on the legislation, said he’d heard stories on a podcast about girls whose pictures were taken at a party and then manipulated and distributed throughout their school.

“This is good-sense practice, to put tools in the crux of our attorney general’s arm so that he can use it to save not only children, but adults also,” Fishbein said.

Under the proposal, a person has two years from the time they discover the image has been spread to file a lawsuit. That person can also request anonymity during the proceedings.

Tong is part of a coalition of 35 attorneys general who sent a letter to xAI calling on the company to stop Grok, its AI chatbot, from producing nonconsensual sexual images, take down the images it has already created and suspend users who have created those images. 

Last year, Congress passed, and the president signed into law, the Take It Down Act, which makes it a federal crime to spread nonconsensual AI-generated sexual images and requires online platforms to remove these images within 48 hours of being flagged. 

“The rate at which our world is changing and making it more unsafe for people who are victims of this type of activity and crime — the curve … is changing at an exponential pace,” Tong said.

A bill passed by the state legislature last year made it a class D misdemeanor to transmit an AI-generated sexual image, and a class C misdemeanor if the person sent the image to multiple people on a virtual platform.  

But Tong said adding the right to sue would allow people to seek compensation for the devastation of having a sexual image or video of them spread online.

“There’s not only embarrassment and humiliation. There are often severe financial ramifications. People lose their jobs, they lose their kids, they lose their housing,” Tong said.

Rep. Steven Stafstrom, D-Bridgeport, said the bill was an attempt to make the law catch up with evolving technology.

“Unfortunately, oftentimes these pictures and videos can be really indistinguishable from real life. I mean, I think we’ve all had the occasion of scrolling social media and seeing an image, or having one sent to us, and saying, ‘Wait, what? Huh?’ And then you realize it’s an AI-generated image,” Stafstrom said.

The bill would require social media or other online platforms to remove any such image within 48 hours of being asked to do so. Companies that fail to comply could face fines of up to $25,000 per day and could be sued by the attorney general.

Beth Hamilton, executive director of Connecticut Alliance to End Sexual Violence, said she hopes allowing these actions to move forward in court, and the threat of a $25,000 daily fine, would be enough to persuade companies to remove these images. She said people are often unable to convince companies to take down images no matter how many times they make requests. 

Hamilton said the majority of cases she sees don’t involve AI-generated images. More often, someone shares an image consensually, and it is then manipulated or circulated without the person’s consent.

“I think technology-facilitated abuse continues to be something that is really rampant. And the challenge, I think, on the other side of that, is that oftentimes it’s not taken as seriously as other types of abuse,” Hamilton said. She added that survivors of this type of abuse often experience trauma and PTSD similar to what they might experience if they had been a victim of sexual violence.

This is not the only bill Tong and lawmakers have been pushing to promote online safety and regulate artificial intelligence.

On Wednesday, the Senate passed a sweeping bill focused on AI regulation that would, among other things, require chatbots that can simulate human-like interactions to include a method for detecting expressions of self-harm or suicidal thoughts and for directing people to mental health resources. The chatbots would be prohibited from mimicking a romantic relationship, encouraging harmful behavior or engaging in a sexually explicit relationship.

The same bill would also place parameters around minors’ use of social media, including required age verification and parental permission for sites that use personalized algorithms, mental health warnings that pop up every three hours, and restrictions limiting notifications to the hours between 8 a.m. and 9 p.m.

Emilia Otte is CT Mirror's Justice Reporter, where she covers the conditions in Connecticut prisons, the judicial system and migration. Prior to working for CT Mirror, she spent four years at CT Examiner, where she covered education, healthcare and children's issues both locally and statewide. She graduated with a BA in English from Bryn Mawr College and an MA in Global Journalism from New York University, where she specialized in Europe and the Mediterranean.