
Sen. Paul Cicarella, R-North Haven, and a GOP lawyer, Melanie Dykas, review late revisions to the AI bill with the measure's sponsor, Sen. James Maroney, D-Milford. Credit: Mark Pazniokas / CT Mirror

When Connecticut lawmakers exited the state Capitol at the end of the 2025 session, they left behind some unfinished business, particularly around the state’s plan for regulating companies’ use of artificial intelligence, ensuring data privacy and establishing consumer protections around emerging technologies. 

For the second year in a row, legislators were unable to agree on the direction of state AI policy, with pro-regulation lawmakers in the state Senate and the more regulation-shy Lamont administration disagreeing over the best course of action.

In the months since, the question of what Connecticut should do about AI has only become more pressing. In December, the Trump administration issued an executive order in the hopes of discouraging states from regulating the technology. Meanwhile, a growing number of businesses are incorporating artificial intelligence into their operations, and investment in the global AI market has reached hundreds of billions of dollars.

Without federal legislation, state legislatures — in Connecticut and elsewhere — are facing pressure to address everything from the ethics of AI use to the environmental impact of data centers and concerns over a dot-com-like “bubble.”

And as “generative AI” — programs trained on vast datasets of existing information to produce new text, images and other content, powering technologies like ChatGPT, Google’s Gemini, and Microsoft’s Copilot — is increasingly used in everyday life, the task facing regulators is only getting more complicated.

So far, few states have reached common ground on how to write the rules.

Pro-regulation lawmakers have proposed a wave of new measures, arguing that guardrails on the rapidly changing technology will provide necessary protection to constituents worried about losing their privacy and intellectual property.

Opponents say the ever-growing list of AI “dos and don’ts” could have a chilling effect on local economies, curbing AI adoption and encouraging technology companies and innovation-focused businesses to move to friendlier markets.

In Connecticut, the debate is unfolding just as state economic development officials launch multiple efforts to invest in artificial intelligence and emerging technologies, likening the initiative to a second industrial revolution. 

With the 2026 legislative session quickly approaching, state lawmakers believe that the coming months provide a chance to define how Connecticut will approach the technology moving forward. Leaders of last year’s regulation efforts say the state can’t afford to miss its next chance.

“There’s definitely a debate over how strong our AI laws should be,” said Senate Majority Leader Bob Duff, D-Norwalk. “But I will tell you that if you talk to average people on the streets, they’re very concerned about AI and how it’s going to impact them.”

Lawmakers are gearing up for another swing at AI regulations

The Connecticut General Assembly’s record on passing AI-related measures is mixed. In recent years, state lawmakers have been able to push through a number of proposals, including data privacy regulation, new funding for AI training and education programs, and the criminalization of deepfake revenge porn. 

Comprehensive legislation has been harder to get over the finish line.

Take Senate Bill 2, a wide-ranging proposal that sought to regulate how businesses use artificial intelligence, calling for the Department of Economic and Community Development to create a “regulatory sandbox” and seeking to limit the effects of algorithm-based discrimination. The bill was supported by Democratic leadership in the state Senate and first emerged in 2024, after a state task force released a 255-page report on AI.

Gov. Ned Lamont opposed the bill, arguing that the measure would contribute to a fractured landscape of state AI regulations. State officials also suggested that lawmakers were acting too early, potentially scaring off future innovation in Connecticut. 

Ultimately, S.B. 2 was amended to remove many of the business-related provisions and all references to algorithmic discrimination. While the amended bill passed the Senate with bipartisan support, the measure did not receive a vote in the House before the end of last year’s session.

For supporters of the legislation, the failure was frustrating, especially after last-minute amendments shifted the bill away from some of its original intent for the sake of broader appeal. “The bill had changed and become, I would say, more scaled back in the protections,” said state Sen. James Maroney, D-Milford, the author of Senate Bill 2 and a leading voice in the legislature on data privacy and AI. 

“By the end of last year, [S.B. 2] was more of a disclosure bill, to use if AI was being used to make an important decision about your life,” he said. 

In a December interview with the Connecticut Mirror, Maroney outlined his views on the state’s AI needs. He noted that he is far from an opponent of artificial intelligence, instead casting his desire for regulation as supporting the guardrails that will help structure the state’s future innovation efforts. 

He said last year’s proposal would have provided those guardrails by accomplishing a multifaceted goal: “protecting” state residents, “promoting” responsible AI development, and “empowering” state government to use AI in ways that will benefit constituents. 

Senate lawmakers intend to continue their efforts to “protect, promote, and empower” this year, planning a package of data privacy and consumer protection reforms alongside support for AI training and workforce development. One such bill has already been announced: a ban on facial recognition software in retail stores. 

Both Maroney and Duff, the bill’s expected sponsors, said the measure was inspired by news that Wegmans Food Markets, a popular grocery chain, is using facial recognition software at some of its locations, including in its New York City grocery store. While the company said it’s not sharing the data with any third parties, the news still sparked concern over the use and storage of biometric data.

“The facial recognition and the biometrics and voice recognition, I think, are issues that are really much different than a camera looking for a shoplifter,” Duff said. 

The bill’s sponsors say they hope to enact the ban before facial recognition software becomes widely used in the state. Earlier in January, reporting from CT Insider found that ShopRite, a New Jersey-based grocery chain with several locations in Connecticut, was using the software in some of its local stores.

Senate Majority Leader Bob Duff, D-Norwalk Credit: Shahrzad Rasekh / CT Mirror

As new legislation takes shape, businesses in CT are wary

A spokesperson for the governor said he wants to focus on regulations that “protect the privacy and safety of Connecticut residents.”

“Governor Lamont continues to be supportive of any measures that protect the safety of residents when using AI, as well [as] initiatives to upskill AI research and job training,” Rob Blanchard, Lamont’s spokesperson, said in an emailed statement. “While the federal landscape surrounding AI regulation continues to evolve, the Governor will continue to prioritize safety and education.”

State lawmakers who support regulating AI and data privacy told the Connecticut Mirror that their efforts are about ensuring state residents can engage with artificial intelligence on their own terms. In their view, regulation is both commonsense and necessary, and does not have to result in serious negative impacts for local businesses.

Some business leaders see things differently. The Connecticut Business and Industry Association, the state’s largest trade group, has been critical of efforts to strongly regulate AI use, arguing that new AI policy could hinder innovation at a time when the economy is stagnant, energy and other costs weigh on companies, and small business owners are voicing concern and frustration over the state’s business climate.

The adoption of new regulations on businesses “puts us at much more of a risk of being a less business-friendly state, and can really impact investment in the state, and the ability for small businesses to want to operate here in the state,” said Chris Davis, CBIA’s vice president of public policy. “That can really hinder willingness to take advantage of the beneficial sides of artificial intelligence, the efficiencies that improve productivity and increase tax revenue for the state and really grow our economy.”

Davis said his concerns largely boil down to three points. First, there is a concern that proposed regulations in the state are blurring the lines between artificial intelligence and data privacy, creating a steady “creep” of new regulations.

Next is the question of how the state might enact and enforce policies, particularly those addressing algorithmic discrimination and the use of impact assessments to track businesses’ employment outcomes.

Research has found that because of how AI gathers and uses already available information, some of which can contain biased and inaccurate data, AI systems can produce outputs that reinforce discrimination against marginalized communities. That can cause harm to people based on their age, race, and gender. The issue is currently playing out in the courts through Mobley v. Workday, a lawsuit challenging some AI-based hiring systems as discriminatory.

Concerns over AI bias were a component of last year’s legislative debate, with some lawmakers arguing that failing to address algorithmic bias would leave a massive “hole” in any state legislation.

Addressing algorithmic bias has proven to be a major focus in statehouses; more than 20 states introduced such measures in 2025.

The push to address algorithmic discrimination through specific and repeated assessment was of particular concern to businesses in Connecticut, Davis said, because it suggested that “every business is discriminating unless they can somehow prove that they’re not.” Davis said federal policy and state law — the Connecticut Fair Employment Practices Act, in particular — already require businesses not to discriminate.

Ultimately, Connecticut lawmakers removed references to algorithmic discrimination from last year’s bill.

Davis’ final concern is the direct result of the other two: that by creating a wave of new regulations and then requiring businesses to keep track of how they are complying with them, the state could inadvertently limit AI growth by creating a system that is overly complicated, expensive and mired in paperwork. 

Some of these concerns, along with a growing business interest in having input on new state policies, are part of why CBIA recently launched a Technology Council, a group that will review and offer industry perspectives on proposed state technology policy. The group is expected to be active in the coming year.

Davis declined to discuss the pending facial recognition bill or other possible legislation that could emerge in the session, noting CBIA would prefer to comment after bills are introduced. Still, he said he hopes lawmakers will avoid enacting anything too rigid so that businesses have flexibility.

“We’re in a situation where we need to be able to find ways to be more productive and more efficient here in the state,” he said. “And AI has that opportunity.” 

Chris Davis, vice president of public policy for the Connecticut Business and Industry Association, speaks with State Treasurer Erick Russell and former state Sen. John McKinney at the organization’s 2024 Economic Summit and Outlook. Credit: Courtesy of CBIA

States are leading the way on AI regulation. The federal government wants to change that.

Asked about the ideal form of AI regulation for the business community, Davis said business and industry concerns are largely rooted in the piecemeal nature of state action. If each state adopts differing levels of AI and data privacy regulation, that would make it difficult for businesses and consumers alike to navigate issues across state lines.

More specifically, there is concern that Connecticut could end up on the stricter side of the regulatory divide, and that companies looking for looser standards might move somewhere else. This is part of why S.B. 2 proved controversial last year, and was a factor in why the pro-business governor voiced his preference for letting other states take the lead on adopting AI regulations.

The earliest adopters of comprehensive AI regulation have also run into their own troubles. 

Colorado, for one, has emerged as a national test case. In 2024, the state enacted the Colorado Artificial Intelligence Act, a comprehensive regulatory measure addressing algorithmic discrimination. It was the first broad measure approved at the state level, and it has been viewed as a model for other states looking to adopt regulations.

The fairly new law continues to be a source of controversy ahead of its expected implementation later this year, with supporters and opponents remaining at odds as the state braces for higher-than-expected implementation costs. Colorado lawmakers are looking to revise the law in the 2026 session.

The continued discussion and delays in Colorado offer an early lesson: lawmakers in other states hoping to adopt comprehensive AI regulation will need to establish a variety of technical standards, from concise regulatory definitions to easily navigated financial frameworks and clearly structured review processes for businesses.

At this point, many states seem more interested in adopting smaller, more incremental bills over large legislative packages. According to the National Conference of State Legislatures, almost every state considered an AI or consumer privacy bill in 2025, with further action expected in statehouses this year. 

For now, Connecticut also seems likely to take a more targeted approach in 2026, and early discussions at the start of the session are expected to focus on data privacy.

“Whenever you bring up privacy issues, there’s a lot of things that we can talk about,” Duff said.

As Connecticut lawmakers work through these questions, the federal government is looking to have its own say on state AI efforts. The Trump administration’s December executive order warned states away from AI regulations, arguing that a patchwork of regulation could negatively affect interstate commerce. The administration instead supported a “carefully crafted national framework,” a single federal standard establishing national rules on AI and related consumer protections.

The order also threatened to pull back leftover broadband deployment funds from states that have passed “onerous” laws around AI. 

The executive order arrived months after a previous effort to curtail state AI efforts failed in Congress, when lawmakers removed a proposed ten-year moratorium on state AI regulations from an earlier version of the president’s One Big Beautiful Bill Act over the summer.

President Donald Trump signs an executive order relating to AI in the Oval Office of the White House, Thursday, Jan. 23, 2025, in Washington. Credit: Ben Curtis / AP Photo

Still, federal efforts to cut off state legislation may not have much of a chance. “A lot of the things in the executive order are — I don’t want to say they’re not enforceable, but they don’t actually do that much,” said Gowri Ramachandran, the director of elections and security for the Brennan Center for Justice, a legal and policy think tank housed at New York University’s School of Law. She noted that state AI laws are likely on solid legal ground, adding that the administration lacks the power to directly take legal action against state measures.

In Connecticut, lawmakers supporting new regulations say that in the absence of federal leadership, it is up to states to help put boundaries on artificial intelligence technologies. “Based on past precedent, there will not be a national standard,” said Maroney, who joined Duff and other state lawmakers in signing a letter criticizing the president’s AI executive order last month. “We haven’t seen any federal laws between 1998 and last year.” 

And as AI technology stands to see increased adoption in the near future, legislators say waiting any longer would be a mistake. 

“By not addressing or regulating in some way, shape, or form artificial intelligence, we make the same mistake that we did 30 years ago, when we did not put any kind of regulation or boundaries around the internet,” Duff said. “That’s a mistake to our society, to our country.” 

P.R. Lockhart is CT Mirror’s economic development reporter. She focuses on the relationship between state economic policy, business activity, and equitable community development. P.R. previously worked as an economic development reporter in West Virginia for Mountain State Spotlight, where she covered inequality, workforce development, and state legislative policy. Her career began in Washington, D.C., with fellowship and staff writer roles at Mother Jones and Vox. P.R. graduated with a degree in psychology and a certificate in policy journalism and media studies from Duke University.