The prospect of new regulations on artificial intelligence has been a hot topic at the Connecticut General Assembly for years, but advocates hope a new approach this year will finally generate results.
At the core of the debate is a fundamental tension: giving industry enough space to innovate with the emerging technology while protecting the privacy, intellectual property and civil rights of state residents.
That tension hampered previous bills. But as national regulation stalls, AI technology continues to advance. And the widening gap between the burgeoning industry and the lack of safeguards around it is fueling some urgency among legislators.
“AI is growing and accelerating and entering every aspect of life,” Senate President Pro Tem Martin Looney, D-New Haven, said during a panel at an event hosted by the Connecticut Business and Industry Association last month. “The absence of federal action leaves it to the states to be moved to fill that vacuum.”
The exact way to do that is still under debate.
This year, despite the shorter session, a bipartisan group of lawmakers is taking a different approach from years past.
Instead of introducing a single broad bill that attempts to regulate all of Connecticut’s AI infrastructure at once, legislators are targeting a cluster of specific AI-related issues including data privacy, consumer protection, online safety of minors and expanding AI knowledge and capacity among the state’s emerging workforce.
“Something has to be done, because people do use it negatively, and that negatively impacts our constituents,” said Sen. Paul Cicarella, R-North Haven, the ranking senator on the legislature’s General Law Committee, and a regular participant in AI discussions. “It’s something we really have to get our heads around.”
But striking that balance could be easier said than done. Multiple bills have been raised on the topic — including two requested by Gov. Ned Lamont — with several clearing committees this year.
Some drew objections from tech policy groups and business organizations worried about new regulation. Others were praised for supporting AI developers and adopters, even as critics argued that more could be done to protect state residents.
Meanwhile, the Trump administration’s pledge to target state laws that conflict with the administration’s national AI framework looms in the background, with some states scrapping proposals in response.
With one month left, the focus at the Capitol now shifts to a challenging task: condensing a variety of AI-related proposals into a streamlined package. And after years of failed attempts, the question is whether a General Assembly that has long been divided on AI policy can finally compromise.
A new approach
Much of the attention is on the General Law Committee, which has taken the lead on advancing AI-related bills in recent years. Work on those bills has been spearheaded by committee co-chair Sen. James Maroney, D-Milford.
Maroney authored legislation that fell short in prior sessions, including a compromise bill that died shortly after clearing the Senate last year. His efforts have largely been comprehensive “omnibus” style bills that attempt to regulate a variety of AI aspects in one piece of legislation.
Speaking to the Connecticut Mirror in December, Maroney noted that the committee would return to the topic in 2026, adding that while the ultimate goal of passing AI regulations that protect state residents would remain, the exact method would change.
“When you say AI, everyone kind of thinks of ChatGPT versus, you know, machine learning or algorithmic decision making,” he said. “The area where we want to look at is more the use than the technology.”
The approach was quickly made clear in the early days of the session, when Maroney, joined by Attorney General William Tong, announced new legislation aimed at online safety, data privacy and AI chatbot protections, with an explicit focus on Connecticut children and teens.
The proposals, inspired by pieces of regulation in other states, are included in two different bills in the General Law Committee. The first, Senate Bill 4, establishes new regulations for data brokers in the state, modifies some definitions under the Connecticut Data Privacy Act and creates a process for consumers to have their personal data deleted from data broker and data service provider registries.
The second bill, Senate Bill 5, focuses on a variety of artificial intelligence regulations, including guidelines around AI companion chatbots and employment-related decision making programs.
The bill also sets up several AI-related educational efforts and programs, including an Artificial Intelligence Policy Office. A bill with several overlapping workforce and AI “regulatory sandbox” provisions, Senate Bill 86, was requested by Lamont at the start of the session.
While there are sticking points, particularly around some of the data privacy provisions in SB 4, Cicarella said that there has been an intentional effort to limit the effect proposed AI regulations could have on businesses.
“If you look at the first couple of years of the legislation, it’s changed drastically each year,” he said. “I am definitely more supportive of the legislation before us this year than in years past.”
The difference in strategy has also been noticed by organizations like the Connecticut Business and Industry Association, which has been critical of the all-in-one approach adopted in previous years.
“It’s a shift in the conversation,” said Chris Davis, CBIA’s vice president of public policy. “What we have now this year is a more targeted approach about specific aspects of AI and how we can provide a scenario where businesses can comply and also provide that consumer protection.”

But even with the narrowed focus, the bills, SB 5 in particular, still raised similar concerns for CBIA and other policy groups worried that state lawmakers keep trying to enact a sweeping set of regulations.
“It’s kind of a Frankenstein approach, to be honest,” said Brianna January, the former northeast director for state and local government relations for Chamber of Progress, a national technology trade and policy association that is urging states to adopt a light touch on regulations. “Instead of just regulating potential harms of AI and the emerging industry in Connecticut, we see this bill trying to tackle a lot of things.”
That concern was echoed at a March public hearing on SB 5, with written testimony from organizations like NetChoice calling for the AI knowledge and workforce-related portions of the bill to be advanced alone.
“An unconstitutional law protects no one. A duplicative law confuses everyone,” Patrick Hedger, NetChoice’s director of policy, wrote in a letter submitted to the General Law Committee last month.
A focus on ‘unintended consequences’
Cicarella believes that legislation must avoid creating “unintended consequences” for AI users in the state.
The phrase comes up repeatedly in national discussions of AI policy. In Connecticut, the concern among some lawmakers, Lamont and pro-AI advocates is that with technology moving so quickly, regulation runs the risk of being too broad or confusing. If the enacted regulation feels daunting, some worry that companies will avoid Connecticut in favor of states with fewer rules.
Three areas have drawn the most attention: privacy, money and redundancy.
The first has been a concern largely in discussions of the companion chatbot provisions included in SB 5 and the online safety provisions of the second Lamont-requested bill, House Bill 5037. That bill, which would regulate how minors use social media, includes restrictions on the times of day that social media apps can send minors notifications, mandates that social media companies track the number of minors using their services, and requires platforms to display mental health warnings when a minor logs in and then at specific intervals throughout an online session.
Both bills aim to place more guardrails around technologies used by Connecticut youths, with the Lamont administration arguing that HB 5037 in particular is necessary to protect children online.
“I agree with all the restrictions we’re putting on the social bots,” Lamont said in February. “I think that’s my focus: protect the kids.”
In a landmark ruling last month, a jury in Los Angeles found that the design of social media platforms created by Meta and Google risked youths’ mental health and encouraged social media addiction.
Still, the legislative proposals have raised free speech and privacy issues. Some tech policy organizations note that, to be effective, the bills would require age verification of all social media users. Accomplishing that task would mean collecting more data about users across the state, just as legislators seek to further strengthen the state’s data privacy laws and help consumers pull their information offline.
Lawmakers counter that any information gathered during the age verification process will be deleted immediately after a verification attempt is made.
Money, both for organizations looking to comply with AI regulations and for the state agencies tasked with enforcement, could also pose a problem.
Smaller companies have been concerned that complying with regulations will be too expensive. Critics argue that regulation could ultimately price these smaller businesses and technology adopters out of using AI in the state.
This looms especially large in discussions of regulation around what are known as “automated employment-related decision systems,” a category of AI tools used to assist with workplace decisions.
In the Labor and Public Employees Committee, lawmakers advanced Senate Bill 435, a bill that would require significant disclosures around when these tools are used, while giving workers and job applicants the ability to view and request a range of information about how the tools were involved in worker assessments and hiring processes. The topic is also covered in SB 5.
And for state agency heads gaining new enforcement-related tasks under the proposed legislation, those new responsibilities and programs aren’t accounted for in the governor’s midterm budget, prompting questions of exactly how they would be funded.
In the Commerce Committee, lawmakers introduced Senate Bill 417, a bill calling for the Department of Economic and Community Development, or DECD, to plan a program to support and expand the number of small businesses using and creating AI software.
When the bill received a public hearing last month, agency Commissioner Daniel O’Keefe, a former tech investor and strong proponent of AI technology in the state, noted that DECD currently lacks both the funding and the additional staff to implement the program.
The bill was still passed out of committee with unanimous support.
While Maroney maintains that his committee’s bills come with minimal costs, several of the AI measures do not yet have fiscal notes, leaving an open question of whether the state can afford to implement them.
Beyond that, the final problem highlights a more fundamental concern from some opponents of regulation: that as the state builds out its AI infrastructure, new laws and a variety of new state programs could create a number of redundancies among various groups working on AI in the state.
Critics also note that AI-based infractions could already be addressed by a variety of state laws, pointing to a February AI memo from the Connecticut Attorney General’s office to support their argument that new AI-specific rules and penalties are unnecessary.
Federal policy framework adds new wrinkle
With just weeks left and multiple bills in line, lawmakers have limited time to determine which will be put up for a vote and what the final versions of those bills will look like.
Much of that work falls on Maroney in particular, with lawmakers unsure of AI technology or policy often turning to him to provide guidance. Several lawmakers deferred to Maroney when the Connecticut Mirror asked them to comment on aspects of AI legislation introduced this year.
The senator did not respond to multiple requests for comment on this year’s bills.
In focusing on AI and data privacy, Maroney has placed himself at the forefront of one of the most contentious — and active — policy debates in the country. According to multistate.ai, an online tracker for proposed state regulations, more than 1,500 AI-related bills have been introduced so far in states this year, already surpassing the entirety of what was introduced in 2024.
The topic has become more prominent as the Trump administration seeks to preempt state AI laws, arguing that the federal government must take the lead by establishing a national framework. Last year, the president supported a proposal in Congress that would set a 10-year moratorium on state AI regulations. The measure, which was included as part of an earlier version of the president’s One Big Beautiful Bill Act, was ultimately removed.
In December, the White House released a long-expected executive order on artificial intelligence that threatened to take action against states that adopted “onerous” regulations.
And in March, the administration issued its national AI framework, focusing on protections for children, calling for support for small businesses and encouraging Congress to adopt legislation to address energy costs. The proposal, which has been called “empty” by some policy experts due to vague language and a lack of penalties, gives tech companies considerable freedom.

The measure faces an uncertain future in Congress, and legislators in states including Connecticut have continued to work on regulations despite the president’s call to stop.
Even so, the threat of presidential action has further bolstered arguments from some tech policy groups worried about a patchwork of state AI laws.
“A lot of these questions that bills are trying to really answer, which is like, what are the limits of safety and what are the limits of security implications and things like that, those are really national questions that need to be answered,” January said.
In an interview before the release of the national framework, she explained that comprehensive AI regulations in particular could face an uncertain future. Omnibus efforts have stalled in recent years in Connecticut and other states, and even when comprehensive proposals are adopted, as they were in Colorado in 2024, questions over implementation have led to delays and revisions.
For some experts, a better approach could be something like New York’s RAISE Act, a measure passed last year that focuses on the most serious AI-related violations, including large-scale loss of life or significant property damage “through either: (a) creation or use of … weapons, or (b) AI engaging in conduct with limited human intervention that would constitute a crime requiring intent, recklessness, or gross negligence if committed by a human.” The law, which went into effect in March, focuses on a defined group of “large developers” who have created “frontier models,” larger AI programs of a specific computational size and cost. The law includes sizable civil penalties, up to $30 million, for violations.
The New York measure shares several key definitions and requirements with California’s Transparency in Frontier Artificial Intelligence Act, which was adopted last November. The new laws will be closely watched by tech policy groups and other states to see if they offer a viable framework for AI policy.
As Connecticut proceeds with its own multifaceted approach, the looming question is whether this year’s changes are enough to get something over the finish line. Even with the hurdles before them, supporters of AI regulation hope the answer is yes.
That goal has been emphasized by Maroney during discussions of SB 4 and SB 5 during the session, including at a General Law Committee meeting last month.
“This is a caucus priority,” he said.