
Across the country, people are watching as Connecticut's Senate Bill 2 (SB 2) moves toward deliberation in the House and, potentially, to the governor's desk. With SB 2, Connecticut could become home to the first comprehensive U.S. legislative framework governing how private businesses build and deploy high-risk artificial intelligence (AI) systems, setting a standard for other states to follow.

In today's evolving technological landscape, AI offers great power but demands great responsibility. AI holds the potential to advance society through applications such as early cancer detection and post-disaster aid, while promising increased efficiency and scalability for individuals and businesses. However, AI's expanding role in critical sectors like healthcare, employment, education, and finance also risks exacerbating existing biases and inequalities.

With an effective governance framework, including proper testing, guardrails, and oversight, Connecticut can harness AI’s potential to enhance society while mitigating the inherent biases often present in human decision-making processes.

If passed and signed by Gov. Ned Lamont, SB 2 would establish a risk-based approach to AI regulation that may well set a precedent for the nation. Authored by State Sen. James Maroney, the bill embodies a nuanced understanding of the risks and benefits of AI systems used to make consequential decisions affecting the lives of Connecticut residents.

As law, SB 2 would give individuals the right to know when they are subject to an automated decision and to challenge adverse outcomes, while requiring both the developers and deployers of AI systems to be transparent about their activities and to exercise reasonable care to prevent unlawful algorithmic discrimination.

SB 2 distinguishes itself not only through its content but also through a rigorous development process that incorporated perspectives from Connecticut agencies, bipartisan policymakers, and a diverse array of stakeholders. In 2023, Connecticut took a crucial first step by enacting legislation governing state agency use of AI systems and launching a working group to engage stakeholders and experts in crafting recommendations for private-sector AI use. At the same time, Maroney convened a multi-state, bipartisan policymaker working group that collaborated with experts in AI, computer science, civil society, civil rights, academia, and beyond to develop a deeper understanding of what a regulatory approach to AI governance might address and encompass.

But the collaborative process behind SB 2 didn't end when the bill was introduced. Since then, dozens of individual experts, researchers, civil society organizations, businesses large and small, healthcare entities, and trade associations, among others, have contributed to the evolution of the text the Senate passed in April. The bill also draws upon, and promotes interoperability with, leading AI governance and safety frameworks, corporate best practices, guidance from the Biden administration and federal agencies, and lessons from the European Union's AI Act.

This thorough process shows that SB 2 is a meticulously crafted response to the complexities of AI regulation and a testament to the power of inclusive policymaking, offering a model for how states can navigate this complex terrain with foresight and deliberation.

While some critics worry that such regulation could put certain businesses at a disadvantage, Connecticut's leadership in AI regulation is not an isolated endeavor; it is part of a larger movement toward responsible technology governance. Across the country, more than 50 state bills to regulate AI are under consideration, and entities such as the Federal Trade Commission (FTC) and various state attorneys general have underscored the necessity, and impending reality, of comprehensive AI regulation. New regulatory frameworks for private-sector use of AI are as inevitable as they are necessary.

By embracing SB 2, Connecticut can assert its leadership in responding to the next wave of technological revolution, setting national standards that prioritize balanced oversight, consumer protection, and ethical AI development, reflecting the state's core values. Moreover, this comprehensive and thoroughly examined proposal gives businesses the certainty they need to navigate a dynamic technological landscape.

As with any meaningful legislation, some will argue that SB 2 stifles innovation, while others will say it fails to go far enough in offering robust protections. Continued work will no doubt be needed to ensure that businesses large and small have the resources to comply with the law's requirements, and lawmakers will need to revisit the issue as the technology develops. But for the millions of people across Connecticut who may be affected by an AI system this year, the real question is which entities and individuals stand to gain, or lose, in the long run.

With success within reach, will Connecticut seize its opportunity? 

Tatiana Rice is Deputy Director of the Future of Privacy Forum’s U.S. Legislation Artificial Intelligence and Biometrics section.