Sweeping Artificial Intelligence Bill Stalls in House


Sweeping legislation regulating artificial intelligence in the private sector passed the Senate last month, but the bill died when it was not called for a vote in the House by the General Assembly’s May 8 deadline.

The decision not to run the bill came shortly after Gov. Ned Lamont told reporters he would veto the legislation if it reached his desk for signature.

SB 2 would have been the first statute in the country regulating AI applications developed and deployed by private industry.

The bill, which passed the Senate on a 24-12 party-line vote, also criminalized deceptive synthetic media in elections and the nonconsensual dissemination of synthetic intimate images, and created a number of AI-related workforce development programs.

Sen. James Maroney (D-Milford), the bill’s chief proponent, developed the regulatory framework in close collaboration with a bipartisan group of legislators from other states.

Colorado lawmakers raised a nearly identical piece of legislation, which is currently pending before that state’s legislature. It’s unclear where Colorado Gov. Jared Polis stands on the current iteration of the bill.

Working Group

SB 2 was a continuation of the work conducted by the Connecticut Artificial Intelligence Working Group established last year.

The working group, composed of legislators, industry representatives, scientists, and academics, was created as a result of Public Act 23-16.

The group was tasked with making policy recommendations regarding the ethical and equitable use of AI by state government and private industry, and with assessing the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights.

While the working group established consensus around many policy recommendations that made their way into SB 2 (e.g., synthetic media in elections, workforce development, deepfake porn), it failed to reach consensus on regulations governing private sector development and deployment.

Despite that lack of consensus, sections 1-7 of the latest version of the bill that passed the Senate contained sweeping measures establishing a number of reporting requirements for developers and deployers that utilize “high risk [AI],” defined as “any [AI] system that, when deployed, makes, or is a substantial factor in making, a consequential decision.”

In turn, “consequential decision,” a definition that changed multiple times through amendments, meant “any decision that has a material or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, (A) any criminal case assessment, any sentencing or plea agreement analysis or any pardon, parole, probation or release decision, (B) any education enrollment or opportunity, (C) any employment or employment opportunity, (D) any financial or lending service, (E) any essential government service, (F) any healthcare service, or (G) any housing, insurance, or legal service.”

When developers “develop” and deployers “deploy” these high-risk systems, the bill required a number of reports to the state and to the entity or consumer utilizing the system.

Should the developer or deployer have failed to use “reasonable care” to protect consumers from “any known or reasonably foreseeable risk of algorithmic discrimination,” that entity (1) would have been required to notify the respective parties utilizing the system; and (2) could have been subject to an enforcement action by the Attorney General, with 60 days to cure such discrimination.

Broad Concerns

These sections drew widespread concern from industry, consumer advocates, lawmakers, the Department of Economic and Community Development, and Lamont himself.

At the bill’s public hearing earlier this year, CBIA supported a number of workforce development initiatives in the bill, but urged lawmakers to work with industry to craft regulations that would not hamper economic growth:

“As to the definitions, regulations and reporting requirements that were not consensus items amongst the AI Working Group (Sections 1-7), CBIA has concerns about the unintended consequences of these sections that could chill AI innovation in our state if adopted in its proposed form. CBIA is committed to working with this committee and other stakeholders to achieve an AI regulatory framework that builds on best practices currently being developed across the world, and that fosters innovation, grows our economy, and protects Connecticut residents.”

Shortly after the Senate passed SB 2, the U.S. Chamber of Commerce sent a letter to House Speaker Matt Ritter (D-Hartford) urging caution before adopting such far-reaching regulations:

“Existing laws and regulations already cover many AI activities. Where gaps exist, policymakers should seek to ensure that new policies are tailored to the risk and don’t unnecessarily hamper innovation … Given the significant complexities associated with AI, and the vast scope of SB 2, we call on you to ensure there is more time for review and analysis.”

The Connecticut Post reported Tuesday that Lamont would veto SB 2 should it reach his desk.

Lamont told the newspaper that Connecticut was better off working with a consortium of other states to address AI development and deployment across the private sector. 

Maroney, who led a multi-year effort to pass the Data Privacy Act two years ago, is expected to raise this issue again next session.


For more information, contact CBIA’s Wyatt Bosworth (860.244.1155) | @WyattBosworthCT.
