Disclosure Becomes Legislators’ Latest Tool for Regulating AI

July 16, 2024 (4 min read)

In a sign of the times, states have begun pursuing bills that require disclosure of the use of artificial intelligence.

In March, Utah Gov. Spencer Cox (R) signed SB 149, making the state the first in the nation to require individuals who use generative AI to interact with others to “clearly and conspicuously” disclose when they are doing so.

Two months later, Colorado Gov. Jared Polis (D) signed SB 205, sweeping legislation to regulate the use of AI in the Centennial State. Among the bill’s many provisions aimed at combating algorithmic discrimination are requirements that websites post disclosures about any automated systems that use AI to make “high risk” or “consequential” decisions, like screening for job openings.

In a letter to the legislature, Polis said he signed the bill “with reservations,” noting that he was “concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike.” But he also said he hoped that enacting SB 205 would spark further conversation, “especially at the national level,” and noted that the bill “is among the first in the country to attempt to regulate the burgeoning artificial intelligence industry on such a scale.”

Around the same time, New York Assemblymember Clyde Vanel (D) and Sen. Kristen Gonzalez (D) introduced companion measures AB 10103 and SB 9450, which would implement first-of-their-kind regulations requiring generative AI systems to notify users that the content they produce “may be inaccurate and/or inappropriate.” The measures would make violators subject to penalties of up to $100,000.

The proposal didn’t make it through the Legislature before the session adjourned in early June. But the sponsors’ attempt to push it through in barely a month underscores the sense of urgency surrounding the issue.

Indeed, since the beginning of the current legislative biennium, at least 40 measures dealing with disclosures or disclaimers about the use of AI have been considered in 15 states, according to data compiled by the National Conference of State Legislatures. Such measures have been enacted in five of those states.

As Shel Holtz, senior director of communications for the commercial construction contractor Webcor, recently noted on LinkedIn, “Disclosure is one of the hottest topics in the world of business adoption of AI.” And the discussion is just getting started at the legislative level.

Lawmakers Return to Tried-and-True Approach

State lawmakers are pursuing AI disclosure bills because of a litany of concerns about the technology’s rapid development, including the explosion of deepfakes, as well as high-profile cases of generative AI systems producing embarrassingly inaccurate content, like Google’s Gemini chatbot generating images of people of color in Nazi-era German uniforms.

These problems are particularly concerning to legislators in an election year, when fears about misinformation affecting the results in November are running high.

In short, the technology is advancing so quickly that legislators are responding with a tried-and-true regulatory technique: mandating disclosure when something potentially troublesome is employed. In this case, that something is AI, a technology that in many ways remains beyond the grasp of many lawmakers.

“Before regulation, there needs to be agreement on what the dangers are, and that requires a deep understanding of what A.I. is,” U.S. Rep. Jay Obernolte (R-CA) told The New York Times last year. “You’d be surprised how much time I spend explaining to my colleagues that the chief dangers of A.I. will not come from evil robots with red lasers coming out of their eyes.”

Regulating AI through disclosure legislation isn’t without its share of complexity.

“So if you say to an entity, ‘You must disclose if you’re using artificial intelligence to come to this decision,’ the first step in that process is [defining] what artificial intelligence is,” California Sen. Tom Umberg (D) said on a recent Government Technology podcast. “What does artificial intelligence mean? What does transparency mean? What does bias mean? What privacy interests are at issue? Secondarily is transparency. We need to have some sort of way to mark products that are derivative of artificial intelligence. So...you start with definitions, and then I think you start with transparency so folks know when artificial intelligence is being used to make a decision in law enforcement, employment or in health care.”

But in the absence of federal action on AI, states are eager to step into that vacuum. And disclosure laws offer them a way to mitigate some of the risk of AI without hindering its development.

—By SNCJ Correspondent BRIAN JOSEPH
