
ChatGPT Draws State Lawmakers’ Attention to AI

March 24, 2023 (6 min read)

ChatGPT, an artificial intelligence-driven chatbot that generates human-like responses to users’ prompts, has drawn a great deal of public attention since being introduced last year. Some state lawmakers are taking notice too and calling for more regulation of AI.

ChatGPT Puts AI in Spotlight

Artificial intelligence has been around for over half a century. A summer workshop at Dartmouth College in 1956 has been called the birthplace of AI as a field of research. Federal research and development of the technology, begun in the 1960s through the Defense Advanced Research Projects Agency (DARPA), gave rise to cybersecurity tools and drones. And for some time now we’ve been using AI to help figure out what movies to watch, navigate from place to place on the road and instantly find what we’re looking for online, along with a host of other things.

AI has drawn plenty of attention over the years, some of it extremely negative. But the seamless integration of the technology into so much of our everyday life has allowed it to largely fade into the background of public perception, aside from periodic news reports about accidents involving self-driving vehicles.

That seemed to change late last year with the release of ChatGPT, a chatbot capable of engaging in remarkably human-like conversations and carrying out complex tasks ranging from writing poetry to building apps.

Two months after its debut in November, the chatbot had more than 30 million users and was getting about five million visits a day, as The New York Times® reported, adding that Instagram took almost a year to reach 10 million users. It’s been embraced by news publishers and marketing firms and banned by school districts fearing “a flood of AI-generated homework,” as the Times put it.

Congress Largely Inactive on AI

It wasn’t just the general public that took notice of ChatGPT. In late January U.S. Rep. Ted Lieu (D-California) wrote a guest essay in The New York Times saying that as a holder of a computer science degree he was “enthralled by A.I. and excited about the incredible ways it will continue to advance society,” and as a congressman he was “freaked out by A.I., specifically A.I. that is left unchecked and unregulated.”

A few days later Lieu introduced a resolution (HRES 66) calling on Congress to establish a nonpartisan commission to make recommendations about how to regulate artificial intelligence. To drive home the point that the time for such action is now, he used ChatGPT to draft the entire measure by giving it a simple prompt: “You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI.”

The measure hasn’t seen any further action since being referred to the House Committee on Science, Space, and Technology, and it wouldn’t be too surprising if it went no further. As the Times reported, “legislation introduced in recent years to curb A.I. applications like facial recognition have withered in Congress.”

The explanation U.S. Rep. Jay Obernolte (R-California), another one of the few members of Congress with a degree in computer science, gave the publication was that most of his colleagues “do not even know what AI is.”

“Before regulation, there needs to be agreement on what the dangers are, and that requires a deep understanding of what A.I. is,” Obernolte said. “You’d be surprised how much time I spend explaining to my colleagues that the chief dangers of A.I. will not come from evil robots with red lasers coming out of their eyes.”

The tech industry has also lobbied against government regulation that would hinder development of AI, calling mostly for voluntary regulation instead.

“We aren’t anti-regulation, but we’d want smart regulation,” said Jordan Crenshaw, a vice president of the U.S. Chamber of Commerce, which lobbied against facial recognition bills in 2020 along with over 30 companies, including Amazon and Meta.

More Legislative Action on AI in States

More legislative progress has been made on AI at the state level. Last year at least 17 states introduced general AI bills or resolutions, and four states—Colorado, Illinois, Vermont and Washington—enacted such legislation, according to the National Conference of State Legislatures. Colorado, Illinois and Vermont also set up commissions or task forces to study the technology.

So far this year lawmakers in at least 28 states have introduced bills or resolutions relating to AI, according to the LexisNexis® State Net® legislative tracking system. The bulk of the measures create AI task forces, commissions or government agencies.

But Massachusetts Sen. Barry Finegold (D) has introduced “An Act drafted with the help of ChatGPT to regulate generative artificial intelligence models like ChatGPT” (SB 31).

Among other things Finegold’s bill would require companies that operate AI models like ChatGPT to disclose information about those models, including their “intended use, design process and methodologies;” conduct routine risk assessments; and program their models to include watermarks on the text they generate to deter plagiarism.

The aim of the measure, Finegold told The Boston Globe®, is “to put up safeguards-slash-guardrails that can help this technology grow without having negative consequences.”

“There are so many things this could be used for that could be used in a negative manner,” Finegold added. “But used in the right context, it can be very powerful.”

SB 31 currently appears to be the only measure in the country that refers specifically to ChatGPT. But another Massachusetts bill (HB 1974) that was also drafted by ChatGPT would regulate the use of AI in the provision of mental health services.

“With the rise of telehealth and these rapid advances in technology, AI-driven mental health care is no longer just a futuristic concept,” the bill’s sponsor, Rep. Josh Cutler (D), said in a statement posted on LinkedIn. “While there may be beneficial applications, any use of AI in this field must be carefully scrutinized to ensure that patients are always protected.”

Legislation dealing with the use of AI in mental healthcare is also under consideration in Texas (HB 4695).

A bill introduced in Illinois (HB 3563), meanwhile, would create a task force specifically for generative AI. And measures have been introduced in a handful of states that would require the disclosure of AI use in publicly displayed images or videos (Illinois HB 3943); advertising (New York AB 216); social media (Illinois HB 3285 and New York SB 895); and political campaigns (Washington HB 1442).

A pair of bills in Texas (HB 1896 and HB 2700) also target the use of AI in sexually explicit content. And companion bills introduced in Minnesota (HB 1370/SB 1394) would establish a cause of action for disseminating “deep fake” sexual images without the consent of the depicted individual and make using deep fake technology to influence an election a crime.

The continuing improvement of AI and ongoing federal inaction on the issue are likely only to spur further state efforts to regulate the use of the technology.

—By KOREY CLARK 

The information in this article is powered by State Net. Please visit our webpage for more information on the bills mentioned in this article or if you would like to speak with a State Net representative about how the State Net legislative and regulatory tracking solution can help you react quickly to relevant legislative and regulatory developments.

Over Half of States Considering AI Bills

Lawmakers in at least 28 states have considered bills or resolutions relating to AI in 2023, according to the State Net legislative tracking system. So far Georgia is the only state that has enacted such legislation, including a bill (HB 18) providing funding for an artificial intelligence manufacturing project.
