
Could Algorithm Audits Curb AI Bias?

February 18, 2022 (9 min read)

Like it or not, artificial intelligence and the computer algorithms it uses are determining the outcomes of more and more of your life’s most significant events. Like whether you get a job. Or a home loan. Or get into college. Or get a vaccine, or perhaps have your health care completely taken away. Or even go to jail, and for how long.

“If you had a human bureaucratic process 15 years ago, it probably is being run by an algorithm now,” says data scientist and author Cathy O’Neil, a member of the Public Interest Tech Lab at the Harvard Kennedy School.

Proponents say such artificial intelligence can level the playing field and offer a more balanced evaluation of prospective employees – and home buyers and students and on and on – than any human being can.

But as the use of algorithms increases, so does concern that too many of these automated programs have their own inherent biases that exacerbate or even promote racism and other forms of discrimination.

Given this possibility, one might presume the use of algorithms is carefully and widely monitored and regulated.

That would be wrong, though it is not for a lack of trying.

Regulating AI Algorithms a Hard Sell in States

In recent years, a growing number of states, municipalities and even the federal government have sought to impose at least some form of oversight on the use of algorithms, looking to root out those biases.

Among many observers, the most popular choice for that oversight is mandatory algorithm audits or impact assessments, which look to determine what biases, if any, are built into a system and what negative impacts those biases are creating. In theory, that would lead to those systems either no longer being used or, at a minimum, being overhauled.

But in the face of staunch opposition from business and Big Tech groups, getting such a proposal over the finish line has proven to be problematic at best. Such bills have failed in at least four states in recent years. 

Efforts to create state commissions or study groups to gather data on algorithm impacts have fared only slightly better. In 2018, New York City adopted the first such regulation in the nation, creating a model for states to follow. But a bill to create a statewide commission in the Empire State died in 2019.

Vermont adopted a temporary commission in 2018, but it concluded its work in 2020. Efforts so far to recreate it as a permanent group have failed.

The National Conference of State Legislatures reports that efforts to create similar commissions have also failed in a number of states, including Massachusetts, California, Connecticut, Hawaii, Virginia and Missouri, though some have also created study groups and commissions via executive order.

The news has not been all bad for algorithm regulation advocates, however.

In 2019, Idaho lawmakers considered a bill (2019 HB 118) that would have banned the use of algorithms in pretrial risk assessment tools used by courts to determine whether a person will be granted bail. But lawmakers ultimately gutted that portion of the bill, instead opting to require algorithm developers to make public “all documents, data, records, and information used by the builder to build or validate the pretrial risk assessment tool.” The law also allows defendants to review the calculations and data that went into their risk score.

California, meanwhile, adopted a measure last year (2021 AB 1228) that bars Golden State courts from requiring the use of any algorithm-based risk assessment tool in setting conditions of prisoner release.

Colorado also passed a measure last year (2021 SB 169) that bars insurers from using algorithmic models that discriminate against people “based on an individual’s race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.”

In 2020, Maryland took a major step in regulating how Old Line State employers use AI in hiring, passing HB 1202, a measure that bars employers from using facial recognition tools during an interview without the job applicant’s consent.

Illinois has also addressed AI in hiring, adopting a measure (2021 HB 53) that requires employers to obtain permission from job applicants to be evaluated by AI tools, and for employers to submit data on the use of AI to the state for bias analysis.

The Big Apple Looks to Take a Bite Out of AI Bias

To date, New York City is the only government entity to adopt legislation that will bar employers from using algorithms to evaluate job applicants unless they also conduct annual bias audits that can prove those systems are not discriminatory on the basis of race or gender.

Violators face fines of up to $1,500, though the onus is on the algorithm creators to show their systems are not biased. The law will also require algorithm builders to disclose their process in creating the program and allow job seekers to choose to be evaluated solely by a human being.

It takes effect on Jan. 2, 2023.
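The law leaves the mechanics of those audits largely to the auditors. For a rough sense of the kind of check involved, the sketch below computes selection-rate “impact ratios” across demographic groups, a heuristic borrowed from longstanding U.S. employment-selection guidelines (the “four-fifths rule”). It is illustrative only, with made-up numbers, and is not the test the ordinance prescribes.

```python
# Illustrative only: one way an annual bias audit might check a hiring
# algorithm's outcomes. The NYC law does not mandate this exact test;
# the "four-fifths rule" ratio below is a common heuristic from U.S.
# employment-selection guidelines. All numbers are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group that the tool advanced."""
    return selected / applicants

# Hypothetical outcomes of an automated screening tool, by group.
outcomes = {
    "group_a": {"applicants": 400, "selected": 120},
    "group_b": {"applicants": 300, "selected": 60},
}

rates = {g: selection_rate(o["selected"], o["applicants"]) for g, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```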

The District of Columbia could soon follow suit. In December, District Council Chairman Phil Mendelson and D.C. Attorney General Karl Racine introduced a proposal that would impose similar restrictions on the use of algorithms in hiring, loan applications and housing. Violators would face fines of up to $10,000 and possible civil litigation.

In a statement, Racine called algorithms “more discriminatory and unfair than big data wants you to know,” adding that the proposed ordinance “would end the myth of the intrinsic egalitarian nature of AI.”

There is also new federal legislation on the auditing front. In early February, Sens. Cory Booker (D-NJ) and Ron Wyden (D-OR) were joined by Rep. Yvette Clarke (D-NY) in introducing the Algorithmic Accountability Act of 2022, a bill that would require companies to “conduct impact assessments for bias, effectiveness and other factors, when using automated decision systems to make critical decisions.”

The proposal is an updated version of a similar bill they sponsored in 2019. That proposal died in committee.

Are Audits Really the Answer to Reining in Bad AI?

In the early days of computing, scientists recognized the importance of good input data so clearly that they coined a term for what happens without it: garbage in, garbage out, or GIGO.

As the chorus calling for algorithm audits and assessments grows louder, so does the call to make sure they are done correctly.

For researchers like University of Iowa professors Jovana Davidovic and Shea Brown, that means establishing rules for what an audit actually is, what it should consider and who should be involved in the process. Not doing so, they say, could lead to results as bad as or worse than what is already happening.

“There are currently no rules at all for audits, and there’s no real mandate, at least in this country, for having independent audits,” Brown says. “Until there are really clear standards for how audits are done, everyone who does one is going to approach it with their own biases and experiences.”

That, he says, is one of the major flaws in the New York City law.

“The NYC law requires bias assessments but doesn’t specify what that entails,” he says. “You can’t know one thing unless you look at the whole system and how it is developed and structured.”

O’Neil, who also is the founder and CEO of O’Neil Risk Consulting and Algorithmic Auditing, or ORCAA, a company that performs such services for private- and public-sector clients, lays it out in starker terms.

“There is no standard for being racist, so the people building these algorithms don’t know how to build nonracist systems,” O’Neil says.  

Davidovic offers a specific instance of where the devil is in the details.

“For example, if you’re doing testing for bias in facial recognition under only one kind of lighting condition, you might not find any bias. But if you’re testing under all kinds of lighting conditions, you could discover bias,” she says.
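Her point is about how the test conditions are sampled: a disparity that is invisible in one slice of the data can be obvious in another. Here is a minimal sketch, with hypothetical numbers not drawn from any actual audit, of the effect she describes.

```python
# Illustrative only: hypothetical face-recognition error counts showing why
# the lighting conditions used in a test change what an audit finds.

# (group, lighting) -> (errors, trials); all numbers are made up.
results = {
    ("group_a", "bright"): (2, 100),
    ("group_b", "bright"): (3, 100),
    ("group_a", "dim"):    (3, 100),
    ("group_b", "dim"):    (15, 100),
}

# Testing under a single lighting condition: the groups look comparable.
for group in ("group_a", "group_b"):
    errors, trials = results[(group, "bright")]
    print(f"{group} (bright only): {errors / trials:.0%} error rate")

# Testing under every condition: a disparity appears in dim lighting.
for (group, lighting), (errors, trials) in results.items():
    print(f"{group} / {lighting}: {errors / trials:.0%} error rate")
```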

Brown and Davidovic say stakeholder engagement is critical for an audit to successfully find all those blind spots.

“Part of the process is making sure you interview every single stakeholder being impacted by an algorithm,” Davidovic says. “That means involving the client, but also the developers and the sales team and the people who manage clients and the users and the people on who it is used.”

To say that companies are not lining up to voluntarily undertake such an all-encompassing project is an understatement.

“Most of the people that come to us do so for a reason,” says O’Neil, whose clients include Airbnb, Siemens and HireVue, as well as a growing number of state and local agencies. “They’ve been getting in trouble for biased algorithms and they want us to clear their name. But until this NYC law passed, we didn’t have much hope for companies coming to us unless they have to.”

There is also a cost factor involved.

“These companies saw hiring algorithms as a real money saver and a time saver,” she says. “They thought, ‘oh great, we don’t have to hire nearly as many HR people as we used to. We can get rid of two-thirds of our HR team and just use algorithms. How efficient.’ But now they’re going to have to ensure that the algorithm they are using is abiding by fair hiring practices.”

Even so, she is not convinced that new laws requiring audits are the answer.

“It’s really not necessary in many cases,” she says. “Hiring, credit, insurance and housing are already highly regulated industries with a lot of existing anti-discrimination law. It’s not necessary to create new laws, it’s just necessary for the regulators in question to decide to enforce those laws.”

She believes the near future of algorithm regulation is likely to remain in that domain – the enforcement of current laws rather than new ones.

Perhaps, but it is also possible that as this issue gains more attention it will spawn even more legislative proposals at all levels of government.

Brown says that is not such a bad thing. While he would prefer to see a federal statute, there are benefits to having the states take a whack at it as well.

 “For example, the Illinois law is affecting some of our clients because they realize that some of their customers are going to be in that state, and so they have to change the way they operate to react to that law,” he says. “It is the same now with New York City. Everybody is going to recruit in NYC. There are companies in Germany asking us how to manage this New York City law. It’s universal.”

In that regard, O’Neil says the real impact of all these efforts could just be to force companies to acknowledge the detrimental impact blindly using algorithms can cause.

“I’m sure it’s uncomfortable for the companies in question, but the good news is it is not going to show just one company what has been under wraps, but it will force the entire industry to come clean,” she says. “Which is great because it will help the rest of the country figure out what the standards should be. What can we expect? We don’t even know.”

 --By RICH EHISEN

 

States Seeking AI Oversight in 2021-2022

At least nine states have introduced legislation in the 2021-2022 biennium that would impose some form of oversight on the use of artificial intelligence or algorithms, with the aim of rooting out biases in them. Three of those states, California, Colorado and Illinois, have enacted such measures.

 
