While artificial intelligence made headlines with ChatGPT, behind the scenes the technology has quietly permeated everyday life – screening job resumes and rental apartment applications, and in some cases even determining medical care.
While a number of AI systems have been found to discriminate, tipping the scales in favor of certain races, genders or incomes, there is little government oversight.
Lawmakers in at least seven states are taking big legislative swings to regulate bias in artificial intelligence, filling a gap left by congressional inaction. The proposals are some of the first steps in a decades-long discussion over balancing the benefits of this nebulous new technology with its well-documented risks.
“AI actually affects every part of your life, whether you know it or not,” said Suresh Venkatasubramanian, a Brown University professor who co-authored the White House’s Blueprint for an AI Bill of Rights.
“Well, you wouldn’t care if they all worked fine. But they don’t.”
Success or failure will depend on lawmakers solving complex problems while negotiating with an industry that is worth hundreds of billions of dollars and growing at a rate best measured in light years.
According to BSA The Software Alliance, which advocates on behalf of software companies, only about a dozen of the nearly 200 AI-related bills introduced in statehouses last year were signed into law.
Those bills, along with the over 400 AI-related bills being debated this year, are largely aimed at regulating smaller slices of AI. That includes nearly 200 targeting deepfakes, among them proposals to bar pornographic deepfakes like those of Taylor Swift that flooded social media. Others are trying to rein in chatbots such as ChatGPT to ensure they don’t spit out instructions for building a bomb, for example.
Those measures differ from the seven state bills, being debated from California to Connecticut, that would apply across industries to regulate AI discrimination – one of the technology’s most pernicious and complex problems.
Those studying AI’s propensity to discriminate say states are already behind in setting guardrails. The use of AI for decision-making – what the draft legislation calls “automated decision-making tools” – is ubiquitous but largely hidden.
An estimated 83% of employers use algorithms to help with hiring. According to the Equal Employment Opportunity Commission, that figure is 99% for Fortune 500 companies.
Yet the majority of Americans are unaware that these tools are being used, Pew Research surveys show, let alone whether the systems are biased.
An AI can learn bias from the data it is trained on – typically historical data, which can carry a Trojan horse of past discrimination.
Nearly a decade ago, Amazon scrapped its hiring-algorithm project after discovering it favored male applicants. The AI was trained to evaluate new resumes by learning from past ones – largely submitted by men. Even though the algorithm never saw applicants’ genders, it still downgraded resumes containing the word “women’s” or listing women’s colleges, in part because they were underrepresented in the historical data it learned from.
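The mechanism is straightforward to reproduce in miniature. Below is a hypothetical sketch – synthetic data and a generic scikit-learn classifier, not Amazon’s actual system – showing how a model trained on biased historical decisions can pick up the bias through a correlated proxy feature, even when the protected attribute itself is withheld:

```python
# Toy demonstration: a model trained on biased historical decisions
# reproduces the bias through a proxy feature, even though the protected
# attribute is never shown to it. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)          # protected attribute (hidden from model)
skill = rng.normal(0, 1, n)            # legitimate qualification signal
proxy = group + rng.normal(0, 0.3, n)  # feature correlated with group
                                       # (e.g., a keyword like "women's")

# Historical labels: qualified candidates were hired, but group 1 was
# systematically penalized by past human decision-makers.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train only on skill and the proxy -- never on the group itself.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model still predicts lower hiring rates for group 1, because the
# proxy feature carries the historical discrimination.
preds = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {preds[group == g].mean():.2f}")
```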
“If you let AI learn from decisions that existing managers have made in the past, and if those decisions have favored some people and disadvantaged others, then that’s exactly what the technology will learn,” said Christine Webber, a plaintiffs’ attorney in a class-action lawsuit alleging that an AI system used to score rental applicants discriminated against those who are Black or Hispanic.
Court documents describe one of the lawsuit’s plaintiffs, Mary Louis, a Black woman, who applied to rent an apartment in Massachusetts and received a cryptic response: “The third-party provider we use to screen all prospective tenants has rejected your tenancy.”
According to court documents, when Louis produced two landlord references to prove she had paid rent early or on time for 16 years, she received another response: “Unfortunately, we do not accept appeals and cannot override the results of the tenant review.”
It is that lack of transparency and accountability that the bills aim, in part, to address, largely following California’s failed proposal from last year – the first comprehensive attempt to regulate AI bias in the private sector.
The bills would require companies using these automated decision-making tools to conduct “impact assessments,” including descriptions of how AI factors into a decision, the data collected and an analysis of discrimination risks, as well as an explanation of the company’s security measures. Depending on the bill, those assessments would be provided to the state or regulators could request them.
Some of the bills would also require companies to tell their customers that AI is being used in decision-making and give them the opportunity to opt out, with certain restrictions.
Craig Albright, senior vice president of U.S. government relations at BSA, the industry lobbying group, said its members were generally in favor of some proposed steps, such as impact assessments.
“Technology is advancing faster than the law, but there are actual benefits to the law catching up: when (companies) understand what their responsibilities are, consumers can have greater confidence in the technology,” Albright said.
But it has been a lackluster start for the legislation. A Washington state bill has already failed in committee, and a California proposal introduced in 2023, on which many of the current proposals are modeled, also died.
California Assembly member Rebecca Bauer-Kahan has revamped her failed legislation from last year with the support of some tech companies, such as Workday and Microsoft, after dropping the requirement that companies routinely submit their impact assessments. Other states where bills have been introduced, or are expected to be, include Colorado, Rhode Island, Illinois, Connecticut, Virginia and Vermont.
While these bills are a step in the right direction, said Venkatasubramanian of Brown University, the impact assessments and their ability to catch bias remain vague. Without greater access to the reports – which many of the bills restrict – it is also hard for the public to know whether a person has been discriminated against by an AI.
A more intensive but more accurate way to detect discrimination would be to require bias audits – tests to determine whether an AI is discriminating or not – and to make the results public. Here the industry pushes back, arguing that such audits would expose trade secrets.
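As a rough illustration of what such an audit can involve – a minimal sketch based on the “four-fifths rule” from U.S. employment guidelines, not a methodology prescribed by any of these bills – an auditor can compare a tool’s selection rates across demographic groups:

```python
# Minimal sketch of one common bias-audit check: the "four-fifths rule"
# from U.S. employment guidelines. A group's selection rate below 80%
# of the highest group's rate is flagged as possible adverse impact.
# Illustrative only; real audits are considerably more involved.

def adverse_impact_ratios(decisions):
    """decisions maps group name -> list of 0/1 outcomes (1 = selected)."""
    rates = {g: sum(d) / len(d) for g, d in decisions.items()}
    top = max(rates.values())
    return {g: (rate / top, rate / top < 0.8) for g, rate in rates.items()}

# Hypothetical audit data: outcomes from an automated screening tool.
outcomes = {
    "group_a": [1] * 60 + [0] * 40,   # 60% selected
    "group_b": [1] * 40 + [0] * 60,   # 40% selected
}
for group, (ratio, flagged) in adverse_impact_ratios(outcomes).items():
    print(f"{group}: impact ratio {ratio:.2f}" + ("  <- flagged" if flagged else ""))
```

Even a simple ratio like this quantifies a system’s behavior in a form outsiders can scrutinize, which is precisely what makes publishing audit results contentious for companies guarding their models.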
Most legislative proposals have no requirements to routinely test an AI system, and almost all still have a long way to go. Still, it’s the beginning of lawmakers and voters wrestling with a technology that is becoming, and will continue to be, ubiquitous.
“It covers everything in your life. For that reason alone, you should care,” Venkatasubramanian said.