Earlier this month, Lina Khan, chairwoman of the US Federal Trade Commission (FTC), wrote an essay in The New York Times affirming the agency's commitment to regulating AI. But there was one AI application Khan didn't mention that the FTC urgently needs to regulate: automated hiring systems. These range in complexity from tools that simply scan and rank resumes to systems that greenlight some applicants and weed out others deemed unsuitable. Increasingly, American workers must go through them if they want to be hired.
In my recent book, The Quantified Worker, I argue that the American worker is being reduced to numbers by artificial intelligence technologies in the workplace, most notably automated hiring systems. These systems reduce applicants to a score or rank, often ignoring the gestalt of their human experience. Sometimes they even classify people by race, age, and gender, a practice that is legally prohibited when it forms part of the employment decision-making process.
Ironically, many of these systems are marketed as bias-free or as guaranteed to reduce the likelihood of discriminatory hiring. But because they are so loosely regulated, these systems have been shown to deny equal employment opportunity on the basis of protected categories such as race, age, sex, and disability. In December 2022, for example, a truckers' group sued Meta, alleging that Facebook "selectively displays job ads based on the gender and age of users, with older workers much less likely to see ads and women much less likely to see ads for jobs, especially in industries that have historically excluded women." Marketing these systems as bias-free is therefore misleading. It is also unfair to both job applicants and employers. Employers buy automated hiring systems to reduce their liability for employment discrimination, and the providers of those systems are legally required to substantiate their claims of efficacy and fairness.
The law places automated hiring systems under the purview of the FTC, but the agency has yet to issue specific guidelines on how providers of these systems may advertise their products. It should start by requiring audits to ensure that automated hiring platforms deliver on the promises they make to employers. The providers of these platforms should be required to produce clear audit trails demonstrating that their systems reduce bias in employment decision-making as advertised. These audits should show that the designers followed Equal Employment Opportunity Commission (EEOC) guidelines when building the platforms.
Additionally, in collaboration with the EEOC, the FTC could establish a Fair Automated Hiring Mark, which would certify that an automated hiring system has passed this rigorous audit process. Like a seal of approval, the mark would be a useful signal of quality to consumers, both applicants and employers.
The FTC should also allow job applicants, who are consumers of AI-enabled online application systems, to file complaints under the Fair Credit Reporting Act (FCRA). Previously, the FCRA was thought to apply only to the big three credit bureaus, but a close reading shows that the law can apply whenever a report has been created for an "economic decision." Under this definition, applicant profiles created by automated hiring platforms are "consumer reports," which means the entities that generate them (such as online hiring platforms) would be considered credit reporting agencies. Under the FCRA, anyone who is the subject of such a report can ask the agency that created it to see the report and to demand corrections or amendments. Most consumers don't know they have these rights. The FTC should launch an educational campaign to inform applicants of these rights so they can make use of them.