
Uber Eats courier’s fight against AI bias shows that justice under UK law is hard won

On Tuesday, the BBC reported that Uber Eats courier Pa Edrissa Manjang, who is Black, had received a payout from Uber after “racially discriminatory” facial recognition checks blocked his access to the app, which he had been using since November 2019 to pick up jobs delivering food on Uber’s platform.

The news raises questions about how well suited UK law is to dealing with the growing use of artificial intelligence systems. In particular, the lack of transparency around automated systems rushed to market, with the promise of boosting user safety and/or service efficiency, risks causing individual harm at scale, even as achieving redress for those affected by AI-driven bias can take years.

The lawsuit followed a series of complaints about failed facial recognition checks since Uber implemented its real-time ID verification system in the UK in April 2020. Uber’s system, built on Microsoft’s facial recognition technology, requires the account holder to submit a live selfie, which is checked against a photo of them held on file to verify their identity.
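As context for how such checks typically work, here is a purely illustrative sketch, not Uber’s or Microsoft’s actual pipeline: the names, threshold and stand-in embedding function below are all invented. A selfie check generally maps each photo to an embedding vector and compares the two against a similarity threshold.

```python
import numpy as np

# Illustrative threshold only; real systems tune this per model and error budget.
SIMILARITY_THRESHOLD = 0.6

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a trained face-embedding model (normally a neural network
    mapping a face photo to a fixed-length vector). A trivial per-channel
    pixel average is used here so the sketch runs with no model dependency."""
    return image.reshape(-1, image.shape[-1]).mean(axis=0)

def verify_identity(live_selfie: np.ndarray, stored_photo: np.ndarray) -> bool:
    """Return True if the live selfie and the stored account photo are judged
    to be the same person, via cosine similarity of their embeddings."""
    a, b = embed_face(live_selfie), embed_face(stored_photo)
    cos_sim = float(np.dot(a, b) / ((np.linalg.norm(a) * np.linalg.norm(b)) + 1e-9))
    return cos_sim >= SIMILARITY_THRESHOLD
```

The relevance to this case: if the embedding model is less accurate for some demographic groups, the same fixed threshold produces more false mismatches for those groups, which is the kind of disparate failure a “continued mismatches” notice can mask.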

Failed identity checks

According to Manjang’s complaint, Uber suspended and then terminated his account following a failed ID check and the automated process that followed, claiming it had found “continued mismatches” in the photos of his face that he had taken in order to access the platform. Manjang filed legal claims against Uber in October 2021, supported by the Equality and Human Rights Commission (EHRC) and the App Drivers & Couriers Union (ADCU).

Years of litigation followed, with Uber failing to have Manjang’s claim struck out or a deposit ordered for continuing with the case. The tactic appears to have contributed to dragging out the litigation, with the EHRC describing the case as still in “preliminary stages” in the fall of 2023, and noting that it shows “the complexity of a claim dealing with AI technology.” A final hearing had been scheduled for 17 days in November 2024.

That hearing will now not take place after Uber offered, and Manjang accepted, a settlement payment, meaning the fuller details of exactly what went wrong and why will not be made public. Terms of the financial agreement have also not been disclosed. Uber didn’t provide details when we asked, nor did it offer comment on what exactly went wrong.

We also contacted Microsoft for a response on the outcome of the case, but the company declined to comment.

Despite reaching a settlement with Manjang, Uber does not publicly accept that its systems or processes were at fault. Its statement on the settlement denies that courier accounts can be terminated as a result of AI assessments alone, saying facial recognition checks are backed by “robust human review.”

“Our real-time ID verification is designed to help keep everyone who uses our app safe and includes robust human review to ensure we’re not making decisions about someone’s livelihood in a vacuum, without oversight,” the company said in a statement. “Automated facial verification was not the reason for Mr. Manjang’s temporary loss of access to his courier account.”

However, it’s clear that something went seriously wrong with Uber’s identity checks in Manjang’s case.

Worker Info Exchange (WIE), an organization that advocates for platform workers’ digital rights and also supported Manjang’s complaint, managed to obtain all of his selfies from Uber via a subject access request under UK data protection law, and was able to prove that all the photos he had submitted for his facial recognition check were in fact photographs of himself.

“Following his termination, Pa sent numerous messages to Uber to rectify the problem, specifically asking that a human review his submissions. Each time Pa was told ‘we were unable to confirm that the photos provided were actually yours and due to continued discrepancies, we have made the final decision to end our association with you,’” WIE recounts in a report discussing his case, which looks more broadly at “data-driven exploitation in the gig economy.”

Based on the details of Manjang’s complaint that have been made public, it seems clear that both Uber’s facial recognition checks and the human review system that had been established as a supposed safety net for automated decisions failed in this case.
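To make concrete what a human-review backstop is supposed to mean in practice, here is a minimal, hypothetical sketch; the function and type names are invented and this is not a description of Uber’s actual workflow. The idea it illustrates is that a below-threshold automated match should only ever be escalated to a human reviewer, with no account action taken by the automated check on its own.

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    VERIFIED = "verified"            # automated check passed
    NEEDS_HUMAN_REVIEW = "escalate"  # a person must compare the photos

@dataclass
class CheckResult:
    courier_id: str
    similarity: float
    outcome: Outcome

def route_id_check(courier_id: str, similarity: float,
                   threshold: float = 0.6) -> CheckResult:
    """Route the result of an automated selfie check. In this sketch the
    automated step can only verify or escalate; suspending or terminating an
    account would require a separate, human-made decision downstream."""
    if similarity >= threshold:
        return CheckResult(courier_id, similarity, Outcome.VERIFIED)
    return CheckResult(courier_id, similarity, Outcome.NEEDS_HUMAN_REVIEW)
```

Manjang’s account, by contrast, appears to have been terminated despite his repeated requests for exactly this kind of human check.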

Equality law plus data protection

The case calls into question how adequate UK law is when it comes to regulating the use of AI.

Manjang was ultimately able to secure a settlement from Uber through a legal process based on equality law, specifically a discrimination claim under the United Kingdom’s Equality Act 2010, which lists race as a protected characteristic.

Baroness Kishwer Falkner, chair of the EHRC, criticized the fact that the Uber Eats courier had to launch a legal claim “to understand the opaque processes that affected his work,” she wrote in a statement.

“AI is complex and presents unique challenges for employers, lawyers and regulators. It is important to understand that as the use of AI increases, the technology can lead to discrimination and human rights abuses,” she wrote. “We are particularly concerned that Mr. Manjang was not made aware that his account was in the process of being deactivated, nor provided any clear and effective route to challenge the technology. More needs to be done to ensure employers are transparent and open with their workforces about when and how they use AI.”

UK Data Protection Law is the other relevant legislation in this case. On paper, it should provide powerful protections against opaque AI processes.

The selfie data relevant to Manjang’s claim was obtained using data access rights contained in the UK GDPR. Had he not been able to obtain such clear evidence that Uber’s ID checks had failed, the company might not have chosen to settle at all. Proving that a proprietary system is flawed without letting people access relevant personal data would further stack the odds in favor of the much richer resourced platforms.

Gaps in law enforcement

Beyond data access rights, the UK GDPR is supposed to provide individuals with additional safeguards, including against automated decisions with a legal or similarly significant effect. The law also demands a lawful basis for processing personal data, and encourages system deployers to be proactive in assessing potential harms by conducting a data protection impact assessment. That should force further checks against harmful AI systems.

However, enforcement is necessary for these protections to take effect, including a deterrent effect against the deployment of biased AI.

In the UK’s case, the relevant enforcement body, the Information Commissioner’s Office (ICO), has not stepped in to investigate complaints against Uber, despite complaints about its misfiring ID checks dating back to 2021.

Jon Baines, senior data protection specialist at law firm Mishcon de Reya, suggests that “a lack of proper enforcement” by the ICO has undermined legal protections for people.

“We should not assume that existing legal and regulatory frameworks are incapable of addressing some of the potential harms of AI systems,” he tells TechCrunch. “In this example, it strikes me… that the Information Commissioner would certainly have jurisdiction to consider, both in the individual case and more generally, whether the processing being carried out was lawful under the UK GDPR.

“Things like: Is the processing fair? Is there a legal basis? Is there an Article 9 condition (given that special categories of personal data are processed)? But also, and crucially, was there a robust data protection impact assessment prior to the implementation of the verification app?

“So yes, the ICO should absolutely be more proactive,” he adds, questioning the regulator’s lack of intervention.

We have contacted the ICO about the Manjang case and asked it to confirm whether or not it is investigating Uber’s use of AI for identity checks in light of the complaints. A spokesperson for the watchdog did not respond directly to our questions, but sent a general statement emphasizing the need for organizations “to know how to use biometric technology in a way that does not interfere with people’s rights.”

“Our latest biometric guidance is clear that organizations must mitigate the risks that come with using biometric data, such as errors in accurately identifying individuals and bias within the system,” the statement also said, adding: “If anyone has concerns about how their data has been handled, they can report these concerns to the ICO.”

Meanwhile, the government is in the process of watering down the data protection law through a post-Brexit data reform bill.

Furthermore, the government confirmed earlier this year that it will not introduce dedicated AI safety legislation at this time, despite Prime Minister Rishi Sunak making striking claims about AI safety being a priority area for his administration.

Instead, it is sticking with a proposal, set out in its March 2023 AI white paper, to rely on existing laws and regulatory bodies extending their oversight activity to cover AI risks that may arise in their areas. One tweak to the approach announced in February was a small amount of extra funding (£10 million) for regulators, which the government suggested could be used to research AI risks and develop tools to help them examine AI systems.

No timeline was given for disbursing this small pot of extra funds. Multiple regulators are in the frame here, so if there is an equal split of the cash between bodies such as the ICO, the EHRC and the Medicines and Healthcare products Regulatory Agency, to name just three of the 13 regulators and departments the UK secretary of state wrote to last month asking them to publish an update on their “strategic approach to AI”, they could each receive less than £1 million to top up budgets to tackle fast-scaling AI risks.

Frankly, it looks like an incredibly low level of additional resource for already overstretched regulators if AI safety is genuinely a government priority. It also means there is still no cash or active oversight for AI harms that fall between the cracks of the UK’s existing regulatory patchwork, as critics of the government’s approach have previously pointed out.

A dedicated AI safety law might send a clearer signal of priority, along the lines of the EU’s risk-based framework for AI harms, which is speeding toward becoming hard law in the bloc. But there would also need to be the will to enforce it. And that signal must come from the top.