To give women academics and others focused on AI their well-deserved (and overdue) time in the spotlight, TechCrunch is launching an interview series focusing on notable women who have contributed to the AI revolution. We will publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Sandra Wachter is a professor and senior researcher in data ethics, artificial intelligence, robotics, algorithms and regulation at the Oxford Internet Institute. She is also a former fellow of the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence.
While at the Turing Institute, Wachter evaluated the ethical and legal aspects of data science, highlighting cases in which opaque algorithms have produced racist and sexist outcomes. She also looked at ways to audit AI to tackle misinformation and promote fairness.
Q&A
Briefly, how did you get started in AI? What attracted you to the field?
I can’t remember a time in my life when I didn’t think that innovation and technology have incredible potential to improve people’s lives. However, I also know that technology can have devastating consequences for people’s lives. And so I always felt driven (mostly because of my strong sense of justice) to find a way to guarantee that perfect middle ground: enabling innovation while protecting human rights.
I always felt that the law has a very important role to play. The law can be that middle ground that protects people but allows innovation. Law as a discipline came very naturally to me. I like challenges, I like to understand how a system works, see how I can play with it, find loopholes and then close them.
AI is an incredibly transformative force. It is implemented in finance, employment, criminal justice, immigration, health and art. This can be good and bad. And whether it is good or bad is a question of design and policy. Naturally, I was drawn to it because I felt that the law can make a significant contribution to ensuring that innovation benefits as many people as possible.
What work are you most proud of (in the field of AI)?
I think the work I’m currently most proud of is a paper I co-authored with Brent Mittelstadt (a philosopher) and Chris Russell (a computer scientist), with myself as the lawyer.
Our most recent work on bias and fairness, “The Unfairness of Fair Machine Learning,” revealed the harmful impact of enforcing many “group fairness” measures in practice. Specifically, fairness is achieved by “leveling down,” or making everyone worse off, rather than by helping disadvantaged groups. This approach is highly problematic in the context of EU and UK non-discrimination law, as well as being ethically troubling. In a piece for Wired, we discussed how harmful leveling down can be in practice: in healthcare, for example, enforcing group fairness could mean missing more cancer cases than strictly necessary while also making a system less accurate overall.
For us, this was terrifying, and it is something that is important to know for people in tech and policy and, really, for everyone. Accordingly, we have engaged with UK and EU regulators and shared our alarming results with them. I deeply hope this gives policymakers the leverage to implement new policies that prevent AI from causing such serious harm.
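To make the leveling-down effect concrete, here is a minimal Python sketch (my illustration with invented numbers, not code from the paper) of one group-fairness constraint, equal detection rates across groups, enforced the naive way: by discarding correct detections from the better-served group until the metric is satisfied.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical figures: a screening model that detects a condition
# with different sensitivity for two patient groups.
n_a = n_b = 1000               # patients in group A and group B
prevalence = 0.10              # 10% of each group has the condition
tpr_a, tpr_b = 0.90, 0.70      # model finds 90% of A's cases, 70% of B's

cases_a = int(n_a * prevalence)    # 100 true cases in group A
cases_b = int(n_b * prevalence)    # 100 true cases in group B

found_a = int(cases_a * tpr_a)     # 90 cases detected in group A
found_b = int(cases_b * tpr_b)     # 70 cases detected in group B
print("Before equalizing:", found_a + found_b, "cases found")   # 160

# "Leveling down": equalize the groups' detection rates by randomly
# discarding group A detections until its rate matches group B's.
keep = rng.random(found_a) < (tpr_b / tpr_a)
found_a_leveled = int(keep.sum())  # ~70 in expectation
print("After equalizing: ", found_a_leveled + found_b, "cases found")  # ~140
```

The disadvantaged group detects exactly as many cases as before, while roughly 20 additional cases are simply missed: the fairness metric is satisfied, but no one is better off and the system as a whole is worse, which is the leveling-down objection in miniature.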
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
The interesting thing is that I never saw technology as something that “belongs” to men. It wasn’t until I started school that society told me that tech has no place for people like me. I still remember that when I was 10, the curriculum dictated that girls had to knit and sew while the boys built birdhouses. I also wanted to build a birdhouse and asked to be transferred to the boys’ class, but my teachers told me that “girls don’t do that.” I even went to the school’s principal to try to overturn the decision but unfortunately failed at the time.
It’s very difficult to fight a stereotype that says you shouldn’t be part of this community. I wish I could say that things like this don’t happen anymore, but unfortunately this is not true.
However, I’ve been very lucky to work with allies like Brent Mittelstadt and Chris Russell. I was privileged to have incredible mentors, like my Ph.D. supervisor, and I have a growing network of like-minded people of all genders who are doing everything they can to lead the way forward and improve the situation for everyone interested in tech.
What advice would you give to women looking to enter the field of AI?
Above all, try to find like-minded people and allies. Finding your people and supporting each other is crucial. My most impactful work has always come from talking with open-minded people from other backgrounds and disciplines to solve common problems we face. Accepted wisdom alone cannot solve novel problems, so women and other groups that have historically faced barriers to entering AI and other tech fields hold the tools to truly innovate and offer something new.
What are some of the most pressing issues facing AI as it evolves?
I think there is a wide range of issues that need serious legal and policy consideration. To name a few: AI is plagued by biased data that leads to discriminatory and unfair outcomes. AI is inherently opaque and difficult to understand, yet it is tasked with deciding who gets a loan, who gets a job, who goes to prison and who is allowed to go to college.
Generative AI has related problems but also contributes to misinformation, is plagued by hallucinations, violates data protection and intellectual property rights, puts people’s jobs at risk and contributes more to climate change than the aviation industry.
We have no time to lose; we should have addressed these issues yesterday.
What are some of the issues that AI users should consider?
I think there is a tendency to believe in a certain “AI is here to stay, get on board or be left behind” narrative. I think it’s important to think about who is driving this narrative and who is benefiting from it. It is important to remember where the real power lies. The power is not in those who innovate, but in those who buy and implement AI.
Therefore, consumers and businesses should ask themselves, “Does this technology actually help me, and in what way?” Electric toothbrushes now have “AI” built in. Who is this for? Who needs this? What is being improved here?
In other words, ask yourself what’s broken and what needs fixing and whether AI can really fix it.
This type of thinking will shift market power, and innovation will hopefully steer toward a direction that focuses on usefulness for a community rather than simply profit.
What’s the best way to build AI responsibly?
Have laws that demand responsible AI. Here too, a very unhelpful and untrue narrative tends to dominate: that regulation stifles innovation. That is not true. Regulation stifles harmful innovation. Good laws foster and nurture ethical innovation; that is why we have safe cars, planes, trains and bridges. Society does not lose out if regulation prevents the creation of AI that violates human rights.
Traffic and safety rules for cars were also said to “stifle innovation” and “limit autonomy.” These laws prevent people from driving without licenses, keep cars without seat belts and airbags from entering the market, and penalize people who do not drive according to the speed limit. Imagine what the auto industry’s safety record would look like if we did not have laws to regulate vehicles and drivers. AI is currently at a similar inflection point, and heavy industry lobbying and political pressure mean it is still unclear which path it will take.
How can investors better drive responsible AI?
I wrote an article a few years ago called “How fair AI can make us richer.” I deeply believe that AI that respects human rights and is impartial, explainable and sustainable is not only the right thing to do from a legal, ethical and moral point of view, but can also be profitable.
I really hope investors understand that if they drive responsible research and innovation, they will also get better products. Bad data, bad algorithms, and bad design choices lead to worse products. Even if I can’t convince you that you should do the ethical thing because it’s the right thing to do, I hope you see that the ethical thing is also more profitable. Ethics should be seen as an investment, not an obstacle to overcome.