
Women in AI: Heidy Khlaaf, Director of Security Engineering at Trail of Bits

To give female academics and others focused on AI their well-deserved (and overdue) time in the spotlight, TechCrunch is launching an interview series focusing on notable women who have contributed to the AI revolution. We will publish several articles throughout the year as the rise of AI continues, highlighting key work that often goes unnoticed. Read more profiles here.

Heidy Khlaaf is the director of security engineering at the cybersecurity company Trail of Bits. She specializes in evaluating software and AI implementations within “safety-critical” systems, such as nuclear power plants and autonomous vehicles.

Khlaaf received her PhD in computer science from University College London and her bachelor’s degree in computer science and philosophy from Florida State University. She has led security and safety audits, provided consultations and assurance case reviews, and contributed to the creation of standards and guidelines for safety- and security-related applications and their development.

Q&A

Briefly, how did you get started in AI? What attracted you to the field?

I was drawn to robotics from a young age and started programming at 15, as I was fascinated by the prospect of using robotics and AI (as they are inextricably linked) to automate workloads where they are most needed. As with manufacturing, I saw robotics being used to help the elderly and automate dangerous manual labor in our society. I did, however, receive my PhD in a different subfield of computer science, because I believe that having a strong theoretical foundation in computer science enables educated and scientific decisions about where AI may or may not be suitable, and where obstacles may lie.

What work are you most proud of (in the field of AI)?

Using my strong background and expertise in security engineering and safety-critical systems to provide context and critique where needed in the nascent field of AI “safety.” Although the field of AI safety has attempted to adapt and cite well-established safety and security techniques, several terms have been misconstrued in their use and meaning. There is a lack of consistent or intentional definitions, which compromises the integrity of the safety techniques the AI community is currently using. I am particularly proud of “Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems” and “A Hazard Analysis Framework for Code Synthesis Large Language Models,” where I deconstruct false narratives around AI safety and evaluations, and provide concrete steps toward closing the safety gap within AI.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

Recognizing how little the status quo has changed is not something we discuss often, but I believe it is actually important for me and other technical women to understand our position within the industry and hold a realistic view of the changes needed. Retention rates and the proportion of women in leadership roles have remained largely the same since I joined the field more than a decade ago. And as TechCrunch has rightly pointed out, despite the tremendous breakthroughs and contributions of women within AI, we remain sidelined from the very conversations that we ourselves have defined. Recognizing this lack of progress helped me understand that building a strong personal community is much more valuable as a source of support than relying on DEI initiatives, which unfortunately have not moved the needle, given that bias and skepticism toward technical women are still quite prevalent in tech.

What advice would you give to women looking to enter the field of AI?

Don’t appeal to authority, and find a line of work that you truly believe in, even if it contradicts popular narratives. Given the power that AI labs hold politically and economically right now, there is an instinct to take whatever AI “thought leaders” say as fact, when it is often the case that many AI claims are marketing speak that exaggerates AI’s capabilities to benefit a bottom line. Yet I see significant hesitancy, especially among young women in the field, to voice skepticism against claims made by their male peers that cannot be substantiated. Impostor syndrome has a strong hold on women within tech and leads many to doubt their own scientific integrity. But it is more important than ever to challenge claims that exaggerate the capabilities of AI, especially those that are not falsifiable under the scientific method.

What are some of the most pressing issues facing AI as it evolves?

Regardless of the advancements we see in AI, it will never be the sole solution, technologically or socially, to our problems. There is currently a trend to shoehorn AI into every possible system, regardless of its effectiveness (or lack thereof) across numerous domains. AI should augment human capabilities rather than replace them, and we are witnessing a complete disregard for AI’s pitfalls and failure modes that are causing real, tangible harm. Just recently, an AI system led to an officer firing at a child.

What are some of the issues that AI users should consider?

How unreliable AI is. AI algorithms are notoriously flawed, with high error rates observed in applications that require precision, accuracy, and safety. The way AI systems are trained embeds human bias and discrimination within their outputs, which become “de facto” and automated. And this is because the nature of AI systems is to provide outputs based on statistical and probabilistic inferences and correlations from historical data, and not on any type of reasoning, factual evidence, or “causation.”

What’s the best way to build AI responsibly?

Ensuring that AI is developed in a way that protects people’s rights and safety by constructing verifiable claims and holding AI developers accountable to them. These claims should also be scoped to a regulatory, safety, ethical, or technical application and must be falsifiable. Otherwise, there is a significant lack of scientific integrity to appropriately evaluate these systems. Independent regulators should also be assessing AI systems against these claims, as is currently required for many products and systems in other industries, for example, those evaluated by the FDA. AI systems should not be exempt from the standard auditing processes that are well established to ensure the protection of the public and consumers.

How can investors better drive responsible AI?

Investors should collaborate with and fund organizations that are seeking to establish and advance auditing practices for AI. Currently, most funds are invested in the AI labs themselves, with the belief that their safety teams are sufficient for the advancement of AI evaluations. However, independent auditors and regulators are key to public trust. Independence allows the public to have confidence in the accuracy and integrity of assessments, and in the regulatory outcomes that follow from them.