Women in AI: Spotlight on Heidy Khlaaf, Safety Engineering Director at Trail of Bits

In the male-dominated tech and AI industries, women's contributions often go unrecognized. To give these remarkable women their well-deserved spotlight, TechCrunch is launching a series of interviews with women who have made significant contributions to the AI revolution, highlighting their work and achievements over the course of the year.

One woman who has made a notable impact in the field of AI is Heidy Khlaaf, an engineering director at the cybersecurity firm Trail of Bits. Khlaaf specializes in evaluating software and AI implementations within safety-critical systems such as nuclear power plants and autonomous vehicles. With her expertise in safety engineering and safety-critical systems, Khlaaf provides context and criticism in the emerging field of AI safety.

Khlaaf holds a Ph.D. in computer science from University College London and a BS in computer science and philosophy from Florida State University. She has led safety and security audits, provided consultations and reviews of assurance cases, and contributed to the creation of standards and guidelines for safety- and security-related applications and their development.

In an interview, Khlaaf discusses her start in AI and what attracted her to the field. A fascination with robotics and AI from a young age led her to pursue the career: she saw the potential for robotics and AI to automate workloads where they are most needed, such as in manufacturing and in helping the elderly. Khlaaf also emphasizes the importance of a strong theoretical foundation in computer science, which allows her to make educated decisions about where AI is and is not suitable and to identify potential pitfalls.

One of Khlaaf’s proudest achievements is using her safety-engineering expertise to provide context and criticism in the field of AI “safety.” She highlights the lack of consistent definitions and the misconstrued terminology in AI safety, which compromise the integrity of the safety techniques used in the AI community. Khlaaf’s work aims to bridge the safety gap within AI and deconstruct false narratives about safety and AI evaluations.

As a woman navigating the challenges of the male-dominated tech and AI industries, Khlaaf notes how little progress has been made in representation and leadership positions for women. She emphasizes the importance of building a strong personal community for support and of understanding the changes required within the industry. Khlaaf believes that relying solely on diversity, equity, and inclusion initiatives is not enough to create meaningful change, as bias and skepticism toward technical women remain pervasive in these industries.

When asked about advice for women seeking to enter the AI field, Khlaaf encourages them not to appeal to authority and to find work that they truly believe in, even if it contradicts popular narratives. She warns against taking AI claims made by “thought leaders” as fact and urges women to vocalize skepticism about unsubstantiated claims made by their male peers. Imposter syndrome often holds women back from challenging these claims, but it is important to question exaggerated claims about AI’s capabilities, especially those that cannot be falsified under the scientific method.

In terms of the pressing issues facing AI as it evolves, Khlaaf emphasizes that AI should augment human capabilities rather than replace them. She raises concerns about the trend of shoehorning AI into every possible system without considering its effectiveness or failure modes. She cites a recent incident in which an AI system led to an officer firing at a child, highlighting the harm that can result from disregarding AI’s pitfalls.

AI users should be aware of how unreliable AI can be. Khlaaf points out that AI algorithms are notoriously flawed with high error rates, and they often embed human bias and discrimination within their outputs. She explains that AI systems provide outcomes based on statistical and probabilistic inferences, rather than reasoning or factual evidence.

To responsibly build AI, Khlaaf suggests constructing verifiable claims and holding AI developers accountable to them. These claims should be scoped to regulatory, safety, ethical, or technical applications and must not be unfalsifiable. Independent regulators should assess AI systems against these claims to ensure public and consumer protection. Khlaaf believes that AI systems should not be exempt from the standard auditing processes established in other industries.

Investors can push for responsible AI by engaging with and funding organizations that establish and advance auditing practices for AI. Currently, most funding goes to the AI labs themselves, on the assumption that their in-house safety teams are sufficient for AI evaluations. However, independent auditors and regulators are crucial for public trust and for ensuring the accuracy and integrity of assessments.

Heidy Khlaaf’s work in the field of AI safety and engineering is just one example of the remarkable contributions women are making in the AI revolution. As more women like Khlaaf are recognized and given their well-deserved spotlight, it is hoped that the tech and AI industries will become more inclusive and diverse, leading to greater advancements and breakthroughs in the field.