Women in AI: Meet Claire Leibowicz, Expert in AI and Media Integrity at PAI

In the fast-paced world of artificial intelligence (AI), it’s important to recognize the remarkable women who have made significant contributions to the field. TechCrunch is shining a spotlight on these women with a series of interviews, and one individual who stands out is Claire Leibowicz, the head of the AI and media integrity program at the Partnership on AI (PAI).

The Partnership on AI is an industry group backed by tech giants including Amazon, Meta, Google, and Microsoft, and it is committed to the responsible deployment of AI technology. Leibowicz not only leads the AI and media integrity program but also oversees PAI’s AI and media integrity steering committee.

Leibowicz’s journey into the AI field may seem unconventional. Her interest in human behavior led her to explore questions about trust, conflict, and belief systems. She began her academic career in cognitive science research, but soon realized that technology was shaping the answers to these questions. This realization brought her into computer science classrooms, where she discovered the importance of interdisciplinary perspectives in understanding the social impact of technologies like AI.

Leibowicz believes that AI has the potential to impact various fields, from education to healthcare to art. This intellectual variety intrigued her and presented an opportunity to make a meaningful impact on society.

One of Leibowicz’s proudest achievements is her work on synthetic media. Six years ago, before generative AI became widely known, PAI started exploring the possibilities of multistakeholder AI governance. They collaborated with organizations from civil society, industry, and media to shape Facebook’s Deepfake Detection Challenge, a competition for building models that can detect AI-generated media. This collaborative effort highlighted how experts from different backgrounds can contribute to addressing complex issues like deepfake detection.

In 2021, PAI published a set of normative guidance called “Responsible Practices for Synthetic Media,” which has garnered support from 18 organizations with diverse backgrounds. These institutions have committed to publishing transparency reports on how they navigate the synthetic media field. Leibowicz finds projects like this, which offer tangible guidance and show how it is put into practice across institutions, especially meaningful.

As a woman working in the male-dominated tech and AI industries, Leibowicz has faced challenges but has also found support from both male and female mentors throughout her career. She emphasizes the importance of finding people who support and challenge her, as well as focusing on shared interests and discussing the questions that drive the AI field.

Leibowicz encourages women seeking to enter the AI field to prioritize technical training while also recognizing the value of expertise from other fields like civil rights and politics. Balancing representation in technical roles and embracing diverse perspectives is essential for creating a more inclusive AI ecosystem.

When it comes to the pressing issues facing AI as it evolves, Leibowicz highlights questions surrounding truth, trust, and the control of information. With the rise of AI-generated content, it becomes increasingly difficult to discern what is real and what is not. Leibowicz believes that incorporating varied perspectives and ensuring that AI governance represents the interests of stakeholders from around the world, including the public, is crucial.

For AI users, Leibowicz advises maintaining healthy skepticism toward claims that AI’s problems are entirely new, while acknowledging that AI can exacerbate existing issues. It is also important to see through the hyperbolic and inaccurate messaging around AI: it is less a revolutionary force than an augmenting one.

To responsibly build AI, Leibowicz emphasizes the importance of involving diverse institutions from civil society, industry, media, academia, and the public. Responsible AI development requires collaboration and consideration of various perspectives. For example, responsible development and deployment of synthetic media involve not only technology companies but also journalists, human rights defenders, and artists, each with their own unique concerns.

Investors also play a crucial role in pushing for responsible AI. Instead of adhering to the “move fast and break things” mantra, Leibowicz suggests adopting a mentality of “move purposefully and fix things.” Investors should allow more time and space for portfolio companies to incorporate responsible AI practices without stifling progress.

Claire Leibowicz’s journey in the AI field has been driven by a passion for understanding human behavior and its intersection with technology. Her work at PAI highlights the importance of collaboration, diverse perspectives, and responsible AI practices. As AI continues to evolve, it is individuals like Leibowicz who are shaping its future and ensuring that it serves the public interest.