Spotlight on Women in AI: Interview with Ewa Luger, Co-Director of the Institute of Design Informatics

Introduction:
TechCrunch is launching a series of interviews highlighting remarkable women who have contributed to the AI revolution. In this interview, Ewa Luger, co-director at the Institute of Design Informatics and co-director of the Bridging Responsible AI Divides (BRAID) program, discusses her work and provides insights into the challenges and future of AI.

Exploring Social, Ethical, and Interactional Issues in AI:
Ewa Luger’s research focuses on social, ethical, and interactional issues in data-driven systems, particularly AI systems, with specific interests in design, the distribution of power, spheres of exclusion, and user consent. Her work has been widely recognized; her most-cited paper concerns the user experience of voice assistants. However, she is proudest of her ongoing work with the BRAID program, which is designed to connect arts and humanities knowledge to policy, regulation, industry, and the voluntary sector. Luger believes the arts and humanities have been overlooked in the AI field and aims to change that.

Addressing the Gender Gap in Tech and AI:
Luger acknowledges the challenges faced by women in the male-dominated tech industry. She highlights that these issues are not limited to industry but are also prevalent in academia. Luger emphasizes the need for better gender balance and cultural changes to support women in reaching their full potential. She shares her experience of having to push herself out of her comfort zone and set firm boundaries to navigate these challenges.

Advice for Women Entering the AI Field:
Luger encourages women to go for opportunities that allow them to level up, even if they don’t feel 100% qualified. Research shows that men tend to apply for roles they think they could do, while women tend to apply only for roles in which they already feel competent. Luger also stresses the need for increased gender representation in AI research hubs and the importance of creating a more inclusive environment.

Pressing Issues in AI:
From Luger’s perspective, the most pressing issues in AI are related to the immediate and downstream harms that may occur due to a lack of careful design, governance, and use of AI systems. She highlights the environmental impact of large-scale models as a heavily under-researched issue. Luger also discusses the challenge of reconciling the speed of AI innovations with the regulatory climate’s ability to keep up. Additionally, she emphasizes how the democratization of AI, while positive in some aspects, has led to challenges in copyright, attribution, and its impact on democratic systems.

Issues AI Users Should Be Aware Of:
Luger points out that trust is a significant issue for AI users. She mentions students’ use of large language models to generate academic work and warns about incorrect citations and lost nuance. Veracity and authenticity are also concerns: as AI models become more sophisticated, it becomes harder to distinguish human-written from machine-generated content. Luger advises users to check the source of what they read and to develop the literacies needed to make informed judgments.

Building AI Responsibly:
According to Luger, responsible AI requires algorithmic impact assessments, regulatory compliance, and processes that actively seek to do good rather than just minimizing risk. She emphasizes the need for diverse representation in AI design and training systems architects to be aware of moral and socio-technical issues. Luger also highlights the importance of governance, co-design, stress-testing systems, and providing mechanisms for opt-out, contestation, and recourse.

Investors’ Role in Promoting Responsible AI:
Luger acknowledges that responsibility can be overshadowed by the pursuit of capital gain in the AI industry. However, she believes that acting responsibly should be the baseline expectation, and she emphasizes the need to align values and incentives among investors so that responsible AI is prioritized. Luger also highlights the importance of considering the impact on marginalized groups, who may lack the resources to contest negative outcomes.

Conclusion:
Ewa Luger’s insights shed light on the social, ethical, and interactional aspects of AI. She highlights the need for greater gender balance and cultural changes in the tech industry. Luger also emphasizes the pressing issues in AI, the importance of user awareness, and responsible AI practices. Her expertise and experience contribute to the ongoing discussion on how to navigate the challenges and build a responsible AI ecosystem.