
Safe(r) AI

Time: Wed 2026-02-25 15.00

Location: Flexi studio (4618) and Zoom

Video link: https://kth-se.zoom.us/j/68242240137

Participating: Henriette Cramer


Abstract:
Against the backdrop of a growing push for EU tech autonomy, there still appear to be significant differences in AI conversations across research and practice communities. Safety teams in industry and societal organizations alike are increasingly under pressure, and while principles for safe(r) AI have gained widespread recognition, challenges remain in their practical application. These include methodological gaps when translating the literature to specific domains, navigating organizational challenges, and disconnects between professional communities. Popular methods such as AI red teaming lack standardized approaches, creating uncertainty for teams working on proper evaluation and development.
In this session, we'll discuss challenges practitioners face when implementing safety measures and potential consequences for education and collaborative research. Drawing on insights from the San Francisco Bay Area and European contexts, we'll explore how regional differences shape conversations.

Bio: Henriette Cramer is a researcher and practitioner focused on AI quality and safety, based in San Francisco (papermoon.ai). She has extensive experience with recommender systems, search, (ro)bots, voice, and ads, spanning both product and research. In her prior corporate role as a Director at Spotify, she led the company's Responsible AI strategy, set up data teams, and was part of the Nordic. She led research for Spotify voice interaction ("Play me something"), delivered at-scale instrumentation and metrics at Yahoo, and conducted research at the Swedish Institute of Computer Science. She holds multiple patents and 60+ research publications, and earned a PhD on Trust in AI from the University of Amsterdam.
linkedin.com/in/henriettecramer / papermoon.ai / https://scholar.google.com/citations?user=2e1_pcgAAAAJ