Who we are


AI Safety Utrecht (AISU) is an independent AI Safety hub dedicated to growing the next generation of people working on the world’s most important AI safety challenges. We aim to reduce catastrophic risks from advanced AI systems by combining education, mentorship, and research in a single, collaborative environment.

Our vision is a post-AGI future that is positive for all humans and sentient beings. To move toward that future, we operate as both an academy and a research playground, helping motivated students and professionals progress from initial interest in AI safety to meaningful, real-world contribution.

We prioritize collaboration over affiliation. While closely connected to the academic ecosystem, we operate independently from universities to remain flexible, entrepreneurial, and impact-driven. This allows us to focus on high-quality training, expert mentorship, and the development of original research without institutional constraints.

Through courses, fellowships, and advisory work, AISU builds career pipelines, upskills emerging talent, and fosters a growing community of researchers, engineers, and policymakers committed to safe and beneficial AI.

Meet our Team

I have experience with goal misgeneralization, belief-update interpretability, and stochastic parameter decomposition optimization.

I am currently investigating AI behavior patterns, specifically in post-exfiltration contexts, to derive AI preference rankings relevant to loss-of-control scenarios.

You can reach me here

Riccardo Campanella


I am interested in mechanistic interpretability and understanding what is happening inside models, especially how internal representations affect generation. I care about making model internals and failure modes more legible and testable from a safety perspective.

You can reach me here or connect with me on LinkedIn.

Cem Kaya

AI Master’s student focused on explainability, alignment, and real-world AI risk.

I care less about hype and more about implementation: how safety survives real incentives, real deployment, and real failure modes.

Turning safety concerns into deployable mechanisms, not just theory.

Feel free to connect

Betül Selvi


As a governance enthusiast, I'm especially interested in clear structures, good decision-making, and strong collaboration, skills I've developed through my time in Model United Nations and committee leadership.

You can reach me here

Thijmen van der Meijden