Keynotes

*** More information on keynotes available soon ***

IMX Keynotes

Tupac Martir

Tupac Martir is an award-winning, multidisciplinary digital artist whose work redefines the boundaries of lighting design, performative reality, and the integration of cutting-edge technology across music, theatre, fashion, opera, and immersive exhibitions.

With over two decades of experience at the intersection of technology, science, and art, Martir’s recent practice explores the creative potential of Large Language Models (LLMs), Virtual Reality (VR), and Virtual Production (VP).

His pioneering projects—such as Unique and Haita (the latter premiered at the 2025 Bogotá Audiovisual Market)—seamlessly merge entertainment, media, and the visual arts. In Unique, Martir used narrative technology (AI and XR) for a real-time content adaptation performance that explored the topic of identity.

In 2024, Tupac received the Profile Award for Outstanding Achievement in Innovation for his lighting design on Find Your Eyes, a celebrated work by British choreo-photolist Benji Reid.

He also brought his expertise to the Virtual Production team for Amazon Prime’s Fallout series, which earned significant critical acclaim and two Emmy Awards in 2024.

Most recently, in December 2025, his immersive mixed-reality experience, Cadence of Altered Illusions (CAI), was showcased and validated at UnitedXR Europe as part of the EU Transmixr project.

Title: Technology as a character

When: June 10th or 11th

Abstract: In a world turning ever faster towards a new digital reality, the performance world needs to see the possibilities that exist within it, not as a space but as a new medium for audiences. This means making new pieces that are not mere recreations of physical or existing works, but that use the new tools to create a new type of performance. Tupac will explain some of the things they are creating at Satore that are revolutionising how technology can become an active actor within the performance, not just an enabler.

 

Nonny de la Peña

*** Bio coming soon ***

Title: The body is along for the ride: the power and considerations of embodiment in constructing immersive stories

When: June 10th or 11th

Abstract: Nonny de la Peña, PhD, will discuss some of the key considerations about the experience of the body in constructing extended reality stories, including the embodied edit, duality of presence, and spatial narratives. Looking back at her early career as a journalist, she will discuss how journalism’s best practices have informed her thinking throughout her now years-deep virtual and augmented reality career. She will also discuss work using new technologies, including 3D and 4D Gaussian splatting, for both headset and WebXR deployment.

 

Co-located workshop keynotes

Aljosa Smolic

Aljosa Smolic is Professor in the Computer Science Department of the Lucerne University of Applied Sciences and Arts in Switzerland and Co-Head of the Immersive Realities Research Lab. Previously, he was Professor of Creative Technologies at Trinity College Dublin, heading the research group V-SENSE; Senior Research Scientist and Group Leader at Disney Research Zurich; and Scientific Project Manager and Group Leader at Fraunhofer HHI. He is also a Co-Founder of the company Volograms, which commercializes volumetric video technology. Prof. Smolic’s expertise is in the broad area of visual computing (covering image/video processing, computer vision, and computer graphics), with a focus on immersive XR technologies. He has published 300+ scientific papers and book chapters, holds 35+ patents, and has received several awards and recognitions for his research.

Title: AI-based Volumetric Content Creation for Immersive XR Experiences and Production Workflows

When: June 9th, within the ISIM 2026 workshop

Abstract: Capture and 3D reconstruction of real-world objects, scenes, environments, and people for the creation of digital 3D assets is an important task in XR and media production. Classical visual computing methods include photogrammetry for static content and volumetric video for dynamic content. Recently, AI-based methods such as Neural Radiance Fields (NeRF) and Gaussian Splatting have disrupted the scientific field and generated great interest in the production industry. However, integrating such content into standard production workflows is not straightforward, due to the specific nature of the data. This talk will highlight examples of AI-based volumetric content creation and their application in XR and media production workflows, as developed in different projects at HSLU.