Video4IMX 2025

2nd International Workshop on Video for Immersive Experiences

Workshop Website: 2nd International Workshop on Video for Immersive Experiences

The aim of the Video4IMX workshop is to address the growing importance of classical (linear 2D), interactive (non-linear), 360 and volumetric video assets in the creation of immersive experiences, in combination with state-of-the-art generative AI models.

Richly granular, semantically expressive descriptive metadata about video assets is necessary for their discovery, adaptation and re-use in immersive content experiences, both in automated ways (e.g. automated insertion of a journalist’s video recordings into an immersive experience for a breaking news story) and in semi-automated ways (e.g. creatives searching for and re-using videos as part of a theatrical or cultural immersive experience).

The descriptive metadata needs to be extracted (adapted to the particular characteristics of interactive, 360 and volumetric video), modelled (according to shared vocabularies and knowledge models) and managed (in appropriate storage tools with expressive query support) before it can be meaningfully used to discover and organise video assets for new, innovative data-driven immersive content experiences. 

There should also be means to adapt, summarise or remix video content according to its purpose in the immersive experience, even to the extent that various modalities could serve as input to generative AI models to generate, for example, video from text or images, or 3D objects or scenes from video.

The workshop will solicit the latest research and development in all areas around the extraction, modelling and management of descriptive metadata for video as well as approaches to adapt or convert video according to its purpose and use in an immersive experience. It aims to support the growth of a community of researchers and practitioners interested in creating an ecosystem of tools, specifications and best practices for video discovery, adaptation, summarization or generation, particularly in the context of video (re-)use in immersive experiences. 

Topics for the workshop include, but are not limited to:

  • Extraction and modelling of descriptive metadata about traditional 2D video, 360 video and volumetric video (decomposition, semantic representation, categorization, annotation, emotion/mood extraction, etc.);
  • Tools and algorithms for the (semi-automatic) adaptation, summarisation or remixing of any type of video asset (traditional, interactive, 360, volumetric), particularly for re-use in immersive content;
  • Generative AI (foundation vision models, vision-language models) for visual understanding and the extraction of descriptive metadata from traditional, interactive, 360 or volumetric video;
  • Generative AI for the creation of video assets from other modal inputs such as textual prompts or image sets;
  • Generative AI for transformation of or between any type of video, such as generating (possibly multimodal) video summaries, or converting an input video into immersive content (3D objects or scenes);
  • Artificial intelligence and machine learning for volumetric video content analysis, understanding and retrieval to facilitate XR content generation;
  • Methods for explainable AI for visual content understanding and for immersive multimedia applications (e.g. game design);
  • Examples and use cases for the usage of video (esp. 360 or volumetric), or immersive content generated from video, in immersive experiences;
  • Evaluations of user experience with video (esp. 360 or volumetric), or immersive content generated from video, as part of an immersive experience;
  • Multimedia tools and algorithms for multi-modal immersive simulations.

The workshop chairs have experience with successful workshops held at IMX 2019 and 2021 (“DataTV”), where a range of topics related to data-driven personalisation of television was presented, as reported in the workshop proceedings at http://datatv2019.iti.gr/ and http://datatv2021.iti.gr, and which also led to a Special Issue on Data-Driven Personalisation of Television Content in the Multimedia Systems journal (https://link.springer.com/article/10.1007/s00530-022-00972-0). We also successfully held the first Video4IMX workshop at ACM IMX 2024 (https://video4imx2024.iti.gr/).

Workshop Format

We will be happy to support a hybrid half-day event in accordance with IMX 2025 policies. While the workshop organisers will be on site, speakers and participants can choose between on-site and online participation; we will seek a good balance between the two. We have experience of organising workshops with an online component.

Type(s) of submissions/contributions that will be solicited

Video4IMX foresees two types of submission, both handled through the same dedicated EasyChair page. Full papers will be given an oral presentation at the workshop, while short papers may be presented as either a poster or a demo:

Full papers

These are to be between 7000 and 9000 words in the SIGCHI Proceedings Format, with a 150-word abstract, describing original research which has been completed or is close to completion and which covers at least one of the workshop topics. Accepted papers will be presented in the oral session.

Short papers

These are 3500-5500 words in the SIGCHI Proceedings Format, with a 150-word abstract, describing work in progress or demos to be included in the poster and demo session. Submitters will be asked to provide links to the work that will be presented, outline in the short paper why it is relevant to a workshop topic, and identify whether the submission is for a poster or a demo to be shown at the workshop. We expect new concepts and early work-in-progress to be reported here.

Organizers:

  • Lyndon Nixon, MODUL Technology, Austria. Contact: nixon@modultech.eu
  • Vasileios Mezaris, CERTH-ITI, Greece.
  • Stefanos Vrochidis, CERTH-ITI, Greece.

Video4IMX 2025 Workshop TPC Chairs:

Lyndon Nixon (lyndon.nixon@modul.ac.at)

Vasileios Mezaris (bmezaris@iti.gr)