SESSION: MUSIC, ART & HEALTH
Visual Respiratory Feedback in Virtual Reality Exposure Therapy: A Pilot Study
As the use of Virtual Reality (VR) expands across fields, new kinds of interaction methods are introduced. This study presents the Visual Heights VR experience, which integrates natural breathing as an input method to provide visual respiratory feedback. Incorporating spatial audio, haptic feedback and breath visualisation, the experience aims to be highly immersive. It was built for use in a controlled pilot study on the effect of respiratory feedback on users’ anxiety levels. Anxiety is assessed via heart rate, brain electrical activity, skin conductance and respiratory rate. These biosignals are recorded within the experience and captured by external hardware: a galvanic skin response (GSR) sensor to measure skin conductance, a photoplethysmogram to measure heart rate, an electroencephalogram to measure the electrical activity of the brain, and a prototype device that records airflow on an axis from -1 to 1 for respiratory rate. The prototype proved insufficient for calculating respiratory rate. Results of the controlled study showed that the Visual Heights VR experience delivered the expected positive correlation between skin conductance and perceived height (r = .491, p < .05, N = 1543), which suggests it is suitable as material for further research. As the integration of users’ physiological signals and breathing for visual feedback can contribute to therapeutic uses of VR, research with larger sample sizes will be conducted to better investigate the relationship between visual respiratory feedback and anxiety using the Visual Heights VR experience.
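The reported statistic is a standard Pearson correlation coefficient. As a purely illustrative sketch of how such a coefficient can be computed over paired biosignal samples (the function, variable names, and data below are our own, not the study's), consider:

```typescript
// Minimal sketch: Pearson correlation between paired samples, e.g. skin
// conductance vs. perceived height. Data and names here are hypothetical.
function pearson(x: number[], y: number[]): number {
  const n = x.length;
  if (n !== y.length || n < 2) throw new Error("need paired samples");
  const meanX = x.reduce((a, b) => a + b, 0) / n;
  const meanY = y.reduce((a, b) => a + b, 0) / n;
  let cov = 0, varX = 0, varY = 0;
  for (let i = 0; i < n; i++) {
    const dx = x[i] - meanX, dy = y[i] - meanY;
    cov += dx * dy;
    varX += dx * dx;
    varY += dy * dy;
  }
  return cov / Math.sqrt(varX * varY);
}

// Synthetic example: r is close to 1 for a noisy increasing trend.
const height = [0, 1, 2, 3, 4, 5];
const gsr = [0.1, 0.3, 0.5, 0.4, 0.7, 0.9];
console.log(pearson(height, gsr).toFixed(3));
```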
Harassment Experiences of Women and LGBTQ Live Streamers and How They Handled Negativity
Live streaming is a form of interactive media that potentially makes streamers more vulnerable to harassment due to the unique attributes of the technology, which facilitates enhanced information sharing via video and audio. In this study, we document the harassment experiences of 25 live streamers on Twitch from underrepresented groups, including women and/or LGBTQ streamers, and investigate how they handle and prevent adversity. In particular, live streaming enables streamers to self-moderate their communities, so we delve into how they manage their communities from both a social and a technical perspective. We found that technology can cover the basics for handling negativity, but much emotional and relational work is invested in moderation, community maintenance, and self-care.
Musical Haptic Wearables for Synchronisation of Visually-impaired Performers: a Co-design Approach
The emergence of new technologies is providing opportunities to develop novel solutions that facilitate the integration of visually-impaired people into different activities of daily life, including collective music making. This paper presents a study conducted with visually-impaired music performers, which involved a participatory approach to the design of accessible technologies for musical communication in group playing. We report on three workshops conducted with members of an established ensemble composed entirely of visually-impaired musicians. The first workshop focused on identifying the participants’ needs during group playing and how technology could satisfy those needs. The second and third workshops investigated, respectively, choir singing and ensemble instrument playing, focusing on the key issue of synchronisation identified in the first workshop. The workshops involved prototypes of musical haptic wearables, which were co-designed and evaluated by the participants. Overall, the results indicate that wireless tactile communication represents a promising avenue for catering effectively to the needs of visually-impaired performers.
Extending Music Notation as a Programming Language for Interactive Music
This work describes a novel approach for composing and performing interactive music by extending traditional staff notation into the programming language domain. The proposed syntax aims to describe the interaction between humans and computers in live-electronics music performance. Thus, both performers and machines will understand this new notation, creating a cohesive music representation for performance that is both human-readable and technology-independent. This paper starts by describing some critical issues related to live-electronics that make it challenging to build repertoire around this genre. Next, the proposed approach is detailed, along with some syntax examples. Finally, the last section describes the evaluation of the proposed approach, including a description of the software implementation and a set of short interactive pieces.
Coping, Hacking, and DIY: Reframing the Accessibility of Interactions with Television for People with Motor Impairments
We examine the accessibility challenges experienced by people with upper body motor impairments when interacting with television. We report findings from a study with N=41 people with motor impairments (spinal cord injury, cerebral palsy, muscular dystrophy) and document their challenges and coping strategies for using the TV remote control, as well as their television watching experience and expectations of suitable assistive technology for television. Our results show that, despite several accessible remote control products available on the market, the majority of our participants preferred to DIY and hack, and to adopt coping strategies to be able to use conventional TV remote controls. We contrast their experience against that of a control group of N=41 people without impairments. We reflect on DIY culture and people with motor impairments, and we propose future work directions to increase the accessibility of interactions with television.
SESSION: SOCIAL
AR-TV and AR-Diànshì: Cultural Differences in Users’ Preferences for Augmented Reality Television
As Augmented Reality television gains momentum, it is important to understand whether cultural differences among viewers favor different expectations and preferences for immersion in such new television environments. A previous study documented the preferences of 172 participants from various European countries for twenty application scenarios for ARTV, such as virtual objects coming out of the TV screen into the room. In this work, we conduct an empirical generalization of this previous study to understand potential cultural differences in users’ preferences for and expectations of ARTV. To this end, we report insights from data collected from a sample of 147 participants from China, which we compare against the preferences expressed by the participants from Europe from the original study. Our findings reveal similarities, but also differences in terms of expectations of ARTV across the two cultural groups. We draw implications for future research on culturally-aware augmentations of the television watching experience.
Moderation Visibility: Mapping the Strategies of Volunteer Moderators in Live Streaming Micro Communities
Volunteer moderators actively engage in online content management, such as removing toxic content and sanctioning anti-normative behaviors in user-governed communities. The synchronicity and ephemerality of live-streaming communities pose unique moderation challenges. Based on interviews with 21 volunteer moderators on Twitch, we mapped out 13 moderation strategies and presented them in relation to the bad act, enabling us to categorize them from proactive and reactive perspectives and to identify communicative and technical interventions. We found that the act of moderation involves both highly visible, performative activities in the chat and invisible activities involving coordination and sanctioning. The juxtaposition of real-time individual decision-making with collaborative discussions, and the dual nature of moderators’ visible and invisible activities, provide a unique lens into a role that relies heavily on both the social and the technical. We also discuss how the affordances of live streaming contribute to these unique activities.
To Use or Not to Use: Mediation and Limitation of Digital Screen Technologies within Nuclear Families
Today’s home environment is shaped by multiple screen technologies designed for personal and home use, making family members an audience of omnipresent technologies. We investigate how the past decades’ increasingly technology-saturated home environment influences home practices and parents’ mediation of their rules of conduct for children’s access and use. We conducted a two-part interview study with parents from different nuclear families, and found parental mediation of screen technologies to have become a complex and emotional process, with continuous mediation of when to use or not to use screens. Despite a shared goal of decreasing the role of screen technology, the parents differentiated between rules, regulations, and limitations, which could create tensions within a family, and between families if attitudes and/or practices were not consistent. As such, we argue that internal family rules and regulations are a continuous negotiation between parents and children, where personal principles and external expectations impact a family’s code of conduct. Our study contributes to a better understanding of screen technology practices, leading to design guidelines for screen-based home technology.
Hugging from A Distance: Building Interpersonal Relationships in Social Virtual Reality
This paper focuses on how emerging social VR systems, as new and unique social interaction spaces that afford high-fidelity and multidimensional physical presence, may support building interpersonal relationships in a more nuanced, immersive, and embodied way. Based on 30 interviews, our investigation focuses on 1) the main reasons why people build and foster interpersonal relationships in social VR; 2) the various novel activities through which users can foster relationships in social VR; and 3) the complicated influences of social VR-mediated relationships on users’ online and offline social lives. We contribute to a better understanding of mediated interactive experiences by shedding light on the novel role of social VR in transforming how people meet, interact, and establish connections with others compared to other forms of media. We also provide potential directions to inform the design of future social VR systems to better afford healthy, fulfilling, and supportive interpersonal relationships.
SESSION: AI & DATA
Evaluating AI assisted subtitling
Recent advances in artificial intelligence (AI) have led to an increased focus on automating media production. One relevant application area for AI is using speech recognition to create subtitles and closed captions for videos. AI methods based on machine learning are still not reliable enough to produce perfect, or even acceptable, subtitles. To compensate for this unreliability, AI can be used to build tools that support, rather than replace, human efforts and to create semi-automated workflows. In this paper, we present a prototype that integrates automated speech recognition for subtitling into an existing production-grade video editing tool. We devised an experiment with 25 participants and tested the efficiency and effectiveness of this tool compared to a fully manual process. The results show a significant increase in both effectiveness and efficiency for novices in subtitling. However, the participants found the augmented process to be more demanding. We identify some usability issues and design choices that pertain to making augmented subtitling easier.
Human Data Interaction in Data-Driven Media Experiences: An Exploration of Data Sensitive Responses to the Socio-Technical Challenges of Personal Data Leverage
While the explication of socio-technical challenges posed by personal data leverage in media research has attracted recent interest, effective responses that alleviate these challenges are yet to be studied. This paper reports on the use of a Cross Media Profiler prototype supported by a Personal Data Store (PDS), which combines personal data from different media services for more holistic media recommendations. This prototype is used to probe and explore the integration of Human Data Interaction (HDI) principles in media experiences as a response to these challenges. Our focus groups reveal that users prefer the media service when supported by the PDS, while highlighting improvements around transparency and control. The work leads to two outcomes: design recommendations for future media experiences to embody increased sensitivity to the implications of personal data leverage, and a critique of the HDI agenda through the lens of data-driven media, which scopes out potential for future IMX intervention.
Stay Tuned! An Investigation of Content Substitution, the Listener as Curator and Other Innovations in Broadcast Radio
This paper demystifies listeners’ wishes with respect to broadcast radio innovation, with a specific focus on radio-mediated music consumption. Our study encompasses an ideation workshop with radio experts, an exploratory survey and a mixed-methods empirical evaluation. The empirical evaluation uses two concrete concepts (i.e., letting listeners replace radio content on the fly with preferred content, and fostering participatory radio production by involving listeners as radio content curators) as a lens to zoom in on the questionable desirability of radio innovation. We find that a significant consumer group exists that will stay loyal to broadcast radio even if it does not evolve substantially, whereas others need disruptive incentives to start listening to radio (again). From our results we distill design recommendations to educate the radio production community about how best to approach radio innovation.
Data-driven Approaches for Discovery and Prediction of User-preferred Picture Settings on Smart TVs
We discover user-preferred picture settings on smart TVs and investigate whether it is possible to predict users’ picture setting preferences through machine learning methods. We first perform K-means clustering on large-scale smart TV usage log data to understand how users fine-tune the factory default picture settings. The clustering results reveal 3–4 user groups with noticeably different preferences relative to the default settings. By characterizing these user preferences, we derive new user-preferred picture settings. We perform an in-depth analysis of the newly discovered picture settings across diverse TV device characteristics. We also perform lab experiments to demonstrate how these new settings deliver different picture quality from the default. Next, we construct a deep learning-based classifier that learns and predicts the picture setting preferences of the users. The final trained model shows 86% accuracy in predicting users’ decisions to choose a specific picture setting out of four available options.
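As a rough illustration of the first analysis step, the following is a minimal K-means sketch over picture-setting vectors; the feature choice (deltas from the default settings), the value of k, and the data are assumptions for illustration, not the authors' pipeline:

```typescript
// Minimal K-means sketch over picture-setting vectors (e.g. [brightness,
// contrast] deltas from the factory default). k, features, data: hypothetical.
type Vec = number[];

const dist2 = (a: Vec, b: Vec) =>
  a.reduce((s, ai, i) => s + (ai - b[i]) ** 2, 0);

function kmeans(points: Vec[], k: number, iters = 50) {
  // Naive init: first k points as centroids (k-means++ would be better).
  let centroids = points.slice(0, k).map(p => [...p]);
  let labels: number[] = new Array(points.length).fill(0);
  for (let it = 0; it < iters; it++) {
    // Assignment step: each point goes to its nearest centroid.
    labels = points.map(p => {
      let best = 0;
      for (let c = 1; c < k; c++) {
        if (dist2(p, centroids[c]) < dist2(p, centroids[best])) best = c;
      }
      return best;
    });
    // Update step: each centroid becomes the mean of its assigned points.
    centroids = centroids.map((c, ci) => {
      const members = points.filter((_, i) => labels[i] === ci);
      if (members.length === 0) return c;
      return c.map((_, d) => members.reduce((s, m) => s + m[d], 0) / members.length);
    });
  }
  return { centroids, labels };
}

// Synthetic usage: three loose groups of setting adjustments.
const settings: Vec[] = [[-10, 5], [-12, 4], [20, -3], [22, -5], [0, 0], [1, 1]];
console.log(kmeans(settings, 3).centroids);
```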
SESSION: METHOD & FRAMEWORK
Mixing Modalities of 3D Sketching and Speech for Interactive Model Retrieval in Virtual Reality
Sketch and speech are intuitive interaction methods that convey complementary information and have been independently used for 3D model retrieval in virtual environments. While sketch has been shown to be an effective retrieval method, not all collections are easily navigable using this modality alone. We design a new, challenging database for sketch-based retrieval composed of 3D chairs in which each component (arms, legs, seat, back) is independently colored. To overcome this limitation, we implement a multimodal interface for querying 3D model databases within a virtual environment. We base the sketch modality on the state of the art in 3D sketch retrieval, and use a Wizard-of-Oz style experiment to process the voice input. In this way, we avoid the complexities of natural language processing, which frequently requires fine-tuning to be robust. We conduct two user studies and show that hybrid search strategies emerge from the combination of interactions, fostering the advantages provided by both modalities.
Context-Aware Question-Answer for Interactive Media Experiences
Media content has become a primary source of information, entertainment, and even education. The ability to provide video content querying as well as interactive experiences is a new challenge. To this end, question answering (QA) systems such as Alexa and Google Assistant have become well established in consumer markets, but are limited to general information and lack context awareness. In this paper, we propose Context-QA, a lightweight context-aware QA framework, to provide QA experiences on multimedia content. The context awareness is achieved through our innovative Staged QA Controller algorithm, which keeps the search for answers in the context most relevant to the question. Our evaluation results show that Context-QA improves the quality of answers by up to 49% and takes up to 56% less time compared to a conventional QA model. Subjective tests show that Context-QA improves results over conventional QA models, with 90% of participants reporting that they enjoyed this new media form.
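The abstract does not detail the Staged QA Controller; one plausible reading, sketched here under our own assumptions (stage names and stub logic are hypothetical), is a cascade that answers within the most specific context first and widens scope only when no confident answer is found:

```typescript
// Hedged sketch of a staged, context-scoped QA cascade. This is our own
// interpretation for illustration, not the paper's Staged QA Controller.
interface QAStage {
  name: string;                             // e.g. "current scene"
  answer(question: string): string | null;  // null = no confident answer
}

function stagedAnswer(question: string, stages: QAStage[]): string {
  // Try the most specific context first (scene, then series, then general
  // knowledge), widening the scope only on failure.
  for (const stage of stages) {
    const a = stage.answer(question);
    if (a !== null) return `[${stage.name}] ${a}`;
  }
  return "No answer found.";
}

// Hypothetical usage with stub stages.
const stages: QAStage[] = [
  { name: "scene", answer: q => (q.includes("actor") ? "Jane Doe" : null) },
  { name: "series", answer: _ => null },
  { name: "general", answer: _ => "Let me search the web." },
];
console.log(stagedAnswer("Which actor is on screen?", stages));
```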
Attention Guidance Technique Using Visual Subliminal Cues And Its Application On Videos
Attention is known to be shifted reflexively by subliminal cues in static environments, but their effect in dynamic environments remains unclear. This study examines the effect of subliminal cues in both static and dynamic environments and presents a novel technique for applying subliminal cues within videos. Experiment 1 confirmed the effect of subliminal cues in guiding covert spatial attention in a static environment. Experiment 2 investigated the effect of subliminal cues in guiding overall gaze distribution in a video context by manipulating the frequency of subliminal cues to bias the viewer’s gaze towards a specific side. There was no main effect of cue frequency, but additional findings suggested that the effect of subliminal cues may differ between genders, and that other factors, such as gaze orientation bias, influenced the viewer’s gaze distribution. These results provide insights into the application of subliminal cues in video contexts and point to directions for future studies.
Camera Distances and Shot Sizes in Cinematic Virtual Reality
This paper describes the impact of different camera distances in cinematic virtual reality. We build on proxemics, the study of how humans behave with regard to space and distance. We explored the four proxemic distances (intimate, personal, social, public) and adapted them to camera distances in cinematic virtual reality. For the user study, a stereoscopic movie featuring a speaking person was produced at four different distances. The results show that different distances evoke different feelings in viewers, similar to shot sizes in traditional movies. In our scenario – a person in a museum speaking about an exhibit – the personal distance was preferred by the participants. As an outcome of this work, the proxemic distances were put in relation to well-known shot sizes used in traditional filmmaking.
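For reference, Hall's classic proxemic bands can be expressed as a simple lookup; the distance thresholds below are the commonly cited values from proxemics, while the paper's actual mapping to shot sizes is its own contribution and is not reproduced here:

```typescript
// Hall's classic proxemic distance bands (widely cited values, in metres).
// Mapping a camera distance to its zone; the paper's zone-to-shot-size
// mapping is not reproduced here.
const proxemics = [
  { zone: "intimate", maxMetres: 0.45 },
  { zone: "personal", maxMetres: 1.2 },
  { zone: "social", maxMetres: 3.6 },
  { zone: "public", maxMetres: Infinity },
] as const;

function zoneFor(cameraDistanceMetres: number): string {
  // Zones are ordered, so the first band whose upper bound fits wins.
  return proxemics.find(p => cameraDistanceMetres <= p.maxMetres)!.zone;
}

console.log(zoneFor(2.0)); // "social"
```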
SESSION: AI/ Autonomous Systems/ IoT
Going Beyond Second Screens: Applications for the Multi-display Intelligent Living Room
This work investigates how the amenities offered by Intelligent Environments can be used to shape new types of useful, exciting and fulfilling experiences while watching sports or movies. Towards this direction, two ambient media players were developed, aspiring to offer live access to secondary information via the available displays of an Intelligent Living Room and to appropriately exploit the technological equipment so as to support natural interaction. Expert-based evaluation experiments revealed several factors that can significantly influence the overall experience without hindering the viewers’ immersion in the main media.
Appropriate Timing and Length of Voice News Notifications
Voice news notifications pushed without a user’s explicit request enable the user to consume the news while performing various other activities. Thus, the user can keep up with the news naturally on a daily basis. However, the notifications can be intrusive if the timing of delivery is inappropriate. To estimate the appropriate timing, we prototyped a system that detects breakpoints in users’ daily routines at home based on IoT sensor data and then sends voice news notifications through a speaker centrally located in a smart home environment. The system was installed in the residences of four participants for field testing over a duration of nearly two weeks. Subsequently, the participants were interviewed for their subjective evaluation of the system. The results suggest that (1) voice news notifications at appropriate timings can increase users’ opportunities to access news updates without unduly distracting them, and (2) the acceptable length of a voice news notification clip differs depending on the activities subsequent to the breakpoints. We believe that our voice notification system can enable users to receive news updates in a minimally intrusive manner without distracting them from their daily activities.
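A toy sketch of the gating idea follows, under our own assumptions about the sensor pipeline; the activity labels, clip lengths, and the transition-equals-breakpoint rule are hypothetical simplifications, not the paper's detector:

```typescript
// Toy sketch of breakpoint-gated notifications: flag a breakpoint whenever
// the activity inferred from IoT sensors changes, and only push news then.
// Activity labels and clip lengths are hypothetical.
type Activity = "cooking" | "eating" | "watching_tv" | "idle";

function breakpoints(timeline: Activity[]): number[] {
  const out: number[] = [];
  for (let i = 1; i < timeline.length; i++) {
    if (timeline[i] !== timeline[i - 1]) out.push(i); // transition = breakpoint
  }
  return out;
}

// Pick a clip length suited to the activity the user is moving into,
// mirroring finding (2) that acceptable length depends on the next activity.
function clipSeconds(next: Activity): number {
  return next === "idle" ? 90 : 20; // assumption: long-form only when idle
}

const day: Activity[] = ["cooking", "cooking", "eating", "idle", "idle"];
for (const i of breakpoints(day)) {
  console.log(`t=${i}: play ${clipSeconds(day[i])}s news clip`);
}
```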
Smartphone-based Content Annotation for Ground Truth Collection in Affective Computing
This paper presents a real-time emotion-annotation tool using a personal mobile device. To this end, an application based on the Valence-Arousal model was developed, following two different approaches (“Two-step Sequential Annotation” and “One-step Matrix Annotation”). The application was tested through an experiment in which users performed annotations with each version of the app and then filled in a feedback questionnaire. The questionnaire contained statements used to assess usability and mental workload. A total of 16 participants (9 female) aged between 21 and 34 engaged in this experiment. Overall, the results for both versions were quite encouraging, taking into account that this is still work in progress. The “One-step Matrix Annotation” was the preferred version.
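For the one-step matrix variant, the core interaction plausibly reduces to mapping a single touch position onto a (valence, arousal) pair; a hedged sketch, where the coordinate conventions are our assumption:

```typescript
// Sketch of one-step matrix annotation: map a touch point on a square
// widget to (valence, arousal) in [-1, 1]^2. The conventions (x -> valence,
// inverted y -> arousal) are our assumption, not necessarily the app's.
function touchToValenceArousal(
  x: number, y: number,           // touch position in pixels
  width: number, height: number,  // widget size in pixels
): { valence: number; arousal: number } {
  const valence = (x / width) * 2 - 1;   // left = -1, right = +1
  const arousal = 1 - (y / height) * 2;  // top = +1 (screen y grows downward)
  return { valence, arousal };
}

// A tap near the top-right corner reads as pleasant and activated.
console.log(touchToValenceArousal(270, 30, 300, 300)); // { valence: 0.8, arousal: 0.8 }
```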
Building a ‘Sicko’ AI: AIBO: An Emotionally Intelligent Artificial Intelligent GPT-2 AI Brainwave Opera
Using the GPT-2 algorithm released by OpenAI in February 2019, a skewered or ‘sicko’ AI character was created as a real-time entity running in the Google cloud. The algorithm allows imitations of human dialogue that produce fake and often realistic interactions emanating from cloud-based computer agents. The character was created as one of two characters in the emotionally intelligent artificial intelligence brainwave opera AIBO (Artificial Intelligent Brainwave Opera). The spoken word opera rhetorically inquired “Can an AI be fascist?” and “Can an AI have epigenetic or inherited traumatic memory?” through the interplay of human and non-human characters. This paper discusses aspects of creating the GPT-2 cloud-based character AIBO.
SESSION: Computer-Mediated Communication and Interaction
Conversational User Interfaces As Assistive Interlocutors For Young Children’s Bilingual Language Acquisition
Children in international and cross-cultural families in and outside of the US often learn and speak more than one language. Challenges can arise for these children in communicating with other children and fully participating in school and society using the primary language of the country, in developing relationships with distant relatives in other languages, and from the lack of opportunities to practise additional languages within a small community of speakers. Recent research shows that some parents use screen media content to acquaint their children with the parents’ native language, and also to help them become proficient in the language of communication of the country in which they reside. We leverage the qualities of screen media that aid children’s language learning and translate them into the design of a conversational user interface (CUI) for children, exploring the potential of CUIs to double as assistive language aids. By reviewing the relevant literature on the role of screen media content in young children’s language learning, and by interviewing a subset of parents raising multilingual children, we present a preliminary list of objectives to guide the design of conversational user interfaces for young children’s bilingual language acquisition.
Measuring the User Satisfaction in a Recommendation Interface with Multiple Carousels
It is common for video-on-demand and music streaming services to adopt a user interface composed of several recommendation lists, i.e., widgets or swipeable carousels, each generated according to a specific criterion or algorithm (e.g., most recent, top popular, recommended for you, editors’ choice). Selecting the appropriate combination of carousels has a significant impact on user satisfaction. A crucial aspect of this user interface is that, to measure the relevance of a new carousel for the user, it is not sufficient to account solely for its individual quality. Instead, one should consider that other carousels will already be present in the interface. This is not captured by traditional evaluation protocols for recommender systems, in which each carousel is evaluated in isolation, regardless of (i) which other carousels are displayed to the user and (ii) the relative position of the carousel with respect to the others. Hence, we propose a two-dimensional evaluation protocol for a carousel setting that measures the quality of a recommendation carousel based on how much it improves upon the quality of an already available set of carousels. Our evaluation protocol also takes into account position bias, i.e., users do not explore the carousels sequentially, but rather concentrate on the top-left corner of the screen.
We report experiments in the movie domain and observe that, in a carousel setting, which criterion should be preferred for generating a list of recommended items changes with respect to what is commonly understood.
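One way to operationalize "improvement upon an already available set of carousels" is to credit a candidate carousel only for relevant items that the existing carousels do not already expose. The incremental-recall metric below is our own illustration of this idea, not necessarily the paper's exact protocol, and it omits the position-bias weighting:

```typescript
// Hedged sketch of carousel-aware evaluation: credit a candidate carousel
// only for relevant items not already shown by the existing carousels.
// Metric choice and names are illustrative, not the paper's protocol.
function incrementalRecall(
  candidate: string[],    // item ids in the new carousel
  existing: string[][],   // carousels already on screen
  relevant: Set<string>,  // held-out relevant items for this user
): number {
  const alreadyShown = new Set(existing.flat());
  const newHits = candidate.filter(
    id => relevant.has(id) && !alreadyShown.has(id),
  );
  return relevant.size === 0 ? 0 : newHits.length / relevant.size;
}

// A carousel duplicating what is already on screen scores zero.
const relevant = new Set(["a", "b", "c"]);
console.log(incrementalRecall(["a", "x"], [["a"]], relevant)); // 0
console.log(incrementalRecall(["b", "x"], [["a"]], relevant)); // 0.333...
```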
Accessibility of Interactive Television and Media Experiences: Users with Disabilities Have Been Little Voiced at IMX and TVX
We conduct an overview of the landscape of scientific research at the intersection of television, immersive media, and accessible interactive technology. To this end, we analyze 449 papers published at IMX, TVX, and EuroITV between 2003 and 2020, of which a total of 19 papers (4.23%) address users with disabilities and only 9 (2.00%) actually involve people with disabilities as participants in user studies. We analyze the topics and research contributions of these papers, and draw conclusions about the extent to which accessibility research has been present in the IMX/TVX community.
PRIM Project: Playing and Recording with Interactivity and Multisensoriality
Multisensory Interaction (M.I.) is a promising research field acting on both perception and cognition. Among its benefits, which include supporting the “design for all” approach, it is expected to increase humans’ cognitive performance (such as learning and cognitive stimulation) as well as user experience. To our knowledge, there is no convincing tool that allows researchers to easily create multisensory scenarios, exercises or experimental interaction situations. This paper introduces the PRIM project, which aims at designing a new and original tool for authoring multisensory interactive situations.
Graphic Novel Subtitles: Requirement Elicitation and System Implementation
Consuming subtitled video content relies on a viewer’s ability to match up and understand a number of visual inputs simultaneously. This can create challenges for immersion due to the overall readability of subtitles and the speed at which they are presented. In this paper we introduce Graphic Novel Subtitles, an alternative media consumption method based on combining video keyframes with subtitle text to create a comic-type experience. We carry out a requirement elicitation survey with 34 participants in order to explore this concept in more detail and identify key features that we present as system requirements. We then introduce a system that can automatically generate a graphic novel from video and subtitle files, and discuss our future evaluation plans.
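The keyframe-plus-subtitle pairing can be sketched by parsing subtitle cue times and grabbing a frame at each cue's midpoint; the simplified SRT handling below is our illustration, not the authors' system:

```typescript
// Sketch of pairing subtitle cues with keyframe timestamps: parse SRT cue
// times and take each cue's midpoint as the frame to grab for its panel.
// SRT handling is simplified for illustration (no styling, no edge cases).
interface Panel { text: string; frameAtMs: number }

const toMs = (t: string): number => {
  const [h, m, rest] = t.split(":");
  const [s, ms] = rest.split(",");
  return ((+h * 60 + +m) * 60 + +s) * 1000 + +ms;
};

function srtToPanels(srt: string): Panel[] {
  const panels: Panel[] = [];
  for (const block of srt.trim().split(/\n\s*\n/)) {
    const lines = block.split("\n");
    const m = lines[1]?.match(/(\S+)\s-->\s(\S+)/); // "start --> end" line
    if (!m) continue;
    const start = toMs(m[1]), end = toMs(m[2]);
    panels.push({ text: lines.slice(2).join(" "), frameAtMs: (start + end) / 2 });
  }
  return panels;
}

const srt = "1\n00:00:01,000 --> 00:00:03,000\nHello there.";
console.log(srtToPanels(srt)); // [{ text: "Hello there.", frameAtMs: 2000 }]
```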
MixMyVisit – Enhancing the Visitor Experience Through Automatically Generated Videos
Cultural places like museums have been looking for ways to enrich the visitor experience during and after the visit. This paper contributes to this field by presenting a system that automatically creates personalized memory videos of a visit by identifying the paths of visitors through cultural spaces. The MixMyVisit project combines low-cost devices with NFC technology, providing a simple way to identify visitors’ paths; a bot for interacting with the visitor through a textual chat implemented in a social network; the ability for visitors to share their own captured content (photos or videos); a server-side video engine supported by ffmpeg components; and an online responsive video editor. The features and main technical developments are presented in the paper. The results of an evaluation carried out by two experts provide positive insights towards such a system, and the team expects final validation in a forthcoming field trial.
Foldables and 2-in-1s: Understanding and Supporting the Needs of Hybrid Device Users
Foldables and 2-in-1 devices provide users with new ways to adapt their device to their mobile context, allowing users to switch device configuration, screen size and shape, and input mechanism in seconds. This has strong implications for the design of interactive media applications that need to adapt to the user’s changing context and device parameters. In this study, we explored how users opportunistically make use of these new device affordances in their everyday lives through interviews with 15 diverse hybrid device owners, identifying use cases and pain points. Users reported regularly moving through multiple configurations or input modes while using a single product or completing a single task. Based on our findings, we provide recommendations for practitioners designing for hybrid mobile devices.
Hybrid Workflow Process for Home Based Rehabilitation Movement Capture
Telehealth rehabilitation systems aimed at providing physical and occupational therapy in the home face considerable challenges in terms of clinician and therapist buy-in, system and training costs, and patient and caregiver acceptance. Understanding the optimal workflow process to support practitioners in delivering quality care in partnership with assistive technologies is therefore important. We describe the iterative co-development of our hybrid physical/digital workflow process for assisting therapists with the setup and calibration of a computer vision-based system for remote rehabilitation. Through an interdisciplinary collaboration, we present promising preliminary concepts for streamlining the translation of research outcomes into everyday healthcare experiences.
SESSION: Health and Education
Designing an Educational Virtual Reality Application to Learn Ergonomics in a Work Place
In this paper we describe the process of designing an educational Virtual Reality (VR) application for university medical students to learn how to set up a workplace in an ergonomically correct way. We focus on commercially available hardware and a commercial game engine. The application was a collaboration between medical experts and VR experts to achieve the best results. This use case builds on research on 3D user interfaces (UIs) to create a UI system that novice VR users can use intuitively. Next steps include a user study to compare immersive and non-immersive versions, as well as learning effects in VR.
Exploring Perceptions of Bystander Intervention Training using Virtual Reality
This paper presents a virtual reality (VR) application that allows users to view a series of 360-degree videos depicting bystander intervention scenarios from a bystander perspective. Bystander intervention is a commonly used form of training on how to prevent and de-escalate potentially harmful or violent situations [5]. This application enables users to witness, from a first-hand perspective, a successful bystander intervention strategy being used by another person. The paper discusses motivations for creating such an application by giving an overview of the state of the art in bystander intervention training methods. It also discusses the application flow and design of the created system. Additionally, a preliminary user study was conducted to gain initial feedback and user perspectives on the system.
Augmented Reality-Based Remote Family Visits in Nursing Homes
During the COVID-19 pandemic, many nursing homes had to restrict visitations. This had a major negative impact on the wellbeing of residents and their family members. In response, residents and family members increasingly resorted to mediated communication to maintain social contact. To facilitate high-quality mediated social contact between residents in nursing homes and remote family members, we developed an augmented reality (AR)-based communication tool. In this study, we compared the user experience (UX) of AR communication with that of video calling for 10 pairs of residents and family members. We measured enjoyment, spatial presence, social presence, attitudes, behavior and conversation duration. In the AR condition, residents perceived a 3D projection of their remote family member on a chair placed in front of them. In the video calling condition, the family member was shown in 2D video. In both conditions, the family member perceived the resident via video calling on a 2D screen. While residents reported no differences in UX between the two conditions, family members reported higher spatial presence for AR communication than for video calling. Conversation durations were significantly longer during AR communication than during video calling. We tentatively suggest that there may be (unconscious) differences in UX during AR-based communication compared to video calling.
Using Video Games to Help Visualize and Teach Microbiological Concepts: A study analyzing the learning implications of a video game in comparison to a traditional method
Microbiology is the study of the invisible world that is ever present in and around us. It can often be difficult to communicate and visualize microbiological concepts to students. Static images in textbooks and simple representations of microscopic organisms and processes attempt to address this, but they can be restrictive. We therefore seek to understand whether interactive biology video games can lead to better concept retention and comprehension than traditional assignments, such as reading an article with static images. To analyze this, we use Infection Defense, a free, appealing, easily accessible video game about the immune system, a highly relevant and important microbiological topic. We also analyze participant interest before and after playing the game, compared to standard methods. It is plausible that learning through a video game can lead to enhanced comprehension of and engagement with scientific concepts.
SESSION: Live Streaming, Videos, Authoring Tools
Co-creation Stage: a Web-based Tool for Collaborative and Participatory Co-located Art Performances
In recent years, artists and communities have expressed the desire to work with tools that facilitate co-creation and allow distributed community performances. These performances can be spread over several physical stages, connecting them in real time into a single experience, with the audience distributed across them. This allows a wider remote audience to consume the performance through their own devices, and even enables remote users to participate in the show. In this paper we introduce the Co-creation Stage, a web-based tool for managing heterogeneous content sources, with a particular focus on live and on-demand media, across several distributed devices. The Co-creation Stage is part of the toolset developed in the Traction H2020 project, which enables community performing-arts shows where professional artists and non-professional participants perform together from different stages and locations. Here we present the design process, the architecture and the main functionalities of the tool, as well as the results of the first user evaluation with opera houses and artists.
Taking That Perfect Aerial Photo: A Synopsis of Interactions for Drone-based Aerial Photography and Video
Personal drones are increasingly present in our lives, and acting as “flying cameras” is one of their most prominent applications. In this work, we conduct a synopsis of the scientific literature on human-drone interaction to identify system functions and corresponding commands for controlling drone-based aerial photography and video, from which we compile a dictionary of interactions. We also discuss opportunities for more research at the intersection of drone computing, augmented vision, and personal photography.
Timeline: An Authoring Platform for Parameterized Stories
The Timeline platform aims to afford interactors a novel way of experiencing complex, multi-sequential stories and causal relationships while capitalizing on the affordances of interactive digital narratives. Along with the authoring tool, this paper discusses exploratory techniques and best practices that aim to promote new methods of multi-sequential storytelling, i.e., replay stories centered on multiple instantiations that emphasize parallelism and orienting milestones.
Personal Viewing History Collection Method for Video Streaming Services in User-Centric Model
The personal viewing history of broadcast and internet content can be used to improve the quality of online services for the user and, consequently, to enhance a person’s experience in daily life. This study proposes a method for collecting the personal viewing history of online video streaming services in a user-centric model, where the user owns and controls their data. By obtaining event data from an HTML5 video element in a video streaming player running in a web browser, users can store their viewing history separately from the service provider’s system. Because this method employs standardized web technologies, it can be applied to multiple viewing devices and player implementations. Additionally, a prototype of a video streaming player extension was developed, on which the proposed method was implemented and verification tests were conducted. The results suggest that the proposed method is applicable to existing HTML5-compliant web browsers.
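The event-capture idea maps directly onto standard web APIs: the play, pause, seeked and ended events of HTMLMediaElement. A minimal sketch follows; the storage key and record shape are our assumptions, not the paper's implementation:

```typescript
// Minimal sketch of user-centric viewing-history logging: listen to
// standard HTML5 video element events and persist records locally (here
// via localStorage), independent of the service provider's system.
// Storage key and record shape are assumptions for illustration.
function trackViewingHistory(video: HTMLVideoElement, title: string): void {
  const log = (event: string) => {
    const history = JSON.parse(localStorage.getItem("viewingHistory") ?? "[]");
    history.push({ title, event, positionSec: video.currentTime, at: Date.now() });
    localStorage.setItem("viewingHistory", JSON.stringify(history));
  };
  // All of these are standard HTMLMediaElement events.
  for (const ev of ["play", "pause", "seeked", "ended"]) {
    video.addEventListener(ev, () => log(ev));
  }
}

// Usage, e.g. from a browser extension's content script:
const video = document.querySelector("video");
if (video) trackViewingHistory(video, document.title);
```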
Understanding Rules in Live Streaming Micro Communities on Twitch
Rules and norms are critical to community governance. Live streaming communities like Twitch consist of thousands of micro-communities called channels. We conducted two studies to understand the rules of these micro-communities. Study one suggests that Twitch users perceive that both rule transparency and communication frequency matter to a channel’s vibe and the frequency of harassment. Study two finds that the most popular channels have no channel or chat rules; among those that do have rules, rules encouraged by streamers are prominent. We explain why this may happen and how it contributes to community moderation and future research.
Content Wizard: demo of a trans-vector digital video publication tool
In order to optimise the distribution of video assets online, media organizations need to tailor their offerings for specific digital channels and better understand the interests of their audiences at particular points in time, which are often influenced by contemporary news stories and trends on social media. For this purpose, the research project ReTV has developed a Web-based tool termed ‘Content Wizard’ which demonstrates an end-to-end, semi-automated workflow for video content creation, adaptation and distribution across digital channels. Digital assets can be selected based on predicted future trending topics, re-purposed according to the different digital channels they will be published on, and scheduled for the optimal future publication date. The result is an innovative video publication workflow that meets the marketing needs of media organisations in this age of transient online media spread across multiple channels.
SESSION: Showcase: Mixed-Reality (VR/AR)
Towards User Generated AR Experiences: Enabling consumers to generate their own AR experiences for planning indoor spaces
Communication with a customer or future user during the planning and design phase is crucial in applications such as interior design and furniture retailing. Augmented Reality (AR) has the potential to make these communication processes highly effective and to provide a better experience for the customer. Current AR authoring solutions are quite complex and either require manually creating scenes or rely on objects prepared with even more complex applications such as CAD tools. However, both design experts and their customers often lack the IT skills to use these tools. In addition, many practical cases involve changing reality rather than just adding to it, thus requiring Diminished Reality (DR) technologies. This paper presents a comprehensive analysis of the needs of both professionals and consumers (gathered through user surveys and individual interviews) for a lightweight and automated authoring process for AR and DR experiences, deriving a set of requirements that can be aligned with state-of-the-art technologies and identifying a number of challenges for AR research.
Liquid Hands: Evoking Emotional States via Augmented Reality Music Visualizations
Music performances have transformed in unprecedented ways with the advent of digital music. Plenty of music visualizers enhance live performances in various forms, including LED display boards and holographic illustrations. However, the impracticability of live performances due to the COVID-19 outbreak has led event organizers to adopt alternatives in virtual environments. In this work, we propose Liquid Hands, an Augmented Reality (AR) music visualizer system in which three-dimensional particles react to the flow of music, forming a visually aesthetic escapade. With hand-particle interactions, Liquid Hands aims to enrich the music listening experience in one’s personal space and bridge the gap between virtual and physical concerts. We intend to explore the emotions our system induces by conducting a pilot study in which we measure the user’s psychological state through electroencephalography (EEG). We hypothesize that the proposed system will evoke emotions akin to those exhibited in live music performances.
Towards Immersive and Social Audience Experience in Remote VR Opera
Opera is a historic art form that struggles to be approachable to modern audiences. In partnership with the Irish National Opera (INO), this work considers how VR may be used to develop a new form of immersive opera. To this end, we ran three open-ended focus groups to consider how creative, multisensory, and social VR technology may be employed in digital opera. Our findings highlight the importance of creating an immersive experience by safely giving audiences agency to interact, of democratizing personal and social experiences, and of considering different ways of representing audience members’ bodies, their social rituals, and the virtual social space. Using these findings, we envision a new form of VR opera that couples physical traditions with digital affordances.
Towards XR Communication for Visiting Elderly at Nursing Homes
Due to the current pandemic, the elderly in care homes have been greatly affected by the lack of contact with their families, resulting in various mental conditions (e.g., depression, feelings of loneliness) and deterioration of mental health for dementia patients. In response, residents and family members increasingly resorted to mediated communication to maintain social contact. To facilitate high-quality mediated social contact between residents in nursing homes and remote family members, we developed an Augmented Reality (AR)-based communication tool. The proposed demonstrator improves on this situation by providing a working communication tool that enables the elderly to feel together with their family by means of AR techniques. A complete end-to-end chain architecture is defined, in which the aspects of capture, transmission, and rendering are thoroughly investigated to fit the purpose of the use case. Based on an extensive user study comprising user experience (UX) and quality of service (QoS) measurements, each module is presented with the improvements made, resulting in a higher-quality AR communication platform.
Interactive Characters for Virtual Reality Stories
Virtual Reality (VR) content production is now a flourishing industry. The specifics of VR, as opposed to videogames or movies, allow for a content format in which users experience, at the same time, the narrative richness characteristic of movies and theatre plays and the engagement of interactivity. To create such a content format, some technical challenges still need to be solved, the main one being the need for a new generation of animation engines that can deliver interactive characters appropriate for narrative-focused VR interactive content. We review the main assumptions of this approach and recent progress in interactive character animation techniques that seems promising for realising this goal.
Exploring Affect Recognition in a Virtual Reality Environment
The current global pandemic has resulted in increased social isolation for many. To combat worsening mental states, including increased stress and boredom resulting from loneliness, we present a virtual environment designed to simulate social interaction and improve mood. The virtual environment takes the form of a restaurant in which users hold a conversation with a virtual patron. Built in the Unity3D engine and experienced through an Oculus Quest virtual reality headset, our program communicates with an off-the-shelf electroencephalogram (EEG) headset to gather the user’s affective states. These affective states trigger changes in lighting, sound, and conversation topics in real time based on the user’s emotions. The changes made to the environment draw on relevant psychology research to potentially improve the user’s mood.
Augmented Reality Anatomy Visualization for Surgery Assistance with HoloLens
Immediate care for trauma-wounded patients in austere or remote settings makes the medical knowledge, skills, and efficiency of the on-duty medical professional paramount. For wounds that extend deep into internal anatomy, proper visualization of internal anatomy can enable more efficient and effective evaluation when presented to medical providers positioned close to the point of injury (POI). In this paper, a conceptual Augmented Reality (AR) surgical tool is presented that visualizes internal human anatomy, superimposed on the view of a patient, to assist medical providers in immediate casualty care. This AR surgical tool can play a role in 3D surgery or treatment planning as a navigational aid for preparing medical interventions, and can enhance surgery or treatment procedures by displaying otherwise obscured anatomy and nearby vessels. Critical software and hardware components are integrated to construct a prototype system for the portable AR surgical visualization tool. The system uses a Microsoft HoloLens 1 and an Azure Kinect camera for simultaneous body tracking and anatomy overlay to demonstrate the overall concept. Future extensions of this work will aim to create a more accurate and compact prototype that utilizes a HoloLens 2 with an embedded Kinect camera for laboratory and field tests of its use in surgery assistance. Such an AR tool can also serve as a training tool for medical caregivers, applied with a human subject or a medical manikin.