{"id":852,"date":"2021-06-24T17:38:54","date_gmt":"2021-06-24T17:38:54","guid":{"rendered":"http:\/\/imx.acm.org\/2021\/?page_id=852"},"modified":"2021-06-24T17:39:47","modified_gmt":"2021-06-24T17:39:47","slug":"proceedings","status":"publish","type":"page","link":"https:\/\/imx.acm.org\/2021\/index.php\/program\/proceedings\/","title":{"rendered":"IMX &#8217;21: ACM International Conference on Interactive Media Experiences"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-flow wp-block-group-is-layout-flow\">\n<div id=\"DLtoc\">\n         <div id=\"DLheader\">\n            <a class=\"DLcitLink\" title=\"Go to the ACM Digital Library for additional information about this proceeding\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/proceedings\/10.1145\/3452918\"><img decoding=\"async\" class=\"DLlogo\" alt=\"Digital Library logo\" height=\"30\" src=\"https:\/\/dl.acm.org\/specs\/products\/acm\/releasedAssets\/images\/footer-logo1.png\">\n               Full Citation in the ACM Digital Library\n               <\/a><\/div>\n         <div id=\"DLcontent\">\n            <h2>SESSION: MUSIC, ART &amp; HEALTH<\/h2>\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458799\">Visual Respiratory Feedback in Virtual Reality Exposure Therapy: A Pilot Study<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Deniz Mevlevio\u011flu<\/li>\n               <li class=\"nameList\">David Murphy<\/li>\n               <li class=\"nameList Last\">Sabin Tabirca<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> As the use of Virtual Reality (VR) expands across fields, new kinds of 
interaction\n                     methods are introduced. This study presents the Visual Heights VR experience that\n                     integrates natural breathing as an input method to provide visual respiratory feedback.\n                     Incorporating spatial audio, haptic feedback and breath visualisation, the experience\n                     aims to be highly immersive. The experience was designed for use in a controlled\n                     pilot study examining the effect of respiratory feedback on the user\u2019s anxiety levels.\n                     The user\u2019s anxiety is assessed by their heart rate, brain electrical activity, skin\n                     conductance and respiratory rate. These biosignals are recorded within the experience\n                     and captured by external hardware. The hardware comprised a Galvanic Skin Response sensor\n                     to measure skin conductance, a photoplethysmogram to measure heart rate, an Electroencephalogram\n                     to measure the electrical activity in the brain, and a prototype device that records\n                     airflow on an axis from -1 to 1 for respiratory rate. It was found that this\n                     prototype was not sufficient for calculating the respiratory rate. 
Results of the\n                     controlled study showed that the Visual Heights VR experience delivered the expected\n                     positive correlation between skin conductance and perceived height (r=.491, p &lt; .05,\n                     N=1543), which suggests it is suitable for use in further research.\n                     As the integration of the user\u2019s physiological signals and breathing for visual feedback\n                     can contribute to therapeutic uses of VR, research with larger sample sizes will be\n                     conducted to better investigate the relationship between visual respiratory feedback\n                     and anxiety using the Visual Heights VR experience.<\/p>\n                  	<\/div>\n            <\/div>\n            						\n            					\n            						\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458794\">Harassment Experiences of Women and LGBTQ Live Streamers and How They Handled Negativity<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Jirassaya Uttarapong<\/li>\n               <li class=\"nameList\">Jie Cai<\/li>\n               <li class=\"nameList Last\">Donghee Yvette Wohn<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  		\n                  <p> Live streaming is a form of interactive media that potentially makes streamers more\n                     vulnerable to harassment due to the unique attributes of the technology that facilitates\n                     enhanced information sharing via video and audio. 
In this study, we document the harassment\n                     experiences of 25 live streamers on Twitch from underrepresented groups including\n                     women and\/or LGBTQ streamers and investigate how they handle and prevent adversity.\n                     In particular, live streaming enables streamers to self-moderate their communities,\n                     so we delve into how they manage their communities from both a social\n                     and technical perspective. We found that technology can cover the basics for handling\n                     negativity, but much emotional and relational work is invested in moderation, community\n                     maintenance, and self-care.<\/p>\n                  	<\/div>\n            <\/div>\n            						\n            					\n            						\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458803\">Musical Haptic Wearables for Synchronisation of Visually-impaired Performers: a Co-design\n                  Approach<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Luca Turchet<\/li>\n               <li class=\"nameList\">David Baker<\/li>\n               <li class=\"nameList Last\">Tony Stockman<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  		\n                  <p> The emergence of new technologies is providing opportunities to develop novel solutions\n                     that facilitate the integration of visually-impaired people in different activities\n                     of our daily life, including collective music making. 
This paper presents a study\n                     conducted with visually-impaired music performers, which involved a participatory\n                     approach to the design of accessible technologies for musical communication in group\n                     playing. We report on three workshops that were conducted together with members of\n                     an established ensemble of solely visually-impaired musicians. The first workshop\n                     focused on the identification of the participants\u2019 needs during the activity of playing\n                     in groups and how technology could satisfy such needs. The second and third workshops\n                     investigated, respectively, the activities of choir singing and instrument playing\n                     in ensemble, focusing on the key issue of synchronisation that was identified in the\n                     first workshop. The workshops involved prototypes of musical haptic wearables, which\n                     were co-designed and evaluated by the participants. 
Overall, results indicate that\n                     wireless tactile communication represents a promising avenue to cater effectively\n                     to the needs of visually-impaired performers.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458807\">Extending Music Notation as a Programming Language for Interactive Music<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList Last\">Juan Carlos Martinez<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> This work describes a novel approach for composing and performing interactive music\n                     by extending the traditional staff notation to the programming language domain. The\n                     proposed syntax aims to describe the interaction between humans and computers in live-electronics\n                     music performance. Thus, both performers and machines will understand this new notation,\n                     creating a cohesive music representation for performance that is both human-readable\n                     and technology-independent. This paper starts by describing some critical issues related\n                     to live-electronics that make it challenging to build repertoire around this genre.\n                     Next, the proposed approach is detailed, along with some syntax examples. 
Finally,\n                     we describe the evaluation of the proposed approach, including the software implementation\n                     and a set of short interactive pieces.<\/p>\n                  	<\/div>\n            <\/div>\n            						\n            					\n            						\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458802\">Coping, Hacking, and DIY: Reframing the Accessibility of Interactions with Television\n                  for People with Motor Impairments<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Ovidiu-Ciprian Ungurean<\/li>\n               <li class=\"nameList Last\">Radu-Daniel Vatavu<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  		\n                  <p> We examine the accessibility challenges experienced by people with\n                     upper body motor impairments when interacting with television. We report findings\n                     from a study with N=41 people with motor impairments (spinal cord injury, cerebral\n                     palsy, muscular dystrophy) and document their challenges and coping strategies for\n                     using the TV remote control, as well as their television watching experience and expectations\n                     of suitable assistive technology for television. Our results show that, despite several\n                     accessible remote control products available on the market, the majority of our participants\n                     preferred to DIY and hack, and to adopt coping strategies to be able to use conventional\n                     TV remote controls. 
We contrast their experience against that of a control group with\n                     N=41 people without impairments. We reflect on the DIY culture and people with\n                     motor impairments, and we propose future work directions to increase the accessibility\n                     of interactions with television.<\/p>\n                  	<\/div>\n            <\/div>\n            						\n            					\n            <h2>SESSION: SOCIAL<\/h2>\n            						\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458801\">AR-TV and AR-Di\u00e0nsh\u00ec: Cultural Differences in Users\u2019 Preferences for Augmented Reality\n                  Television<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Irina Popovici<\/li>\n               <li class=\"nameList\">Radu-Daniel Vatavu<\/li>\n               <li class=\"nameList\">Pu Feng<\/li>\n               <li class=\"nameList Last\">Wenjun Wu<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  		\n                  <p> As Augmented Reality television gains momentum, it is important to understand whether\n                     cultural differences among viewers favor different expectations and preferences for\n                     immersion in such new television environments. A previous study documented the preferences\n                     of 172 participants from various European countries for twenty application scenarios\n                     for ARTV, such as virtual objects coming out of the TV screen into the room. 
In this\n                     work, we conduct an empirical generalization of this previous study to understand\n                     potential cultural differences in users\u2019 preferences for and expectations of ARTV.\n                     To this end, we report insights from data collected from a sample of 147 participants\n                     from China, which we compare against the preferences expressed by the participants\n                     from Europe from the original study. Our findings reveal similarities, but also differences\n                     in terms of expectations of ARTV across the two cultural groups. We draw implications\n                     for future research on culturally-aware augmentations of the television watching experience.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458796\">Moderation Visibility: Mapping the Strategies of Volunteer Moderators in Live Streaming\n                  Micro Communities<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Jie Cai<\/li>\n               <li class=\"nameList\">Donghee Yvette Wohn<\/li>\n               <li class=\"nameList Last\">Mashael Almoqbel<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> Volunteer moderators actively engage in online content management, such as removing\n                     toxic content and sanctioning anti-normative behaviors in user-governed communities.\n                     The synchronicity and ephemerality of live-streaming communities pose unique moderation\n                     challenges. 
Based on interviews with 21 volunteer moderators on Twitch, we mapped\n                     out 13 moderation strategies and presented them in relation to the bad act, enabling\n                     us to categorize from proactive and reactive perspectives and identify communicative\n                     and technical interventions. We found that the act of moderation involves highly visible\n                     and performative activities in the chat and invisible activities involving coordination\n                     and sanction. The juxtaposition of real-time individual decision-making with collaborative\n                     discussions and the dual nature of visible and invisible activities of moderators\n                     provide a unique lens into a role that relies heavily on both the social and technical.\n                     We also discuss how the affordances of live-streaming contribute to these unique activities.\n                     <\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458808\">To Use or Not to Use: Mediation and Limitation of Digital Screen Technologies within\n                  Nuclear Families<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Melanie Duckert<\/li>\n               <li class=\"nameList Last\">Louise Barkhuus<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> Today\u2019s home environment is affected by multiple screen technologies designed for\n                     personal and home use, making family members audience of the omnipresent technologies.\n                     We investigate how the past 
decades\u2019 increasingly technology-saturated home environment\n                     influences home practices and parents\u2019 mediation of their rules of conduct for children\u2019s\n                     access and use. We conducted a two-part interview study with parents from different\n                     nuclear families, and found parental mediation of screen technologies to have become\n                     a complex and emotional process with continuous mediation of when to use or not use\n                     screens. Despite a shared goal of decreasing the role of screen technology, the parents\n                     differentiated between rules, regulations, and limitations, which could create tensions\n                     within the family and between different families if attitudes and\/or practices were\n                     not consistent. As such, we argue that internal family rules and regulations are a continuous\n                     negotiation between parents and children, where personal principles and external expectations\n                     impact a family\u2019s code of conduct. 
Our study contributes to a better understanding\n                     of screen technology practices, leading to design guidelines for screen-based home\n                     technology.<\/p>\n                  	<\/div>\n            <\/div>\n            						\n            					\n            						\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458805\">Hugging from A Distance: Building Interpersonal Relationships in Social Virtual Reality<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Guo Freeman<\/li>\n               <li class=\"nameList Last\">Dane Acena<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  		\n                  <p> This paper focuses on how the emerging social VR systems, as new and unique social\n                     interaction spaces that afford high-fidelity and multidimensional physical presence,\n                     may support building interpersonal relationships in a more nuanced, immersive, and\n                     embodied way. Based on 30 interviews, our investigation focuses on 1) the main reasons\n                     why people build and foster interpersonal relationships in social VR; 2) various novel\n                     activities through which users can foster relationships in social VR; and 3) the complicated\n                     influences of social VR mediated relationships on users\u2019 online and offline social\n                     lives. We contribute to better understanding mediated interactive experiences by shedding\n                     light on the novel role of social VR in transforming how people meet, interact, and\n                     establish connections with others compared to other forms of media. 
We also provide\n                     potential directions to inform the design of future social VR systems to better afford\n                     healthy, fulfilling, and supportive interpersonal relationships.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            <h2>SESSION: AI &amp; DATA<\/h2>\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458792\">Evaluating AI assisted subtitling<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Than Htut Soe<\/li>\n               <li class=\"nameList\">Frode Guribye<\/li>\n               <li class=\"nameList Last\">Marija Slavkovik<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>Recent advances in artificial intelligence (AI) have led to an increased focus on\n                     automating media production. One relevant application area for AI is using speech\n                     recognition to create subtitles and closed captions for videos. The AI methods based\n                     on machine learning are still not sufficiently reliable in terms of producing perfect\n                     or acceptable subtitles. To compensate for this unreliability, AI can be used to build\n                     tools that support, rather than replace, human efforts and to create semi-automated\n                     workflows. In this paper, we present a prototype for including automated speech recognition\n                     for subtitling in an existing production-grade video editing tool. 
We devised an experiment\n                     with 25 participants and tested the efficiency and effectiveness of this tool compared\n                     to a fully manual process. The results show that there is a significant increase in\n                     both effectiveness and efficiency for novices in subtitling. Furthermore, the participants\n                     found the augmented process to be more demanding. We identify some usability issues\n                     and design choices that pertain to making augmented subtitling easier. <\/p>\n                  	<\/div>\n            <\/div>\n            						\n            					\n            						\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458797\">Human Data Interaction in Data-Driven Media Experiences: An Exploration of Data Sensitive\n                  Responses to the Socio-Technical Challenges of Personal Data Leverage<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Neelima Sailaja<\/li>\n               <li class=\"nameList\">Rhianne Jones<\/li>\n               <li class=\"nameList Last\">Derek McAuley<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  		\n                  <p>While explication of socio-technical challenges posed by personal data leverage in\n                     media research has been of interest recently, effective responses that alleviate them\n                     are yet to be studied. This paper reports the use of a Cross Media Profiler prototype\n                     supported by a Personal Data Store (PDS), which combines personal data from different media\n                     services for more holistic media recommendations. 
This prototype is used to probe\n                     and explore the integration of Human Data Interaction principles in media experiences,\n                     as a response to these challenges. Our focus groups reveal that users prefer the media\n                     service when supported by the PDS while highlighting improvements around transparency\n                     and control. The work leads to two outcomes: design recommendations for future media\n                     experiences to embody increased sensitivity to the implications of personal data leverage,\n                     and a critique of the HDI agenda through the lens of data-driven media, which scopes\n                     out potential for future IMX intervention.<\/p>\n                  	<\/div>\n            <\/div>\n            						\n            					\n            						\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458793\">Stay Tuned! An Investigation of Content Substitution, the Listener as Curator and\n                  Other Innovations in Broadcast Radio<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Maarten Wijnants<\/li>\n               <li class=\"nameList\">Eva Geurts<\/li>\n               <li class=\"nameList\">Hendrik Lievens<\/li>\n               <li class=\"nameList\">Peter Quax<\/li>\n               <li class=\"nameList Last\">Wim Lamotte<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  		\n                  <p> This paper demystifies listeners\u2019 wishes with respect to broadcast radio innovation\n                     (with a specific focus on radio-mediated music consumption). 
Our study encompasses\n                     an ideation workshop with radio experts, an exploratory survey, and a mixed-methods\n                     empirical evaluation. The empirical evaluation uses two concrete concepts (i.e., letting\n                     listeners on-the-fly replace radio content with preferred content and fostering participatory\n                     radio production by involving listeners as radio content curators) as a lens to zoom\n                     in on the questionable desirability of radio innovation. We learn that a significant\n                     consumer group exists that will stay loyal to broadcast radio even if it does not evolve\n                     substantially, whereas others need disruptive incentives to start listening to radio\n                     (again). From our results we distill design recommendations to educate the radio production\n                     community about how best to approach radio innovation.<\/p>\n                  	<\/div>\n            <\/div>\n            						\n            					\n            						\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458798\">Data-driven Approaches for Discovery and Prediction of User-preferred Picture Settings\n                  on Smart TVs<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Hosub Lee<\/li>\n               <li class=\"nameList Last\">Youngmin Park<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  		\n                  <p> We discover user-preferred picture settings on smart TVs and investigate whether\n                     it is possible to predict the users\u2019 picture setting preferences through machine learning\n                     methods. 
We first perform K-means clustering on large-scale smart TV usage log data\n                     to understand how users fine-tune the factory default picture settings. The clustering\n                     results reveal 3\u20134 user groups who have noticeably different preferences toward\n                     the default settings. By characterizing these user preferences, we come up with new\n                     user-preferred picture settings. We perform an in-depth analysis of the newly discovered\n                     picture settings regarding diverse TV device characteristics. We also perform lab\n                     experiments to demonstrate how these new settings deliver different picture quality\n                     from the default. Next, we construct a deep learning-based classifier that learns\n                     and predicts the picture setting preferences of the users. The final trained model\n                     shows 86% accuracy in predicting users\u2019 decisions to choose a specific picture setting\n                     out of four available options.<\/p>\n                  	<\/div>\n            <\/div>\n            						\n            					\n            <h2>SESSION: METHOD &amp; FRAMEWORK<\/h2>\n            						\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458806\">Mixing Modalities of 3D Sketching and Speech for Interactive Model Retrieval in Virtual\n                  Reality<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Daniele Giunchi<\/li>\n               <li class=\"nameList\">Alejandro Sztrajman<\/li>\n               <li class=\"nameList\">Stuart James<\/li>\n               <li class=\"nameList Last\">Anthony Steed<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n 
                 \t\t\n                  <p> Sketch and speech are intuitive interaction methods that convey complementary information\n                     and have been independently used for 3D model retrieval in virtual environments. While\n                     sketch has been shown to be an effective retrieval method, not all collections are\n                     easily navigable using this modality alone. We design a new challenging database for\n                     sketch comprised of 3D chairs where each of the components (arms, legs, seat, back)\n                     are independently colored. To overcome this, we implement a multimodal interface for\n                     querying 3D model databases within a virtual environment. We base the sketch on the\n                     state-of-the-art for 3D Sketch Retrieval, and use a Wizard-of-Oz style experiment\n                     to process the voice input. In this way, we avoid the complexities of natural language\n                     processing which frequently requires fine-tuning to be robust. We conduct two user\n                     studies and show that hybrid search strategies emerge from the combination of interactions,\n                     fostering the advantages provided by both modalities. 
<\/p>\n                  	<\/div>\n            <\/div>\n            						\n            					\n            						\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458795\">Context-Aware Question-Answer for Interactive Media Experiences<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Kyle Jorgensen<\/li>\n               <li class=\"nameList\">Zhiqun Zhao<\/li>\n               <li class=\"nameList\">Haohong Wang<\/li>\n               <li class=\"nameList\">Mea Wang<\/li>\n               <li class=\"nameList Last\">Zhihai He<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  		\n                  <p> Media content has become a primary source of information, entertainment, and even\n                     education. Providing video content querying as well as interactive experiences\n                     is a new challenge. To this end, question answering (QA) systems such as Alexa and\n                     Google Assistant have become quite established in consumer markets but are limited\n                     to general information and lack context awareness. In this paper, we propose Context-QA,\n                     a lightweight context-aware QA framework, to provide QA experiences on multimedia\n                     content. The context awareness is achieved through our innovative Staged QA Controller\n                     algorithm that keeps the search for answers in the context most relevant to the question.\n                     Our evaluation results show that Context-QA improves the quality of the answers by\n                     up to 49% and uses up to 56% less time compared to the conventional QA model. 
Subjective\n                     tests show that Context-QA improves results over conventional QA models, with 90% of\n                     participants reporting that they enjoyed this new media form.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458800\">Attention Guidance Technique Using Visual Subliminal Cues And Its Application On Videos<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Eugene Hwang<\/li>\n               <li class=\"nameList Last\">Jeongmi Lee<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> Attention is known to be shifted reflexively by subliminal cues in static environments,\n                     but their effect when applied in dynamic environments remains unclear. This study\n                     examines the effect of subliminal cues in both static and dynamic environments and\n                     presents a novel technique for applying subliminal cues within videos. Experiment 1\n                     confirmed the effect of subliminal cues in guiding covert spatial attention in a static\n                     environment. Experiment 2 investigated the effect of subliminal cues in guiding overall\n                     gaze distribution in a video context by manipulating the frequency of subliminal cues\n                     to bias the viewer\u2019s gaze towards a specific side. 
There was no main effect of cue\n                     frequency, but additional findings suggested that the effect of subliminal\n                     cues differed between genders, and that other factors, such as gaze orientation\n                     bias, influenced the viewer\u2019s gaze distribution. These results provide insights on\n                     the application of subliminal cues in video contexts and suggest directions for future\n                     studies.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3458804\">Camera Distances and Shot Sizes in Cinematic Virtual Reality<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Pia Carola Probst<\/li>\n               <li class=\"nameList\">Sylvia Rothe<\/li>\n               <li class=\"nameList Last\">Heinrich Hussmann<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> This paper describes the impact of different camera distances in cinematic virtual\n                     reality. For our findings, we took a closer look at proxemics, the study of how humans\n                     behave regarding space and distance. We explored the four proxemic distances (intimate,\n                     personal, social, public) and adapted them to camera distances in cinematic virtual\n                     reality. For the user study, a stereoscopic movie with a speaking person was produced\n                     for four different distances. The results show that different distances cause different\n                     feelings in viewers, similar to shot sizes in traditional movies. 
In our scenario &#8211;\n                     a person in a museum is speaking about an exhibit &#8211; the personal distance was preferred\n                     by the participants. As an outcome of this work, the proxemics distances were put\n                     in relation to well-known shot sizes used in traditional filmmaking.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            <h2>SESSION: AI\/ Autonomous Systems\/ IoT<\/h2>\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465486\">Going Beyond Second Screens: Applications for the Multi-display Intelligent Living\n                  Room<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Asterios Leonidis<\/li>\n               <li class=\"nameList\">Maria Korozi<\/li>\n               <li class=\"nameList\">Vasilios Kouroumalis<\/li>\n               <li class=\"nameList\">Emmanouil Adamakis<\/li>\n               <li class=\"nameList\">Dimitrios Milathianakis<\/li>\n               <li class=\"nameList Last\">Constantine Stephanidis<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>This work aims to investigate how the amenities offered by Intelligent Environments\n                     can be used to shape new types of useful, exciting and fulfilling experiences while\n                     watching sports or movies. 
To this end, two ambient media players were\n                     developed, aspiring to offer live access to secondary information via the available\n                     displays of an Intelligent Living Room, and to appropriately exploit the technological\n                     equipment so as to support natural interaction. Expert-based evaluation experiments\n                     revealed some factors that can influence the overall experience significantly, without\n                     hindering the viewers\u2019 immersion in the main media.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465488\">Appropriate Timing and Length of Voice News Notifications<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Hiromu Ogawa<\/li>\n               <li class=\"nameList\">Kinji Matsumura<\/li>\n               <li class=\"nameList Last\">Arisa Fujii<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>Voice news notifications pushed without a user&#8217;s explicit request enable the user\n                     to consume the news while they are performing various other activities. Thus, the\n                     user can keep up with the news naturally on a daily basis. However, the notifications\n                     can be intrusive if the timing of delivery is inappropriate. 
To estimate the appropriate\n                     timing, we prototyped a system that detects breakpoints in users&#8217; daily routines at\n                     home based on IoT sensor data and then sends voice news notifications through a speaker\n                     centrally located in a smart home environment. The system was installed in the residences\n                     of four participants for field testing over a duration of nearly two weeks. Subsequently,\n                     the participants were interviewed regarding their subjective evaluation of the system.\n                     The results suggest that (1) voice news notifications at appropriate timings can increase\n                     users&#8217; opportunities to access news updates without unduly distracting\n                     the user, and (2) the acceptable voice news notification clip length differs depending\n                     on the activities subsequent to the breakpoints. We believe that our voice notification\n                     system can be applied to enable users to receive news updates in a minimally intrusive manner\n                     without distracting them from their daily activities.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465505\">Smartphone-based Content Annotation for Ground Truth Collection in Affective Computing<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Gon\u00e7alo Filipe Duarte Salvador<\/li>\n               <li class=\"nameList\">Patr\u00edcia J. 
Bota<\/li>\n               <li class=\"nameList\">Vinoba Vinayagamoorthy<\/li>\n               <li class=\"nameList\">Hugo Pl\u00e1cido da Silva<\/li>\n               <li class=\"nameList Last\">Ana Fred<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> This paper presents a real-time emotion-annotation tool using a personal mobile device.\n                     To this end, an application based on the Valence-Arousal model was developed, following\n                     two different approaches (\u201cTwo-step Sequential Annotation\u201d and \u201cOne-step Matrix Annotation\u201d).\n                     The application was tested through an experiment in which users performed annotations\n                     with each version of the app and then filled in a feedback questionnaire. This questionnaire\n                     contained statements used to assess usability and mental workload. A total of\n                     16 participants (9 female) aged between 21 and 34 years engaged in this experiment.\n                     Overall, the results were quite encouraging for both versions, taking into account that\n                     this is still a work in progress. 
The \u201cOne-step Matrix Annotation\u201d was considered\n                     the preferred version.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3467814\">Building a \u2018Sicko\u2019 AI: AIBO: An Emotionally Intelligent Artificial Intelligent GPT-2 AI Brainwave Opera<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList Last\">Ellen Pearlman<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>Using the GPT-2 algorithm developed by OpenAI in February 2019, a skewered or \u2018sicko\u2019\n                     AI character was created as a real-time entity running in the Google cloud. The algorithm\n                     allows imitations of human dialogue that produce fake and often realistic interactions\n                     emanating from computer cloud-based agents. The character was created as one of two\n                     characters in the emotionally intelligent artificial intelligence brainwave opera\n                     AIBO (Artificial Intelligent Brainwave Opera). The spoken word opera rhetorically\n                     inquired \u201cCan an AI be fascist?\u201d and \u201cCan an AI have epigenetic or inherited traumatic\n                     memory?\u201d through the interplay of human and non-human characters. 
This paper discusses\n                     certain aspects involved in creating the GPT-2 cloud-based character AIBO.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            <h2>SESSION: Computer-Mediated Communication and Interaction<\/h2>\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465498\">Conversational User Interfaces As Assistive Interlocutors For Young Children\u2019s Bilingual\n                  Language Acquisition<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Neelma Bhatti<\/li>\n               <li class=\"nameList\">Timothy L. Stelter<\/li>\n               <li class=\"nameList\">Scott McCrickard<\/li>\n               <li class=\"nameList Last\">Aisling Kelliher<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> Children in international and cross-cultural families in and outside of the US often\n                     learn and speak more than one language. Challenges can arise for these children in\n                     terms of communicating with other children and being able to fully participate in\n                     school and society using the primary country language, in developing relationships\n                     with distant relatives in other languages, and with the lack of opportunities for\n                     practising additional languages within a small community of speakers. 
Recent research\n                     shows that some parents use screen media content to acquaint their children with the\n                     parents\u2019 native language, and also to help them become proficient in the language\n                     of communication in the country that they reside in. We leverage the qualities of\n                     screen media that aid children\u2019s language learning, and translate those\n                     qualities into the design of a CUI for children, exploring the potential of\n                     conversational user interfaces that can double as assistive language aids. By reviewing\n                     the relevant literature about the role of screen media content in young children\u2019s\n                     language learning, and interviewing a subset of parents raising multilingual children,\n                     we present a preliminary list of objectives to guide the design of conversational\n                     user interfaces for young children\u2019s bilingual language acquisition.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465493\">Measuring the User Satisfaction in a Recommendation Interface with Multiple Carousels<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Nicol\u00f2 Felicioni<\/li>\n               <li class=\"nameList\">Maurizio Ferrari Dacrema<\/li>\n               <li class=\"nameList Last\">Paolo Cremonesi<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> It is common for video-on-demand and music streaming services to adopt a user interface\n       
              composed of several recommendation lists, i.e., widgets or swipeable carousels, each\n                     generated according to a specific criterion or algorithm (e.g., most recent, top popular,\n                     recommended for you, editors\u2019 choice, etc.). Selecting the appropriate combination\n                     of carousels has a significant impact on user satisfaction. A crucial aspect of this\n                     user interface is that to measure the relevance of a new carousel for the user, it is\n                     not sufficient to account solely for its individual quality. Instead, it should be\n                     considered that other carousels will already be present in the interface. This is\n                     not considered by traditional evaluation protocols for recommender systems, in which\n                     each carousel is evaluated in isolation, regardless of (i) which other carousels are\n                     displayed to the user and (ii) the relative position of the carousel with respect\n                     to other carousels. Hence, we propose a two-dimensional evaluation protocol for a\n                     carousel setting that measures the quality of a recommendation carousel based\n                     on how much it improves upon the quality of an already available set of carousels.\n                     Our evaluation protocol also takes into account position bias, i.e., users do\n                     not explore the carousels sequentially, but rather concentrate on the top-left corner\n                     of the screen. 
<\/p> \n                  <p>We report experiments in the movie domain and observe that, under a carousel setting,\n                     which criterion should be preferred for generating a list of recommended\n                     items differs from what is commonly understood.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465485\">Accessibility of Interactive Television and Media Experiences: Users with Disabilities\n                  Have Been Little Voiced at IMX and TVX<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList Last\">Radu-Daniel Vatavu<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> We present an overview of the landscape of scientific research falling at the intersection\n                     of television, immersive media, and accessible interactive technology. To this end,\n                     we consider for our analysis 449 papers published at IMX, TVX, and EuroITV\n                     between 2003 and 2020, of which we report a total of 19 papers (4.23%) addressing\n                     users with disabilities and only 9 (2.00%) actually involving people with disabilities\n                     as participants in user studies. 
We analyze the topics and research contributions\n                     of these papers, and draw conclusions about the extent to which accessibility research\n                     has been present in the IMX\/TVX community.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465487\">PRIM Project: Playing and Recording with Interactivity and Multisensoriality<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">C\u00e9line Jost<\/li>\n               <li class=\"nameList\">Justin Debloos<\/li>\n               <li class=\"nameList\">Dominique Archambault<\/li>\n               <li class=\"nameList\">Brigitte Le P\u00e9v\u00e9dic<\/li>\n               <li class=\"nameList\">Jack Sagot<\/li>\n               <li class=\"nameList\">R\u00e9my Sohier<\/li>\n               <li class=\"nameList\">Charles Albert Tijus<\/li>\n               <li class=\"nameList\">Isis Truck<\/li>\n               <li class=\"nameList Last\">G\u00e9rard Uzan<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>Multisensory Interaction (M.I.) is a promising research field acting on both perception\n                     and cognition. Among its benefits, which include the &#8220;design for all&#8221; approach, it is expected\n                     to increase humans\u2019 cognitive performance (such as learning and cognitive stimulation)\n                     as well as the user\u2019s experience. To our knowledge, there is no convincing tool allowing\n                     researchers to easily create multisensory scenarios, exercises or experimental interaction\n                     situations. 
This paper introduces the PRIM project, which aims to design a new and\n                     original tool for creating multisensory interactive situations.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465489\">Graphic Novel Subtitles: Requirement Elicitation and System Implementation<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Amy Gourlay<\/li>\n               <li class=\"nameList Last\">Michael Crabb<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> Consuming subtitled video content relies on a viewer\u2019s ability to match up and understand\n                     a number of visual inputs simultaneously. This can create challenges in immersion\n                     due to the overall readability of subtitles and the speed at which they are presented.\n                     In this paper we introduce Graphic Novel Subtitles as an alternative media consumption\n                     method that is based on combining video keyframes with subtitle text to create a comic-type\n                     experience. 
We carry out a requirement elicitation survey with 34 participants in\n                     order to explore this concept in more detail and identify key features that we present\n                     as system requirements. We then introduce a system that can automatically generate\n                     a graphic novel from video and subtitle files, and discuss our future evaluation plans.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465500\">MixMyVisit \u2013 Enhancing the Visitor Experience Through Automatic Generated Videos<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Pedro Almeida<\/li>\n               <li class=\"nameList\">Pedro Be\u00e7a<\/li>\n               <li class=\"nameList\">Jos\u00e9 Soares<\/li>\n               <li class=\"nameList Last\">B\u00e1rbara Soares<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>Cultural places like museums have been looking for means to enrich the visitor experience\n                     during and after the visit. This paper reports on a proposal to contribute to this\n                     field, presenting a system that automatically creates personalized memory videos of\n                     the visit by identifying the paths of visitors in cultural spaces. 
The MixMyVisit\n                     project combines low-cost devices with NFC technology, allowing a simple way of identifying\n                     visitors\u2019 paths; a bot for interaction with the visitor through a textual chat implemented\n                     in a social network; the ability for visitors to share their own captured content\n                     (photos or videos); a server-side video engine supported by ffmpeg components; and\n                     an online responsive video editor. The features and main technical developments are\n                     presented in the paper. The results of an evaluation carried out by two experts provide\n                     positive insights into such a system, and the team expects to obtain final validation\n                     in a forthcoming field trial.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465503\">Foldables and 2-in-1s: Understanding and Supporting the Needs of Hybrid Device Users<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Steven Schirra<\/li>\n               <li class=\"nameList Last\">Caitlin Barta<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>Foldables and 2-in-1 devices provide users with new ways to adapt their device to\n                     their mobile context, allowing users to switch device configuration, screen size and\n                     shape, and input mechanism in seconds. 
This has strong implications for the design\n                     of interactive media applications that need to adapt to the user&#8217;s changing context\n                     and device parameters. In this study, we explored how users opportunistically make\n                     use of these new device affordances in their everyday lives through interviews with\n                     15 diverse hybrid device owners, identifying use cases and pain points. Users reported\n                     regularly moving through multiple configurations or input modes while using a single\n                     product, or completing a single task. Based on our learnings, we provide recommendations\n                     for practitioners designing for hybrid mobile devices.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465499\">Hybrid Workflow Process for Home Based Rehabilitation Movement Capture<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Juliet Clark<\/li>\n               <li class=\"nameList\">Setor Zilevu<\/li>\n               <li class=\"nameList\">Tamim Ahmed<\/li>\n               <li class=\"nameList\">Aisling Kelliher<\/li>\n               <li class=\"nameList\">Sai Krishna Yeshala<\/li>\n               <li class=\"nameList\">Sarah Garrison<\/li>\n               <li class=\"nameList\">Cathleen Garcia<\/li>\n               <li class=\"nameList\">Olivia C. 
Menezes<\/li>\n               <li class=\"nameList\">Minakshi Seth<\/li>\n               <li class=\"nameList Last\">Thanassis Rikakis<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> Telehealth rehabilitation systems aimed at providing physical and occupational therapy\n                     in the home face considerable challenges in terms of clinician and therapist buy-in,\n                     system and training costs, and patient and caregiver acceptance. Understanding the\n                     optimal workflow process to support practitioners in delivering quality care in partnership\n                     with assistive technologies is significant. We describe the iterative co-development\n                     of our hybrid physical\/digital workflow process for assisting therapists with the\n                     setup and calibration of a computer vision based system for remote rehabilitation.\n                     Through an interdisciplinary collaboration, we present promising preliminary concepts\n                     for streamlining the translation of research outcomes into everyday healthcare experiences.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            <h2>SESSION: Health and Education<\/h2>\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465504\">Designing an Educational Virtual Reality Application to Learn Ergonomics in a Work\n                  Place<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Elisabeth Mayer<\/li>\n               <li class=\"nameList\">Klara Kriszun<\/li>\n               <li class=\"nameList\">Luisa Merz<\/li>\n               <li 
class=\"nameList\">Katja Radon<\/li>\n               <li class=\"nameList\">Marie Astrid Garrido<\/li>\n               <li class=\"nameList Last\">Dieter August Kranzlm\u00fcller<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> In this paper we describe the process of designing an educational Virtual Reality\n                     (VR) application for university medical students to learn how to set up a work place\n                     ergonomically correct. We focus on commercially available hardware and a commercial\n                     game engine. This was a collaboration between medical experts and VR experts to achieve\n                     the ideal results. This use case builds on research of 3D user interfaces (UI) to\n                     create an UI system that novice VR-users can use intuitively. Next steps include a\n                     user study to compare immersive and non-immersive versions, as well as learn effects\n                     in VR.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465497\">Exploring Perceptions of Bystander Intervention Training using Virtual Reality<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Sarah Garcia<\/li>\n               <li class=\"nameList\">Soumya Joseph Abraham<\/li>\n               <li class=\"nameList Last\">Marvin Andujar<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>This paper presents a virtual reality (VR) application that allows users to view a\n                     
series of 360 degree videos, depicting bystander intervention scenarios, from a bystander\n                     perspective. Bystander intervention is a commonly used training on how to prevent\n                     and de-escalate potentially harmful or violent scenarios [5]. This application enables\n                     users to witness, from a first-hand perspective, a successful bystander intervention\n                     strategy being used by another person. This paper discusses motivations for creating\n                     such an application by giving an overview of the state of the art in bystander intervention\n                     training methods. It also discusses the application flow and design of the created\n                     system. Additionally, a preliminary user study was conducted to gain initial feedback\n                     and user perspectives on the system. <\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465502\">Augmented Reality-Based Remote Family Visits in Nursing Homes<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Alexander Toet<\/li>\n               <li class=\"nameList\">Hans Stokking<\/li>\n               <li class=\"nameList\">Tessa Klunder<\/li>\n               <li class=\"nameList\">Zeph M.C. van Berlo<\/li>\n               <li class=\"nameList\">Bram Smeets<\/li>\n               <li class=\"nameList Last\">Omar Niamut<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>During the COVID-19 pandemic, many nursing homes had to restrict visitations. 
This\n                     had a major negative impact on the wellbeing of residents and their family members.\n                     In response, residents and family members increasingly resorted to mediated communication\n                     to maintain social contact. To facilitate high-quality mediated social contact between\n                     residents in nursing homes and remote family members, we developed an augmented reality\n                     (AR)-based communication tool. In this study, we compared the user experience (UX)\n                     of AR-communication with that of video calling, for 10 pairs of residents and family\n                     members. We measured enjoyment, spatial presence and social presence, attitudes, behavior\n                     and conversation duration. In the AR-communication condition, residents perceived\n                     a 3D projection of their remote family member onto a chair placed in front of them.\n                     In the video calling condition, the family member was shown using 2D video. In both\n                     conditions, the family member perceived the resident in the video calling mode on\n                     a 2D screen. While residents reported no differences in their UX between both conditions,\n                     family members reported higher spatial presence for the AR-communication condition\n                     compared to video-calling. Conversation durations were significantly longer during\n                     AR-communication than during video calling. 
We tentatively suggest that there may\n                     be (unconscious) differences in UX during AR-based communication compared to video\n                     calling.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3468027\">Using Video Games to Help Visualize and Teach Microbiological Concepts: A study analyzing the learning implications of a video game in comparison to a traditional\n                  method<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Angelos R. Gogonis<\/li>\n               <li class=\"nameList Last\">Rabindra Ratan<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>Microbiology is the study of the invisible world that is ever present in and around\n                     us. Oftentimes, it can be difficult to communicate and visualize microbiological concepts\n                     to students due to many limitations. Static images in textbooks and simple representations\n                     of microscopic organisms and processes attempt to combat this, but it can be restricting.\n                     To address this, we seek to understand if interactive learning biology video games\n                     can lead to better concept retention and comprehension than traditional assignments,\n                     such as a reading article with static images. To analyze this, we use Infection Defense,\n                     a free, appealing, easily accessible video game about the immune system, a highly\n                     relevant and important microbiological concept. 
Concurrently, we also seek to analyze\n                     participant interest before and after playing the game, compared to standard methods.\n                     It is plausible that learning through a video game can lead to enhanced comprehension\n                     of, and engagement with, scientific concepts.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            <h2>SESSION: Live Streaming, Videos, Authoring Tools<\/h2>\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465483\">Co-creation Stage: a Web-based Tool for Collaborative and Participatory Co-located\n                  Art Performances<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">H\u00e9ctor Rivas Pagador<\/li>\n               <li class=\"nameList\">Ana Dominguez<\/li>\n               <li class=\"nameList\">Stefano Masneri<\/li>\n               <li class=\"nameList\">I\u00f1igo Tamayo<\/li>\n               <li class=\"nameList\">Mikel Zorrilla<\/li>\n               <li class=\"nameList\">Pedro Almeida<\/li>\n               <li class=\"nameList\">Jie Li<\/li>\n               <li class=\"nameList\">Alina Striner<\/li>\n               <li class=\"nameList Last\">Pablo Cesar<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> In recent years, artists and communities have expressed the desire to work with tools\n                     that facilitate co-creation and allow distributed community performances. These performances\n                     can be spread over several physical stages, connecting them in real time into a\n                     single experience, with the audience distributed across them. 
This allows a wider remote\n                     audience to consume the performance through their own devices, and even enables the\n                     participation of remote users in the show. In this paper we introduce the Co-creation\n                     Stage, a web-based tool that allows managing heterogeneous content sources, with a\n                     particular focus on live and on-demand media, across several distributed devices.\n                     The Co-creation Stage is part of the toolset developed in the Traction H2020 project,\n                     which enables community performing art shows, where professional artists and non-professional\n                     participants perform together from different stages and locations. Here we present\n                     the design process, the architecture and the main functionalities of the tool as well\n                     as the results of the first user evaluation with opera houses and artists.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465484\">Taking That Perfect Aerial Photo: A Synopsis of Interactions for Drone-based Aerial\n                  Photography and Video<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Alexandru-Ionut Siean<\/li>\n               <li class=\"nameList\">Radu-Daniel Vatavu<\/li>\n               <li class=\"nameList Last\">Jean Vanderdonckt<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> Personal drones are increasingly present in our lives, and acting as \u201cflying cameras\u201d\n                     is one of their most prominent 
applications. In this work, we present a synopsis of\n                     the scientific literature on human-drone interaction to identify system functions\n                     and corresponding commands for controlling drone-based aerial photography and video,\n                     from which we compile a dictionary of interactions. We also discuss opportunities\n                     for more research at the intersection of drone computing, augmented vision, and personal\n                     photography.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465501\">Timeline: An Authoring Platform for Parameterized Stories<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Pedro Silva<\/li>\n               <li class=\"nameList\">Shuyu Gao<\/li>\n               <li class=\"nameList\">Sanjeev Nayak<\/li>\n               <li class=\"nameList\">Michelle Ramirez<\/li>\n               <li class=\"nameList\">Colin Stricklin<\/li>\n               <li class=\"nameList Last\">Janet Murray<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> The Timeline platform aims to afford interactors a novel way of experiencing complex,\n                     multi-sequential stories and causal relationships while capitalizing on the affordances\n                     of interactive digital narratives. Along with the authoring tool, this paper discusses\n                     exploratory techniques and best practices that aim to promote new methods of multi-sequential\n                     storytelling, i.e. 
replay stories centered on multiple instantiations, that emphasize\n                     parallelism and orienting milestones.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465492\">Personal Viewing History Collection Method for Video Streaming Services in User-Centric\n                  Model<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Daisuke Sekine<\/li>\n               <li class=\"nameList\">Kinji Matsumura<\/li>\n               <li class=\"nameList Last\">Arisa Fujii<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>The personal viewing history of broadcast and internet content can be used to improve\n                     the quality of online services for the user; consequently, enhancing a person&#8217;s experience\n                     in their daily life. This study proposes a collection method of the personal viewing\n                     history of online video streaming services in a user-centric model, where the user\n                     owns and controls their data. By obtaining event data from a video element of HTML5\n                     in a video streaming player on a web browser, users can store their viewing history\n                     separately from the service provider&#8217;s system. Because this method employs standardized\n                     web technologies, it can be applied to multiple viewing devices and player implementations.\n                     Additionally, a prototype of a video streaming player extension is developed. 
Implementation\n                     of the proposed method and verification tests are conducted on the prototype. The\n                     results obtained suggest that the proposed method is applicable to existing\n                     HTML5-compliant web browsers.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465491\">Understanding Rules in Live Streaming Micro Communities on Twitch<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Jie Cai<\/li>\n               <li class=\"nameList\">Cameron Guanlao<\/li>\n               <li class=\"nameList Last\">Donghee Yvette Wohn<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>Rules and norms are critical to community governance. Live streaming communities like\n                     Twitch consist of thousands of micro-communities called channels. We conducted two\n                     studies to understand the micro-community rules. Study one suggests that Twitch users\n                     perceive that both rule transparency and communication frequency matter to channel\n                     vibe and frequency of harassment. Study two finds that the most popular channels have\n                     no channel or chat rules; among those that have rules, rules encouraged by streamers\n                     are prominent. We explain why this may happen and how this contributes to community\n                     moderation and future research. 
<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3468083\">Content Wizard: demo of a trans-vector digital video publication tool<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Lyndon J B Nixon<\/li>\n               <li class=\"nameList\">Konstantinos Apostolidis<\/li>\n               <li class=\"nameList\">Evlampios Apostolidis<\/li>\n               <li class=\"nameList\">Damianos Galanopoulos<\/li>\n               <li class=\"nameList\">Vasileios Mezaris<\/li>\n               <li class=\"nameList\">Basil Philipp<\/li>\n               <li class=\"nameList Last\">Rasa Bocyte<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> In order to optimise the distribution of video assets online, media organizations\n                     need to tailor their offerings for specific digital channels and better understand the\n                     interests of their audiences at particular points in time, which are often influenced\n                     by contemporary news stories and trends on social media. For this purpose, the research\n                     project ReTV has developed a Web-based tool termed \u2018Content Wizard\u2019 which demonstrates\n                     an end-to-end, semi-automated workflow for video content creation, adaptation and\n                     distribution across digital channels. Digital assets can be selected based on predicted\n                     future trending topics, re-purposed according to the different digital channels they\n                     will be published on, and scheduled for the optimal future publication date. 
The\n                     result is an innovative video publication workflow that meets the marketing needs\n                     of media organisations in this age of transient online media spread across multiple\n                     channels.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            <h2>SESSION: Showcase: Mixed-Reality (VR\/AR)<\/h2>\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465495\">Towards User Generated AR Experiences: Enable consumers to generate their own AR experiences for planning indoor spaces<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Richard Whitehand<\/li>\n               <li class=\"nameList\">Georgios Albanis<\/li>\n               <li class=\"nameList\">Nikolaos Zioulis<\/li>\n               <li class=\"nameList\">Werner Bailer<\/li>\n               <li class=\"nameList\">Dimitrios Zarpalas<\/li>\n               <li class=\"nameList Last\">Petros Daras<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>Communication with a customer or future user during the planning and design phase\n                     is crucial in applications such as interior design and furniture retailing. Augmented\n                     Reality (AR) has the potential to make these communication processes highly effective\n                     and provide a better experience for the customer. Current AR authoring solutions are\n                     quite complex and require manually creating scenes or rely on objects prepared with\n                     even more complex applications such as CAD tools. 
However, both design experts and\n                     their customers often lack the IT skills to use these tools. In addition, many practical\n                     cases involve changing reality rather than just adding to it, thus requiring the use\n                     of Diminished Reality (DR) technologies. This paper presents a comprehensive analysis\n                     of the requirements of both professionals and consumers (gathered using user surveys\n                     and individual interviews) for a lightweight and automated authoring process of AR\n                     and DR experiences, deriving a set of requirements that can be aligned with\n                     state-of-the-art technologies and identifying a number of challenges for AR research.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465496\">Liquid Hands: Evoking Emotional States via Augmented Reality Music Visualizations<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">G S Rajshekar Reddy<\/li>\n               <li class=\"nameList Last\">Damien Rompapas<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> Music performances have transformed in unprecedented ways with the advent of digital\n                     music. Plenty of music visualizers enhance live performances in various forms, including\n                     LED display boards and holographic illustrations. However, the impracticability of\n                     live performances due to the COVID-19 outbreak has led to event organizers adopting\n                     alternatives in virtual environments. 
In this work, we propose Liquid Hands, an Augmented\n                     Reality (AR) music visualizer system, wherein three-dimensional particles react to\n                     the flow of music, forming a visually aesthetic escapade. With hand-particle interactions,\n                     Liquid Hands aims to enrich the music listening experience in one\u2019s personal space\n                     and bridge the gap between virtual and physical concerts. We intend to explore the\n                     emotions our system induces by conducting a pilot study, in which we measure the user\u2019s\n                     psychological state through Electroencephalography (EEG). We hypothesize that the\n                     proposed system will evoke emotions akin to those exhibited in live music performances.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465490\">Towards Immersive and Social Audience Experience in Remote VR Opera<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Alina Striner<\/li>\n               <li class=\"nameList\">Sarah Halpin<\/li>\n               <li class=\"nameList\">Thomas R\u00f6ggla<\/li>\n               <li class=\"nameList Last\">Pablo Cesar<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> Opera is a historic art that struggles to be approachable to modern audiences. In\n                     partnership with the Irish National Opera (INO), this work considers how VR may be\n                     used to develop a new form of immersive opera. 
To this end, we ran three open-ended\n                     focus groups to consider how creative, multisensory, and social VR technology may\n                     be employed in digital opera. Our findings assert the importance of creating an immersive\n                     experience by safely giving audiences agency to interact, to democratize personal\n                     and social experiences, and to consider different ways of representing their bodies,\n                     their social rituals, and the virtual social space. Using these findings, we envision\n                     a new form of VR opera that couples physical traditions with digital affordances.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3467815\">Towards XR Communication for Visiting Elderly at Nursing Homes<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Sylvie Dijkstra-Soudarissanane<\/li>\n               <li class=\"nameList\">Tessa Klunder<\/li>\n               <li class=\"nameList\">Aschwin Brandt<\/li>\n               <li class=\"nameList Last\">Omar Niamut<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>Due to the current pandemic, the elderly in care homes are greatly affected by the\n                     lack of contact with their families, resulting in various mental conditions (e.g.,\n                     depression, feelings of loneliness) and deterioration of mental health for dementia\n                     patients. In response, residents and family members increasingly resorted to mediated\n                     communication to maintain social contact. 
To facilitate high-quality mediated social\n                     contact between residents in nursing homes and remote family members, we developed\n                     an Augmented Reality (AR)-based communication tool. The proposed demonstrator improved\n                     this situation by providing a working communication tool that enables the elderly\n                     to feel together with their family by means of AR techniques. A complete end-to-end chain\n                     architecture is defined, where the aspects of capture, transmission, and rendering\n                     are thoroughly investigated to fit the purpose of the use case. Based on an extensive\n                     user study comprising user experience (UX) and quality of service (QoS) measurements,\n                     each module is presented with the improvements made and the resulting higher-quality\n                     AR communication platform.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3465494\">Interactive Characters for Virtual Reality Stories<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Joan Llobera<\/li>\n               <li class=\"nameList Last\">Caecilia Charbonnier<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p> Virtual Reality (VR) content production is now a flourishing industry. 
The specifics\n                     of VR, as opposed to videogames or movies, allow for a content format where users\n                     experience, at the same time, the narrative richness characteristic of movies and\n                     theatre plays with interactive engagement. To create such a content format, some technical\n                     challenges still need to be solved, the main one being the need for a new generation of\n                     animation engines that can deliver interactive characters appropriate for narrative-focused\n                     VR interactive content. We review the main assumptions of this approach and recent\n                     progress in interactive character animation techniques that seem promising for realising\n                     this goal.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3467671\">Exploring Affect Recognition in a Virtual Reality Environment<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Brandon Hang<\/li>\n               <li class=\"nameList\">Sara Loucks<\/li>\n               <li class=\"nameList\">Pooja Patel<\/li>\n               <li class=\"nameList\">Kimberly Wiseman<\/li>\n               <li class=\"nameList Last\">Javier Gonzalez-Sanchez<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>The current global pandemic has resulted in increased social isolation for many. 
To\n                     combat worsening mental states including increased stress and boredom resulting from\n                     loneliness, we present a virtual environment designed to simulate social interaction\n                     and improve mood. The virtual environment takes the form of a restaurant in which\n                     users hold a conversation with a virtual patron. Built in the Unity3D engine and experienced\n                     in an Oculus Quest virtual reality headset, our program communicates with an off-the-shelf\n                     electroencephalogram (EEG) headset to gather user affective states. Affective states\n                     trigger changes in lighting, sound, and conversation topics in real-time based on\n                     the user&#8217;s emotions. The changes made to the environment reflect relevant psychology\n                     research to potentially improve the user&#8217;s mood.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t\n            \t\t\t\t\t\t\n            <h3><a class=\"DLtitleLink\" title=\"Full Citation in the ACM Digital Library\" referrerpolicy=\"no-referrer-when-downgrade\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3452918.3468005\">Augmented Reality Anatomy Visualization for Surgery Assistance with HoloLens: AR Surgery Assistance with HoloLens<\/a><\/h3>\n            <ul class=\"DLauthors\">\n               <li class=\"nameList\">Enrique Castelan<\/li>\n               <li class=\"nameList\">Margarita Vinnikov<\/li>\n               <li class=\"nameList Last\">Xianlian Alex Zhou<\/li>\n            <\/ul>\n            <div class=\"DLabstract\">\n               <div style=\"display:inline\">\n                  \t\t\n                  <p>Immediate care for trauma wounded patients in austere or remote settings makes medical\n                     knowledge, skills, and efficiency of the on-duty medical professional paramount. 
For\n                     wounds that extend deep into internal anatomy, proper visualization of internal anatomy\n                     can enable more efficient and effective evaluation when presented to medical providers\n                     positioned close to the point of injury (POI). In this paper, a conceptual Augmented\n                     Reality (AR) surgical tool is presented to provide visualization of internal human\n                     anatomy, superimposed on the view of a patient, to assist medical providers for immediate\n                     casualty care. This AR surgical tool can play a role in 3D surgery or treatment planning\n                     as a navigational aid in preparing medical interventions and enhancing surgery or\n                     treatment procedures by displaying otherwise obscured anatomy and nearby vessels.\n                     Critical software and hardware components are integrated to construct a prototype\n                     AR system for the portable AR surgical visualization tool. The system uses a Microsoft\n                     HoloLens 1 and an Azure Kinect camera for simultaneous body-tracking and anatomy overlay\n                     to demonstrate the overall concept. Future extension of this work will aim to create\n                     a more accurate and compact prototype system that utilizes HoloLens 2 with an embedded\n                     Kinect camera for laboratory and field tests of its use in surgery assistance. 
Such\n                     an AR tool can also serve as a training tool for medical caregivers, applied with\n                     a human subject or a medical manikin.<\/p>\n                  \t<\/div>\n            <\/div>\n            \t\t\t\t\t\t\n            \t\t\t\t\t<\/div>\n      <\/div>\n<\/div><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Full Citation in the ACM Digital Library SESSION: MUSIC, ART &amp; HEALTH Visual Respiratory Feedback in Virtual Reality Exposure Therapy: A Pilot Study Deniz Mevlevio\u011flu David Murphy Sabin Tabirca As the use of Virtual Reality (VR) expands across fields, new kinds of interaction methods are introduced. This study presents the Visual Heights VR experience that [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":6,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-852","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/imx.acm.org\/2021\/index.php\/wp-json\/wp\/v2\/pages\/852"}],"collection":[{"href":"https:\/\/imx.acm.org\/2021\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/imx.acm.org\/2021\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/imx.acm.org\/2021\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/imx.acm.org\/2021\/index.php\/wp-json\/wp\/v2\/comments?post=852"}],"version-history":[{"count":5,"href":"https:\/\/imx.acm.org\/2021\/index.php\/wp-json\/wp\/v2\/pages\/852\/revisions"}],"predecessor-version":[{"id":857,"href":"https:\/\/imx.acm.org\/2021\/index.php\/wp-json\/wp\/v2\/pages\/852\/revisions\/857"}],"up":[{"embeddable":true,"href":"https:\/\/imx.acm.org\/2021\/index.php\/wp-json\/wp\/v2\/pages\/6"}],"wp:attachment":[{"href":"https:\/\/imx.acm.org\/2021\/index.php\/wp-json\/wp\/v2\/media?parent=852"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","template
d":true}]}}