Future Visions for Higher Education: An Investigation of the Benefits of Virtual Reality for Teaching University Students
Gary Burnett, University of Nottingham
Rebekah Kay, University of Nottingham
Catherine Harvey, University of Nottingham
This study sought to understand the social value to university students of engaging universally in a virtual environment throughout a taught module. In a series of weekly seminars throughout September-December 2020, 49 undergraduate and postgraduate Engineering students interacted with module convenors and each other within a virtual teaching island (termed ‘Nottopia’) to consolidate learning from pre-recorded lectures. Questionnaire results highlighted the positive impact this innovative form of teaching had on the cohort, as the overwhelming majority felt highly motivated to participate in class activities and felt connected to their peers and lecturers. Objective data backed up these findings, as attendance at the optional seminars was 90-100% throughout the semester. Follow-up interviews with students and behavioural observations identified five themes elucidating why such benefits accrue from the use of VR in this educational context: 1) a shared, student-owned and purposive space was provided; 2) content was inherently highly spatial/3D; 3) natural group dynamics arose in activities; 4) students could express their identity and/or hide using their chosen/designed avatar; 5) many magical and informative interactions were possible. These themes are synthesized into an initial framework which can aid educators in considering the different ways in which VR can affect the socially oriented experience of their students.
A Toolkit to Evaluate and Characterize the Collaborative Process in Scenarios of Remote Collaboration Supported by AR
Bernardo Marques, Universidade de Aveiro
Samuel Silva, University of Aveiro
Paulo Dias, University of Aveiro
Beatriz Sousa Santos, University of Aveiro
Remote collaboration using Augmented Reality (AR) has enormous potential to support collaborators who need to achieve a common goal. However, there is a lack of tools for evaluating these multifaceted contexts, which involve many aspects that may influence the way collaboration occurs. Therefore, it is essential to develop solutions to monitor AR-supported collaboration in a more structured manner, allowing adequate portrayal and reporting of such efforts. As a contribution, we describe CAPTURE, a toolkit to instrument AR-based tools via visual editors, enabling rapid data collection and filtering during distributed evaluations. We illustrate the use of the toolkit through a case study on remote maintenance and report the results obtained, which can elicit a more complete characterization of the collaborative process moving forward.
Tactile Telepresence for Isolated Patients
Nafisa Mostofa, University of Central Florida
Indira Avendano, University of Central Florida
Ryan P. McMahan, University of Central Florida
Norma Conner, University of Central Florida
Mindi Anderson, University of Central Florida
Greg Welch, University of Central Florida
For isolated patients, such as COVID-19 patients in an intensive care unit, conventional video tools can provide a degree of visual telepresence. However, video alone offers, at best, an approximation of a “through a window” metaphor—remote visitors, such as loved ones, cannot touch the patient to provide reassurance. Here, we present preliminary work aimed at providing an isolated patient and remote visitors with audiovisual interactions that are augmented by mediated social touch—the perception of being touched for the isolated patient, and the perception of touching for the remote visitor. We developed a tactile telepresence system prototype that provides a remote visitor with a tablet-based, touch-video interface for conveying touch patterns on the forehead of an isolated patient. The isolated patient can see the remote visitor, see themselves with the touch patterns indicated on their forehead, and feel the touch patterns through a vibrotactile headband interface. We motivate the work, describe the system prototype, and present results from pilot studies investigating the technical feasibility of the system, along with the social and emotional effects of using the prototype system.
Augmenting Human Perception: Mediation of Extrasensory Signals in Head-Worn Augmented Reality
Austin Erickson, University of Central Florida
Dirk Reiners, University of Central Florida
Gerd Bruder, University of Central Florida
Greg Welch, University of Central Florida
Mediated perception systems are systems in which sensory signals from the user’s environment are mediated to the user’s sensory channels. Such systems have great potential for enhancing the user’s perception by augmenting and/or diminishing incoming sensory signals according to the user’s context, preferences, and perceptual capability. They also allow for extending the user’s perception, enabling them to sense signals typically imperceptible to human senses, such as regions of the electromagnetic spectrum beyond visible light.
In this paper, we present a prototype mediated perception system that maps extrasensory spatial data into visible light displayed within an augmented reality (AR) optical see-through head-mounted display (OST-HMD). Although the system is generalized such that it could support any spatial sensor data with minor modification, we chose to test the system using thermal infrared sensors. This system improves upon previous extended perception augmented reality prototypes in that it is capable of projecting registered egocentric sensor data in real time onto a 3D mesh generated by the OST-HMD that is representative of the user’s environment. We present the lessons learned through iterative improvements to the system, as well as a performance analysis of the system and recommendations for future work.
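The core of such a pipeline is mapping out-of-band sensor readings into the visible range. As an illustrative sketch only — the paper's actual colormap and display pipeline are not given here — a thermal reading can be normalized to a display range and mapped to a simple blue-to-red ramp before being textured onto the environment mesh:

```python
import numpy as np

def thermal_to_rgb(temps, t_min, t_max):
    """Map scalar thermal readings to RGB colors for an AR overlay,
    using a simple blue-to-red linear colormap. temps: array of
    temperatures; t_min/t_max: the display range (illustrative)."""
    t = np.clip((np.asarray(temps, dtype=float) - t_min) / (t_max - t_min), 0.0, 1.0)
    # Cold -> blue, hot -> red; green stays zero to keep the ramp readable.
    r = t
    g = np.zeros_like(t)
    b = 1.0 - t
    return np.stack([r, g, b], axis=-1)
```

A production system would use a perceptually uniform colormap and per-pixel registration with the mesh, but the normalization step is the same.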
Revive Family Photo Albums through a Collaborative Environment Exploiting the HoloLens 2
Lorenzo Stacchio, University of Bologna
Alessia Angeli, University of Bologna
Shirin Hajahmadi, University of Bologna
Gustavo Marfia, University of Bologna
While human culture evolves, human heritage remains frozen in time, engraved in material culture as a testament to the past. Among such materials, pictures are a prominent example, as they yield information and clues about what happened in the past. In fact, pictures from the past represent a unique chance to revive old memories about affections, relatives, friends, special events, etc. Throughout the 20th century, people printed and collected pictures in photo albums, namely family albums. Even if this phenomenon is not as popular as before, due to the advent of digital photography and the spread of social media, such photos are still of interest to all those who like to look back and discover their families’ pasts. In a time of social distancing, such photo albums may represent a link between people who are forced to stay away from each other and a distraction from worries and fears. For this reason, we propose an augmented reality application that may bring people together and support the exploration of the content of photo albums with the aid of artificial intelligence paradigms.
Fisheye vs Rubber Sheet: Supporting Visual Search and Fine Motor Skills in Augmented Reality
Qiaochu Wang, City University of Hong Kong
Christian Sandor, City University of Hong Kong
Fisheye and rubber sheet are common magnifying methods in information visualization. In this paper, we present two experiments evaluating their performance in AR scenarios: visual search and fine motor manipulation. We also included baseline conditions without any magnification. For all magnified views, the size and shape of the magnified region were constant, and we evaluated performance based on completion time and accuracy. Our results show that the visual distortions of our magnifiers do not significantly decrease performance; our insights can serve as a reference for projects that require magnified AR views.
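For readers unfamiliar with fisheye magnification: a common formulation from the information-visualization literature (the Sarkar–Brown distortion — not necessarily the exact function used in this paper) remaps each point's distance from the lens focus so that magnification is highest at the center and falls to 1 at the lens boundary:

```python
import numpy as np

def fisheye_remap(r, radius, d=3.0):
    """Sarkar–Brown fisheye distortion. Points at distance r from the
    focus are pushed outward: magnification is (d + 1) at the center
    and there is no displacement at the lens boundary (r == radius).
    r: distance(s) from the focus; radius: lens radius; d: distortion
    factor (the value 3.0 here is an illustrative default)."""
    r = np.asarray(r, dtype=float)
    x = np.clip(r / radius, 0.0, 1.0)          # normalized distance in [0, 1]
    return radius * ((d + 1.0) * x) / (d * x + 1.0)
```

A rubber-sheet view differs in that it distorts the whole surrounding space rather than only the lens interior, which is why the two can behave differently under visual search.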
Augmented Reality in Chinese Language Pronunciation Practice
Daria Sinyagovskaya, University of Central Florida
John Murray, University of Central Florida
Augmented reality (AR) has the unique ability to situate an activity within a physical context, making it valuable for training and education. The affordances of AR include multimodal visualizations and auditory support. The disciplines of phonetics and phonology in applied linguistics concern how sounds are produced and articulated. We have designed and developed a pronunciation training app for the novice second language student learning Mandarin pinyin by applying linguistic theory while adhering to usability principles. This paper describes the requirements that drove the development of the app and the planned online experiment design that compares the same training under two conditions: AR and non-AR. The design features peer assessment, drills, in-app assessment, surveys, and a posttest, and is designed for a remote study without having participants come to the lab.
Socially Distanced: Have user evaluation methods for Immersive Technologies changed during the COVID-19 pandemic?
Becky Spittle, Birmingham City University
Wenge Xu, Birmingham City University
Maite Frutos-Pascual, Birmingham City University
Chris Creed, Birmingham City University
Ian Williams, Birmingham City University
Since the emergence of COVID-19 in late 2019, there has been a significant disturbance in human-to-human interaction that has changed the way we conduct user studies in the field of Human-Computer Interaction (HCI), especially for extended (augmented, mixed, and virtual) reality (XR). To uncover how XR research has adapted throughout the pandemic, this paper presents a review of user study methodology adaptations from a corpus of 951 papers. This corpus covers submissions published at CORE 2021 A* conferences (IEEE ISMAR, ACM CHI, IEEE VR) from Q2 2020 through Q1 2021. The review highlights how methodologies were changed and reported, sparking discussions surrounding how methods should be conveyed and to what extent research should be contextualised, by drawing on external topical factors such as COVID-19, to maximise usefulness and perspective for future studies. We provide a set of initial guidelines based on our findings, posing key considerations for researchers when reporting on user studies during uncertain and unprecedented times.
Global Heading Estimation For Wide Area Augmented Reality Using Road Semantics For Georeferencing
Taragay Oskiper, SRI International
Supun Samarasekera, SRI International
Rakesh Kumar, SRI International
In this paper, we present a method to estimate global camera heading by associating directional information from road segments in the camera view with annotated satellite imagery. The system is based on a multi-sensor fusion framework that relies on GPS, a camera, and an inertial measurement unit (IMU). The backbone of the system is a visual-inertial odometry (VIO) pipeline with a very low drift rate, and the proposed algorithm combines the relative motion provided by VIO with global cues obtained from an image segmentation module to extract heading information for georeferenced AR applications over wide areas.
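The fusion step this kind of system performs — correcting a drifting relative heading with an absolute heading cue — can be sketched generically. The snippet below is an illustrative one-dimensional Kalman-style update with angle wrapping, not the paper's actual filter:

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def fuse_heading(yaw_est, var_est, yaw_obs, var_obs):
    """Fuse a drifting VIO heading estimate with a global heading
    observation (e.g., from matching a road segment's direction
    against georeferenced imagery). All angles are in radians;
    variances weight the two sources. Illustrative sketch only."""
    k = var_est / (var_est + var_obs)            # Kalman gain
    innov = wrap(yaw_obs - yaw_est)              # wrapped innovation
    yaw_new = wrap(yaw_est + k * innov)
    var_new = (1.0 - k) * var_est
    return yaw_new, var_new
```

Wrapping the innovation matters: a naive subtraction near the ±180° boundary would pull the estimate the long way around the circle.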
Dynamic Content Generation for Augmented Technical Support
Sinem Guven, IBM T. J. Watson Research Center
Bing Zhou, IBM T. J. Watson Research Center
Rohan Arora, IBM T. J. Watson Research Center
Noah Zheutlin, IBM T. J. Watson Research Center
Gerard Vanloo, IBM T. J. Watson Research Center
Elif Eyigoz, IBM T. J. Watson Research Center
In the hardware technical support domain, scaling technician skills remains a prevalent problem. Given the large portfolio of hardware products service providers need to maintain, it is not possible for every technician to be an expert at repairing every product. Augmented reality addresses this problem through virtual procedures, which are interactive 3D visual representations of text-based knowledge articles that describe how to perform step-by-step repair actions. Virtual procedures thus equip technicians with the skills they need to support a wide range of hardware products. In this paper, we present a novel and scalable approach to dynamically construct virtual procedures, and we demonstrate the feasibility of our approach through a real-life implementation.
COVINS: Visual-Inertial SLAM For Centralized Collaboration
Patrik Schmuck, ETH Zurich
Thomas Ziegler, ETH Zurich
Marco Karrer, ETH Zurich
Jonathan Perraudin, V4RL
Margarita Chli, ETH Zurich
Collaborative SLAM enables a group of agents to simultaneously co-localize and jointly map an environment, thus paving the way to wide-ranging applications of multi-robot perception and multi-user AR experiences by eliminating the need for external infrastructure or pre-built maps. This article presents COVINS, a novel collaborative SLAM system that enables multi-agent, scalable SLAM in large environments and for large teams of more than 10 agents. The paradigm here is that each agent runs visual-inertial odometry independently onboard in order to ensure its autonomy, while sharing map information with the COVINS server back-end running on a powerful local PC or a remote cloud server. The server back-end establishes an accurate collaborative global estimate from the contributed data, refining the joint estimate by means of place recognition, global optimization and removal of redundant data, in order to ensure an accurate, but also efficient SLAM process. A thorough evaluation of COVINS reveals increased accuracy of the collaborative SLAM estimates, as well as efficiency in both removing redundant information and reducing the coordination overhead, and demonstrates successful operation in a large-scale mission with 12 agents jointly performing SLAM.
Motion and Meaning: Sample-level Nonlinear Analyses of Virtual Reality Tracking Data
Mark Miller, Stanford University
Hanseul Jun, Stanford University
Jeremy Bailenson, Stanford University
Behavioral data is the “gold standard” for experiments in psychology. The tracking component of virtual reality systems captures data on nonverbal behavior both covertly and continuously at high spatial and temporal fidelity, enabling what is called behavioral tracing. In previous research analyzing this type of data, however, inference has primarily been limited to linear relationships among subject-level aggregates. In this work, we suggest these rough aggregations are often neither the best according to theory nor do they make use of the rich data available from behaviorally traced experiments. We also explore the relationships between motion and subjective experiences with a previously published dataset of 360-degree video and emotion, and we find evidence for nonlinear sample-level relationships. In particular, reported valence relates with head pitch and pitch velocity, among others, and reported arousal relates with head rotation speed and yaw velocity, among others. The role of these sample-level nonlinear relationships in future work is discussed.
A Comparison of Common Video Game versus Real-World Heads-Up-Display Designs for the Purpose of Target Localization and Identification
Yanqiu Tian, University of Technology Sydney
Alexander Minton, University of Technology Sydney
Howe Zhu, University of Technology Sydney
Gina Notaro, Lockheed Martin
Raquel Galvan, Lockheed Martin
Yu-Kai Wang, University of Technology Sydney
Hsiang-Ting Chen, University of Adelaide
James Allen, Lockheed Martin
Matthias Ziegler, Lockheed Martin
Chin-Teng Lin, Centre of Artificial Intelligence, School of Software, Faculty of Engineering and Information Technology, University of Technology Sydney
This paper presents the findings of an investigation into the user ergonomics and performance for industry-inspired and traditional video game-inspired Heads-Up-Display (HUD) designs for target localization and identification in a 3D real-world environment. Our online user study (N = 85) compared one industry-inspired design (Ellipse) to three common video game HUD designs (Radar, Radar Indicator, and Compass). Participants interacted with and evaluated each HUD design through our novel web-based game. The game involved a target localization and identification task where we recorded and analyzed their performance results as a quantitative metric. Afterwards, participants were asked to provide qualitative responses for specific aspects of each HUD design and comparatively rate the designs. Our findings show that not only do common video game HUDs provide performance comparable to the real-world-inspired HUD, but participants also tended to prefer the designs they had experience with, namely the video game designs.
Enabling Collaborative Interaction with 360° Panoramas between Large-scale Displays and Immersive Headsets
Leah Emerson, University of St. Thomas
Riley Lipinski, University of St. Thomas
Heather Shirey, University of St. Thomas
Theresa Malloy, University of St. Thomas
Thomas Marrinan, University of St. Thomas
Head mounted displays (HMDs) can provide users with an immersive virtual reality (VR) experience, but often are limited to viewing a single environment or data set at a time. In this paper, we describe a system of networked applications whereby co-located users in the real world can use a large-scale display wall to collaborate and share data with immersed users wearing HMDs. Our work focuses on the sharing of 360° surround-view panoramic images and contextual annotations. The large-scale display wall affords non-immersed users the ability to view a multitude of contextual information, and the HMDs afford users the ability to immerse themselves in a virtual scene. The asymmetric virtual reality collaboration between immersed and non-immersed individuals can lead to deeper understanding and the feeling of a shared experience. We highlight a series of use cases – two digital humanities projects that capture real locations using a 360° camera, and one scientific discovery project that uses computer-generated 360° surround-view panoramas. In all cases, groups can benefit from both the immersive capabilities of HMDs and the collaborative affordances of large-scale display walls, and a unified experience is created for all users.
An Evaluation of Virtual Reality for Fear Arousal Safety Training in the Construction Industry
Thuong Hoang, Deakin University
Stefan Greuter, Deakin University
Simeon Taylor, Deakin University
George Aranda, Deakin University
Gerard Mulvany, Deakin University
Occupational Health and Safety is a significant area of concern for the construction industry, in which subcontractors work across multiple sites with varying safety induction and training. Prior work in applying immersive technologies for safety training often focuses on the simulation of working sites for hazard identification, demonstration of safety practice, and knowledge-based safety tests. However, it has been identified that current safety training is largely ineffective in improving workers’ attitudes towards safe work practices. We apply a fear-arousal approach to safety training by simulating the experience of different types of common safety accidents on a construction site in virtual reality. We conducted an evaluation with workers, contractors, and employees of a commercial construction company, where each participant experienced safety incidents on a virtual construction site. We applied pre- and post-test measures of the impact on safety attitudes and learning practice. We present empirical evidence that fear-arousal safety training in VR improves the safety attitudes of construction workers, subcontractors, and employees. Based on our findings, we suggest improvements and design considerations for other researchers, designers, and stakeholders in this domain.
Designing an Extended Reality Application to Expand Clinic-Based Sensory Strategies for Autistic Children Requiring Substantial Support: Participation of Practitioners
Valentin Bauer, Université Paris-Saclay, CNRS
Tifanie Bouchara, CNAM
Patrick Bourdot, Université Paris-Saclay, CNRS, LIMSI, VENISE team
eXtended Reality (XR) has already been used to support interventions for autistic children, but mainly focuses on training the socioemotional abilities of children requiring low support. To also consider children requiring substantial support, this paper examines how to design XR applications that expand the clinic-based sensory strategies practitioners often use to put these children in a secure state, and how to maximize the acceptability of such applications among practitioners. To that end, a “Mixed Reality platform for Engagement and Relaxation of Autistic children” was designed and developed, which makes it possible to add individualized or common audio, visual, and haptic stimuli onto reality. A first Augmented Reality free-play use case called Magic Bubbles was created based on interviews with stakeholders and a collaboration with three practitioners. A preliminary study with eleven practitioners confirmed its well-being potential and acceptability. Finally, XR design guidelines are derived.
Perceived Transparency in Optical See-Through Augmented Reality
Lili Zhang, Rochester Institute of Technology
Michael Murdoch, Rochester Institute of Technology
Optical see-through (OST) displays overlay rendered images onto the real-world background, creating augmented reality (AR). However, the blending between the rendering and the real-world background introduces perceived transparency. Increased luminance in the rendering decreases the background contrast and reduces the perceived transparency, an effect not incorporated in existing color appearance models or display color management pipelines. We studied perceived transparency in AR, focusing on the interaction between the rendering and a patterned background across various luminances, contrasts, and wave forms. In addition to AR contrast, we also examined simulated contrast modulation by changing the luminance amplitude. Two psychophysical experiments were conducted to quantify the perceived transparency. The first experiment measured a perceived transparency scale using direct scaling, and the second experiment evaluated the transparency equivalency between two methods of contrast modulation. The results showed that the two methods evoke different transparency perceptions. The background contrast affects the perceived AR transparency significantly, while the background luminance and wave form do not. Based on the results of the first experiment, we proposed a model predicting perceived transparency from AR luminance and background contrast. The model was verified with the second experiment and showed good prediction. Our model presents a new perceptual dimension in OST AR and can possibly be incorporated into the color management pipeline to improve AR image quality.
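The contrast-reduction effect at the heart of this abstract is easy to see in a worked form. On an additive OST display the rendering's luminance adds to both the light and dark parts of a background pattern, which lowers the Michelson contrast the observer sees (a generic illustration, not the paper's model):

```python
def blended_contrast(l_max, l_min, l_render):
    """Michelson contrast of a patterned background viewed through an
    additive optical see-through display. l_max/l_min: luminances of
    the pattern's light and dark regions; l_render: the overlaid
    rendering's luminance, which adds to both regions."""
    return (l_max - l_min) / (l_max + l_min + 2.0 * l_render)
```

For example, a full-contrast background (contrast 1.0) seen through an overlay as bright as the pattern's peak drops to contrast 0.5, making the overlay appear more opaque.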
Walking Through Walls: The Effect of Collision-Based Feedback on Affordance Judgments in Augmented Reality
Holly Gagnon, University of Utah
Dun Na, Vanderbilt University
Keith Heiner, University of Utah
Jeanine Stefanucci, University of Utah
Sarah Creem-Regehr, University of Utah
Bobby Bodenheimer, Vanderbilt University
Feedback about actions in augmented reality (AR) is limited and can be ambiguous due to the nature of interacting with virtual objects. AR devices also have a restricted field of view (FOV), limiting the amount of available visual information that can be used to perform an action or provide feedback during or after an action. We used the Microsoft HoloLens 1 to investigate whether perceptual-motor, collision-based outcome feedback calibrates judgments of whether one can pass through an aperture in AR. Additionally, we manipulated the amount of information available within the FOV by having participants view the aperture at two different distances. Feedback calibrated passing-through judgments at both distances but resulted in an overestimation of the just-passable aperture width. Moreover, the far viewing condition had more overestimation of just-passable aperture width than the near viewing condition.
Mobile Augmented Reality as a Field-Assistance Tool in Urban Maintenance
André Rodrigues, Nova School of Science and Technology
Nuno Correia, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa
Fernando Birra, Nova School of Science and Technology
We present a mobile AR application designed to be used by urban maintenance workers as a field-assistance tool. Using any standard smartphone camera, our system can accurately detect the desired equipment and augment it with relevant information and step-by-step instructions on how to perform any required maintenance jobs. Alongside this mobile application, we also developed a desktop application (Archer) for creating and authoring the data and augmentations that should be displayed during a given job. This paper proposes a novel approach to automatically detect and minimize the number of points (checkpoints) at which the app asks the user to perform a new equipment recognition; these checkpoints help maintain tracking stability as the user modifies the real-world object during the course of the job. The experiments and user tests demonstrate the accuracy and practicality of the developed systems, which can effectively be used to greatly improve the workflow of urban maintenance workers.
A Mixed-Reality System to Promote Child Engagement in Remote Intergenerational Storytelling
Jennifer Healey, Adobe Research
Duotun Wang, University of Maryland
Curtis Wigington, Adobe Research
Tong Sun, Adobe Research
Huaishu Peng, University of Maryland
We present a mixed reality (MR) storytelling system designed specifically for multi-generational collaboration with child engagement as a key focus. Our “Let’s Make a Story” system comprises a two-sided experience that brings together a remote adult and child to tell a story collaboratively. The child has a mixed reality phone-based application with an augmented manipulative that controls the story’s main character. The remote adult participates through a web-based interface. The adult reads the story to the child and helps the child play the story game by providing them with items they need to clear the scenes.
In this paper, we detail the implementation of our system and the results of a user study. Eight remote adult-child pairs experienced both the MR and a traditional paper-based storytelling system. To measure engagement, we used questionnaire analysis, engagement time with the story activity, and the word count of the child’s description of how the story should end. We found that children uniformly preferred the MR system, spent more time engaged with the MR system, and used more words to describe how the story should end incorporating details from the game.
Shadow-based estimation of multiple light sources in interactive time for more photorealistic AR experiences
Matthieu Fradet, InterDigital
Patrice Hirtzlin, InterDigital
Pierrick Jouet, InterDigital
Anthony Laurent, InterDigital
Caroline Baillard, InterDigital
Light4AR is a light source estimation solution for AR. It is based on the detection of real cast shadows in an image captured by a mobile phone, which are then used to determine the 3D position and intensity of the real light sources. By creating virtual point lights based on the resulting parameters and adding them to the AR scene, the virtual objects cast virtual shadows consistent with the real environment lighting, thereby enhancing object presence and user experience. A server-based GPU implementation provides results in interactive time, offers the ability to share results across multiple users while preserving device resources, and makes photorealistic AR experiences accessible to most mobile devices.
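The underlying geometry of shadow-based light estimation can be sketched generically (this is an illustrative reconstruction of the principle, not the Light4AR implementation): each occluder point and its cast shadow define a ray on which the light must lie, and two or more such rays triangulate a point light's 3D position.

```python
import numpy as np

def light_from_shadows(tips, shadows):
    """Estimate a point-light position from cast shadows. Each
    occluder tip P and its shadow point S define a ray S + t*(P - S)
    containing the light; with two or more (tip, shadow) pairs the
    light is recovered as the least-squares closest point to all
    rays (illustrative sketch, assuming known 3D correspondences)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, s in zip(np.asarray(tips, float), np.asarray(shadows, float)):
        d = p - s
        d = d / np.linalg.norm(d)                # unit ray direction
        M = np.eye(3) - np.outer(d, d)           # projector orthogonal to d
        A += M
        b += M @ s
    return np.linalg.solve(A, b)
```

In practice the tip-shadow correspondences come from image-space shadow detection plus the phone's estimated pose and scene geometry, which is where most of the engineering effort lies.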
Subtle Attention Guidance for Real Walking in Virtual Environments
Emanuele Nonino, ETH Zurich
Joy Gisler, ETH Zürich
Valentin Holzwarth, University of Liechtenstein
Christian Hirt, ETH Zürich
Andreas Kunz, ETH Zurich
Virtual reality is today being applied to an increasing number of fields such as education, industry, medicine, or gaming. Attention guidance methods are used in virtual reality to help users navigate the virtual environment without being overwhelmed by the overabundance of sensory stimuli. However, visual attention guidance methods can be overt, distracting and confusing, as they often consist of artefacts placed in the center of the user’s field of view. This is the case for the arrow method, which consists of an arrow pointing towards a target object and which serves as a reference for our study. In this paper, we compare such an arrow to two methods that are less distracting and more subtle: haptic feedback and temporal luminance modulation. The haptic feedback method guides a user to a target using controller vibration. The temporal luminance modulation method makes use of flickering visual artefacts placed in the user’s peripheral field of view, and thus does not cover regions of interest that are typically in the central field of view. This creates subtle attention guidance, since these flickering artefacts can be perceived by the user, but not recognized in terms of form and shape. To compare the different attention guidance methods, we designed a virtual environment that can be explored through real walking, wherein a user performs a search task. We then conducted a pilot study with seven participants to compare the haptic feedback and the temporal luminance modulation methods to the arrow method and to a baseline condition of navigation without any attention guidance. The preliminary results suggest that all three methods are more effective than the condition without guidance. Moreover, the temporal luminance modulation method appears to be comparable to the more effective, but non-subtle arrow method in terms of task completion time.
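The temporal luminance modulation cue amounts to flickering a peripheral stimulus around its base luminance. As a minimal sketch — the frequency and amplitude below are illustrative placeholders, not the study's parameters — the per-frame luminance of such a cue can be computed as:

```python
import math

def luminance_modulation(base, t, freq_hz=10.0, amplitude=0.1):
    """Sinusoidal temporal luminance modulation for a subtle
    peripheral attention cue: the cue's luminance oscillates around
    its base value. base: unmodulated luminance; t: time in seconds;
    freq_hz and amplitude are illustrative defaults, chosen so the
    flicker is detectable in the periphery without the cue's shape
    being recognizable."""
    return base * (1.0 + amplitude * math.sin(2.0 * math.pi * freq_hz * t))
```

Peripheral vision is more sensitive to temporal flicker than to spatial detail, which is the perceptual basis for this kind of guidance remaining subtle.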
Towards In-situ Authoring of AR Visualizations with Mobile Devices
Marc Satkowski, Technische Universität Dresden
Weizhou Luo, Technische Universität Dresden
Raimund Dachselt, Technische Universität Dresden
Augmented Reality (AR) has been shown to enhance the data visualization and analysis process by supporting users in their immersive exploration of data in a real-world context. However, authoring such visualizations still heavily relies on traditional, stationary desktop setups, which inevitably separates users from the actual working space. To better support the authoring process in immersive environments, we propose the integration of spatially-aware mobile devices. Such devices also enable precise touch interaction for data configuration while lowering the entry barriers of novel immersive technologies. We therefore contribute an initial set of concepts within a scenario for authoring AR visualizations. We implemented an early prototype for configuring visualizations in-situ on the mobile device without programming and report our first impressions.
Evaluating Wearable Tactile Feedback Patterns During a Virtual Reality Fighting Game
Dixuan Cui, Purdue University
Christos Mousas, Purdue University
A two-study approach was used to explore the effects of tactile feedback patterns on virtual reality gaming experiences of participants, including presence, flow, usability, emotion, technology adoption, and tactile sensation. In our main study, participants were instructed to play a virtual reality fighting game. Five experimental conditions with different tactile feedback patterns (no-tactile, random, fixed, discrete, and realistic tactile feedback) were examined. The results of our study indicated that: (1) tactile feedback patterns have no significant effect on flow and emotion, and (2) the no-tactile feedback condition was rated significantly lower by the participants than other tactile feedback conditions in terms of presence, usability, technology adoption, and tactile sensation. A follow-up study was then conducted to understand the different effects of realistic feedback in active versus passive fighting games. The results of our follow-up study indicated that participants rated tactile sensation and usability higher when under the passive gaming condition. We discuss our findings along with the study limitations and future research directions.
Exploring and Slicing Volumetric Medical Data in Augmented Reality Using a Spatially-Aware Mobile Device
Weizhou Luo, Technische Universität Dresden
Eva Goebel, Technische Universität Dresden
Patrick Reipschläger, Technische Universität Dresden
Mats Ellenberg, Technische Universität Dresden
Raimund Dachselt, Technische Universität Dresden
We present a concept and early prototype for exploring volumetric medical data, e.g., from MRI or CT scans, in head-mounted Augmented Reality (AR) with a spatially tracked tablet. Our goal is to address the lack of immersion and intuitive input of conventional systems by providing spatial navigation to extract arbitrary slices from volumetric data directly in three-dimensional space. A 3D model of the medical data is displayed in the real environment, fixed to a particular location, using AR. The tablet is spatially moved through this virtual 3D model and shows the resulting slices as 2D images. We present several techniques that facilitate this overall concept, e.g., to place and explore the model, as well as to capture, annotate, and compare slices of the data. Furthermore, we implemented a proof-of-concept prototype that demonstrates the feasibility of our concepts. With our work we want to improve the current way of working with volumetric data slices in the medical domain and beyond.
Device-Agnostic Augmented Reality Rendering Pipeline for AR in Medicine
Fabrizio Cutolo, University of Pisa
Nadia Cattari, University of Pisa
Marina Carbone, University of Pisa
Renzo D’Amato, University of Pisa
Vincenzo Ferrari, Università di Pisa
Visual augmented reality (AR) headsets have the potential to enhance surgical navigation by providing physicians with an egocentric visualization interface capable of seamlessly blending the virtual navigation aid with the real surgical scenario. However, technological and human-factor limitations still hinder the routine use of commercial AR headsets in clinical practice.
The aim of this work is to unveil the AR rendering pipeline of a device-agnostic software framework conceived to fulfil the strict requirements of a functional and reliable AR-based surgical navigator and to support the deployment of AR applications for image-guided surgery on different AR headsets. The AR rendering pipeline provides highly accurate AR overlay under both video and optical see-through modalities, with almost no perceivable difference in the perception of relative distances and depths when used in the peripersonal space. The rendering pipeline allows the intrinsic and extrinsic projection parameters of the virtual rendering cameras to be set offline and at runtime: under video see-through modality, the rendering pipeline can be modified to adapt the warping of the camera frames and pursue an orthostereoscopic and almost natural perception of the real scene in the peripersonal space. Similarly, under optical see-through modality, the calibrated intrinsic and extrinsic parameters of the eye-display model can be updated by the user to account for the actual position of the user’s eyes. The results of the performance tests with an eye-replacement camera show an average motion-to-photon latency of around 110 ms for both AR rendering modalities. The AR platform for surgical navigation has already proven its efficacy and reliability under VST modality during real surgical operations in craniomaxillofacial surgery.
Augmented Reality meets Non-Fungible Tokens: Insights Towards Preserving Property Rights
Mihai Duguleana, University Transilvania of Brasov
Florin Girbacia, Transilvania University of Brasov
Non-fungible tokens (NFTs) are one of the newest use cases of blockchain technology. Ethereum Virtual Machine (EVM)-based blockchains such as Ethereum, Binance Smart Chain (BSC), or Polygon/Matic have standardized this type of token using interfaces such as ERC721 and ERC1155. This development has fostered a connection between blockchain and Mixed Reality technologies, which can now coexist in cohesive applications. In this paper, we present the opportunities and challenges resulting from using Augmented Reality NFTs, with a particular focus on the preservation of property, whether physical or purely digital. We present a methodology that can ensure the protection of privacy and the preservation of intellectual property when transitioning assets from the virtual to the physical world.
Positive Computing in Virtual Reality Industrial Training
Michele Gattullo, Polytechnic Institute of Bari
Enricoandrea Laviola, Polytechnic Institute of Bari
Michele Fiorentino, Polytechnic Institute of Bari
Antonio Uva, Polytechnic Institute of Bari
This research investigates the application of positive computing principles to Virtual Reality (VR) training scenarios in which the Virtual Environment (VE) does not have a direct influence on operator learning. We propose placing only the 3D models of the objects needed for the task in a VE consisting of 360° panoramas of natural environments. A preliminary evaluation of the user experience showed that hedonic quality is significantly higher with this VE than with a 3D-modeled empty room. However, we also observed a reduction in pragmatic quality, due to potential distractions. Thus, further research is needed to demonstrate the efficacy of our positive computing approach to training against a traditional one based on a faithful 3D reproduction of the real environment.
A Classification of Augmented Reality Approaches for Spatial Data Visualization
Kostas Cheliotis, National Technical University of Athens
Fotis Liarokapis, CYENS – Centre of Excellence
Margarita Kokla, National Technical University of Athens
Eleni Tomai, National Technical University of Athens
Katerina Pastra, ATHENA Research Center
Athanasia Darra, National Technical University of Athens
Maria Beserianou, National Technical University of Athens
Marinos Kavouras, National Technical University of Athens
The field of Augmented Reality (AR) has seen rapid expansion over the past decades, with many sectors nowadays applying AR methodologies to their own particular visualization needs. One field in particular that has adopted AR visualization techniques is the geospatial sciences, which have identified the inherently spatial nature of AR as particularly applicable for visualizing (geo)spatial relationships in systems of interest. However, while multiple classification schemes for AR applications can be found in the AR literature, to our knowledge no classification exists that focuses specifically on the spatial aspects of AR applications, namely the visualization size, scale, and acknowledgement of and alignment with the user’s physical environment. Therefore, no AR classification exists that is specifically applicable to the geospatial sciences. In this paper we present an initial classification of AR approaches for spatial data visualization, highlighting the different spatial characteristics that are of particular relevance to the geospatial sciences. We expect that the initial classification presented here will help researchers working with AR visualizations to organize their application designs with regard to spatial characteristics.
A Grasp on Reality: Understanding Grasping Patterns for Object Interaction in Real and Virtual Environments
Andreea Dalia Blaga, Birmingham City University
Maite Frutos-Pascual, Birmingham City University
Chris Creed, Birmingham City University
Ian Williams, Birmingham City University
Grasping is the primary and most natural interaction paradigm people use for everyday manual tasks in reality. However, while grasping real objects in Real Environments (RE) has been highly explored in the literature, there is an emerging trend to explore the complications and nuances of hand interaction, including grasping, in Virtual Environments (VE). While this is leading towards a richer body of work on users’ approaches to grasping in VE, a direct comparison between grasping real objects in RE and grasping virtual representations of real objects in VE has not been explored before. To address this gap, we perform a user study (n=20) on 7 representative real objects and their virtual twins from the “Yale–Carnegie Mellon University–Berkeley Object and Model Set”. We report on 840 grasp instances collected during a grasp and translate task across RE and VE. We present initial results on the observed differences between RE and VE grasping across the different objects using the grasp type metric from real grasping studies. We explore the rationale for the observed differences between RE and VE and present indicative trends for VE grasping. Finally, we propose methods and approaches for furthering work within VE grasping to improve the natural grasping interface.
A Study of Human-Machine Teaming For Single Pilot Operation with Augmented Reality
Narek Minaskan, German Research Center for Artificial Intelligence DFKI
Alain Pagani, German Research Center for Artificial Intelligence
Charles-Alban Dormoy, CATIE
Jean-Marc André, ENSC
Didier Stricker, German Research Center for Artificial Intelligence
With the increasing number of flights in recent years, airlines and aircraft manufacturers are facing a daunting problem: a shortage of pilots. One solution is to reduce the number of pilots in the aircraft and move towards single pilot operations (SPO). However, with this approach, the safety and quality of flights must still be guaranteed. Due to the complex nature of the piloting task, a form of human-machine teaming is required to provide extra help and insight to the pilot. To this end, it is natural to look for suitable artificial intelligence (AI) solutions, as the field has evolved rapidly over the past decades with the rise of machine learning and deep learning. The ideal AI for this task should aim to improve human decision-making and focus on interaction with the human rather than simply automating processes without human intervention. This field of AI, designed to communicate with the human, is known as cognitive computing (CC). Several technologies can be employed to cover different aspects of this interaction; one such technology is augmented reality (AR), which has now matured enough to be used in commercial products. We therefore conducted an experiment to study the interaction between the pilot and a CC teammate, and to understand whether such assistance is required to enable a safe transition towards SPO.
Exploring Augmented Reality Privacy Icons for Smart Home Devices and their Effect on Users' Privacy Awareness
Kathrin Knutzen, Ilmenau University of Technology
Florian Weidner, Ilmenau University of Technology
Wolfgang Broll, Ilmenau University of Technology
Smart home devices often blend seamlessly into the environment and operate ubiquitously, providing almost no contextual information such as what data is collected or which sensors are activated. Augmented Reality (AR), for example in the form of head-mounted displays (HMDs), could offer users a non-intrusive way to query the devices and obtain privacy-related information. This pilot study explored how privacy icons, displayed by an AR HMD and co-located with smart home devices, affect users’ privacy awareness. In a qualitative within-subject study, 16 participants first experienced a setup without AR privacy information and then one with such information. Participants’ answers indicate high potential and excitement towards such a setup. Among other things, they reported changed privacy awareness after experiencing AR privacy icons: while the icons prioritized privacy-related information for users and educated them to promote conscious decision-making, they sometimes also reinforced existing negative attitudes towards smart home devices. Further, the display of icons in AR has the potential to inspire trust towards manufacturers and providers, potentially leading to a false sense of security. Alongside the discussion of participants’ answers, we outline the implications of our findings and provide recommendations for future research, including trust and learning effects.
Analysing a UI's Impact on the Usability of Hands-free Interaction on Smart Glasses
Alexander Mantel, Clausthal University of Technology
Michael Prilla, Clausthal University of Technology
As smart glasses and other head-mounted devices (HMDs) become more developed, the number of different use cases and settings where they are deployed has also increased. This includes scenarios where the hands of the user are not available to interact with a system running on such hardware, which precludes some interaction designs on these devices, such as free-hand gestures or the use of a touchpad attached to the device (e.g., on the frame). Alternative modalities include head gestures and speech-based input. However, while these interfaces leave the hands of their users free, they are not as intuitive: common metaphors like touching, pointing, or clicking do not apply. Hence there is an increased need to explain these mechanisms to the user and to make sure they can be used to operate such a device. However, there is no work available on how this should be done properly.
In the research presented here, we conducted a study on different ways to support the use of head gestures and voice control on HMDs. For each modality, an abstract as well as an explicit UI design for communicating its usage to users was designed and evaluated in a care setting, where hands-free interaction is necessary to interact with patients and for hygienic reasons. First results from a within-subjects analysis show that, surprisingly, there does not seem to be much of a difference in performance when comparing these approaches to each other, or when comparing them to a baseline implementation that offered no additional help. User preferences between the designs diverged: participants often had one clear favourite among the head-gesture UIs while barely noticing the difference between the speech-based UIs. Preferences for certain designs did not seem to impact performance in objective and subjective measures such as error rates and questionnaire results. This suggests that either implementations’ support for these modalities should adapt to individual preferences, or that other areas of support need attention to increase usability.
Visual SLAM with Graph-Cut Optimized Multi-Plane Reconstruction
Fangwen Shu, DFKI GmbH
Yaxu Xie, DFKI GmbH
Jason Rambach, German Research Center for Artificial Intelligence (DFKI)
Alain Pagani, German Research Center for Artificial Intelligence
Didier Stricker, German Research Center for Artificial Intelligence
This paper presents a semantic planar SLAM system that improves pose estimation and mapping using cues from an instance planar segmentation network. While mainstream approaches use RGB-D sensors, employing a monocular camera with such a system still faces challenges such as robust data association and precise geometric model fitting. In the majority of existing work, geometric model estimation problems such as homography estimation and piece-wise planar reconstruction (PPR) are usually solved by standard (greedy) RANSAC separately and sequentially. However, setting the inlier-outlier threshold is difficult in the absence of information about the scene (i.e., the scale). In this work, we revisit these problems and argue that the two aforementioned geometric models (homographies/3D planes) can be solved by minimizing an energy function that exploits spatial coherence, i.e., with graph-cut optimization, which also tackles the practical issue of inaccurate output from a trained CNN. Moreover, we propose an adaptive parameter setting strategy based on our experiments, and report a comprehensive evaluation on various open-source datasets.
Augmented Reality Interface for Sailing Navigation: a User Study for Wind Representation
Francesco Laera, Polytechnic University of Bari
Vito Manghisi, Polytechnic University of Bari
Alessandro Evangelista, Polytechnic University of Bari
Mario Massimo Foglia, Polytechnic University of Bari
Michele Fiorentino, Polytechnic University of Bari
This paper presents a novel Augmented Reality (AR) interface for head-mounted displays (HMDs), specifically for sailing navigation. Compared to the literature and the commercial solutions available, the novelty is the use of boat-referenced 3D graphics, which represent wind direction and intensity, and heading, with monochrome green elements. Furthermore, we carried out a user validation study. We implemented a virtual simulator including a sailboat, the marine environment (i.e., sea, sky, marine traffic, and sounds), and the presented interface as an AR overlay. We evaluated the effectiveness of the wind representation of the AR interface through an online questionnaire based on a video simulation, asking users to imagine it as the result of an AR visualization. We defined one test scenario with wind variations and a distracting element (i.e., a crossing vessel). 75 sailors (59% experts, with more than 50 sailing days per year) participated in the questionnaire, and most of them (63%) considered the video effective at simulating the AR interface. In addition, 75% are ready to wear an AR device while sailing. The usability (SUS) and user experience (UEQ) questionnaires also yielded positive results.
Depth Inpainting via Vision Transformer
Ilya Makarov, HSE University
Gleb Borisenko, HSE University
Depth inpainting is a crucial task for working with augmented reality. In previous works, missing depth values were completed by convolutional encoder-decoder networks, which is a bottleneck for the current models. Nowadays, vision transformers show high quality in various computer vision tasks. In this study, we present a supervised method for depth inpainting via RGB image and sparse depth map using vision transformers. The proposed model is trained and evaluated on the NYUv2 dataset used for indoor navigation tasks for augmented reality and robotics. Experiments show that a vision transformer with a restrictive convolutional tokenization model significantly improves the quality of the inpainted depth map.
Evaluation of Visual Requirements and Software-Design for Immersive Visibility in Industrial Applications
Maximilian Rosilius, University of Applied Sciences Würzburg-Schweinfurt
Benedikt Wirsing, University of Applied Sciences Würzburg-Schweinfurt
Ingo von Eitzen, University of Würzburg
Markus Wilhelm, University of Applied Sciences Würzburg-Schweinfurt
Jan Schmitt, University of Applied Sciences Würzburg-Schweinfurt
Bastian Engelmann, University of Applied Sciences Würzburg-Schweinfurt
Volker Braeutigam, University of Applied Sciences Würzburg-Schweinfurt
Many sources currently predict increasing use of AR technology in industrial environments. The task of immersive productive assistance systems is to provide information contextually to the industrial user. It is therefore essential to explore the factors and effects that influence the visibility, and the corresponding quality, of this information. Motivated by the technical limitations of additive display technology and by application conditions, this study evaluated the immersive visibility of Landolt rings in various greyscales against ambient illuminance levels on different industrial-like surfaces, with and without a white virtual background. For this purpose, an empirical study with a full factorial within-subjects design (n=23) was conducted on Microsoft HoloLens 2 hardware. The mean values of the main effects indicate that visibility is significantly affected by ambient illuminance (best results at the lower level), greyscale (best results at the middle level), and virtual background (best results with the background). In contrast, the choice of surface is shown to have no statistically significant effect on visibility; however, it affects response time. Additionally, interactions between variables were analyzed, leading to design recommendations for immersive industrial applications.
Comparing Head and AR Glasses Pose Estimation
Ahmet Firintepe, BMW Group Research, New Technologies, Innovations
Oussema Dhaouadi, BMW Group Research, New Technologies, Innovations
Alain Pagani, German Research Center for Artificial Intelligence
Didier Stricker, German Research Center for Artificial Intelligence
In this paper, we compare AR glasses pose estimation and head pose estimation performance. We train different pose estimation approaches for head pose estimation with generated head pose labels to compare them to their AR glasses estimation accuracy. These include the state-of-the-art GlassPoseRN and P2P networks, as well as our novel CapsPose algorithm, the first network to deploy Capsule Networks for 6-DoF pose estimation. We show that estimating the AR glasses pose is in general more accurate than estimating the head pose. In a first analysis, we show the regression performance of the models when both the AR glasses and the faces are known to the network during training. We then analyze driver generalization performance, where all glasses are known but some of the drivers are unknown to the neural networks; there, AR glasses pose estimation again exceeds head pose estimation. Only in our third analysis, in which a glasses model previously unknown to the neural network is added, does head pose estimation perform better than AR glasses pose estimation. We outperform the current state-of-the-art method GlassPoseRN on the HMDPose dataset, reducing the error by 46% for orientation and 51% for translation.
A Nugget-Based Concept for Creating Augmented Reality
Linda Rau, RheinMain University of Applied Sciences
Robin Horst, RheinMain University of Applied Sciences
Yu Liu, RheinMain University of Applied Sciences
Ralf Dörner, RheinMain University of Applied Sciences
Creating Augmented Reality (AR) applications can be challenging, especially for persons with little or no technical background. This work introduces a concept for pattern-based AR applications that we call AR nuggets. One AR nugget reflects a single pattern from an application domain and includes placeholder objects and default parameters. Authors of AR applications can start with an AR nugget as an executable stand-alone application and customize it. This aims to support and facilitate the authoring process. Additionally, this paper identifies suitable application patterns that serve as a basis for AR nuggets. We implement and adapt AR nuggets to an exemplary use case in the medical domain. In an expert user study, we show that AR nuggets add a statistically significant value to an educational course and can support continuing education in the medical domain.
Exploring the Effect of Visual Cues on Eye Gaze During AR-Guided Picking and Assembly Tasks
Arne Seeliger, ETH Zurich
Gerrit Merz, Karlsruhe Institute of Technology
Christian Holz, ETH Zürich
Stefan Feuerriegel, ETH Zurich
In this paper, we present an analysis of eye gaze patterns pertaining to visual cues in augmented reality (AR) for head-mounted displays (HMDs). We conducted an experimental study involving a picking and assembly task, which was guided by different visual cues. We compare these visual cues along multiple dimensions (in-view vs. out-of-view, static vs. dynamic, sequential vs. simultaneous) and analyze quantitative metrics such as gaze distribution, gaze duration, and gaze path distance. Our results indicate that visual cues in AR significantly affect eye gaze patterns. Specifically, we show that the effect varies depending on the type of visual cue. We discuss these empirical results with respect to visual attention theory.
Manipulating Rotational Perception in Virtual Reality
Jude Afana, University of Nottingham
Joe Marshall, University of Nottingham
Paul Tennent, University of Nottingham
People become disoriented and detached from the real world when immersed in a virtual environment, losing track of rotation in the real world. This paper studies people’s ability to maintain perception of their spatial orientation in the real environment while engaged in a virtual experience, and explores how visual cues affect this ability. Twelve participants performed targeting tasks involving rotations, followed by pointing in a known direction, to measure the error in their perception of real-world orientation. Error was measured in three VR environments: visual cues consistent with real-world rotation; visual cues slowly changing to become inconsistent with the real world; and no rotational visual cues. We found that visual cues are essential for people to perceive real-world orientation, and removing cues results in drastic disorientation. Moreover, deliberately altering visual cues can be used to control people’s orientation perception and disorient them in a desired direction; in our experiment, participants did not notice this manipulation. Manipulating the presentation of visual cues may allow designers to control, correct, and manipulate people’s cognitive representation of their orientation and position not only in the virtual world but also in the real world, be it for in-place redirection or “redirected standing”, or corrective redirection for safety.
An RGB-D Refinement Solution for Accurate Object Pose Estimation
Lounès Saadi, INSA Rouen Normandie
Bassem Besbes, Diota
Sebastien Kramm, Université Rouen Normandie
Abdelaziz Bensrhair, INSA Rouen Normandie
Digital solutions are increasingly employed in industry with the aim of improving manufacturing processes. In this paper, we consider the issue of accurately localizing objects for augmented reality applications. Augmented reality has become a major asset for manufacturing processes, as it enables relevant information about manufactured objects to be provided interactively. These applications require high object localization accuracy to avoid displaying wrong information. However, standard object localization methods do not always meet the industry’s accuracy constraints. We address this problem with a novel RGB-D refinement method that provides optimal object localization accuracy. Given a coarse initial object localization, we iteratively refine the input by combining a geometric constraint and depth consistency. We show that fusing these two constraints significantly improves object localization accuracy. Our refinement method is evaluated on the challenging Linemod and Occlusion datasets, demonstrating high accuracy and robustness. Furthermore, we present quantitative and qualitative results on several industrial objects to show the contribution of our method in the applicative field.
VR Collaboration in Large Companies: An Interview Study on the Role of Avatars
Natalie Hube, Mercedes-Benz AG
Katrin Angerbauer, University of Stuttgart
Daniel Pohlandt, Mercedes-Benz AG
Kresimir Vidackovic, Hochschule der Medien – University of Applied Sciences
Michael Sedlmair, University of Stuttgart
Collaboration is essential in companies and often requires physical presence; thus, more and more Virtual Reality (VR) systems are used to work together remotely. To support social interaction, human representations in the form of avatars are used in collaborative virtual environment (CVE) tools. However, up to now, avatar representations are often limited in their design and functionality, which may hinder effective collaboration. In our interview study, we explored the status quo of VR collaboration in a large automotive company setting with a special focus on the role of avatars. We collected interview data from 21 participants, from which we identified challenges of the current avatar representations used in our setting. Based on these findings, we discuss design suggestions for avatars in a company setting that aim to improve social interaction. In contrast to state-of-the-art research, we found that users within the context of a large automotive company have different needs with respect to avatar representations.
Indicators of Training Success in Virtual Reality Using Head and Eye Movements
Joy Gisler, ETH Zürich
Johannes Schneider, University of Liechtenstein
Joshua Handali, University of Liechtenstein
Valentin Holzwarth, University of Liechtenstein
Christian Hirt, ETH Zürich
Wolfgang Fuhl, Wilhelm Schickard Institut
Jan vom Brocke, University of Liechtenstein
Andreas Kunz, ETH Zurich
An essential aspect of evaluating Virtual Training Environments (VTEs) is the assessment of users’ training success, preferably in real time, e.g., to continuously adapt the training or to provide feedback. To achieve this, leveraging users’ behavioral data has been shown to be a valid option. Behavioral data include sensor data from eye trackers, head-mounted displays, and hand-held controllers, as well as semantic data like a trainee’s focus on objects of interest within a VTE. While prior works investigated the relevance of mostly one, and in rare cases two, behavioral data sources at a time, we investigate the benefits of combining three data sources. We conduct a user study with 48 participants in an industrial training task to find correlations between training success and measures extracted from different behavioral data sources. We show that all individual data sources, i.e., eye gaze position and head movement, as well as duration of objects in focus, are related to training success. Moreover, we find that simultaneously considering multiple behavioral data sources better explains training success. Further, we show that training outcomes can already be predicted significantly better than chance by recording trainees for only part of their training. This could be used to dynamically adapt a VTE’s difficulty. Finally, our work contributes to the long-term goal of substituting traditional evaluation of training success (e.g., through pen-and-paper tests) with an automated approach.
Compelling AR Earthquake Simulation with AR Screen Shaking
Chotchaicharin Setthawut, Nara Institute of Science and Technology
Johannes Schirm, Nara Institute of Science and Technology
Naoya Isoyama, Nara Institute of Science and Technology
Diego Vilela Monteiro, Nara Institute of Science and Technology
Hideaki Uchiyama, Nara Institute of Science and Technology
Nobuchika Sakata, Nara Institute of Science and Technology
Kiyoshi Kiyokawa, Nara Institute of Science and Technology
In the past, virtual reality (VR) has been used for earthquake safety training, but the problem is that the simulated environment differs from the user’s immediate environment. So that users take the simulation more seriously and can apply the acquired experience, we propose a video see-through augmented reality (AR) earthquake simulation with a novel AR screen shaking technique that simulates the force applied to the user’s head. The experimental results show that our AR system can increase the presence and believability of the earthquake compared to the VR system and to the AR system without the screen shaking technique.
Reproduction of Environment Reflection using Extrapolation of Front Camera Images in Mobile AR
Shun Odajima, Saitama University
Takashi Komuro, Saitama University
In this paper, we propose a method to reproduce the reflection of a real scene on a virtual object using only the images captured by a camera attached to the front of a mobile device. Since it is not possible to acquire the entire scene using only the front camera, the area surrounding the front camera image is extrapolated to obtain a sufficient scene for reflection. Image transformation using deep neural networks is used for extrapolation, and high-quality, stable extrapolation is realized by using the extrapolation results from the previous frame. In an experiment evaluating the quality of the extrapolation and the reproduced reflection, we confirmed that both the extrapolated images and the reproduced reflection looked natural. We also conducted an experiment to evaluate users’ impression of the reflection. The results showed that, in terms of naturalness of reflection and material perception, the proposed method was effective for participants who were familiar with AR.
ASAP: Auto-generating Storyboard and Previz with Virtual Humans
Hanseob Kim, Korea Institute of Science and Technology
Ghazanfar Ali, University of Science and Technology
Jae-In Hwang, Korea Institute of Science and Technology
We present a tool for Auto-generating Storyboard And Previz for screenwriters and filmmakers, called ASAP.
Our system allows users to easily simulate their stories in the form of 3D animated/visual scenes with virtual humans in a virtual environment.
We only ask users to write their script in Final Draft, a dedicated screenwriting tool, and upload it to our system.
The uploaded script is parsed into paragraphs of the action, character, and dialogue.
From those paragraphs (i.e., text data), our system uses a combination of deep learning, data-driven, and rule-based approaches to instantly generate virtual human’s physical motions and co-speech gestures, presenting natural behavior/dialogue scenes.
Thus, users can observe automatically generated pre-visualized animations (i.e., previz) from the script and can create the storyboard by capturing scenes being played.
ASAP can minimize the costly, time-consuming, and labor-intensive work in the early stages of filmmaking, and do it as soon as possible.
We believe that our tool and approach have great potential for wide adoption in the film industry.
Simultaneous Real Walking and Asymmetric Input in Virtual Reality with a Smartphone-based Hybrid Interface
Li Zhang, Northwestern Polytechnical University
Weiping He, Northwestern Polytechnical University
Zhiwei Cao, Northwestern Polytechnical University
Shuxia Wang, Northwestern Polytechnical University
Huidong Bai, The University of Auckland
Mark Billinghurst, The University of Auckland
Compared to virtual navigation methods like joystick-based teleportation in Virtual Reality (VR), real walking enables more natural and realistic physical behaviors and a better overall user experience. This paper presents a smartphone-based hybrid interface that combines a smartphone and a handheld controller, enabling real walking while viewing the physical environment and providing asymmetric 2D–3D input at the same time. The phone is virtualized in VR and streams a view of the real world to a collocated virtual screen, enabling users to avoid or remove physical obstacles. The touchscreen and the controller provide asymmetric input choices for users to improve interaction efficiency in VR. We implemented a prototype system and conducted a pilot study to evaluate its usability.
Heat Pain Threshold Modulation Through Experiencing Burning Hands in Augmented Reality
Daniel Eckhoff, City University of Hong Kong
Alvaro Cassinelli, City University of Hong Kong
Christian Sandor, City University of Hong Kong
Visual stimuli can modulate the temperature at which people perceive heat pain. However, very little research exists on the potential use of Augmented Reality (AR) to modulate the heat pain threshold (HPT). In this paper, we investigate whether participants’ HPTs can be modulated by observing virtual flames on their hands through a video see-through head-mounted display (VST-HMD). In a pilot study (n = 7), we found that rendering virtual flames had a significant effect (p < 0.05) on the HPT: the virtual flames on the participants’ hands led to a decrease in the temperature at which they perceived heat-related pain. These results indicate that AR-induced stimuli may be an effective way to achieve top-down modulation of the experience of pain.
Research on the Usability of Hand Motor Function Training based on VR System
Yang Gao, Beihang University
Yingnan Zhai, Beihang University
Mingyang Hao, Beihang University
Lizhen Wang, Beihang University
Aimin Hao, Beihang University
Virtual reality technology can provide immersive visual, auditory, and haptic experiences to facilitate limb motor training. In this paper, we develop a VR-based hand motor rehabilitation system. The system collects and analyzes users’ limb behavior data, customizes hand motor training programs for different scenarios and difficulty levels, and integrates management, evaluation, customization, and feedback functions. We then conduct several experiments to verify the effectiveness and usability of the rehabilitation system. Through FMA, NHPT, and SUS assessments of healthy individuals, we found that the average SUS score across all participants was 80, indicating that the hand rehabilitation system is practical and feasible. Trials with upper-limb motor disorder patients and the corresponding data analysis will be carried out in the future.
VRSmartphoneSketch: Augmenting VR Controller With A Smartphone For Mid-air Sketching
Shouxia Wang, Northwestern Polytechnical University
Li Zhang, Northwestern Polytechnical University
Jingjing Kang, NWPU
Shuxia Wang, Northwestern Polytechnical University
Weiping He, Northwestern Polytechnical University
We propose VRSmartphoneSketch, a mid-air sketching system that combines a smartphone with a VR controller to allow hybrid 2D and 3D input. We conducted a user study with 12 participants to explore the utility of the hybrid input of the bundled smartphone and controller when there was no drawing surface. The results show that the proposed system did not significantly improve stroke accuracy or user experience compared with the benchmark controller-only drawing condition. However, users gave positive feedback on the stability and sense of control of strokes afforded by the smartphone’s touchscreen.
An Empirical Study of Size Discrimination in Augmented Reality
Liwen Wang, City University of Hong Kong
Christian Sandor, City University of Hong Kong
Existing psychophysical experiments show that size perception can influence human identification of object properties (e.g., shape or weight) in augmented reality (AR). Some recent studies have revealed the detection threshold of object size for real physical objects. However, users’ absolute detection threshold for object size augmentation is not clear, which limits the further evaluation of AR design. In this paper, we present two two-alternative forced-choice experiments on size perception of virtual objects in AR to explore the detection threshold of size differences in object augmentation. Our experimental results demonstrate that the users’ point of subjective equality (PSE) is 4.00%, and that the size difference could be easily detected when the virtual object was more than 5.18% larger.
Watch-Your-Skiing: Visualizations for VR Skiing using Real-time Body Tracking
Xuan Zhang, Tokyo Institute of Technology
Erwin Wu, Tokyo Institute of Technology
Hideki Koike, Tokyo Institute of Technology
Correcting one’s body posture is necessary when acquiring specific skills, especially in sports such as skiing or gymnastics. However, it is difficult to observe one’s own posture objectively, which is why a trainer is usually required. In this paper, we introduce a VR ski training system that uses full-body motion capture to provide real-time feedback to the user. Two different types of visual cues are developed and qualitatively compared in a user study. This system opens up the opportunity to learn alpine skiing by oneself and also has the potential to be applied to other sports and skill acquisition.
Designing a Multi-Modal Communication System for the Deaf and Hard-of-Hearing
Gi-bbeum Lee, Korea Advanced Institute of Science and Technology
Hyuckjin Jang, Korea Advanced Institute of Science and Technology
Hyundeok Jeong, KAIST
Woontack Woo, KAIST
In remote collaboration using Augmented Reality (AR), speech and gesture are the major communication methods for the general public. However, the Deaf and Hard-of-Hearing (DHH) population cannot take part in such communication due to the absence of a sign language interface, sign language being their primary language. Recent works have tried to augment spoken language with sign language animations or captions, but research on conveying sign language alongside spoken language is still very limited. In this paper, we propose a novel multi-modal communication system that integrates sign language translation, speech recognition, and shared object manipulation in a mobile AR environment. Though the system is currently under development, we demonstrate a rapid prototype of a telemedicine app, using the video prototyping method to integrate the system modules. We performed preliminary interviews about our approach with DHH users, a sign language interpreter, and a physician, and we discuss insights for the future design of DHH communication support in AR collaboration systems. This study has socio-cultural and economic impact on the DHH population as a barrier-free design of a remote collaboration system in a practical scenario. Another contribution of this work is a novel user-centered system for DHH users in AR that integrates existing technologies.
Multi-scale Mixed Reality Collaboration for Digital Twin
Hyung-il Kim, KAIST
Taehei Kim, KAIST
Eunhwa Song, KAIST
Seo Young Oh, KAIST
Dooyoung Kim, KAIST
Woontack Woo, KAIST
In this poster, we present a digital twin-based mixed reality system for remote collaboration with size scaling of both the user and the space. The proposed system supports collaboration between an AR host user and a VR remote user by sharing a 3D digital twin of the AR host user’s space. To enhance coarse authoring of the shared digital twin environment, we provide size scaling of the environment with a world-in-miniature view. We also enable scaling of the VR user’s avatar to enhance both coarse (size-up) and fine-grained (size-down) authoring of the digital twin environment. We describe the system setup, input methods, and interaction methods for scaling space and user.
Focus Group on Social Virtual Reality in Social Virtual Reality: Effects on Emotion and Self-Awareness
Pat Manyuru, University of Queensland
Chelsea Dobbins, The University of Queensland
Ben Matthews, University of Queensland
Oliver Baumann, Bond University
Arindam Dey, University of Queensland
Social Virtual Reality (VR) platforms enable multiple users to be present together in the same virtual environment (VE) and interact with each other in this space. These platforms are used in different application areas including teaching and learning, conferences, and meetings. To improve engagement, safety, and the overall positive experience on such platforms, it is important to understand the effect they have on users’ emotional states and self-awareness while in the VE. In this work, we present a focus group study in which we discussed users’ opinions about social VR; the focus group itself was run in a social VR platform created with Mozilla Hubs. Our primary goal was to investigate users’ emotional states and self-awareness while using this platform. We measured these effects using the Positive and Negative Affect Schedule (PANAS) and a Self-Assessment Questionnaire (SAQ). The experiment involved 12 adult participants, volunteers from around the world with previous experience of VR.
XR Mobility Platform: Multi-Modal XR System Mounted on Autonomous Vehicle for Passenger's Comfort Improvement
Taishi Sawabe, Nara Institute of Science and Technology
Masayuki Kanbara, Nara Institute of Science and Technology
Yuichiro Fujimoto, Nara Institute of Science and Technology
Hirokazu Kato, Nara Institute of Science and Technology
This paper introduces a multimodal XR mobility system mounted on an autonomous vehicle, consisting of immersive displays (a cylindrical screen or an HMD) and a motion platform, designed to improve passengers’ comfort. The interior environment surrounding passengers is expected to change dramatically once autonomous vehicles are realized in the near future. For example, since the driver is freed from driving, he or she becomes a passenger without steering authority, and the windshield and windows can be turned into information screens. The goal of this research is to develop technology that improves passengers’ comfort during automated driving using the XR mobility platform, a multimodal VR/AR system with a tilt-controllable seat mounted on an autonomous vehicle. This paper introduces the configuration of the XR mobility platform and proposes a movement-sense control method.
Multi-Drone Collaborative Trajectory Optimization for Large-Scale Aerial 3D Scanning
Fangping Chen, Peking University
Yuheng Lu, Peking University
Binbin Cai, Beijing Yunsheng Intelligent Technology Co., Ltd.
Xiaodong Xie, Peking University
Reconstruction and mapping of outdoor urban environments are critical to a large variety of applications, ranging from large-scale, city-level 3D content creation for augmented and virtual reality to digital twin construction for smart cities and autonomous driving. Large-scale city-level 3D models will become another important medium after images and videos. We propose an autonomous approach that reconstructs a voxel model of the scene in real time and estimates the best set of viewing angles according to the precision requirement. These task views are assigned to the drones based on Optimal Mass Transport (OMT) optimization, and multi-level pipelining, borrowed from chip design, is applied to parallelize exploration and data acquisition. Our method includes: (1) real-time perception and reconstruction of the scene voxel model with obstacle avoidance; (2) determining the best observation and viewing angles of the scene geometry through global and local optimization; (3) assigning the task views to the drones and planning paths based on OMT optimization, iterating continuously as new exploration results arrive; and (4) expediting exploration and data acquisition in parallel through a multi-stage pipeline to improve efficiency. Our method can schedule drone routes according to the scene and its optimal acquisition perspectives in real time, avoiding the model voids and loss of accuracy caused by traditional aerial 3D scanning with fixed survey routes that disregard the objects being captured, and laying a solid foundation for 3D real-scene models to become directly usable 3D data sources for AR and VR. We evaluate the effectiveness of our method on several groups of large-scale city-level data; the results show that the accuracy and efficiency of reconstruction are greatly improved.
3D Volume Visualization and Screen-based Interaction with Dynamic Ray Casting on Autostereoscopic Display
Ruiyang Li, Tsinghua University
Tianqi Huang, Tsinghua University
Hanying Liang, Tsinghua University
Boxuan Han, Tsinghua University
Xinran Zhang, Tsinghua University
Hongen Liao, Tsinghua University
Augmented reality (AR) is an emerging technology for improving visualization experiences. However, visualizing volume data is limited in existing AR systems due to the lack of an intuitive and precise exploration scheme. In this paper, we present a 3D augmented volume visualization and screen-based interaction method. An autostereoscopic handheld display is utilized to achieve naked-eye 3D perception, and a stereo camera is adopted to track the display’s 6-DoF pose. We implement real-time ray casting with GPU acceleration and enhance the visual experience by defining the dynamic view frustum, clipping interaction, and transfer function based on the pose of the handheld display. Our display system achieves real-time rendering and tracking performance with dynamic visual effects, allowing both a global overview and detailed visualization of arbitrary clipping planes. We also perform a user study comparing an anatomical landmark annotation task in a 2D environment with our system. The results show a significant reduction in completion time and improvements in depth perception and comprehension of complex topologies. Furthermore, to illustrate the applicability of our system, we present three volumes from different biological scales.
Occlusion Handling in Outdoor Augmented Reality using a Combination of Map Data and Instance Segmentation
Takaya Ogawa, Osaka University
Tomohiro Mashita, Osaka University
Visual consistency between virtual objects and the real environment is essential for improving user experience in Augmented Reality (AR), and occlusion handling is one of the key factors in maintaining it. In application scenarios covering small areas, such as indoors, various methods can acquire the depth information required for occlusion handling. However, in wide environments such as outdoors, especially scenes containing many buildings, occlusion handling is difficult because acquiring an accurate depth map is challenging. Several studies that have tackled this problem utilize 3D models of real buildings, but they suffer from the limited accuracy of the 3D models and of camera localization. In this study, we propose a novel occlusion handling method using a monocular RGB camera and map data. Our method detects the regions of buildings in a camera image using instance segmentation and then achieves accurate occlusion handling by matching each building instance in the image to the corresponding building in the map. A qualitative evaluation shows improved occlusion handling with buildings, and a user study shows better depth and distance perception than a model-based method.
Finding a range of perceived natural visual walking speed for stationary travelling techniques in VR
Nilotpal Biswas, Indian Institute of Technology Guwahati
Samit Bhattacharya, Indian Institute of Technology
Travel is one of the most significant interactions in virtual reality (VR).
Researchers have proposed many Virtual Locomotion Techniques (VLTs) to provide natural, efficient, and usable ways of navigating in VR without causing VR sickness. Stationary VLTs are those that do not demand any physical movement from the user to travel in the virtual environment. To improve the experience of walking with such VLTs, it is essential that the view transition speed mimics the natural walking speed of the user. In this paper, we describe a within-subject study performed to establish a range of perceptually natural walking speeds while remaining stationary. In the study, we provided vibrotactile feedback behind the subjects’ ears to mitigate motion sickness. The subjects were exposed to visuals with gains ranging from 1.0 to 3.0, where the slowest speed was the estimated natural walking speed of the user and the highest was three times faster. The perceived naturalness of each speed was evaluated via self-report. We found the range of perceptually natural visual gain to be 1.40 to 1.78.
A Japanese Character Flick-Input Interface for Entering Text in VR
Ryota Takahashi, Osaka University
Shizuka Shirai, Osaka University
Jason Orlosky, Osaka University
Yuki Uranishi, Osaka University
Haruo Takemura, Osaka University
This paper presents new flick-input interfaces to improve the usability of Japanese character input in VR space. We designed three different interfaces, called TouchFlick, EyeFlick, and RoundFlick, which make use of various controller interactions and eye gestures. To investigate the effectiveness of these methods, we compared them with a conventional VR QWERTY keyboard with ray-based selection. We found that TouchFlick was significantly faster and RoundFlick had a significantly lower error rate for experienced users. On the other hand, for users with no experience of flick input, input efficiency was the same as with the conventional method. Regarding subjective evaluation, there were no significant differences in usability or mental workload.
Learning to Perceive: Perceptual Resolution Enhancement for VR Display with Efficient Neural Network Processing
Wen-Tsung Hsieh, Graduate Institute of Electronics Engineering, National Taiwan University
Shao-Yi Chien, National Taiwan University
Even though the Virtual Reality (VR) industry is experiencing rapid growth with ever-expanding demand, VR applications have yet to provide a fully immersive experience. The insufficient resolution of VR head-mounted displays (HMDs) hinders users from further immersion in the virtual world. In this work, we attempt to enhance the immersive experience by improving the perceptual resolution of VR HMDs. We employ an efficient neural-network-based approach with a proposed temporal integration loss function. By taking the temporal integration mechanism of the Human Visual System (HVS) into account, our network learns the perception process of the human eye and temporally upsamples a sequence, which in turn improves its perceived resolution. We also discuss a scenario in which our approach is deployed on a VR system equipped with eye tracking, which could save up to 75% of the computational load. Inference-time analysis and a user experiment show that, compared with the state of the art, our approach runs around 1.89 times faster and produces more favorable results.
3D Photography with One-shot Portrait Relighting
Yunfei Liu, Beihang University
Sijia Wen, Beihang University
Feng Lu, Beihang University
3D photography is a fascinating way to synthesize novel views from a limited set of captured views using image-based rendering techniques. However, because they do not take lighting conditions into account, existing methods cannot achieve vivid results for augmented and virtual reality systems. In this paper, we present a physically based framework that explicitly models 3D photography with relighting from a one-shot portrait. Instead of directly rendering new views, we first propose a facial albedo extraction network (FAE-Net) for synthesizing new views under different lighting conditions. To render more realistic reflected light, we develop a solution for accurate mesh reconstruction through fine-grained portrait depth estimation. By combining these two technical components, our method can generate novel views under different lighting conditions, faithfully delivering realistic rendered results. Extensive experiments show that the proposed method achieves better visual results.
Interactive Embodied Agent for Navigation in Virtual Environments
Chong Cao, Beihang University
Te Cao, State Key Laboratory of Virtual Reality Technology and Systems
Yifan Guo, School of New Media Art and Design
Guanyi Wu, School of New Media Art and Design
Xukun Shen, Beihang University
With the rapid development of virtual reality hardware, virtual tours of galleries and museums have become increasingly popular in recent years. However, users sometimes get lost in the virtual space and miss exhibited items during free exploration of the virtual scene. Proper navigation assistance is therefore very important to help users finish the tour and concentrate on the exhibits. An embodied agent can help users explore the scene effectively with higher interactivity and enhanced presence, but it cannot by itself ensure the completeness of a visit. In this paper, we investigate the effect of an embodied agent on providing navigation assistance in virtual environments. We focus on the motion control of the agent and experiment with different speed, stay-time, and interactivity settings. The results show that an interactive agent can enhance visit completeness and user presence. With these findings, we provide important considerations for the design of embodied agents for navigation in virtual environments.
Focus-Aware Retinal Projection-based Near-Eye Display
Mayu Kaneko, Tokyo Institute of Technology
Yuichi Hiroi, Tokyo Institute of Technology
Yuta Itoh, The University of Tokyo
The primary challenge for optical see-through near-eye displays lies in providing correct optical focus cues. Established approaches such as varifocal and light field displays typically sacrifice the temporal or spatial resolution of the resulting 3D images. This paper explores a new direction to address this trade-off by combining a retinal projection display (RPD) with ocular wavefront sensing (OWS). Our core idea is to display a depth-of-field-simulated image on an RPD to produce visually consistent optical focus cues while maintaining the spatial and temporal resolution of the image. To obtain the current accommodation of the eye, we integrate OWS. We demonstrate that our proof-of-concept system successfully renders virtual content with proper depth cues while covering the eye accommodation range from 28.5 cm (3.5 D) to infinity (0 D).
PanoCue: An Efficient Visual Cue With an Omnidirectional Panoramic View for Finding a Target in 3D Space
SeungA Chung, Ewha Womans University
Hwayeon Joh, Ewha Womans University
Eunji Lee, Ewha Womans University
Uran Oh, Ewha Womans University
Finding a specific object is one of the basic tasks in both physical and virtual environments. However, visually scanning large scenes can be inefficient and frustrating, especially when the target is outside of one’s field of view. As a result, there have been several studies on supporting target-finding tasks in three-dimensional environments with various visual cues. However, most studies have focused on conveying either the relative position or the distance of the target to users.
In this study, we propose PanoCue, a visual cue that overlays a panoramic view of the surroundings at the center of the user’s field of view, conveying both the position and the distance of a target in 3D space with respect to the user’s location and head orientation. For evaluation, we conducted a user study with 20 participants who were asked to find a target under different visual cue conditions: PanoCue, Radar, Arrow, and a baseline without cues. We found that the presence of visual cues improves task performance, and that PanoCue significantly reduces travel distance. Findings also showed that our feedback design received positive ratings in terms of ease of use, fatigue, and satisfaction.
Analysis and Validation for Kinematic and Physiological Data of VR Training System
Shuwei Chen, Beihang University
Ben Hu, State Key Laboratory of Virtual Reality Technology and Systems, Beihang University
Yang Gao, State Key Laboratory of Virtual Reality Technology and Systems
Zhiping Liao, affiliated with Zhejiang University School of Medicine
Yang Liu, Sir Run Run Shaw Hospital (SRRSH), affiliated with the Zhejiang University School of Medicine
Jianhua Li, Sir Run Run Shaw Hospital (SRRSH), affiliated with the Zhejiang University School of Medicine
Aimin Hao, Beihang University
Virtual reality applications can provide a more immersive environment that improves users’ enthusiasm to participate. For VR-based limb motor training applications, the widespread use of VR techniques still faces many challenges. On the one hand, it is not easy to evaluate the effectiveness and accuracy of VR-based programs. On the other hand, monitoring users’ physical and mental burden during the training process is an essential but difficult task. To this end, we propose a simple and economical VR-based application for limb motor training. Kinematic data are used to monitor the user’s movements quantitatively. We also collect physiological data, including heart rate variability (HRV) and electroencephalogram (EEG) data: HRV data are used to assess physical fatigue in real time, and EEG data can be used to detect mental fatigue in the future. Based on this application, we have conducted many experiments and user studies to verify the accuracy of kinematic data monitoring and the feasibility of fatigue detection. The results demonstrate that VR-based solutions for limb motor training offer good kinematic measurement precision. Meanwhile, the physiological data show that the VR-based rehabilitation does not cause excessive physical fatigue to participants.
Novel Augmented Reality Enhanced Solution towards Vocational Training for People with Mental Disabilities
Brian Soon Wei Chiam, Singapore Institute of Technology
Ivy Leung, Singapore Institute of Technology
Oran Zane Devilly, Singapore Institute of Technology
Clemen Yun Da Ow, Singapore Institute of Technology
Yunqing Guan, Singapore Institute of Technology
Bhing Leet Tan, Singapore Institute of Technology
Augmented Reality (AR) is widely recognized as the next computing platform and has found applications in various sectors, including training, education, entertainment, and engineering. Although AR has been applied to cognitive training for people with neurological and psychiatric conditions, there is huge potential for its use in vocational rehabilitation for persons with psychiatric and neurodevelopmental disabilities. In this paper, we present a novel AR-enhanced solution for the vocational rehabilitation of people with a range of cognitive functional levels. Multiple immersive training scenarios are designed and developed with the goal of allowing users to develop their vocational skills, which are evaluated using the Feasibility Evaluation Checklist. A proper user interface design approach was adopted, with consideration given to the target users. The solution also supports a multiplayer mode, allowing therapists to keep track of vital performance data of users in the co-immersed AR environment. A user study was conducted, with satisfactory results.
Gaze-Adaptive Subtitles Considering the Balance among Vertical/Horizontal and Depth of Eye Movement
Yusuke Shimizu, Kobe University
Ayumi Ohnishi, Kobe University
Tsutomu Terada, Kobe University
Masahiko Tsukamoto, Kobe University
Subtitles (captions displayed on the screen) are important in 3D content, such as virtual reality (VR) and 3D movies, to help users understand the content. However, an optimal display method and framework for subtitles have not been established for 3D content because of its depth dimension. To determine how to place text in 3D content, we propose four methods of moving subtitles dynamically that consider the balance between the vertical/horizontal and depth components of gaze shift. These methods are designed to reduce the difference in depth or distance between the gaze position and the subtitles. Additionally, we evaluate the readability of the text and participants’ fatigue. The results show that aligning the text horizontally and vertically with eye movements improves visibility and readability. Eyestrain is also shown to be related to the distance between the object and the subtitles. This evaluation provides basic knowledge for presenting text in 3D content.