Demos

Wednesday, 6 October
20:00 CEST (UTC+2)

Booths are located in Gathertown Room: Competition/Demo/Pitch-your-lab

The First Open AR Cloud Testbed

Gabor Soros, Nokia Bell Labs

Booth 71

We present the first deployment of an open, scalable, and distributed spatial computing platform. We demonstrate in action its extensible protocols for discovering spatial services and spatial content in a geographic area, representing the poses of real and virtual cameras and objects, exchanging content records, and interfacing with spatial computing services such as visual localization. We also demonstrate the use of the platform through a WebXR reference client in the first city-wide prototype testbed in Bari, Italy.
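The abstract does not reproduce the protocol schemas themselves. As a rough illustration of the kind of record such a platform exchanges, here is a minimal Python sketch of a GeoPose-style content record; the field names are illustrative assumptions, not copied from the Open AR Cloud specifications:

```python
from dataclasses import dataclass

@dataclass
class Quaternion:
    x: float
    y: float
    z: float
    w: float

@dataclass
class GeoPose:
    """Global pose: WGS 84 geodetic position plus an orientation quaternion."""
    latitude: float            # degrees
    longitude: float           # degrees
    ellipsoidal_height: float  # meters above the WGS 84 ellipsoid
    orientation: Quaternion    # rotation w.r.t. a local tangent frame

@dataclass
class SpatialContentRecord:
    """One discoverable piece of content anchored in the world."""
    record_id: str
    content_url: str  # where a client (e.g., the WebXR reference client) fetches the asset
    geopose: GeoPose  # where the asset is anchored in the world
```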

Co-Drive: the experience of a shared car trip between a driver and a remote passenger

Laura Boffi, University of Ferrara

Booth 72

Co-Drive is a service concept that enables social virtual travel by car between the driver of a vehicle and a remote passenger connected via virtual reality from home. Co-Drive enables novel social interactions between a driver and a remote passenger who are unknown to each other, aiming to foster new social encounters, for example intergenerational ones between elderly remote passengers (with reduced mobility and travel possibilities) and younger drivers. At ISMAR 2021, Co-Drive will be demonstrated as a way to foster casual, unfocused encounters between conference attendees who do not know each other.

Virtual Negotiation Training "Beat the Bot"

Jan Fiedler, University of Applied Sciences Neu-Ulm

Booth 73

The VR application “Beat the Bot” combines VR and AI in a negotiation-based dialogue scenario. Its purpose is to give users a realistic experience of success, opening the way to modern, negotiation-based training. The aim of the application is to let users experience a negotiation situation in the form of a pitch and learn to apply an ideally optimal negotiation style in a highly competitive sales negotiation for capital goods. The user takes on the role of the seller and negotiates in natural language with two virtual, AI-controlled agents acting as professional buyers. The application motivates repetition and consolidation of the learning content by incorporating playful elements and creating a serious-game environment.

Augmented Reality for Subsurface Utility Engineering, Revisited

Clemens Arth, Graz University of Technology

Booth 74

Civil engineering is a primary application domain for new augmented reality technologies. In this work, the area of subsurface utility engineering is revisited, and new methods tackling well-known yet unsolved problems are presented. We describe our solution to the outdoor localization problem, which is deemed one of the most critical issues in outdoor augmented reality, proposing a novel, lightweight hardware platform that generates highly accurate position and orientation estimates in a global context. Furthermore, we present new approaches that drastically improve the realism of outdoor data visualizations. First, a novel method to replace physical spray markings with indistinguishable virtual counterparts is described. Second, we present the visualization of 3D reconstructions of real excavations, blending seamlessly with the view of the real environment. We demonstrate the power of these new methods in a set of different outdoor scenarios.

Simultaneous Real Walking and Asymmetric Input in Virtual Reality with a Smartphone-based Hybrid Interface

Li Zhang, Northwestern Polytechnical University

Booth 75

Compared to virtual navigation methods such as joystick-based teleportation in Virtual Reality (VR), real walking enables more natural and realistic physical behaviors and a better overall user experience. This paper presents a hybrid interface that combines a smartphone and a handheld controller, enabling simultaneous real walking (by viewing the physical environment) and asymmetric 2D-3D input. The phone is virtualized in VR and streams a view of the real world to a collocated virtual screen, enabling users to avoid or remove physical obstacles. The touchscreen and the controller give users an asymmetric choice of input to improve interaction efficiency in VR. We implemented a prototype system and conducted a pilot study to evaluate its usability.

Cuboid-Shaped Space Recognition from Noisy Point Cloud for Indoor AR Workspace

Ki-Sik Kim, Incheon National University

Booth 76

This paper proposes a geometric shape recognition method for a cuboid-shaped indoor space from a point cloud. We first acquire a point cloud using visual SLAM on a spherical video. Then, we obtain a geometric model of the indoor space by finding a best-fit cuboid in the point cloud and adjusting it to the real-world environment. The geometric model of the indoor space is updated using the recognized cuboid data. We implemented the proposed method and built a prototype application, a simple FPS AR game. Our experiments show that the proposed method provides accurate geometric estimates even when the point cloud contains many noisy map points.
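The abstract does not spell out the fitting algorithm. One plausible sketch of a best-fit cuboid step, assuming a PCA-based orientation estimate with percentile trimming to absorb noisy map points (all names below are mine, not the authors'):

```python
import numpy as np

def fit_cuboid(points: np.ndarray, trim: float = 2.0):
    """Fit an oriented cuboid to an N x 3 point cloud via PCA.

    `trim` discards the given percentage of extreme points per axis,
    which makes the fit robust against noisy SLAM map points.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal axes of the cloud serve as the cuboid orientation.
    _, _, axes = np.linalg.svd(np.cov(centered.T))
    local = centered @ axes.T                 # point coordinates in the cuboid frame
    lo = np.percentile(local, trim, axis=0)
    hi = np.percentile(local, 100 - trim, axis=0)
    center = centroid + ((lo + hi) / 2) @ axes
    extents = hi - lo                         # width, height, depth of the cuboid
    return center, axes, extents
```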

Demonstrating Spatial Exploration and Slicing of Volumetric Medical Data in Augmented Reality with Handheld Devices

Weizhou Luo, Dresden University of Technology

Booth 77

We present a concept and early prototype for exploring volumetric medical data, e.g., from MRI or CT scans, in head-mounted Augmented Reality (AR) with a spatially tracked tablet. Our goal is to address the lack of immersion and intuitive input in conventional systems by providing spatial navigation to extract arbitrary slices from volumetric data directly in three-dimensional space. A 3D model of the medical data is displayed in the real environment, fixed to a particular location, using AR. The tablet is spatially moved through this virtual 3D model and shows the resulting slices as 2D images. We present several techniques that facilitate this overall concept, e.g., placing and exploring the model, as well as capturing, annotating, and comparing slices of the data. Furthermore, we implemented a proof-of-concept prototype that demonstrates the feasibility of our concepts. With our work we want to improve the current way of working with volumetric data slices in the medical domain and beyond.
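The prototype's code is not given in the abstract; the core slicing operation, resampling the volume on the tablet's tracked plane, might look roughly like this sketch (the function and parameter names are assumptions, not the authors' API):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_slice(volume, center, right, up, size=256, spacing=1.0):
    """Resample an oblique 2D slice from a 3D volume (e.g., a CT scan).

    `center` is the slice centre in voxel coordinates; `right` and `up` are
    unit vectors spanning the tracked tablet's screen plane in volume space.
    """
    center, right, up = (np.asarray(v, dtype=float) for v in (center, right, up))
    u = (np.arange(size) - size / 2) * spacing
    uu, vv = np.meshgrid(u, u, indexing="ij")
    # Map every screen pixel to a 3D voxel coordinate on the slice plane.
    coords = (center[:, None, None]
              + right[:, None, None] * uu
              + up[:, None, None] * vv)
    return map_coordinates(volume, coords, order=1, mode="nearest")
```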

Two-way Augmented Reality Co-location Under Telemedicine Context

Meng Li, Delft University of Technology

Booth 78

Medical care responsibilities often fall on the shoulders of non-professionals such as ship captains, who receive forty hours of designated training every five years. This training, however, is neither sufficient for captains to handle medical incidents nor does it relieve their stress during treatment. Currently, captains have very limited support from medical experts, available only via phone call or email through the Radio Medical Services. The authors therefore explored how two-way augmented reality (AR) can support collaboration between captains and doctors for a better quality of care. A Human-Centred Design approach was applied, including a field study and user testing, together with the lean user experience method and fast prototyping-testing loops. The main findings are that AR played an essential role in boosting the captain's confidence, and that the real value of AR lies in supporting medical skills such as suturing and abdominal examination. As pilot research, this study was limited by its small sample size and qualitative method. Improving communication between captains and doctors is key for future studies.

Revive Family Photo Albums through a Collaborative Environment Exploiting the HoloLens 2

Gustavo Marfia, University of Bologna

Booth 79

While human culture evolves, human heritage remains frozen in time, engraved in material culture as a testament to the past. Among such materials, pictures are a prominent example, as they yield information and clues about what happened in the past. Pictures from the past represent a unique chance to revive old memories of affections, relatives, friends, special events, and more. Throughout the 20th century, people printed and collected pictures in photo albums, namely family albums. Even if this practice is no longer as popular, due to the advent of digital photography and the spread of social media, such photos are still of interest to all those who like to look back and discover their families' pasts. In a time of social distancing, such photo albums may represent a link between people who are forced to stay apart, and a distraction from worries and fears. For this reason, we propose an augmented reality application that may bring people together and support the exploration of photo album content with the aid of artificial intelligence paradigms.

Designing VRPT experience for empathy toward out-groups using critical incidents and cultural explanations

Daniela Hekiert, SWPS University

Booth 80

First-person perspective taking in head-mounted displays makes them a powerful interface for experiencing empathy toward other people. Since some intercultural misunderstandings stem from ethnocentrism, it is worth exploring how VR experiences can explain behaviors toward out-groups and induce empathetic actions. In this paper we present the design process of ethnoVR, a 7-minute 360-degree film that allows taking the perspective of two students, a Chinese and a Pole, who face a problem in communicating efficiently. The scenario was created using the critical incident technique and a user-centered design paradigm.

Prototype of Force Feedback Tool for Mixed Reality Applications

Brad Zhenhong Lei, Brown University

Booth 81

This prototype demonstrates the viability of manipulating both physical and virtual objects with the same tool, maintaining object permanence across both modes of interaction. Using oppositional force feedback provided by a servo and an augmented visual interface provided by the user's smartphone, the tool simulates the look and feel of a physical object within an augmented environment. The tool can additionally manipulate physical objects that are not part of the augmented scene, such as a physical nut. Because both modes of interaction are integrated into the same tool, users can fluidly move between them, manipulating physical and virtual objects as the need arises. By overlaying this kind of visual and haptic augmentation onto a common tool such as a pair of pliers, we hope to further explore scenarios for collaborative telepresence in future work.
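The abstract does not specify how the servo is driven. A minimal sketch of one way oppositional feedback could be commanded, assuming a microcontroller listening on a serial port for one-byte angle targets; the protocol, port, and all names are hypothetical:

```python
import serial  # pyserial: pip install pyserial

# Hypothetical link to the microcontroller that drives the servo; the
# one-byte "angle target" protocol below is invented for illustration.
link = serial.Serial("/dev/ttyUSB0", 115200)

def update_force_feedback(grip_angle: float, surface_angle: float, stiffness: float) -> None:
    """Oppose closing the pliers once the grip penetrates a virtual surface.

    grip_angle / surface_angle are in servo degrees (0-180); stiffness
    scales how hard the servo pushes back per degree of penetration.
    """
    penetration = max(0.0, grip_angle - surface_angle)
    # Deeper penetration -> the servo drives the jaws back toward the surface.
    target = int(round(min(180.0, max(0.0, grip_angle - stiffness * penetration))))
    link.write(bytes([target]))
```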

MusiKeys: Investigating Auditory-Physical Feedback Replacement Technique for Mid-air Typing

Alexander Krasner, Virginia Tech

Booth 82

Augmented reality headsets have great potential to transform the modern workplace as the technology improves. A major obstacle to bringing AR headsets into workplaces, however, is the need for a precise, virtual, mid-air typing solution. Transitioning from physical to virtual keyboards is difficult due to the loss of many physical affordances, such as the ability to distinguish between touching and pressing a key. We present our system, MusiKeys, as an investigation into the effects of presenting users with auditory tones and effects as replacements for every kind of feedback ordinarily perceived through touching a keyboard.
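The abstract does not list the specific cues used. As a rough illustration of the feedback-replacement idea, here is a sketch that synthesizes a distinct tone per keyboard event; the event names, frequencies, and durations are invented for illustration:

```python
import numpy as np

SAMPLE_RATE = 44100

# Illustrative mapping of keyboard feedback events to audio cues;
# the actual MusiKeys tone design is not specified in the abstract.
EVENT_TONES = {
    "finger_touch": (440.0, 0.03),  # brief low cue: finger resting on a key
    "key_press":    (880.0, 0.06),  # distinct higher cue: key actuated
    "key_release":  (660.0, 0.03),
}

def tone_for(event: str) -> np.ndarray:
    """Synthesize the cue for a feedback event as 16-bit PCM samples."""
    freq, dur = EVENT_TONES[event]
    t = np.linspace(0.0, dur, int(SAMPLE_RATE * dur), endpoint=False)
    samples = 0.5 * np.sin(2 * np.pi * freq * t)
    return (samples * 32767).astype(np.int16)
```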

Augmented Reality in Chinese Language Pronunciation Practice

Daria Sinyagovskaya, University of Central Florida

Booth 83

Augmented reality (AR) has the unique ability to situate an activity within a physical context, making it valuable for training and education. The affordances of AR include multimodal visualizations and auditory support. The disciplines of phonetics and phonology in applied linguistics concern how sounds are produced and articulated. We have designed and developed a pronunciation training app for novice second-language students learning Mandarin pinyin, applying linguistic theory while adhering to usability principles. This paper describes the requirements that drove the development of the app and the planned online experiment comparing the same training under two conditions: AR and non-AR. The study features peer assessment, drills, in-app assessment, surveys, and a posttest, and is designed to run remotely without having participants come to the lab.

Deepfake Portraits in Augmented Reality for Museum Exhibits

Nathan Wynn, University of Georgia

Booth 84

In collaboration with the Georgia Peanut Commission's Education Center and museum in Georgia, USA, we developed an augmented reality app that guides visitors through the museum and offers immersive educational information about the artifacts, exhibits, and artwork displayed therein. Notably, our augmented reality system applies the First Order Motion Model for Image Animation to several portraits of individuals influential to the Georgia peanut industry, providing immersive animated narration about their contributions to it.

Mobile3DScanner: An Online 3D Scanner for High-quality Object Reconstruction with a Mobile Device

Guofeng Zhang, SenseTime

Booth 85

We present a novel online 3D scanning system for high-quality object reconstruction with a mobile device, called Mobile3DScanner. Using a mobile device equipped with an embedded RGBD camera, our system provides online 3D object reconstruction capability for users to acquire high-quality textured 3D object models. Starting with a simultaneous pose tracking and TSDF fusion module, our system allows users to scan an object with a mobile device and get a 3D model for real-time preview. After the real-time scanning process is completed, the scanned 3D model is globally optimized and mapped with multi-view textures in an efficient post-process to obtain the final textured 3D model on the mobile device. Unlike most existing state-of-the-art systems, which can only scan small objects such as toys due to the limited computation and memory resources of mobile platforms, our system can reconstruct objects with large dimensions such as statues. We propose a novel visual-inertial ICP approach to achieve real-time, accurate 6DoF pose tracking of each incoming frame on the front end, while maintaining a keyframe pool on the back end where the keyframe poses are optimized by local BA. Simultaneously, the keyframe depth maps are fused with the optimized poses into a TSDF model in real time. In particular, we propose a novel adaptive voxel resizing strategy to solve the out-of-memory problem of large-scale TSDF fusion on mobile platforms. In post-processing, the keyframe poses are globally optimized and the keyframe depth maps are optimized and fused to obtain a final object model with more accurate geometry. Quantitative and qualitative experiments demonstrate the effectiveness of the proposed mobile 3D scanning system, which successfully achieves online, high-quality 3D reconstruction of natural objects with large dimensions for efficient AR content creation.
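The adaptive voxel resizing strategy is only named in the abstract. A crude sketch of the underlying idea, doubling the voxel size and fusing 2x2x2 blocks by TSDF weight once the grid exceeds a memory budget (all names and the trigger condition are assumptions, not the paper's method):

```python
import numpy as np

def maybe_resize_tsdf(tsdf: np.ndarray, weights: np.ndarray,
                      voxel_size: float, budget_voxels: int):
    """Double the voxel size when the TSDF grid outgrows the memory budget.

    Returns the (possibly downsampled) tsdf, weights, and new voxel size.
    Each 2x2x2 block is merged by a weighted average of its TSDF values.
    """
    if tsdf.size <= budget_voxels:
        return tsdf, weights, voxel_size
    # Trim to even dimensions, then merge 2x2x2 blocks.
    d, h, w = (s - s % 2 for s in tsdf.shape)
    blocks = tsdf[:d, :h, :w].reshape(d // 2, 2, h // 2, 2, w // 2, 2)
    wts = weights[:d, :h, :w].reshape(blocks.shape)
    wsum = wts.sum(axis=(1, 3, 5))
    fused = (blocks * wts).sum(axis=(1, 3, 5)) / np.maximum(wsum, 1e-6)
    return fused, wsum, voxel_size * 2.0
```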