Publications

2020

Paper

Grégoire Dupont de Dinechin, Alexis Paljic
From Real to Virtual: An Image-Based Rendering Toolkit to Help Bring the World Around Us Into Virtual Reality
6th Workshop on Everyday Virtual Reality (WEVR), 2020

The release of consumer-grade head-mounted displays has helped bring virtual reality (VR) to our homes, cultural sites, and workplaces, increasingly making it a part of our everyday lives. In response, many content creators have expressed renewed interest in bringing the people, objects, and places of our daily lives into VR, helping push the boundaries of our ability to transform photographs of everyday real-world scenes into convincing VR assets. In this paper, we present an open-source solution we developed in the Unity game engine as a way to make this image-based approach to virtual reality simple and accessible to all, to encourage content creators of all kinds to capture and render the world around them in VR. We start by presenting the use cases of image-based virtual reality, from which we discuss the motivations that led us to work on our solution. We then provide details on the development of the toolkit, specifically discussing our implementation of several image-based rendering (IBR) methods. Finally, we present the results of a preliminary user study focused on interface usability and rendering quality, and discuss paths for future work.

Please cite the 6-page paper as:

@inproceedings{deDinechin2020From,
        title = {From Real to Virtual: An Image-Based Rendering Toolkit to Help Bring the World Around Us Into Virtual Reality},
        booktitle = {6th Workshop on Everyday Virtual Reality ({WEVR})},
        publisher = {{IEEE}},
        author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
        month = mar,
        year = {2020}
}

Open access: HAL page || PDF file

The 6th Workshop on Everyday Virtual Reality took place on 22 March 2020 as a virtual event, due to the COVID-19 lockdown.
The workshop was co-located with the IEEE VR 2020 conference. It focused “on the investigation of well-known VR/AR/MR (XR) research themes in everyday contexts and scenarios other than research laboratories and specialist environments”.

Poster

Grégoire Dupont de Dinechin, Alexis Paljic
Presenting COLIBRI VR, an Open-Source Toolkit to Render Real-World Scenes in Virtual Reality
IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), 2020

From image-based virtual tours of apartments to digital museum exhibits, transforming photographs of real-world scenes into visually faithful virtual environments has many applications. In this paper, we present our development of a toolkit that places recent advances in the field of image-based rendering (IBR) into the hands of virtual reality (VR) researchers and content creators. We map out how these advances can improve the way we usually render virtual scenes from photographs. We then provide insight into the toolkit’s design as a package for the Unity game engine and share details on core elements of our implementation.

Please cite the 2-page poster abstract as:

@inproceedings{deDinechin2020Presenting,
        title = {Presenting {COLIBRI VR}, an Open-Source Toolkit to Render Real-World Scenes in Virtual Reality},
        booktitle = {2020 {IEEE} Conference on Virtual Reality and {3D} User Interfaces ({VR})},
        publisher = {{IEEE}},
        author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
        month = mar,
        year = {2020}
}

Open access: HAL page || PDF file

IEEE VR 2020 (the 27th IEEE Conference on Virtual Reality and 3D User Interfaces) took place on 22-26 March 2020 as a virtual event, due to the COVID-19 lockdown.
IEEE VR is “the premier international event for the presentation of research results in the broad area of virtual reality (VR)”.

Research Demonstration

Grégoire Dupont de Dinechin, Alexis Paljic
Demonstrating COLIBRI VR, an Open-Source Toolkit to Render Real-World Scenes in Virtual Reality
IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), 2020

This demonstration showcases an open-source toolkit we developed in the Unity game engine to enable authors to render real-world photographs in virtual reality (VR) with motion parallax and view-dependent highlights. First, we illustrate the toolset’s capabilities by using it to display interactive, photorealistic renderings of a museum’s mineral collection. Then, we invite audience members to be rendered in VR using our toolkit, thus providing a live, behind-the-scenes look at the process.

Please cite the 2-page abstract of the research demonstration as:

@inproceedings{deDinechin2020Demonstrating,
        title = {Demonstrating {COLIBRI VR}, an Open-Source Toolkit to Render Real-World Scenes in Virtual Reality},
        booktitle = {2020 {IEEE} Conference on Virtual Reality and {3D} User Interfaces ({VR})},
        publisher = {{IEEE}},
        author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
        month = mar,
        year = {2020}
}

Open access: HAL page || PDF file

IEEE VR 2020 (the 27th IEEE Conference on Virtual Reality and 3D User Interfaces) took place on 22-26 March 2020 as a virtual event, due to the COVID-19 lockdown.
IEEE VR is “the premier international event for the presentation of research results in the broad area of virtual reality (VR)”.

Video Submission

Grégoire Dupont de Dinechin, Alexis Paljic
Illustrating COLIBRI VR, an Open-Source Toolkit to Render Real-World Scenes in Virtual Reality
IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), 2020

This video submission illustrates the Core Open Lab on Image-Based Rendering Innovation for Virtual Reality (COLIBRI VR), an open-source toolkit we developed to help authors render photographs of real-world people, objects, and places as responsive 3D assets in VR. We integrated COLIBRI VR as a package for the Unity game engine: in this way, the toolset’s methods can easily be accessed from a convenient graphical user interface, and be used in conjunction with the game engine’s built-in tools to quickly build interactive virtual reality experiences. Our primary goal is to help users render real-world photographs in VR in a way that provides view-dependent rendering effects and compelling motion parallax. For instance, COLIBRI VR can be used to render captured specular highlights, such as the bright reflections on the facets of a mineral. It also provides motion parallax from estimated geometry, e.g. from a depth map associated with a 360° image. We achieve this by implementing efficient image-based rendering methods, which we optimize to run at high framerates for VR. We make the toolkit openly available online, so that it can more easily be used to learn about and apply image-based rendering in the context of virtual reality content creation.
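
As a rough illustration of the view-dependent rendering idea mentioned above, the sketch below weights each source photograph by how closely its viewing direction matches the current viewpoint, which is what makes captured highlights shift as the user moves. It is a minimal, hypothetical example written in Python with NumPy for readability; it is not COLIBRI VR's actual implementation, and the function name, weighting formula, and sharpness parameter are illustrative assumptions.

import numpy as np

def blend_weights(novel_dir, source_dirs, sharpness=16.0):
    # Weight each source camera by the angular proximity of its viewing
    # direction to the novel view's direction toward the surface point.
    novel_dir = novel_dir / np.linalg.norm(novel_dir)
    source_dirs = source_dirs / np.linalg.norm(source_dirs, axis=1, keepdims=True)
    cosines = np.clip(source_dirs @ novel_dir, 0.0, 1.0)  # alignment in [0, 1]
    weights = cosines ** sharpness                         # favour the closest views
    return weights / weights.sum()

# Example: of two source cameras, the one looking almost along the novel
# viewing direction dominates the blend, so its highlights are reproduced.
w = blend_weights(np.array([0.0, 0.0, 1.0]),
                  np.array([[0.1, 0.0, 1.0], [1.0, 0.0, 0.2]]))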

Please cite the 1-page abstract of the video submission as:

@inproceedings{deDinechin2020Illustrating,
        title = {Illustrating {COLIBRI VR}, an Open-Source Toolkit to Render Real-World Scenes in Virtual Reality},
        booktitle = {2020 {IEEE} Conference on Virtual Reality and {3D} User Interfaces ({VR})},
        publisher = {{IEEE}},
        author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
        month = mar,
        year = {2020}
}

Open access: HAL page || PDF file

IEEE VR 2020 (the 27th IEEE Conference on Virtual Reality and 3D User Interfaces) took place on 22-26 March 2020 as a virtual event, due to the COVID-19 lockdown.
IEEE VR is “the premier international event for the presentation of research results in the broad area of virtual reality (VR)”.

2019

Paper

Grégoire Dupont de Dinechin, Alexis Paljic
Virtual Agents from 360° Video for Interactive Virtual Reality
International Conference on Computer Animation and Social Agents (CASA), 2019

Creating lifelike virtual humans for interactive virtual reality is a difficult task. Most current solutions rely either on crafting synthetic character models and animations, or on capturing real people with complex camera setups. As an alternative, we propose leveraging efficient learning-based models for human mesh estimation, and applying them to the popular form of immersive content that is 360° video. We demonstrate an implementation of this approach using available pre-trained models, and present user study results that show that the virtual agents generated with this method can be made more compelling by the use of idle animations and reactive verbal and gaze behavior.

Please cite the 4-page paper as:

@inproceedings{deDinechin2019Virtual,
        location = {Paris, France},
        series = {{CASA} '19},
        title = {Virtual Agents from 360° Video for Interactive Virtual Reality},
        isbn = {978-1-4503-7159-9},
        url = {https://doi.org/10.1145/3328756.3328775},
        doi = {10.1145/3328756.3328775},
        booktitle = {Proceedings of the 32nd International Conference on Computer Animation and Social Agents},
        publisher = {{ACM}},
        author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
        month = jul,
        year = {2019},
        pages = {75--78}
}

Published version: DOI
Open access: HAL page || PDF file

CASA 2019 (32nd International Conference on Computer Animation and Social Agents) took place on 1-3 July 2019 in Paris, France.
Organised in cooperation with ACM SIGGRAPH and jointly with the 2019 International Conference on Intelligent Virtual Agents (IVA 2019), the conference presented works on “computer animation, embodied agents, social agents, virtual and augmented reality, and visualization”.

Poster

Grégoire Dupont de Dinechin, Alexis Paljic
Automatic Generation of Interactive 3D Characters and Scenes for Virtual Reality from a Single-Viewpoint 360-Degree Video
IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR), 2019

This work addresses the problem of using real-world data captured from a single viewpoint by a low-cost 360-degree camera to create an immersive and interactive virtual reality scene. We combine different existing state-of-the-art data enhancement methods based on pre-trained deep learning models to quickly and automatically obtain 3D scenes with animated character models from a 360-degree video. We provide details on our implementation and insight on how to adapt existing methods to 360-degree inputs. We also present the results of a user study assessing the extent to which virtual agents generated by this process are perceived as present and engaging.

Please cite the 2-page poster abstract as:

@inproceedings{deDinechin2019Automatic,
        title = {Automatic Generation of Interactive {3D} Characters and Scenes for Virtual Reality from a Single-Viewpoint 360-Degree Video},
        url = {https://doi.org/10.1109/vr.2019.8797969},
        doi = {10.1109/vr.2019.8797969},
        booktitle = {2019 {IEEE} Conference on Virtual Reality and {3D} User Interfaces ({VR})},
        author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
        publisher = {{IEEE}},
        month = mar,
        year = {2019},
        pages = {908--909}
}

Published version: DOI
Open access: HAL page || 2-page abstract || Poster

IEEE VR 2019 (26th IEEE Conference on Virtual Reality and 3D User Interfaces) took place on 23-27 March 2019 in Osaka, Japan.
Focused on “all areas related to virtual reality (VR), including augmented reality (AR), mixed reality (MR), and 3D user interfaces (3DUIs)”, the conference was an opportunity to present works on technologies and applications (computer graphics, immersive 360° video, modelling and simulation, …), multi-sensory experiences (virtual humans, haptics, perception and cognition, …) and interaction (collaborative interaction, locomotion and navigation, multimodal interaction, …).

2018

Paper

Grégoire Dupont de Dinechin, Alexis Paljic
Cinematic Virtual Reality With Motion Parallax From a Single Monoscopic Omnidirectional Image
3rd Digital Heritage International Congress (DigitalHERITAGE) held jointly with the 24th International Conference on Virtual Systems & Multimedia (VSMM), 2018

Complementary advances in the fields of virtual reality (VR) and reality capture have led to a growing demand for VR experiences that enable users to convincingly move around in an environment created from a real-world scene. Most methods address this issue by first acquiring a large number of image samples from different viewpoints. However, this is often costly in both time and hardware requirements, and is incompatible with the growing selection of existing, casually acquired 360-degree images available online. In this paper, we present a novel solution for cinematic VR with motion parallax that instead only uses a single monoscopic omnidirectional image as input. We provide new insights on how to convert such an image into a scene mesh, and discuss potential uses of this representation. We notably propose using a VR interface to manually generate a 360-degree depth map, visualized as a 3D mesh and modified by the operator in real time. We applied our method to different real-world scenes, and conducted a user study comparing meshes created from depth maps of different levels of accuracy. The results show that our method enables perceptually comfortable VR viewing when users move around in the scene.
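
To make the core idea concrete, the sketch below unprojects an equirectangular 360° image’s per-pixel depth values into a 3D mesh around the capture point, which is what lets the viewer move with motion parallax. It is a minimal sketch of the general technique, not the paper’s implementation; the function name, grid step, and axis conventions are illustrative assumptions, and in practice this runs inside a game engine rather than NumPy.

import numpy as np

def equirect_depth_to_mesh(depth, step=8):
    # depth: equirectangular depth map (H x W, metres). Each pixel maps to
    # spherical angles (longitude, latitude); its depth value pushes the
    # corresponding unit direction out to a 3D vertex.
    h, w = depth.shape
    vs, us = np.mgrid[0:h:step, 0:w:step]
    lon = (us / w) * 2.0 * np.pi - np.pi       # -pi .. pi around the viewer
    lat = np.pi / 2.0 - (vs / h) * np.pi       # +pi/2 (up) .. -pi/2 (down)
    d = depth[vs, us]
    x = d * np.cos(lat) * np.sin(lon)          # y is the up axis here
    y = d * np.sin(lat)
    z = d * np.cos(lat) * np.cos(lon)
    vertices = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Connect neighbouring grid samples into two triangles per quad.
    rows, cols = vs.shape
    idx = np.arange(rows * cols).reshape(rows, cols)
    quads = np.stack([idx[:-1, :-1], idx[1:, :-1],
                      idx[1:, 1:], idx[:-1, 1:]], axis=-1).reshape(-1, 4)
    triangles = np.concatenate([quads[:, [0, 1, 2]], quads[:, [0, 2, 3]]], axis=0)
    return vertices, triangles

# Example: a constant 3 m depth map yields a sphere of radius 3 m around the camera.
verts, tris = equirect_depth_to_mesh(np.full((512, 1024), 3.0))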

Please cite the 8-page paper as:

@inproceedings{deDinechin2018Cinematic,
        title = {Cinematic Virtual Reality With Motion Parallax From a Single Monoscopic Omnidirectional Image},
        url = {https://doi.org/10.1109/digitalheritage.2018.8810116},
        doi = {10.1109/digitalheritage.2018.8810116},
        booktitle = {2018 3rd Digital Heritage International Congress ({DigitalHERITAGE}) held jointly with 2018 24th International Conference on Virtual Systems \& Multimedia ({VSMM} 2018)},
        author = {Gr{\'e}goire Dupont de Dinechin and Alexis Paljic},
        publisher = {{IEEE}},
        month = oct,
        year = {2018},
        pages = {1--8}
}

Published version: DOI
Open access: HAL page || PDF file

Digital Heritage 2018 (New Realities – Authenticity and Automation in the Digital Age, 3rd International Congress and Expo) took place on 26-30 October 2018 in San Francisco, USA.
Focused on “digital technology for documenting, conserving and sharing heritage”, it included the 24th International Conference on Virtual Systems and MultiMedia (VSMM 2018) and the 25th Conference of the Pacific Neighborhood Consortium (PNC 2018).
Research tracks included works on reality capture (digitization, scanning, remote sensing, …), reality computing (databases and repositories, GIS, CAD, …) and reality creation (VR, AR, MR, …).

The four color and depth 360° image pairs used for the user study (Garden, Bookshelves, Snow, Museum) can be downloaded by clicking here.