VR and VR Experiments
Jackson Kruger and Jackson Turner - CSCI5607 Final Project
Code: https://github.com/OmegaJak/UMN_CSCI5607_FinalProject (compiled executable can be found in the repository’s releases)
It's almost unheard of today for VR games to be created without an engine like Unity or Unreal Engine. Both have very good support for VR and handle most OpenVR interactions for the game developer. We were therefore curious what it would be like to access the C++ OpenVR interface directly in our own engine to create a VR game. Our goal with this project was to adapt one of our HW4 maze games (we used Jackson Kruger's) to support OpenVR and thus work with any SteamVR-compatible VR headset. Our most basic goal was ensuring that all functionality in HW4 was adapted to VR and worked nicely. This meant full room-scale interaction: using only the VR headset and VR controllers to complete the maze. Assuming we met this goal, our plan was then to investigate non-photorealistic rendering techniques, fractals, and, time permitting, handling out-of-bounds situations or anything else that came up.
Ultimately, we achieved all of these goals except handling out-of-bounds situations, which there simply wasn't time for. Our final result supports all aspects of room-scale interaction: the VR controllers can be used to move through the maze, the camera moves appropriately as the user moves in the real world, the controllers can be used to pick up and use keys, collision detection works as it did before, etc. Getting this functionality working in VR proved to be much more difficult than expected; these challenges are discussed below.
Several non-photorealistic rendering (NPR) techniques were added to the game, toggleable with the press of a button on the controller. The primary NPR technique we pursued was an imitation of halftoning, and three slightly different versions of it are included. An additional colorful (untextured) rendering style is also included.
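Our shaders are not reproduced here, but the general idea behind a halftone-imitating effect can be sketched in plain C++. The dot-grid spacing and the mapping from luminance to dot radius below are illustrative assumptions, not our exact shader logic:

    #include <cmath>

    // CPU-side illustration of the kind of per-fragment logic a halftone shader
    // can use: overlay a regular dot grid in screen space and make dots larger
    // where the surface is darker.
    float HalftoneShade(float luminance, float fragX, float fragY) {
        const float cellSize = 8.0f;  // assumed dot-grid spacing, in pixels

        // Position of this fragment within its grid cell, centered on the cell
        float cx = std::fmod(fragX, cellSize) - cellSize * 0.5f;
        float cy = std::fmod(fragY, cellSize) - cellSize * 0.5f;
        float distToDotCenter = std::sqrt(cx * cx + cy * cy);

        // Darker surfaces get larger dots; fully lit surfaces get none
        float dotRadius = (1.0f - luminance) * cellSize * 0.5f;

        // Inside the dot -> ink (0), outside -> paper (1)
        return distToDotCenter < dotRadius ? 0.0f : 1.0f;
    }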
We also included fractal rendering (specifically the Mandelbrot set) as a possible "texture" for objects in the environment. Ultimately this was used on the maze walls and on a resizable cube, so users could examine a fractal at high detail.
Simply getting anything to output to the VR headset was a challenge in itself, even though Valve (thankfully) provides a 'HelloVR' sample demonstrating the basics of rendering to an OpenVR headset using the C++ API (found here). We first tried copy/pasting relevant snippets from this sample into the MazeGame's existing main bootstrapper. Try as we might, nothing but the glClearColor would render to the screen or companion window, and we still don't know exactly why. So we changed tactics: we imported all of Valve's code into the maze game project and modified it as necessary to render the maze game in place of Valve's default scene. After some fiddling with VAOs and the projection matrix, we finally got the maze to render to the companion window, then to the headset. Here is a video of our maze rendering only to the companion window. After some more trial-and-error adjustment of the world-to-view matrix, we got things oriented properly and rendering to both the headset and the companion window (though the companion window was upside-down).
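For context, the per-frame structure that the HelloVR sample (and, after refactoring, our engine) follows looks roughly like the sketch below. The clip distances and the leftEyeTexId/rightEyeTexId framebuffer textures are placeholders, the scene rendering itself is elided, and exact OpenVR signatures vary slightly between SDK versions:

    #include <openvr.h>
    #include <cstdint>

    // Simplified per-frame, per-eye loop in the style of Valve's HelloVR sample.
    void RenderFrame(vr::IVRSystem* hmd, uint32_t leftEyeTexId, uint32_t rightEyeTexId) {
        // Block until the compositor is ready, then fetch the latest device poses
        vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
        vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount, nullptr, 0);

        // Each eye has its own projection and eye-to-head offset, which get combined
        // with the headset pose and the scene's transforms before drawing
        vr::HmdMatrix44_t projLeft = hmd->GetProjectionMatrix(vr::Eye_Left, 0.1f, 100.0f);
        vr::HmdMatrix34_t eyeLeft  = hmd->GetEyeToHeadTransform(vr::Eye_Left);
        // ... build the left eye's view-projection matrix and render the maze into the
        //     framebuffer behind leftEyeTexId (then the same again for the right eye)

        // Hand the finished eye textures to the compositor for display in the headset
        vr::Texture_t left  = { (void*)(uintptr_t)leftEyeTexId,
                                vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
        vr::Texture_t right = { (void*)(uintptr_t)rightEyeTexId,
                                vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
        vr::VRCompositor()->Submit(vr::Eye_Left, &left);
        vr::VRCompositor()->Submit(vr::Eye_Right, &right);
    }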
The next step (after significantly refactoring Valve's code to better integrate it into the existing codebase) was properly supporting in-game movement control. In the existing MazeGame infrastructure, every game object's position, rotation, and scale is stored in a Transform matrix, and these transforms live in a hierarchy. Previously, the camera's transform was the parent of the player's, which was in turn the parent of the keys (when picked up), the player's bounding box, and so on. This hierarchy had to be reworked for VR, because the 'camera' (the VR headset) can move at will in the real world, and the camera in the game must move to match. The new hierarchy has three primary levels: the 'world anchor', representing where the center of the real-world playspace sits in the virtual world; the headset's offset from the world anchor, updated continuously from the headset position OpenVR reports; and finally the player. To move through the world, the world anchor is moved, and the change propagates down the transform hierarchy to the player and its bounding box.
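A minimal sketch of the reworked hierarchy, with illustrative names rather than the actual MazeGame classes, looks something like this:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Each node's world matrix is its parent's world matrix times its own local matrix.
    struct Transform {
        Transform* parent = nullptr;
        glm::mat4 local = glm::mat4(1.0f);

        glm::mat4 WorldMatrix() const {
            return parent ? parent->WorldMatrix() * local : local;
        }
    };

    Transform worldAnchor;   // center of the real-world playspace, placed in the virtual world
    Transform headsetOffset; // updated every frame from the OpenVR headset pose
    Transform player;        // bounding box, held keys, etc. hang off of this

    void SetupHierarchy() {
        headsetOffset.parent = &worldAnchor;
        player.parent = &headsetOffset;
    }

    // In-game movement only touches the world anchor; the headset offset and player
    // pick up the change automatically through the hierarchy.
    void MovePlayer(const glm::vec3& worldDelta) {
        // Left-multiply so the delta is applied in world space, not the anchor's local space
        worldAnchor.local = glm::translate(glm::mat4(1.0f), worldDelta) * worldAnchor.local;
    }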
A major challenge, which arose here for the second time, was the difference between the coordinate systems used by OpenVR and the existing MazeGame code. The existing MazeGame code used a z-up, y-forward, x-right system, while OpenVR uses a y-up, negative-z-forward, x-right system. This difference was likely the cause of much of the initial trouble getting anything to render to the headset. Even after we got output to the headset, the world was rotated, as in this video. After much digging, we finally found a bit in OpenVR's main header explaining their coordinate system, so we applied a 90-degree rotation about the x-axis to the view-projection matrix and called it good. The issue popped up again, however, when mapping the headset's position to a transform for use in-game: the transformation didn't work as expected, so we hacked things together until it worked and moved on. The issue popped up a third time when we added the VR controllers to the game. After more troubleshooting, we finally realized why transformations generally weren't working as expected: glm's transformation functions right-multiply their input matrix parameter by the transformation instead of left-multiplying it. After verifying that our OpenVR-to-world and world-to-OpenVR matrices were multiplied in the correct order, the coordinate-system issue was finally solved. After this, development was still generally slow but went much more smoothly (adding OpenVR controller inputs, dealing with bounding-box collision issues, etc.).
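A condensed illustration of the two halves of this problem (converting OpenVR's row-major matrices into glm's column-major layout, and keeping glm's multiplication order straight) might look like the following. The function names and the exact spot where the fix-up rotation is applied are illustrative rather than copied from our code:

    #include <openvr.h>
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // OpenVR hands back row-major 3x4 pose matrices; glm is column-major, so the
    // indices swap during conversion and the bottom row becomes (0, 0, 0, 1).
    glm::mat4 ToGlm(const vr::HmdMatrix34_t& m) {
        return glm::mat4(
            m.m[0][0], m.m[1][0], m.m[2][0], 0.0f,
            m.m[0][1], m.m[1][1], m.m[2][1], 0.0f,
            m.m[0][2], m.m[1][2], m.m[2][2], 0.0f,
            m.m[0][3], m.m[1][3], m.m[2][3], 1.0f);
    }

    // The gotcha that bit us three times: glm's helpers right-multiply, so
    // glm::rotate(view, angle, axis) returns view * R, which applies R *before*
    // view rather than after it.
    glm::mat4 OpenVrPoseToGameSpace(const vr::HmdMatrix34_t& pose) {
        glm::mat4 vrToGame = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f),
                                         glm::vec3(1.0f, 0.0f, 0.0f)); // y-up to z-up fix-up
        return vrToGame * ToGlm(pose); // left-multiply: convert after applying the pose
    }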
Fractals were surprisingly easy to implement once some minor issues with textures and incorrect uniform declarations were resolved. We did have a challenge creating the resizable fractal display cube featured in the game: keeping the top face of the cube from constantly creeping closer to the player required some clever hacking.
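The fractal itself is the standard escape-time Mandelbrot iteration, evaluated per fragment in a shader; in plain C++ the same logic looks roughly like this (the iteration cap and the eventual mapping to a color are illustrative choices):

    #include <complex>

    // Escape-time iteration: c comes from the fragment's texture coordinate, and the
    // normalized iteration count at which z escapes is mapped to a color.
    float MandelbrotEscape(std::complex<float> c, int maxIterations = 256) {
        std::complex<float> z(0.0f, 0.0f);
        for (int i = 0; i < maxIterations; ++i) {
            z = z * z + c;                // z_{n+1} = z_n^2 + c
            if (std::norm(z) > 4.0f)      // |z|^2 > 4 means the point escapes
                return float(i) / float(maxIterations);
        }
        return 0.0f;                      // treated as inside the set
    }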
When we started this project we were inspired by lectures such as the ones on 3D displays, VR, and non-photorealistic rendering. Our project eventually touched many graphics topics: shaders, transform hierarchies, the VR-specific rendering pipeline, unique sampling and reconstruction methods, and more.
Demonstrations of all of these features are provided in this presentation. The earlier parts of this presentation show development difficulties, while the later parts more fully demonstrate the achieved features.
Much of the work done on this project was less about making a game and more about building a game engine that supports VR. As such, the existing game could be significantly extended, or different games altogether could be created with this same engine, now that interaction with OpenVR is largely taken care of. For the existing game, there is also plenty of room for performance improvement: many decisions made during development favored quick and dirty solutions over highly performant ones.
There is also lots of room to improve the rendering methods used in the game. Bump maps, normal maps, reflection and refraction, more lights, depth rendering, procedural textures, and more could all be added. We also still want to investigate how to detect when the user physically moves out of the valid virtual space and what to do when that occurs. Additionally, the fractals could be extended to 3D, which would be extremely interesting to examine in VR. Better controls for scaling the fractals could also be implemented to great effect.