Oculus Rift in Action
Bradley Austin Davis, Karen Bryla, and Phillips Alexander Benton
Format: PDF / Kindle (mobi) / ePub
Oculus Rift in Action introduces the powerful Oculus Rift headset and teaches you how to integrate its many features into 3D games and other virtual reality experiences. You'll start by understanding the capabilities of the Rift hardware. Then you'll follow interesting and instantly relevant examples that walk you through programming real applications using the Oculus SDK. Examples are provided both for using the Oculus C API directly and for using Unity, a popular game development and 3D graphics engine, with the Oculus Unity integration package.
Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
About the Book
Virtual reality has long been the domain of researchers and developers with access to specialized hardware and proprietary tools. With the appearance of the Oculus Rift VR headset, the game has changed. Using standard programming tools and the intuitive Oculus SDKs, you can deliver powerful immersive games, simulations, and other virtual experiences that finally nail the feeling of being in the middle of the action.
Oculus Rift in Action teaches you how to create 3D games and other virtual reality experiences for the Oculus Rift. You'll explore the Rift hardware through examples of real applications using the Oculus SDK and both the Oculus C API and the Unity 3D graphics engine. Along the way, you'll get practical guidance on how to use the Rift's sensors to produce fluid VR experiences.
Experience with C++, C#, or another OO language is assumed.
What's Inside
- Creating immersive VR
- Integrating the Rift with the Unity 3D SDK
- Implementing the mathematics of 3D
- Avoiding motion-sickness triggers
About the Authors
Brad Davis is an active VR developer who maintains a great set of example Rift applications on GitHub. Karen Bryla is a freelance developer and writer. Alex Benton is a lecturer in 3D graphics at the University of Cambridge and a software engineer at Google.
Table of Contents
PART 1 GETTING STARTED
- Meet the Oculus Rift
- Creating your first Rift interactions
PART 2 USING THE OCULUS C API
- Pulling data out of the Rift: working with the head tracker
- Sending output to the Rift: working with the display
- Putting it all together: integrating head tracking and 3D rendering
- Performance and quality
PART 3 USING UNITY
- Unity: creating applications that run on the Rift
- Unity: tailoring your application for the Rift
PART 4 THE VR USER EXPERIENCE
- UI design for VR
- Reducing motion sickness and discomfort
PART 5 ADVANCED RIFT INTEGRATIONS
- Using the Rift with Java and Python
- Case study: a VR shader editor
- Augmenting virtual reality
Sample Excerpts
…else in the world your camera can take you! (Figure 13.5. The Piazza del Campo, Siena, Italy, as seen through the Oculus Rift)
13.2. Using live webcam video in the Rift
Panoramas that enclose us are fine for static imagery, but for truly interactive content, nothing beats a live video stream. Be it from a video feed, remote videoconferencing, or a head-mounted webcam attached to the Rift, live video presents a unique set of challenges and opportunities. The potential for telepresence and…
…coordinate system won’t change when the Leap is attached to the faceplate of the Rift, but the device itself will be rotated forward 90° (figure 13.14). (Figure 13.14. The Leap’s coordinate system, as reported when the Leap is on a desk (left) and when it’s on the Rift (right)) The challenge is to map from the Leap’s rotated coordinate system into the Rift’s own basis. On the Rift side, the flexibility of OpenGL means that your coordinate system is up to you, but most commonly we choose to place the…
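As a concrete sketch of that remapping, assuming the simple pitch-forward-90° mount described above: the Leap's +y axis (out of the sensor) now points forward (OpenGL −z) and its +z axis now points up. The `Vec3` struct and the `leapToRiftSpace` name below are ours for illustration, not Leap or Oculus SDK types; real code would also apply a translation for the Leap's mounting offset from the eyes.

```cpp
#include <cassert>

// Minimal 3-component vector; the book's examples use GLM, but a plain
// struct keeps this sketch self-contained.
struct Vec3 { float x, y, z; };

// Remap a point from the Leap's (rotated) coordinate system into the
// conventional OpenGL/Rift basis: x right, y up, -z forward.
Vec3 leapToRiftSpace(const Vec3 &leap) {
  Vec3 rift;
  rift.x = leap.x;   // right stays right
  rift.y = leap.z;   // Leap "toward user" axis now points up
  rift.z = -leap.y;  // Leap "out of sensor" axis now points forward (-z)
  return rift;
}
```

The remap is just a fixed 90° rotation about x, so it preserves lengths and handedness; only the axis labels change.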
…correctness and precision. In the case of the Leap, specifying the current context (“we expect to see the back of the user’s hand”) helps the Leap choose image analysis parameters that interpret its stereo camera imagery most quickly. For example, it will know that the fingers are most likely to curl away from its point of view, not toward it. Once we’ve set up our connection to the Leap SDK, the actual capture is easy. We override the listener’s onFrame() callback: void onFrame(const…
…connected, or returns a debug Rift if one isn’t detected. You locate the headset through the call

  hmd = ovrHmd_Create(0);

This call fetches the first headset it finds. Make sure you check that a headset was actually found, or hilarity will ensue. If users haven’t correctly connected their device or don’t own a Rift to begin with, hmd will be zero. As in the sensor example, always check your state before continuing, and try to give your users a “friendly” fallback experience in the event of…
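The check-then-fall-back pattern looks roughly like the sketch below. The `createHmd`/`createDebugHmd` functions are stand-ins of our own so the sketch compiles without the Oculus SDK installed; in real code you would call `ovrHmd_Create(0)` and the SDK's debug-HMD constructor directly, as the text describes.

```cpp
#include <cassert>
#include <cstdio>

// Hypothetical stand-ins for the SDK calls, so this compiles anywhere.
struct Hmd { bool debug; };
static bool g_riftPlugged = false;            // pretend no headset today
static Hmd g_real = {false}, g_fake = {true};

Hmd *createHmd()      { return g_riftPlugged ? &g_real : nullptr; }
Hmd *createDebugHmd() { return &g_fake; }

// The pattern from the text: try the real headset first, then fall back
// to a debug HMD so users without hardware still get an experience.
Hmd *acquireHmd() {
  Hmd *hmd = createHmd();          // like hmd = ovrHmd_Create(0);
  if (!hmd) {                      // zero means no Rift was found
    fprintf(stderr, "No Rift detected; falling back to debug HMD\n");
    hmd = createDebugHmd();        // SDK equivalent: create a debug HMD
  }
  return hmd;
}
```

Whatever you hand back, downstream code can render against it unconditionally; only tracking data will be absent on the debug device.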
…you’re dividing both the height and width of the texture in half, and thus rendering only a quarter of the pixels, not half. In your onKey() method, you’re looking for presses of the HOME and END keys. The HOME key will increase the texRes value by a constant scale, up to a maximum of 1.0. The END key will decrease the texRes value by the inverse of that constant scale:

  static const float ROOT_2 = sqrt(2.0f);
  static const float INV_ROOT_2 = 1.0f / ROOT_2;

Choosing the square root of 2 means…
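A minimal sketch of that key handling, assuming texRes scales both the width and height of the offscreen texture (the Key enum is a placeholder for whatever constants your windowing toolkit defines). Because pixel count goes as texRes squared, each √2 step changes the rendered pixel count by a factor of exactly 2:

```cpp
#include <cassert>
#include <cmath>

static const float ROOT_2 = sqrt(2.0f);
static const float INV_ROOT_2 = 1.0f / ROOT_2;

// Placeholder key codes; substitute your toolkit's constants.
enum Key { KEY_HOME, KEY_END };

// HOME: raise resolution by a factor of sqrt(2), capped at full res.
// END:  lower resolution by the inverse factor.
float onKey(Key key, float texRes) {
  if (key == KEY_HOME) {
    texRes = fminf(texRes * ROOT_2, 1.0f);  // more pixels, capped at 1.0
  } else if (key == KEY_END) {
    texRes *= INV_ROOT_2;                   // half as many pixels
  }
  return texRes;
}
```

Since a texture of size (w·texRes) × (h·texRes) holds w·h·texRes² pixels, one END press multiplies the pixel count by (1/√2)² = 1/2, giving predictable, evenly spaced performance steps.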