
dcoetzee
#1
Hi all, I'm a software developer and Rift enthusiast with a DK1, just getting into development. I run a YouTube channel showcasing games/demos (eVRyday VR). I've read through the book so far and wanted to share my notes and some specific criticisms. As always, feel free to ignore any of my comments if I'm wrong or out of line with your vision.

General: This book gets into a lot of details about low-level Rift development against the SDK, and that's useful for people who need to do that (which is mainly people who need to integrate the Rift with custom game engines or new toolkits), but the vast majority of Rift developers are going to be using a toolkit like Unity that has good out-of-the-box Rift support that just works. Even for those who are planning to use the SDK, I think showing them how to get a quick demo with some real geometry up and running in a toolkit like Unity or Torque3D could be really motivating and help them see the light at the end of the tunnel and get through the other more detailed chapters. So I would really suggest a reorganization in which developers first use the high-level toolkits - at least a little - before diving into any low-level details.

General note about images: I read the .mobi version on the Nexus 7 in the Kindle app and the images are displayed fixed at 100% size at 320 dpi and consequently have super tiny text that is almost impossible to read.

Now here's my detailed feedback:

1.1.2: The description of the affordability of the Rift here is kind of weird. "at a price that makes it available, if not to the average consumer, at least to the average consumer who would already have the kind of computing or gaming system that can support the Rift." This isn't very informative - the price of the DK1 is very unlikely to change so you could just give that number, or you could give some rough estimate of the potential size of the audience based on number of PC gamers, or something.

1.2: It's weird here that you mention Crystal Cove as a 1080p prototype, but don't mention the earlier HD prototype which was also 1080p, or any differentiating characteristics of CC from HD. Although I know that's not the focus here.

Similarly, footnote 6 says: "The next version of the Rift after the DK1, the Crystal Cove, is scheduled to include position tracking as well." This doesn't make sense unless you replace "Crystal Cove" with "DK2". Crystal Cove was a prototype, and is already obsolete.

1.2.4. Could expand on this to note that GPUs are just reaching the point needed for VR - since it requires at least 1080p resolution at 80 FPS and previous gen GPUs would struggle to meet that.

1.3. Although the main features of the Rift are head tracking and a wide FOV stereoscopic view, I think it's underselling it to describe it merely as adding those features, because the second order effects that emerge from those features are so important, like the dramatically enhanced sense of scale, sense of atmosphere, and the ability of your eyes to dilate to adjust to the environmental light level (since the external world is blocked out).

1.3.1. Here and in 2.2 you use the term 6DOF to refer to use of 3 rotational and 3 acceleration sensors, which is quite confusing, because the term is also used to refer to systems that track both position and rotation, which the DK1 is not (Palmer Luckey even used 6DOF in this sense in a recent interview). I would avoid using this term.

You also mention drift correction and prediction here before motivating them or explaining what they are.

Figure 1.5. This diagram is a bit odd/asymmetrical because there's an arrow from the user to the Rift sensors but no arrow from the screen back to the user.

Figure 1.6. I know this diagram is from Oculus but it is notoriously exaggerated. Clearly it does not accurately depict a field of view that is only a bit over twice as wide as other systems. Really need a better one made from real numbers.

1.3.2. Horizontal FOV - I'm not sure if the horizontal FOV is known to be 110 degrees. I think this is the diagonal FOV.

Maximizing field of view: "actual depth information is only really available for items in the crossover area, where your brain computes parallax" This ignores monoscopic depth cues, which also provide depth information.

1.4.2. The idea that games should only run at 1280x800 is really controversial. A lot of people like to run in Duplicate mode without sacrificing monitor resolution (as mentioned later in the book). A lot of people like to run higher resolutions for the sake of supersampling antialiasing (although I personally think running native resolution with MSAA turned up is preferable). People recording Rift gameplay footage for YouTube often run at 1080p. And last but not least, running at higher resolutions is a really convenient way to performance test for future versions of the Rift.

I would avoid describing 1280x800 as 720p, in a lot of circles it has become synonymous with 1280x720.

Although I don't have a latency tester with which to verify your claims, I am extremely skeptical of the claim that you are getting 20 ms latency in extended mode. In interviews, Oculus says they're still struggling to make their 20 ms target before release, and that Crystal Cove is improved over DK1 but still running at least 30 ms from motion to photons. So I'm not sure what's going on here.

You mention that in Clone mode it's difficult to control which monitor tearing occurs on, but at least on NVIDIA GPUs this is really easy by setting the clone source in the NVIDIA Control Panel. Also, I routinely run the Rift at a different refresh rate from the monitor (71 Hz vs 60 Hz).

1.4.5. You say it's hard to figure out which cables need to be checked, but there are some pretty simple step-by-step procedures to figure this out. If the blue LED is on it has power. If you see two monitors in your graphics settings then the DVI/HDMI cable is good. If Rift demos don't move viewpoint when you move the Rift, then either the USB isn't plugged in or it's in use by another application (Oculus Config Util and vr.js are particularly easy to forget about).

I am really surprised you don't step people through the IPD config in Oculus Config Util and settings viewer before telling them to try Tuscany. For people with unusual IPDs, running Tuscany with the wrong settings is a recipe for a terrible experience. They also need to set height to get the right sense of scale.

Because of the risk of VR sickness and to avoid the "using controller instead of head" syndrome, I recommend people do not do anything except look around for at least the first few minutes in their first demo. Blocked In is a good first demo for that reason, but Tuscany is okay if they just keep their hands off the gamepad.

1.6. "wait the month or two for your development kit from Oculus" This is out of date, there is no longer a long queue to get the DK1, although there is now a production part shortage so who knows what it'll be like in the future. Probably best to just not mention waiting time.

2: General: When I read this I couldn't figure out which bits were from your code, which bits were from the community SDK, and which bits were from the official SDK. I also don't have a clear idea which of these are being supported and kept up-to-date and for how long. These are big practical concerns for a developer - I was very much afraid I was "learning the wrong thing" here (I'd prefer to primarily learn the Oculus SDK which I know will be supported long-term).

2.1.1. There are some details of the SensorFusion failure here (including a diagram) which will become rapidly outdated and seem unimportant to the main point which is to just remember to init and destroy at the right point.

2.2.5. Near the end: orthogonal views of the head from top, right, and back would really help understand what the counter clockwise setting means.

2.2.6. You use the word parse in the sense of "understand" but it might be misunderstood as referring to a software parser.

The inline code sample doesn't seem to match the full one (the radians-to-degrees conversion macro) - if you can, use tools in your workflow to avoid this in the future.

2.2.7. You mention race conditions but it's not clear to me how it's related to correct destruction order. I know sensor fusion runs a separate thread but I'm pretty vague on what it does.

In your samples you fail if the Rift isn't present, but from a user perspective this can be kind of annoying since some people like to download demos and run them to check them out and check performance on their system before getting their Rift. Unity does not fail when the Rift is not present (in fact it doesn't even check which display is the Rift display which can be quite aggravating in Extended mode).

Hope this helps! I think it's great you're helping people learn to do low-level development for the Rift, this really isn't something anybody else is working on right now. Here's looking forward to the VR Age.
bradley.davis
#2
Re: Feedback on Part 1
Thanks for your feedback. I have gone back over the text based on some of your points, although the next version of the MEAP may not reflect the changes I'm making. Probably the one after that though.

> the vast
> majority of Rift developers are going to be using a
> toolkit like Unity that has good out-of-the-box Rift
> support that just works. Even for those who are
> planning to use the SDK, I think showing them how to
> get a quick demo with some real geometry up and
> running in a toolkit like Unity or Torque3D could be
> really motivating and help them see the light at the
> end of the tunnel and get through the other more
> detailed chapters.

We are going to cover Unity in chapter 12, and I'd also like to incorporate information about the SteamVR API. Unfortunately there are so many potential game engines and toolkits we could be talking about that we could probably fill the entire book with nothing but that. We're trying to focus on the information people will need to be able to do integration with other toolkits on their own, or to understand what the problem is if they're having issues with such integration. However, including a link to a Unity project in Chapter 2 might not be a bad idea.

> General note about images: I read the .mobi version
> on the Nexus 7 in the Kindle app and the images are
> displayed fixed at 100% size at 320 dpi and
> consequently have super tiny text that is almost
> impossible to read.

The images will all be redone and go through Manning's in-house production systems before the final release, which should ensure the e-book versions are appropriately scaled. For now these are just roughs made by the authors, so they're not representative of the final quality. Sorry about that. I thought the Kindle app would let you tap on an image and then use two-finger zoom to take a closer look, but I can't check that this second. If that's not the case I'll be sure to file a ticket with the Kindle app team.


> 1.1.2: The description of the affordability of the
> Rift here is kind of weird.

I've added a footnote that mentions the actual price and that OVR has said they want to target the consumer version at the same price point. However, the section is about affordability of the Rift as a point in favor of supporting it and I don't want to derail the flow too much.

> 1.2: It's weird here that you mention Crystal Cove as
> a 1080p prototype, but don't mention the earlier HD
> prototype which was also 1080p, or any
> differentiating characteristics of CC from HD.

I've updated the text, mentioning the earlier prototype as well as removing the references to Crystal Cove being 1080p (OVR never actually said what its resolution was).

> 1.2.4. Could expand on this to note that GPUs are
> just reaching the point needed for VR - since it
> requires at least 1080p resolution at 80 FPS and
> previous gen GPUs would struggle to meet that.

I don't really think that's true. The burden the distortion shader places on the rendering is pretty minimal, so it shouldn't really be a limiting factor. Existing GPUs have supported 3D stereoscopic monitors running at 120 Hz, providing a 60 Hz view for each eye, for at least a couple of generations now. While it's true some of the most recent games would have to take a quality hit to render at the frame rates and resolutions dictated by the needs of VR, there are tons of games that would work just fine on hardware several generations behind the current stuff.

> 1.3. Although the main features of the Rift are head
> tracking and a wide FOV stereoscopic view, I think
> it's underselling it to describe it merely as adding
> those features, because the second order effects

Since the section is just focused on the hardware differences from conventional displays, we don't want to get too deep into things that relate to the feel of a scene.

> since the
> external world is blocked out).

This is an important point, I'll make sure to reflect that.

> 1.3.1. Here and in 2.2 you use the term 6DOF to refer
> to use of 3 rotational and 3 acceleration sensors,
> which is quite confusing,

I've added a footnote, clarifying the distinction between tracking position/orientation and tracking acceleration/rotation rate.

> You also mention drift correction and prediction here
> before motivating them or explaining what they are.

Good point. I've removed the last bit, since we'll talk about them at length in Chapter 3.

> Figure 1.5. This diagram is a bit odd/asymmetrical
> because there's an arrow from the user to the Rift
> sensors but no arrow from the screen back to the
> user.

I'll see if we can correct that.

> Figure 1.6. I know this diagram is from Oculus but it
> is notoriously exaggerated. Clearly it does not
> accurately depict a field of view that is only a bit
> over twice as wide as other systems. Really need a
> better one made from real numbers.

That's a placeholder image and was created before the DK1 was finalized. We'll be using one created ourselves that more accurately reflects the actual field of view as compared to a real monitor.

> 1.3.2. Horizontal FOV - I'm not sure if the
> horizontal FOV is known to be 110 degrees. I think
> this is the diagonal FOV.

One of the example pieces of code I've used draws angle ticks on the screen at 15 degree intervals, and the horizontal ticks at 60 degrees are still visible on the outer edges of the screen. Each eye has less than the total FOV, and the very edge of the screen may not be visible depending on your IPD or how you're wearing and adjusting the Rift, but overall horizontal limits do appear to be ~110 to 120 degrees. On the other hand I may have messed up the math in the rendering. Will check with something more comprehensive and verify the value.

> Maximizing field of view: "actual depth information
> is only really available for
> items in the crossover area, where your brain
> computes parallax" This ignores monoscopic depth cues
> which also provide depth information.

I'll update the text to clarify that we're referring specifically to stereoscopic depth cues there.

> 1.4.2. The idea that games should only run at
> 1280x800 is really controversial.

I've added a footnote about that. When we get into chapter 4-6 we'll discuss the issues with resolution in more detail.

> A lot of people like to run higher resolutions for
> the sake of supersampling antialiasing (although I
> personally think running native resolution with MSAA
> turned up is preferable).

Unfortunately no promises are made about what resolutions the hardware will accept beyond the native one, or what the mechanism is for the downscaling, since it's likely implemented on the display hardware and largely outside of OVR's control.

> And
> last but not least, running at higher resolutions is
> a really convenient way to performance test for
> future versions of the Rift.

This can easily be done by choosing the appropriate framebuffer (or render target) size for the scene rendering, since rendering the scene will be far more resource-intensive than the distortion pass from the framebuffer to the physical display buffer. We'll cover this in chapter 6 or 7.

> I would avoid describing 1280x800 as 720p,

Yeah, that's incorrect. 720p specifically means 720 lines of vertical resolution.

> Although I don't have a latency tester with which to
> verify your claims, I am extremely skeptical of the
> claim that you are getting 20 ms latency in extended
> mode. In interviews, Oculus says they're still
> struggling to make their 20 ms target before release,
> and that Crystal Cove is improved over DK1 but still
> running at least 30 ms from motion to photons. So I'm
> not sure what's going on here.

I'll double check the values.

> You mention that in Clone mode it's difficult to
> control which monitor tearing occurs on, but at least
> on NVIDIA GPUs this is really easy by setting the
> clone source in the NVIDIA Control Panel. Also, I
> routinely run the Rift at a different refresh rate
> from the monitor (71 Hz vs 60 Hz).

Interesting. My Rift doesn't list 71 Hz as an available option, although I am able to set the clone source. Not sure if we'll want to get into that much OS/hardware-specific detail in this section. Thanks for the tip though.

> 1.4.5. You say it's hard to figure out which cables
> need to be checked, but there are some pretty simple
> step-by-step procedures to figure this out. If the
> blue LED is on it has power.

Even with power, I've seen situations where the LED will come on, and then turn back off, because the USB connection wasn't present. I'll review the section though.

> I am really surprised you don't step people through
> the IPD config in Oculus Config Util and settings
> viewer before telling them to try Tuscany.

> 1.6. "wait the month or two for your development kit
> from Oculus"

Yeah, writing a book for something that's in active development is fun. We'll probably have to do multiple passes for issues like this.

> 2: General: When I read this I couldn't figure out
> which bits were from your code, which bits were from
> the community SDK, and which bits were from the
> official SDK.

The API for the community SDK isn't any different from the official SDK's, and I intend to keep it that way. The biggest reason for using the community SDK is to make sure that the example code can be checked out and built in its entirety from the Git repository; using the official SDK and its binaries would mean walking people through the installation and adding platform-specific configuration steps to point the examples at the SDK. Once you get to the code level, though, there should be no distinction between community and official. It's just the SDK interface.

We've tried to make the distinction clear between SDK code and our own code by not including a 'using namespace OVR' statement. This means that every OVR object we reference has OVR:: in front of it. However, while some of the variables have 'ovr' as a prefix, not all of them do, so it might be worthwhile to go back and apply that consistently, to make it clear you're looking at an SDK type from the variable alone, without hunting down its declaration.

> These are big practical concerns for a
> developer - I was very much afraid I was "learning
> the wrong thing" here (I'd prefer to primarily learn
> the Oculus SDK which I know will be supported
> long-term).

Well, supported isn't the same thing as static. I've already been told that the next version of the official SDK will take a significantly different approach to the distortion mechanism, and I suspect that means it will be similar to the SteamVR API.

> 2.1.1. There are some details of the SensorFusion
> failure here (including a diagram) which will become
> rapidly outdated and seem unimportant to the main
> point which is to just remember to init and destroy
> at the right point.

Well, it's just intended to be illustrative of the kind of problem you can have if you don't take care to scope the init and destroy methods correctly. I'll take a look at it though.

> The inline code sample doesn't seem to match full one
> (the radians to degrees conversion macro) - if you
> can, use tools in your workflow to avoid this in the
> future.

Yeah, keeping the text in sync with the actual examples, and all of the individual snippets of code in sync with the code listings, is a bit of a challenge. Our tool set is mostly dictated by the tools that allow the easiest collaboration between three authors in very different time zones, and what Manning requires for submissions. For us that means MS Word documents. There is an XML format for documents that would be much more amenable to using tools to create the final versions by integrating information directly from a Git repository, but it's not quite fully baked, so we opted for this approach, even though it's more labor-intensive and subject to errors like these. Hopefully as we approach a finished book and have more finalized examples it will be less of an issue.

> 2.2.7. You mention race conditions but it's not clear
> to me how it's related to correct destruction order.

Virtually all of the work the SDK does occurs in its own thread, but yeah, the text doesn't immediately make that clear. I'll see if we can reword that.

> In your samples you fail if the Rift isn't present,
> but from a user perspective this can be kind of
> annoying since some people like to download demos and
> run them to check them out and check performance on
> their system before getting their Rift. Unity does
> not fail when the Rift is not present (in fact it
> doesn't even check which display is the Rift display
> which can be quite aggravating in Extended mode).

I'm hoping that the bulk of the examples will end up running regardless of whether the Rift is present, although some, like the sensor output, won't be very thrilling.

Thanks again for the feedback.