OculusLearner (7)
#1
Hi,

I have been reading your book and learning a lot from its examples. Thanks!

Is there an example of rendering a point cloud (the output of a Kinect camera) on the Rift?
bradley.davis (18)
#2
Re: How to render Kinect point cloud on Oculus Rift DK2?
No, there are currently no examples focusing on depth sources, except for some examples on integrating with the Leap Motion in chapter 14. I have a number of depth-sensing cameras, so I may add something about this to the example repository, but I'm not sure it will end up covered in the book, as it's not on our current roadmap.
OculusLearner (7)
#3
Re: How to render Kinect point cloud on Oculus Rift DK2?
Thanks.

I am having trouble reading the point cloud correctly so that I can convert it into a stereoscopic view for the Oculus. Its header contains the following information:

header:
  height: 480
  width: 640
  fields: x, y, z, rgb (four fields)
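
In C++ terms, I think each point is packed roughly like this (my assumption from the field list, following the usual PCL layout, so the exact offsets may differ):

// Assumed layout of a single point, based on the x/y/z/rgb field list.
struct Point {
    float x, y, z;   // position in camera space
    float rgb;       // packed 8-bit R, G, B channels stored in a float
};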

Any suggestions? An example would be really helpful. Are you planning to write one anytime soon?
OculusLearner (7)
#4
Re: How to render Kinect point cloud on Oculus Rift DK2?
Also, I didn't receive chapter 14 in the ebook. Is it still being written?
bradley.davis (18)
#5
Re: How to render Kinect point cloud on Oculus Rift DK2?
Yes, chapter 14 is still being written.

As for what to do with the Kinect data, that's mostly a matter of context. How exactly do you want to use the Kinect? Simply rendering the points in a 3D scene isn't really a hard problem, but it's also not particularly specific to the Rift.

My example will probably cover something like using a Rift-mounted depth sensor (a true depth sensor, as opposed to something like the Leap Motion) to render a point cloud into the user's field of view. I'm not sure when I'll get around to it, though, since the book-specific examples take priority. I do want to support the hacking community and people playing around with gestural inputs and depth sensing, though.
OculusLearner (7)
#6
Re: How to render Kinect point cloud on Oculus Rift DK2?
I simply want to render the points in a 3D scene. I have the X, Y, Z, and RGB information.

I am having trouble performing the stereoscopic rendering for the Rift.
OculusLearner (7)
#7
Re: How to render Kinect point cloud on Oculus Rift DK2?
I am able to render the points using GL_POINTS with their X, Y, Z values.

How do I color those points using their RGB values? Would that involve using Texture2dDepth?
bradley.davis (18)
#8
Re: How to render Kinect point cloud on Oculus Rift DK2?
If you're using glBegin/glEnd with GL_POINTS and, presumably, glVertex3f, you can add a call to glColor4b or one of the other glColor variants to set the color of each vertex. Mind you, this is a very slow way to render in OpenGL.
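
For instance, a minimal sketch (r, g, b and x, y, z here are placeholders for one point's color and position):

glBegin(GL_POINTS);
glColor3ub(r, g, b);   // current color, one unsigned byte per channel
glVertex3f(x, y, z);   // this vertex is emitted with the current color
glEnd();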
OculusLearner (7)
#9
Re: How to render Kinect point cloud on Oculus Rift DK2?
Yes, it's very slow.

Is there a faster way to do that? I am modifying the HelloRift example.
OculusLearner (7)
#10
Re: How to render Kinect point cloud on Oculus Rift DK2?
How can I render it if I don't want to make use of a mesh like all the examples given in the book?

This doesn't work, as I don't know where to apply the transformation:

glBegin(GL_POINTS);
for (int i = 0; i < width * height; ++i) {
    // Each point has consecutive RGB and XYZ triples in the arrays.
    glColor3f(colorarr[i*3], colorarr[i*3+1], colorarr[i*3+2]);
    glVertex3f(vertexarr[i*3], vertexarr[i*3+1], vertexarr[i*3+2]);
}
glEnd();
bradley.davis (18)
#11
Re: How to render Kinect point cloud on Oculus Rift DK2?
> How can I render it if I don't want to make use of
> a mesh like all the examples given in the book?

I'm not sure what you mean by 'mesh', but I'm going to assume you mean the use of the OpenGL 3.x core profile along with shaders, vertex buffers, and vertex array objects to do the rendering.

That's fine; you should still be able to use an OpenGL compatibility context to call deprecated functions like glBegin/glEnd. Be aware, though, that you may run into performance issues: these functions will never match the performance of simply copying an entire buffer to the GPU in one go.
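
To sketch what the buffered path looks like (this is generic OpenGL, not one of the book's examples; it reuses the width, height, vertexarr, and colorarr names from your snippet):

// One-time setup: copy positions and colors into GPU buffer objects.
GLuint buffers[2];
glGenBuffers(2, buffers);
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBufferData(GL_ARRAY_BUFFER, width * height * 3 * sizeof(GLfloat),
    vertexarr, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ARRAY_BUFFER, width * height * 3 * sizeof(GLfloat),
    colorarr, GL_STATIC_DRAW);

// Per frame: point the client-state arrays at the buffers and draw
// every point with a single call.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glVertexPointer(3, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, buffers[1]);
glColorPointer(3, GL_FLOAT, 0, 0);
glDrawArrays(GL_POINTS, 0, width * height);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);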

You should also be aware that covering generic OpenGL techniques is generally outside the scope of the book. There are a variety of online and offline resources for learning OpenGL, and if you're stuck on a particular problem, you can usually ask a question on Stack Overflow.

In this particular case, you're sending raw vertex data to the old fixed-function pipeline. If you want to transform that data, you should use the fixed-function matrix stacks. Look at examples that demonstrate glMatrixMode, glLoadMatrix, and the like. Typically, to render a scene you set a projection matrix and a modelview matrix: the projection matrix defines the camera frustum, and the modelview matrix defines how vertices are transformed into camera-space coordinates before they're rendered.
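
As a rough sketch of where the transforms would go in your loop (the matrix contents are placeholders; in a Rift application you'd fill them from the per-eye projection and head pose each frame):

// Placeholder 4x4 matrices in OpenGL's column-major order.
GLfloat projection[16];   // fill from your per-eye camera frustum
GLfloat modelview[16];    // fill from your per-eye view transform

glMatrixMode(GL_PROJECTION);
glLoadMatrixf(projection);

glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(modelview);

glBegin(GL_POINTS);
// ... your existing glColor3f/glVertex3f loop ...
glEnd();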

Hope that gives you some starting points. If I create a depth-field-based example, I'll be sure to update this thread.