Channel: ROS Answers: Open Source Q&A Forum - RSS feed

Comment by Markus: The openni driver does this already; it doesn't produce just one depth image. Check here and here. image_raw contains the raw uint16 depths in mm from the Kinect, image contains float depths in m, and image_rect also contains float depths in m, but rectified as you want. The driver even publishes points, a point cloud that puts all the coordinates together for you. If you really need to know the formula, look at the code in the driver.
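For reference, the conversion the driver performs is the standard pinhole back-projection. A minimal sketch, assuming the optical-frame convention (X right, Y down, Z forward); the intrinsics fx, fy, cx, cy below are placeholder values for illustration — the real ones come from the CameraInfo message published alongside the depth image:

```cpp
#include <cassert>
#include <cmath>

// Placeholder intrinsics (illustrative only; read the real values
// from the camera's CameraInfo topic).
const float fx = 525.0f, fy = 525.0f;
const float cx = 319.5f, cy = 239.5f;

struct Point3f { float x, y, z; };

// Back-project one rectified depth pixel (u, v) with depth z metres
// into the camera's optical frame.
Point3f backProject(int u, int v, float z) {
    Point3f p;
    p.x = (u - cx) * z / fx;
    p.y = (v - cy) * z / fy;
    p.z = z;
    return p;
}
```

A pixel at the principal point back-projects to a point straight ahead on the optical axis; pixels away from the centre spread out proportionally to their depth.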

Next: Answer by thebyohazard for: Hi all, I need some help with converting the Kinect's depth image to the real-world coordinates of each depth pixel. I used the formulae below, found in a paper, but I assume the Z value is not the correct real-world z; rather, it gives the distance of the pixel from the camera centre. Here (x, y) on the right-hand side are the pixel coordinates, z is the corresponding depth in metres, and inv_fx, inv_fy, ox, oy come from the camera matrix.

const float inv_fx = 1.f / cameraMatrix(0,0);
const float inv_fy = 1.f / cameraMatrix(1,1);
const float ox = cameraMatrix(0,2);
const float oy = cameraMatrix(1,2);
realworld.x = z; // The depth value from the Kinect in meters
realworld.y = -(x - ox) * z * inv_fx;
realworld.z = -(y - oy) * z * inv_fy;

If the formulae are wrong, please correct them for calculating the real-world point coordinates for each pixel in the depth image. Regards, Subhasis
Previous: Answer by barcelosandre for the same question.
Well, if I subscribe to /camera/depth/image_rect I just get values from 0 to 255 and no floats. Do I have to read that data in a particular way?

cv_bridge::CvImageConstPtr cv_ptr;
cv_ptr = cv_bridge::toCvShare(current_image_);
cv::convertScaleAbs(cv_ptr->image, mono8_img, 100, 0.0);
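The 0-255 range here comes from cv::convertScaleAbs, which always outputs an 8-bit image, not from the topic itself; image_rect carries 32-bit floats in metres, and with cv_bridge you can request that encoding directly via cv_bridge::toCvShare(msg, sensor_msgs::image_encodings::TYPE_32FC1). If instead you start from the raw uint16 topic (millimetres), the conversion to metres is a simple scale. A minimal sketch with a hypothetical helper, outside of any ROS plumbing:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Convert a raw depth buffer (uint16, millimetres, as on image_raw)
// to float metres (as on image / image_rect). A raw value of 0 marks
// an invalid pixel and simply maps to 0.0 m here.
std::vector<float> depthRawToMetres(const std::vector<uint16_t>& raw) {
    std::vector<float> metres(raw.size());
    for (size_t i = 0; i < raw.size(); ++i) {
        metres[i] = raw[i] * 0.001f;  // 1 mm = 0.001 m
    }
    return metres;
}
```

The key point is to keep the data in a float type end to end; any detour through an 8-bit image (convertScaleAbs, mono8) clips the depths to 0-255 and destroys the metric values.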

