Channel: ROS Answers: Open Source Q&A Forum - RSS feed

Comment by Mahyar for "Does someone know how to find the point cloud "pixel" (x, y, z in the real world) given a pixel from the RGB camera?"

I am using a feature descriptor (ORB) on the RGB frame (/camera/rgb/image_color); it returns the (x, y) of each feature in the planar image. I want to relate this RGB pixel (feature) to its real-world coordinate in the point cloud topic (/camera/depth_registered/points).
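Because /camera/depth_registered/points is an organized point cloud registered to the RGB image, row v and column u of the cloud correspond to pixel (u, v) of the RGB frame, so the lookup is a direct index. A minimal sketch of that lookup, assuming the cloud has already been unpacked into a hypothetical (H, W, 3) NumPy array of XYZ values (the function name and array layout are illustrative, not part of any ROS API):

```python
import numpy as np

def pixel_to_point(cloud, u, v):
    """Return the 3-D point for RGB pixel (u, v) in a registered,
    organized point cloud stored as an (H, W, 3) XYZ array.
    Returns None when the depth at that pixel is invalid (NaN)."""
    point = cloud[v, u]  # row index is v (image y), column index is u (image x)
    if np.any(np.isnan(point)):
        return None
    return point

# Illustrative 480x640 cloud with one known point at pixel (u=200, v=100).
cloud = np.full((480, 640, 3), np.nan, dtype=np.float32)
cloud[100, 200] = (0.1, 0.2, 1.5)

print(pixel_to_point(cloud, 200, 100))  # the XYZ of the ORB feature's pixel
print(pixel_to_point(cloud, 0, 0))      # invalid depth -> None
```

With the raw sensor_msgs/PointCloud2 message, the equivalent flat index is v * msg.row_step + u * msg.point_step into msg.data.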

Next: Answer by barcelosandre for "Hi all, I need a little help converting the Kinect's depth image to the real-world coordinates of each depth pixel."

I used the formulae below, found in a paper, but I assume the z value is not the correct real-world z; rather, it gives the distance of the pixel from the camera centre. Here (x, y) on the right-hand side are the pixel coordinates and z is the corresponding depth in metres; inv_fx, inv_fy, ox and oy come from the camera matrix.

const float inv_fx = 1.f / cameraMatrix(0,0);
const float inv_fy = 1.f / cameraMatrix(1,1);
const float ox = cameraMatrix(0,2);
const float oy = cameraMatrix(1,2);

realworld.x = z; // the depth value from the Kinect, in metres
realworld.y = -(x - ox) * z * inv_fx;
realworld.z = -(y - oy) * z * inv_fy;

If these formulae are wrong, please correct them so they give the real-world coordinates of each pixel in the depth image.

Regards, Subhasis
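For reference, the standard pinhole back-projection in the camera optical frame (X right, Y down, Z forward out of the lens) puts the depth value on the Z axis, not X. A minimal sketch, assuming the depth image already reports z in metres along the optical axis (the function name is illustrative; the 525 px focal length and 320/240 principal point below are only typical Kinect-VGA ballpark values, not calibrated intrinsics):

```python
def depth_to_world(u, v, z, fx, fy, ox, oy):
    """Back-project pixel (u, v) with depth z (metres) through the
    pinhole model. X = (u - ox) * z / fx, Y = (v - oy) * z / fy, Z = z."""
    x = (u - ox) * z / fx
    y = (v - oy) * z / fy
    return x, y, z

# At the principal point, X and Y must be zero whatever the depth is.
print(depth_to_world(320.0, 240.0, 1.0, 525.0, 525.0, 320.0, 240.0))
# A pixel 100 px right of centre at 2 m lands at X = 100 * 2 / 525 m.
print(depth_to_world(420.0, 240.0, 2.0, 525.0, 525.0, 320.0, 240.0))
```

Whether signs need flipping depends on the target frame convention (the ROS optical frame above vs. a body frame with X forward), which may be why the quoted formulae negate y and z.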
Did you see this topic: http://answers.ros.org/question/90696/get-depth-from-kinect-sensor-in-gazebo-simulator/

