Getting Real Distance

OpenCV, Python and other image processing questions
Post Reply
noob
Posts: 7
Joined: Wed Feb 19, 2020 8:37 am

Getting Real Distance

Post by noob »

I have been trying to obtain the real distance of an object from a disparity map. The disparity map used is the one generated by 6_dm_video.py from the StereoPi tutorial. As far as I understand, the focal length of a Pi camera in pixels is 2571.4, and the distance between my two cameras is 0.065 m. So I am using this formula, based on triangulation:

Distance = (focal length * distance between the 2 cameras) / disparity
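As a quick sanity check of that formula, with the focal length and baseline from this post and a made-up example disparity:

```python
f = 2571.4   # focal length in pixels (value from this post)
b = 0.065    # baseline in metres (value from this post)
d = 167.0    # hypothetical disparity in pixels, just for illustration

distance = (f * b) / d
print(distance)  # ≈ 1.0 metre for this disparity
```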

I also generated a disparity map with the following code:

stereo = cv2.StereoBM_create(numDisparities=48, blockSize=25)
disparity = stereo.compute(rectified_pair[0], rectified_pair[1])
norm_image = cv2.normalize(disparity, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)

cv2.imshow("disparity", norm_image)
.
.
f = 2571.4
b = 0.065
distance = (b * f) / disparity

I notice that the values in my 'disparity' image start from -16.
I am getting negative values with both sets of code, and I am unsure whether this is the correct method to calculate distance. I have calibrated my cameras.

Thank you very much.

angelyn.mercado
Posts: 5
Joined: Thu Dec 05, 2019 5:22 am

Re: Getting Real Distance

Post by angelyn.mercado »

Had a similar question here: https://forum.stereopi.com/viewtopic.ph ... 1414#p1414
That might help =)

Realizator
Site Admin
Posts: 594
Joined: Tue Apr 16, 2019 9:23 am
Contact:

Re: Getting Real Distance

Post by Realizator »

Hi Noob and Angelyn.mercado,
As I mentioned, my next script, for the point cloud, is not polished yet. But I can share it now "as is" to help you with all of this. Before that, let me highlight some key points about moving from a depth map to a point cloud.

1. You should know that the depth map actually returns numbers 16 times bigger than the real disparity. You can use them for 3D reconstruction, but in that case your Z coordinates will show only very slight variation, while X and Y will be adequate (link to the coordinate system description).
So in my latest code I use this conversion to get the real depth map data:

Code: Select all

disparity.astype(np.float32) / 16.0
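For example, applied to raw StereoBM output (a sketch with synthetic values; note how the invalid pixels at -16 become -1 after the division, which explains the negative values in the first post):

```python
import numpy as np

# StereoBM (with minDisparity = 0) returns int16 values equal to
# 16 * the real disparity; unmatched pixels get 16 * (minDisparity - 1) = -16
raw = np.int16([-16, 160, 320, 1600])   # synthetic sample values

disp = raw.astype(np.float32) / 16.0    # -> [-1., 10., 20., 100.]

f, b = 2571.4, 0.065                    # focal length (px) and baseline (m) from this thread
valid = disp > 0                        # mask out the -1 "no match" pixels
depth = f * b / disp[valid]             # Z = f * b / d, in metres
```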
2. Normalization is a good idea only for visualizing your disparity. Do not use the normalized data as the real disparity values for the point coordinate calculations.
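In other words, keep two copies, a sketch with NumPy (cv2.normalize does the same job for the display copy):

```python
import numpy as np

disp = np.float32([[-1, 10], [20, 100]])  # real disparity values (after the /16.0 step)

# normalised copy, used ONLY for display (e.g. for cv2.imshow)
vis = ((disp - disp.min()) / (disp.max() - disp.min()) * 255).astype(np.uint8)

# keep `disp` itself untouched for the distance / point-cloud calculations
```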

3. Angelyn.mercado mentioned a link to another thread where I mention "square_size = 2.5". Actually, this value is not used in the OpenCV 4 code; it is a leftover row from the old OpenCV 3 code. So after the calibration you get X, Y and Z coordinates measured in "squares" of your chessboard.

4. In my case the point cloud has a large number of points. If you use Python loops to calculate X, Y and Z for each point, performance will be very poor. That's why I use masking to remove invalid points (to reduce the number of calculations), and I also use cv2.reprojectImageTo3D, as it gives you X, Y and Z for all the points of the disparity map in a very efficient way.
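The masking plus vectorised reprojection idea can be sketched with plain NumPy for the simple rectified case (synthetic disparity, assumed focal length and principal point; in the real code the Q matrix from cv2.stereoRectify and cv2.reprojectImageTo3D handle this):

```python
import numpy as np

f = 2571.4            # focal length in pixels (assumed, from earlier in this thread)
b = 0.065             # baseline in metres
h, w = 72, 128        # small synthetic frame, just for illustration
cx, cy = w / 2, h / 2 # assumed principal point

disparity = np.full((h, w), 167.14, np.float32)  # synthetic: ~1 m everywhere
disparity[:10, :10] = -1.0                       # some invalid ("no match") pixels

mask = disparity > 0  # masking: drop invalid points before any per-point math

# vectorised pinhole reprojection, i.e. what cv2.reprojectImageTo3D(disparity, Q)
# computes for every pixel at once, with no Python loops
ys, xs = np.indices(disparity.shape)
Z = f * b / disparity                 # depth in metres
X = (xs - cx) * Z / f
Y = (ys - cy) * Z / f
points = np.dstack([X, Y, Z])[mask]   # (N, 3) array of valid 3D points only
```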


5. Some notes on my rough code for a first play:
- use it as the last script, after finishing all the calibrations and depth map tuning from the stereopi-fisheye-robot tutorial
- use the W, A, S, D keys to rotate the point cloud; the rotation angle steps can be set in lines 251-254
- use the 1 and 2 keys to "move" the point cloud towards/away from the camera (like a "zoom")
- row 283: points_3, colors = calc_point_cloud(disparity, native_disparity, QQ). If you replace "disparity" with "imgRcut", your point cloud will be colorized with the images from your camera instead of the depth map colors
- rows 275 and 276 crop the images used for creating the depth map; you can use uncropped frames
- line 193 is the calc_point_cloud definition; here you get the actual points with X, Y and Z coordinates

Sorry, this code is only an alpha version now, but I think it is better to share it with you right away, so you can get some useful info from it.

Noob, you can get the X, Y and Z points for test objects you put at, say, 1 meter from the camera. That will help you calibrate the real distance measurement.
Attachments
22.py.zip
(4.08 KiB) Downloaded 127 times
Eugene a.k.a. Realizator

noob
Posts: 7
Joined: Wed Feb 19, 2020 8:37 am

Re: Getting Real Distance

Post by noob »

Thank you very much for your reply. Do you mind letting us know when you have uploaded the code for the 3D generation?
Thank you in advance.

Realizator
Site Admin
Posts: 594
Joined: Tue Apr 16, 2019 9:23 am
Contact:

Re: Getting Real Distance

Post by Realizator »

Noob, I've uploaded the current version as an attachment to my previous post here.
Eugene a.k.a. Realizator

Realizator
Site Admin
Posts: 594
Joined: Tue Apr 16, 2019 9:23 am
Contact:

Re: Getting Real Distance

Post by Realizator »

We get a lot of related questions about real-world coordinates. I'm planning to update our GitHub code, but I will only be able to do it in a few weeks.
To simplify your first experiments, I'm attaching test code from one of our current projects. You will find two sections here which might help you in your code:
1. Saving the point cloud in the PLY format. After that you can open it with the MeshLab application (free), where you can rotate/zoom/review it in more comfort.
2. An updated depth map function "stereo_depth_map(rectified_pair)" with the /16.0 division added, so your 3D coordinates will be calculated in the right way.
Please note that in this file the function returns 3 arrays:

Code: Select all

 return disparity_color, disparity_fixtype, disparity.astype(np.float32) / 16.0
You can keep only the one you need for the calculation, and replace that row with

Code: Select all

 return disparity.astype(np.float32) / 16.0
to work with your code.
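The PLY export mentioned in point 1 above can be sketched like this (a minimal ASCII PLY writer with hypothetical names and synthetic data, not the exact code from the attachment; MeshLab opens the resulting file):

```python
import numpy as np

def write_ply(path, points, colors):
    """Write an ASCII PLY file that MeshLab can open.

    points: (N, 3) float array of X, Y, Z; colors: (N, 3) uint8 RGB array.
    """
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for (x, y, z), (r, g, b) in zip(points, colors):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

# usage with small synthetic data
pts = np.zeros((100, 3), np.float32)
cols = np.full((100, 3), 255, np.uint8)
write_ply("cloud.ply", pts, cols)
```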

Sorry for sharing draft code. Yes, it is for 1280x720 resolution, and it also builds a point cloud for a single image only. But I hope it can help...
Attachments
6_pointcloud.py.zip
(3.49 KiB) Downloaded 1 time
Eugene a.k.a. Realizator
