use full width of cameras in SLP image?

S.L.P. image questions, stereoscopic video livestream and recording, stereoscopic photo capture etc.
stereomii
Posts: 27
Joined: Mon May 13, 2019 3:11 pm
Location: Apeldoorn, The Netherlands

use full width of cameras in SLP image?

Post by stereomii » Mon May 13, 2019 3:26 pm

The full field of view of both cams is not used in the livestream image. I tried setting width=2560 (being 2 times 1280) in stereopi.conf but got no image in the web interface. If I set the stereo mode to 2D, I can see that the effective FOV of one camera is wider than what is displayed in the video stream.
Is there a way to stream and record full-width images?
Jan a.k.a. stereomii
StereoPi with Waveshare 160deg cams

Realizator
Site Admin
Posts: 295
Joined: Tue Apr 16, 2019 9:23 am

Re: use full width of cameras in SLP image?

Post by Realizator » Mon May 13, 2019 9:21 pm

stereomii wrote:
Mon May 13, 2019 3:26 pm
The full field of view of both cams is not used in the livestream image. I tried setting width=2560 (being 2 times 1280) in stereopi.conf but got no image in the web interface. If I set the stereo mode to 2D, I can see that the effective FOV of one camera is wider than what is displayed in the video stream.
Is there a way to stream and record full-width images?
The livestream is optimized for streaming to 3D viewing devices; that's why we did not include non-cropping aspect ratios. If you try to watch a 2560x720 video in an Oculus Go or Google Cardboard, you will see huge black bars at the top and bottom of your video. Keeping the non-cropped image is mainly critical for computer vision.

If you want to keep the full FOV, you can use the "decimate" parameter. You can turn it on in the Administration panel, or pass "-dec" if you are using raspistill.
WITHOUT decimate, the left and right parts of the stereoscopic image are crops from the left and right full-FOV images.
WITH decimate, both cameras use their full FOV, but the left and right images are scaled (compressed) horizontally. For example, for the left image the left camera captures a 1280x720 frame, then scales it to 640x720. The same happens with the right image, and after that the system combines them into one 1280x720 image. I always use decimation for drone setups, as FOV is critical.
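To make the difference concrete, here is a small sketch of the pixel geometry in the two modes (plain illustrative Python, not actual StereoPi code; the 1280x720 sensor size and 1280x720 side-by-side output frame are assumptions taken from the example above):

```python
# Illustrative sketch only, not StereoPi code.
# Assumes two 1280x720 cameras and a 1280x720 side-by-side (SBS) output.

SENSOR_W, SENSOR_H = 1280, 720   # full-FOV capture per camera
OUT_W, OUT_H = 1280, 720         # combined SBS output frame
HALF_W = OUT_W // 2              # width available per eye: 640

def crop_mode():
    """WITHOUT decimate: each eye gets a 640x720 crop,
    so only half of each camera's horizontal FOV survives."""
    return {"per_eye": (HALF_W, SENSOR_H),
            "horizontal_fov_used": HALF_W / SENSOR_W}

def decimate_mode():
    """WITH decimate: each camera's full 1280x720 frame is scaled
    horizontally to 640x720 -- full FOV kept, but effective
    horizontal resolution is halved."""
    return {"per_eye": (HALF_W, SENSOR_H),
            "horizontal_scale": HALF_W / SENSOR_W,
            "horizontal_fov_used": 1.0}

print(crop_mode())      # half the horizontal FOV per eye
print(decimate_mode())  # full FOV, compressed 2:1 horizontally
```

This is also why the decimated stream looks "squeezed": a 3D player that stretches each 640x720 half back to a square pixel aspect restores the geometry, at the cost of effective resolution.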
Eugene a.k.a. Realizator

stereomii
Posts: 27
Joined: Mon May 13, 2019 3:11 pm
Location: Apeldoorn, The Netherlands

Re: use full width of cameras in SLP image?

Post by stereomii » Tue May 14, 2019 12:12 pm

Thanks for the quick reply, Realizator. I tried decimate and can now record full-FOV HSBS videos.

The only issue that remains is the low effective resolution. Probably no problem for FPV drone vision, but it severely limits video editing possibilities (especially zooming and panning).

I want to use StereoPi for video recording and, if possible, high-res 3D pictures, using my phone as a viewfinder (see "3d cam with mobile phone as viewfinder and controller" under "Your Projects"). Trying to run raspivid and raspistill while streaming to the phone results in errors, so I understand that only one process can have the cameras attached. As I need the stream to view the image on my phone, Livestream seems a logical starting point for my objectives. From some inspection I gathered that the video stream is split and that recording is done on one of the split streams.

Is the optimisation done in the binaries used (I noticed two instances of raspivid, in (I think) /usr/sbin and /opt/StereoPi/bin) or in the parameters passed to them? Can you point me to any documentation on how the running processes function and interact?
Jan a.k.a. stereomii
StereoPi with Waveshare 160deg cams
