Overlay Sensor Data onto Video Feed

S.L.P. image questions, stereoscopic video livestream and recording, stereoscopic photo capture etc.
phowe
Posts: 16
Joined: Sun Feb 28, 2021 4:40 am

Overlay Sensor Data onto Video Feed

Post by phowe »

Hello,

I recently received the StereoPi and I want to incorporate it into a VR-headset-type project. I have the StereoPi connected to a 5-inch display, which serves as the display for my VR headset. The native SLP runs flawlessly, and I'll easily be able to configure the lenses to get the optics lined up for my eyes. However, I also want a way to display real-time sensor data in one of the corners, over the video feed.

From my understanding, the SLP is a specialized version of Raspbian made for streaming and may not be ideal for this solution. I have attempted to recreate the SLP video feed on Raspbian Buster, but I am advanced enough in Python or C++ to do so. I can get the example code to work just fine, but when I try to modify it, I run into issues. I can only get one camera to display unless I use the sample code.

The direct display of the video feed in the SLP image is perfect for my purposes (it is fast and takes up the whole screen); I just need a way to overlay real-time sensor data in one of the corners on top of it. What would be the best way to go about this?

Thanks!

Realizator
Site Admin
Posts: 900
Joined: Tue Apr 16, 2019 9:23 am
Contact:

Re: Overlay Sensor Data onto Video Feed

Post by Realizator »

Hi Phowe,
Your project looks extremely interesting! Would you mind sharing it as a guide when you finish it? We can post it on our blog for other users to repeat! If you create a GitHub repo for it, we'll post that too!
You wrote, "I can only get one camera to display unless I use the sample codes". The issue is simple. For the HDMI image display, raspivid is used. Unfortunately, stock raspivid is unable to show the stereoscopic image on the HDMI screen. As Raspberry engineer explained on the RPF forum, Raspberry try to detect 3D TV on HDMI, and turn on the 3D mode in this case only. We were unable to get this to work.
So we modified the raspivid code to support HDMI 3D output. As you are an advanced user with Python/C++ skills, you can read our technical discussion with an RPF engineer (nickname 6by9) on the Raspberry Pi forum here: https://www.raspberrypi.org/forums/view ... 5#p1438225 There we put a link to the patch we used, so you can reproduce it yourself.
Here is how we did it: we added a new key, '-fs', to our modified raspivid. If you pass this key, you get a stereoscopic image on HDMI; if not, you are working with the 'stock' raspivid behavior. If you take a look at our /opt/StereoPi/scripts/video-source.sh script, you can find the place where we add this '-fs' option to the launch parameters of raspivid.
And one last tip: if you decide to go the shorter way and not patch the sources and recompile raspivid yourself, you can try to use our binary. It sits in the '/opt/StereoPi/bin' folder and is named 'raspivid'. You may notice we use the './bin/raspivid' path to run raspivid - it means we are running our version.
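
If you want to try it quickly, here is a minimal Python sketch, assuming our modified binary at /opt/StereoPi/bin/raspivid; the resolution values are just placeholders, '-fs' is our custom key, and the other options are standard raspivid flags:

# Launch the StereoPi-modified raspivid with the '-fs' key to get the
# stereoscopic picture on the HDMI screen; stop it with Ctrl+C.
import subprocess

subprocess.run([
    "/opt/StereoPi/bin/raspivid",
    "-3d", "sbs",   # side-by-side stereoscopic capture (stock raspivid option)
    "-fs",          # StereoPi-specific key: stereoscopic output on HDMI
    "-t", "0",      # no timeout, run until stopped
    "-w", "1280",   # example width
    "-h", "720",    # example height
])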

You may also notice we are using the 'splitter' binary. This binary does two things:
1. It manages the UDP packet length to fit the MTU size, for more efficient bandwidth use.
2. It "splits" the video stream into several copies, as we need them simultaneously for the stream to the browser, Oculus, several other clients, and video recording (a rough sketch of this idea is shown below).
As you are working with your own VR headset and use only the HDMI output, you can remove this part.
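
To illustrate the splitting idea only (a rough Python sketch, not our actual 'splitter' binary; the receiver addresses are made up), you could read the encoded stream, cut it into MTU-sized chunks, and send a copy to every consumer:

# Rough sketch: read an H.264 byte stream from stdin (e.g. piped from
# raspivid's '-o -' output) and fan it out to several UDP receivers,
# chunked so each datagram stays below a typical 1500-byte MTU.
import socket
import sys

MTU_PAYLOAD = 1400                       # leave room for IP/UDP headers
DESTINATIONS = [("192.168.1.10", 3000),  # hypothetical receivers: browser,
                ("192.168.1.11", 3000)]  # recorder, another client, ...

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    chunk = sys.stdin.buffer.read(MTU_PAYLOAD)
    if not chunk:
        break
    for dest in DESTINATIONS:            # one copy of the stream per consumer
        sock.sendto(chunk, dest)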

P.S. As for the data overlay: for the live-stream solutions we use GStreamer, which can display a lot of data as an overlay. As for raspivid - it is an extremely optimized thing, and you would need to use advanced techniques. The best way to understand how it works under the hood is to look at the PiCamera docs. Yep, it is a Python library, but it is the best place to find a description of the under-the-hood details. Look at two sections: Advanced Recipes and the API reference (sections 10-16, especially the MMAL part).
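For example, here is a rough PiCamera-based sketch of the overlay idea (not the raspivid pipeline; read_sensor() is just a placeholder for your real I2C reading code, and stereo_mode needs the dual-camera setup the StereoPi provides):

# Show a stereoscopic preview on HDMI and keep refreshing a text
# annotation with the latest sensor reading.
import time
import picamera

def read_sensor():
    # Placeholder: build this string from your real I2C readings
    return "T: 23.4 C  P: 1013 hPa"

with picamera.PiCamera(stereo_mode='side-by-side') as camera:
    camera.start_preview(fullscreen=True)         # direct-to-HDMI preview
    camera.annotate_text_size = 32
    try:
        while True:
            camera.annotate_text = read_sensor()  # refresh the overlaid text
            time.sleep(0.5)
    finally:
        camera.stop_preview()

Note that annotate_text draws the text centered at the top of the frame rather than in a corner; for a true corner readout, look at PiCamera's add_overlay() method, which lets you place an image layer anywhere over the preview.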
Eugene a.k.a. Realizator

phowe
Posts: 16
Joined: Sun Feb 28, 2021 4:40 am

Re: Overlay Sensor Data onto Video Feed

Post by phowe »

Hey Realizator,

I mistyped in my initial post. I meant to say that I am not advanced enough to rewrite everything from scratch. I have some experience with coding in C from working with Arduino, and some experience with coding in Python from a class I took last spring. However, without proper examples and comments in the code, I am unable to reproduce results effectively, since I don't yet know what each part of the code does.

It seems my easiest option would be to replace the raspivid binary in Raspbian Buster with the one from the SLP, so that the -fs option will work (if this is not the case, please correct me). I would also be open to recompiling raspivid with this option already included on Raspbian Buster, but I am not entirely sure how. Would I just take the source code on GitHub and add a handling section for -fs like all the other options have?

As for the video overlay, the data will be coming directly over I2C into the Raspberry Pi. Since it never leaves the Pi, I don't think streaming it with GStreamer would be the most effective option. Does PiCamera allow me to overlay sensor data from I2C directly over the video feed?
