Livestream: audio and scaling?

S.L.P. image questions, stereoscopic video livestream and recording, stereoscopic photo capture etc.
FKED
Posts: 3
Joined: Wed Jan 08, 2020 4:16 pm

Livestream: audio and scaling?

Post by FKED »

First of all I'd like to say thank you for such an amazing project!
I've bought the deluxe kit and it really is almost plug and play.

A few years ago I did some experiments with telepresence robots using USB camera modules, an Oculus DK2 and Unity, but that was quite complicated and obviously required a PC. My plan is to make a new revision of my previous telepresence robot, and I'm amazed at how easy and lightweight the setup has become with the StereoPi. Excellent job!! :!: :D

I've been successful at getting the StereoPi to livestream to the Oculus Go, but I have a few questions in that respect.

- 1. Is it possible to livestream (preferably stereo) audio to the Go?
I'm able to record mono audio to video using a LogiLink USB sound card + a cheap lavalier mic.
I'll solder two mics together in a stereo setup and use the tips from here (https://forum.stereopi.com/viewtopic.php?f=10&t=65) to see if I can get stereo sound working in recording first.

- 2. Can the video feed in the Oculus Go be scaled?
When looking at the feed through the Oculus, it seems to me the image is just a bit too big compared to the real world (for example, when mounting the cameras to the faceplate of the Oculus Go and looking at your hands, the hands in the image seem quite big). If the image were scaled down a little, the FPV might feel even more natural. I understand the sources for the Android app/Oculus Go are not available, correct?

Realizator
Site Admin
Posts: 900
Joined: Tue Apr 16, 2019 9:23 am

Re: Livestream: audio and scaling?

Post by Realizator »

Hi FKED,
We hope the StereoPi will fit the needs of your Oculus experiment! :-)

As for your questions:
1. An audio livestream is a bit tricky, and we didn't pay much attention to it, as we usually livestream video from drones (planes and copters), where you can hear nothing except the propellers. For the livestream we prefer to use raw H.264 data, and it is difficult to attach synchronized sound to it. If you look at alternative approaches, you can find ready-to-use containers and protocols with encapsulated sound (like RTSP), but all of them have high latency. One last point: in our third-person-view Oculus experiment the user and the StereoPi were placed close to each other, so sound transmission was not necessary.
As an idea, you could use two independent livestreams, one for video and another for sound. Since we use low-latency video transmission, and sound needs relatively little bandwidth, I'd expect you can avoid synchronization problems. But this requires serious code changes in both the livestreaming part (StereoPi) and the receiving part (Oculus or other device).
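As a rough sketch only (not something from our scripts - the sound card index, host address and port below are placeholders), a second low-latency audio stream could be built with GStreamer and run in parallel with the video:

Code: Select all

# On the StereoPi: capture from the USB sound card, encode to Opus, send over RTP/UDP
gst-launch-1.0 alsasrc device=hw:1,0 ! audioconvert ! audioresample \
  ! opusenc bitrate=64000 frame-size=10 ! rtpopuspay \
  ! udpsink host=192.168.1.50 port=3001

# On a receiver (a PC for testing; the Oculus side would need its own player code)
gst-launch-1.0 udpsrc port=3001 \
  caps="application/x-rtp,media=audio,clock-rate=48000,encoding-name=OPUS" \
  ! rtpopusdepay ! opusdec ! audioconvert ! autoaudiosink sync=false

Nothing in the video pipeline has to change for this; the two streams simply run side by side.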

2. As for the camera FOV you mentioned, there are some tricks here. For example, when you set FullHD resolution, the RPi cameras actually crop the captured image, so your FOV is decreased. Try setting an HD (1280) resolution for comparison. Also, using 1280x960 instead of 1280x720 can change your FOV. These camera features are described very well in the PiCamera docs.
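A quick way to compare (just a sketch - I assume here the raspivid build with the stereoscopic option from our image, the file names are only examples, and the exact FOV behaviour depends on your camera version):

Code: Select all

# Cropped sensor mode (FullHD) - narrower FOV on the V1 camera
raspivid -3d sbs -w 1920 -h 1080 -t 5000 -o test_1080.h264

# Binned 4:3 mode - wider, full-sensor FOV, record and compare
raspivid -3d sbs -w 1280 -h 960 -t 5000 -o test_960.h264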
The Android application code is closed at the moment, as it turned out not to be a silver bullet. We are still in the process of finding a universal approach and application for all platforms (Android/Oculus Go/Oculus Quest), as we want to create a good tool and not a "patch" like our current Android app.
Eugene a.k.a. Realizator

FKED
Posts: 3
Joined: Wed Jan 08, 2020 4:16 pm

Re: Livestream: audio and scaling?

Post by FKED »

Hi Realizator,
Thanks for the super quick reply and info!

1. Based on your suggestions I'll look into setting up a second audio stream. To avoid excessive changes to the StereoPi and Android code (I'm not super confident in either Linux or Java), I'm contemplating setting up a separate Pi with microphones that streams to a Bluetooth headset. I think this may be the easiest solution for me. I'll just have to try it and see if the difference in latency gives me problems.

2. Thanks for the link and tips. I'll play around with that!

Will keep you posted on my progress.

Realizator
Site Admin
Posts: 900
Joined: Tue Apr 16, 2019 9:23 am

Re: Livestream: audio and scaling?

Post by Realizator »

FKED, if your approach with the separate Pi and microphones works, as a next step we can help you move your code and mics onto the StereoPi alongside our scripts. I'd expect this sound workflow should not affect the video processing part.
Eugene a.k.a. Realizator

FKED
Posts: 3
Joined: Wed Jan 08, 2020 4:16 pm

Re: Livestream: audio and scaling?

Post by FKED »

Just a little update on my experiments.

I managed to pair a Bose Studio3 Bluetooth headset to an RPi 3 B using bluetoothctl and set it up as a virtual PCM using bluealsa.
I also managed to set up a USB sound card with a microphone - this was easy.
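For reference, these were roughly the steps (the address is a placeholder and the card index may differ on your setup):

Code: Select all

# One-time pairing of the headset
bluetoothctl
  power on
  agent on
  scan on
  pair XX:XX:XX:XX:XX:XX
  trust XX:XX:XX:XX:XX:XX
  connect XX:XX:XX:XX:XX:XX
  quit

# With the bluealsa daemon running, the headset shows up as an ALSA PCM, e.g.:
aplay -D bluealsa:SRV=org.bluealsa,DEV=XX:XX:XX:XX:XX:XX,PROFILE=a2dp test.wav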

I can pipe the incoming audio from the sound card back to the Bluetooth headphones like so:

Code: Select all

 $ arecord -D "hw:1,0" --format=S16_LE -B 10000 |aplay -D  bluealsa:SRV=org.bluealsa,DEV=XX:XX:XX:XX:XX:XX,PROFILE=a2dp
(where XX:XX:XX:XX:XX:XX is the address for the headphones)

However, there's about a 0.5-1 second delay, which is way too much.
I'm still figuring out how to minimize this.

With a wired setup - headphones plugged into the output of the USB sound card - the delay can be close to zero.

Code: Select all

 $ arecord -D "hw:1,0"  --format=S16_LE -B 10000 | aplay -D "hw:1,0"  --format=S16_LE -B 10000
-B is the buffer parameter in microseconds (so 10000 is 10 ms). In a wired setup this approach pretty much allows you to match the video latency.

At this point I'm trying to figure out whether the latency over Bluetooth is due to the Bluetooth protocol or whether it has something to do with my settings in bluealsa/ALSA. When watching a YouTube video on the RPi with the Bluetooth headset there's no noticeable lip-sync delay, so I'm guessing it has to do with the bluealsa pipe somehow.

To be continued.

Realizator
Site Admin
Posts: 900
Joined: Tue Apr 16, 2019 9:23 am

Re: Livestream: audio and scaling?

Post by Realizator »

FKED, I'm not a guru on the sound side, but to reduce latency I'd look into the buffer details and real-time priority features. For example, at the bottom of this page you can find some notes on low latency. They cover both buffer size and scheduling priority (the --sched flag), which can dramatically improve sound latency.
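Something like this could be a starting point (untested, the values are only examples - too small a buffer will cause overruns, and real-time priority via chrt may need sudo):

Code: Select all

# Smaller ALSA buffer/period (values in microseconds), both ends run with real-time priority
chrt -f 50 arecord -D hw:1,0 --format=S16_LE --rate=48000 --buffer-time=20000 --period-time=5000 \
  | chrt -f 50 aplay -D bluealsa:SRV=org.bluealsa,DEV=XX:XX:XX:XX:XX:XX,PROFILE=a2dp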
Eugene a.k.a. Realizator

Geraldol
Posts: 1
Joined: Sat Jan 04, 2020 10:23 am

Re: Livestream: audio and scaling?

Post by Geraldol »

Is it even possible to accomplish close to zero audio latency?

Realizator
Site Admin
Posts: 900
Joined: Tue Apr 16, 2019 9:23 am

Re: Livestream: audio and scaling?

Post by Realizator »

Geraldol wrote:
Mon Jan 13, 2020 11:51 am
Is it even possible to accomplish close to zero audio latency?
With the video we can have 120-150 ms latency at a 3 Mbit/s bitrate. The bandwidth needed for audio is 50-100 times smaller, so theoretically the latency can be tiny. It depends on the software and settings.
Eugene a.k.a. Realizator
