Gstreamer Pipeline from Mac to Pi

S.L.P. image questions, stereoscopic video livestream and recording, stereoscopic photo capture etc.
phowe
Posts: 16
Joined: Sun Feb 28, 2021 4:40 am

Gstreamer Pipeline from Mac to Pi

Post by phowe »

Hello, I am continuing work on my VR display using a Raspberry Pi. After thorough testing, I have determined the Pi is not powerful enough to do the processing I need locally, so I have turned to Gstreamer. I want to stream the camera feed on the Pi to my Mac, process the camera feed on my Mac, and then stream the processed camera feed back to the Pi. I have gotten the first part to work without any issues, and I can display the processed camera feed on my Mac. However, I am not sure of the pipeline I need to use to send the processed camera feed back to the Pi to be viewed. I have tried searching the online forums, but I can only find examples for how to stream from a Pi to another Pi/to a computer. Any ideas on what the pipeline should be to stream from the Mac back to the Pi? Thank you in advance!

Code I am working with

Create video stream on raspberry pi:

Code: Select all

raspivid -fps 40 -h 1088 -w 1920 -n -t 0 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=192.168.1.124 port=5000
Receive stream and process using OpenCV:

Code: Select all

import cv2
import time
import numpy as np
from threading import Thread

class VideoGet():
    def __init__(self):
        # Pull the RTP/H.264 stream from the Pi over TCP and decode it via GStreamer
        self.cap = cv2.VideoCapture('tcpclientsrc host=192.168.1.124 port=5000 ! gdpdepay ! rtph264depay ! \
        decodebin ! videoconvert ! appsink', cv2.CAP_GSTREAMER)
        self.grabbed, self.frame = self.cap.read()
        self.stopped = False

    def start(self):
        Thread(target=self.get, args=()).start()
        return self

    def get(self):
        while not self.stopped:
            if not self.grabbed:
                self.stop()
            else:
                self.grabbed, self.frame = self.cap.read()


    def stop(self):
        self.stopped = True

def BarrelDistortion():
    # define the distortion matrices here
    pass  # placeholder so the function body is valid Python

def main():
    cap = VideoGet().start()
    gst_out = "appsrc ! videoconvert ! queue ! x264enc ! queue ! rtph264pay ! tcpserversink host=192.168.1.250 port=6000"
    out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, 40, (1920, 1088), True)

    frames=[]
    start = time.time()

    BarrelDistortion()

    while True:
        frame = cap.frame

        # apply the distortion to the frame, calling the processed frame dst
        dst = frame  # placeholder until the distortion step above is implemented

        out.write(dst)
        cv2.imshow("Test", dst)
        frames.append(dst)

        if cv2.waitKey(1) == 27:
            break

    cap.stop()
    out.release()
    cv2.destroyAllWindows()
    stop=time.time()
    print("FPS: ", str(len(frames) / (stop-start)))

if __name__ == "__main__":
    main()
Attempt to view video stream on the Pi:

Code: Select all

gst-launch-1.0 -v tcpclientsrc host=192.168.1.250 port=6000 ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false
Error it throws:

Code: Select all

ERROR: from element /GstPipeline:pipeline0/GstTCPClientSrc:tcpclientsrc0: 
Internal data stream error. 
Additional debug info: 
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstTCPClientSrc:tcpclientsrc0: 
streaming stopped, reason error (-5)
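
One difference I notice is that my Pi-to-Mac pipeline wraps the stream in GDP framing (gdppay/gdpdepay) around the TCP link, but the return path above does not - could the missing framing (and therefore missing caps on the Pi side) be the problem? A sketch of what I think a matched return pair would look like, reusing the same addresses and ports as above (untested):

Code: Select all

# Mac side (the pipeline string passed to cv2.VideoWriter): add GDP framing after the payloader
appsrc ! videoconvert ! queue ! x264enc tune=zerolatency ! queue ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=192.168.1.250 port=6000

# Pi side: strip the GDP framing before depayloading
gst-launch-1.0 -v tcpclientsrc host=192.168.1.250 port=6000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false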

Realizator
Site Admin
Posts: 900
Joined: Tue Apr 16, 2019 9:23 am

Re: Gstreamer Pipeline from Mac to Pi

Post by Realizator »

Hi phowe,

As a rule, embedded systems do the outgoing streaming, so your use case is a bit unusual.

Also, Python is not the best choice for video processing (especially at high resolution, even for simple things like barrel distortion).

I guess you also need low latency (it is a VR display). In that case TCP transport is not the best option, although it should work on a local network with enough bandwidth. Since your chain is "capture / encode / stream out / receive / decode / undistort / encode / stream back / receive / decode / show", you will add at least 0.3-0.5 seconds of extra delay (more like 1-1.5 seconds in reality).

If your main aim is HDMI output of the Pi camera feed with the barrel distortion handled onboard, the best way is to do the post-processing with OpenGL (shaders) on the Pi itself. In that case all the heavy parts are done by the Pi's GPU/VPU, and it works fine (but requires some programming skills).
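
To give an idea of what such a shader looks like, here is a minimal fragment-shader sketch of a barrel distortion (the coefficient k and all the names are placeholders, not code from any particular project):

Code: Select all

// Minimal OpenGL ES 2.0 fragment shader sketch: barrel-distort the camera texture.
// The coefficient k and the uniform/varying names are placeholders.
precision mediump float;

uniform sampler2D uCamTex;   // camera frame
varying vec2 vUV;            // texture coordinate from the vertex shader
const float k = 0.25;        // distortion strength, tuned per lens

void main()
{
    vec2 c = vUV - 0.5;                      // center the coordinates
    float r2 = dot(c, c);                    // squared radius from the center
    vec2 warped = c * (1.0 + k * r2) + 0.5;  // simple radial (barrel) remap
    gl_FragColor = texture2D(uCamTex, warped);
}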
Eugene a.k.a. Realizator

phowe
Posts: 16
Joined: Sun Feb 28, 2021 4:40 am

Re: Gstreamer Pipeline from Mac to Pi

Post by phowe »

Hi Realizator,

Thanks so much for the response! I'll take it that you don't know of a proper pipeline for streaming back to the Pi, or why the current pipeline throws that error.

I ported the code to C++, but there was barely any performance gain, since the Python OpenCV bindings are just a thin wrapper around the C++ library anyway. I also tried using the UDP protocol, but it was not working on my Wifi at home and I couldn't figure out why. I will try again now that I've moved back to college and have access to a different Wifi network. The latency isn't ideal, but it would at least allow me to stream 1080p at 30fps with barrel distortion applied, something I am nowhere near achieving natively on the Pi.
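
For anyone trying the same, a typical RTP-over-UDP pair looks roughly like this (a sketch, not exactly what I ran; addresses are placeholders, and the receiver needs explicit caps because UDP carries no stream negotiation):

Code: Select all

# Mac side (VideoWriter pipeline string): push RTP/H.264 to the Pi's address
appsrc ! videoconvert ! x264enc tune=zerolatency ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.1.250 port=6000

# Pi side: explicit RTP caps, then depayload and decode
gst-launch-1.0 -v udpsrc port=6000 caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false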

The main aim is to apply barrel distortion to the camera feed while maintaining 1080p at 30fps for the purposes of using it as a VR display. I would be open to using the OpenGL shaders to apply the distortion, but I cannot find any up-to-date examples on how to apply shaders. Most are a few years old and thus the dependencies and file locations have changed with the new versions of Raspbian OS. If you have any links to current examples, could you please attach them?

Also, do you know if the GPU is powerful enough to apply a distortion at 1080p and maintain 30fps? Most examples I have found have been downscaled to 640x480, which is well below what I am seeking. If it isn't, I don't see much point in trying the OpenGL shaders, and will instead fall back to Gstreamer or to not applying a distortion at all.

On a sidebar, how much do you know about the new camera API, libcamera?

From my research, it appears to be based on the CPU rather than the GPU, which is what MMAL, Picamera, and the raspistill and raspivid applications were built on. Performance looks to be very similar thanks to hardware acceleration, but I have yet to test it myself. I'm wondering if using libcamera would allow for a performance boost with OpenCV, since the data is already in the CPU and doesn't need to be copied over from the GPU. However, I don't know if this would help much since I know the arm CPU is the bottleneck with image processing as is and likely can't process 1080p regardless of if the data needs to be copied or not.

Realizator
Site Admin
Posts: 900
Joined: Tue Apr 16, 2019 9:23 am

Re: Gstreamer Pipeline from Mac to Pi

Post by Realizator »

Hello phowe,
phowe wrote:
Tue Jan 11, 2022 9:14 pm
I also tried using the UDP protocol, but it was not working on my Wifi at home and I couldn't figure out why.
This is the usual network magic to figure out. You should be on a single network (not mixed with other sub-networks or bridges), check the "WiFi client isolation" setting on your router, etc. - there are too many potential issues to debug remotely :)
phowe wrote:
Tue Jan 11, 2022 9:14 pm
The main aim is to apply barrel distortion to the camera feed while maintaining 1080p at 30fps for the purposes of using it as a VR display. I would be open to using the OpenGL shaders to apply the distortion, but I cannot find any up-to-date examples on how to apply shaders. Most are a few years old and thus the dependencies and file locations have changed with the new versions of Raspbian OS. If you have any links to current examples, could you please attach them?
The best article I have found on digging out OpenGL performance on the Pi is this one. It has a lot of links to performance-related investigations.
phowe wrote:
Tue Jan 11, 2022 9:14 pm
Also, do you know if the GPU is powerful enough to apply a distortion at 1080p and maintain 30fps?
You can take a look at the article I referenced above. In our own VR implementation we need a lot of image processing (distortion/undistortion, scaling, hemisphere-to-sphere texturing, FOV adjustment, etc.). It's hard to write all of that in native code, so we used a "heavy" framework (Unity). As a result, our current approach is to do all of that on the receiver's side (i.e. processed by the VR headset, which in our case is currently an Oculus Go/Q1/Q2). As you can guess, most mobile VR headsets have an advanced VPU for exactly this kind of work. On the Pi side we keep the workflow as lean as possible (video capture/encoding only, no GUI) to get the most out of that part.

phowe wrote:
Tue Jan 11, 2022 9:14 pm
On a sidebar, how much do you know about the new camera API, libcamera?
I haven't dug deep enough, since libcamera has no native stereoscopic support at the moment, which makes it of little use to most "stereoscopy beginners". For advanced users it's fine (you can access multiple cameras from different processes and use hardware sync between the cameras).
phowe wrote:
Tue Jan 11, 2022 9:14 pm
However, I don't know if this would help much since I know the arm CPU is the bottleneck with image processing as is and likely can't process 1080p regardless of if the data needs to be copied or not.
Well... maybe a highly optimized approach can work for you. You see, the RPi VPU has a dedicated ISP that is extremely optimized for certain operations. By "optimization" I mean at the hardware level (including memory-optimized operations with no extra copying). One of these ISP features is "lens correction", used to fix the captured image in real time. It can correct a lot of things (both color-related and distortion-related). I'm not sure it can handle extreme distortion values, but you can play with it. You can use this post as an entry point into that area :).
Eugene a.k.a. Realizator

phowe
Posts: 16
Joined: Sun Feb 28, 2021 4:40 am

Re: Gstreamer Pipeline from Mac to Pi

Post by phowe »

Hey Realizator,

I wanted to give you an update on my progress. I tried using gstreamer in my college dorm and the latency was horrible due to the lower internet speed, so I have abandoned that method entirely :lol: . Instead I have turned to OpenGL and the GPU, where I have found some early success. After a ton of Google searches and digging through various forums, I found a GitHub repo that actually compiles on Buster and feeds the camera straight into the GPU, with both an OpenGL and a QPU backend. I have been messaging back and forth with the author of the repo, and he has been extremely responsive and helpful.

Early tests indicate that this may be a promising route, as I was able to achieve 12fps at 1080p while doing the slowest distortion method possible, fragment-based. Currently, I am attempting to implement a mesh based or vertex based distortion, which should improve performance significantly. If I cannot get this to work, I will turn to the QPU to see if I can get better performance out of that.
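
For context, my understanding of the mesh/vertex approach is that the per-pixel math moves into a precomputed grid: each vertex of a coarse mesh stores an already-distorted texture coordinate, and the fragment shader just does a plain texture lookup while the hardware interpolates in between. A rough sketch of building such a grid (plain C; the names and the coefficient k are placeholders, not code from the repo):

Code: Select all

// Sketch: fill a (gw+1) x (gh+1) grid whose UVs are pre-warped with a simple
// radial (barrel) model. k is a placeholder distortion coefficient.
typedef struct { float x, y, u, v; } Vert;

static void buildDistortionMesh(Vert *verts, int gw, int gh, float k)
{
    for (int j = 0; j <= gh; j++) {
        for (int i = 0; i <= gw; i++) {
            float u = (float)i / gw, v = (float)j / gh;   // undistorted UV in [0,1]
            float cx = u - 0.5f, cy = v - 0.5f;           // centered coordinates
            float s  = 1.0f + k * (cx * cx + cy * cy);    // radial scale factor
            Vert *p = &verts[j * (gw + 1) + i];
            p->x = u * 2.0f - 1.0f;                       // clip-space position
            p->y = v * 2.0f - 1.0f;
            p->u = cx * s + 0.5f;                         // pre-warped texture coord
            p->v = cy * s + 0.5f;
        }
    }
}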

I'm really hoping that I can squeeze 30fps out of the GPU, since it would allow for all the necessary VR processing to occur on the Pi, which is something I have yet to see achieved anywhere else. I will keep you updated as I continue to try things out and communicate with the repo author.

Here is a link to my fork of the repo, in case you or anyone else wants to try it out themselves without having to write everything from scratch. It includes a few changes to make the code work correctly after compiling.
https://github.com/peytonicmaster6/VC4CV

Here is a link to the original repo:
https://github.com/Seneral/VC4CV

Realizator
Site Admin
Posts: 900
Joined: Tue Apr 16, 2019 9:23 am

Re: Gstreamer Pipeline from Mac to Pi

Post by Realizator »

Hi phowe,

Wow, this is quite the rabbit hole! But, to tell the truth, if it works it is the fastest solution possible, as it uses the most powerful (and most underused) part of the Pi - the QPU.

I see only one potential issue here. A quick look at the code shows that it relies on the MMAL stack, which is about to be deprecated (and is already gone in Bullseye). Anyway, if it works, this will be great! I hope the RPF will expose the same access to the QPU in libcamera, so we can repeat this with the latest software.
Eugene a.k.a. Realizator

phowe
Posts: 16
Joined: Sun Feb 28, 2021 4:40 am

Re: Gstreamer Pipeline from Mac to Pi

Post by phowe »

Hi Realizator,

Things are looking rather promising: I have successfully implemented vertex-based distortion and am able to get 30fps at 1080p, with about 300ms of latency. There might be a way to reduce the latency later on, but I am looking to push forward with stereo vision and text annotation before trying to lower it. However, I have run into an issue that is holding things up, and it is outside the expertise of the author of the repo.

The program was built to use one camera and display it via OpenGL, and I would like to display both cameras on the screen and apply distortion via a shader to both. The issue I am running into is changing which camera is being accessed. The initial code didn't include any control over this, since the system it was written for could only use one camera. Looking through the source code of raspivid, I thought I had found the answer for how to change the camera number, but when I implemented it in the code, it had absolutely no effect on which camera is selected. The code compiles without errors; it just doesn't actually do anything.

I added these lines to the gcs.c file (see the repository I linked earlier), which controls the camera parameters; 1 should select the second camera and 0 the first.

Code: Select all

MMAL_PARAMETER_INT32_T camera_num = {{MMAL_PARAMETER_CAMERA_NUM, sizeof(camera_num)}, 1};
mmal_port_parameter_set(gcs->camera->control, &camera_num.hdr);
I'm not sure what the issue is, because I was able to add a brightness control parameter (which wasn't in the initial repository) and it worked flawlessly. I based that code on the raspivid source as well:

Code: Select all

MMAL_RATIONAL_T value = {60, 100};
mmal_port_parameter_set_rational(gcs->camera->control, MMAL_PARAMETER_BRIGHTNESS, value);	
Do you have any ideas?

Realizator
Site Admin
Posts: 900
Joined: Tue Apr 16, 2019 9:23 am

Re: Gstreamer Pipeline from Mac to Pi

Post by Realizator »

Hi Phowe,

Wow, congrats on your implementation success!

As for the two cameras - I have a few things for you to check:

1. Please check that you are using the correct dt-blob and that your system can see both cameras via vcgencmd get_camera.
2. I took a look at your code here. Choosing the camera is not the only step - after that you should initialize the other parameters for the camera you chose. I also see that you are not checking whether the camera number assignment succeeded (as RaspiVid.c does). At your line 111 you choose the camera number, but you have already set the brightness by that point, for example. Please double-check the initialization sequence.
Eugene a.k.a. Realizator

phowe
Posts: 16
Joined: Sun Feb 28, 2021 4:40 am

Re: Gstreamer Pipeline from Mac to Pi

Post by phowe »

Hey Realizator,

Turns out it was a really simple fix: I had the code in the wrong place. I was trying to change the camera number after I had already enabled the camera. Putting the lines that change the camera number before enabling the camera worked like a charm, and I can now select which camera is being accessed! The next step is to get both cameras running and distorting at the same time, then to add text annotation. I'll keep you updated as I progress!
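
For anyone else who hits this, the ordering that worked for me looks roughly like this (a sketch based on the raspivid initialization code; error handling and the rest of the setup are omitted, and the gcs-specific wrappers are left out):

Code: Select all

#include "interface/mmal/mmal.h"
#include "interface/mmal/util/mmal_default_components.h"

// Sketch of the order that worked: select the camera BEFORE enabling the component.
MMAL_COMPONENT_T *camera = NULL;
mmal_component_create(MMAL_COMPONENT_DEFAULT_CAMERA, &camera);

// Select camera 1 (the second camera) while the component is still disabled
MMAL_PARAMETER_INT32_T camera_num =
    {{MMAL_PARAMETER_CAMERA_NUM, sizeof(camera_num)}, 1};
mmal_port_parameter_set(camera->control, &camera_num.hdr);

// ... set resolution, framerate and the other control parameters here ...

// Only now enable the camera component
mmal_component_enable(camera);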

phowe
Posts: 16
Joined: Sun Feb 28, 2021 4:40 am

Re: Gstreamer Pipeline from Mac to Pi

Post by phowe »

Now that I am able to select which camera is being accessed, I am trying to get both cameras running at the same time to produce stereoscopic vision. Unfortunately, I am running into an issue when trying to display both cameras at once. By reworking the glViewport calls, I was able to get two copies of the video feed displayed, each taking up 960x1080. Now I just need one of the viewports to show the feed from camera 1 and the other to show the feed from camera 2.
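
Roughly, the side-by-side split looks like this (a sketch; drawCameraQuad and the texture handles are placeholders for whatever the repo actually uses to draw the camera frame):

Code: Select all

// Sketch of the side-by-side split on a 1920x1080 screen: one viewport per eye.
// drawCameraQuad() and the texture handles are placeholders, not names from the repo.
glViewport(0, 0, 960, 1080);      // left half of the screen
drawCameraQuad(camTexLeft);       // draw the (distorted) feed from camera 0

glViewport(960, 0, 960, 1080);    // right half of the screen
drawCameraQuad(camTexRight);      // draw the (distorted) feed from camera 1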

I figured it would be similar to duplicating the glViewport call, so I created a second instance with camGL_create and changed the camera number parameter to camera 1. I assumed that would be the only change needed, since switching which camera is accessed only requires changing the MMAL camera-number parameter. However, I am getting the following error messages:

Code: Select all

mmal_camera_component_enable: failed to enable component: ENOSPC
gcs_create: Failed to enable camera: ENOSPC
It would appear that either the camera number change is not taking effect (which I don't think is the case, because a print statement after changing the camera number shows it was indeed changed), or that I am missing something else needed to differentiate the cameras, since the error suggests I am trying to access the same camera stream twice. Could it be related to the MMAL camera ports?

Realizator
Site Admin
Posts: 900
Joined: Tue Apr 16, 2019 9:23 am

Re: Gstreamer Pipeline from Mac to Pi

Post by Realizator »

Hey Phowe,
In 90% of cases an ENOSPC error means a lack of GPU memory.

What is the "gpu_mem" value in your config.txt?
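
For reference, it is set in /boot/config.txt and takes effect after a reboot, e.g.:

Code: Select all

# /boot/config.txt - memory reserved for the GPU/VPU, in megabytes
gpu_mem=256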
Eugene a.k.a. Realizator

Realizator
Site Admin
Posts: 900
Joined: Tue Apr 16, 2019 9:23 am

Re: Gstreamer Pipeline from Mac to Pi

Post by Realizator »

P.S. Phowe, your experiments remind me of our old idea of creating an equirectangular projection from a fisheye image. I guess this is theoretically possible...
Eugene a.k.a. Realizator

phowe
Posts: 16
Joined: Sun Feb 28, 2021 4:40 am

Re: Gstreamer Pipeline from Mac to Pi

Post by phowe »

It was in fact a GPU memory error. I originally had gpu_mem at 128 MB, and as soon as I upped it to 256 MB, the ENOSPC errors went away. I wish my Google searches on the error had pointed to a memory issue, as it would've saved me a lot of troubleshooting time, but at least now I know to always check the simple things first :lol:

I also figured out how to display each camera via OpenGL, and I now have stereo vision running with barrel distortion applied to each camera at 1080p and about 26fps :D ! I am now adding text annotation so I can overlay sensor data onto the video feed, and then I am going to see if I can improve performance by about 15% to reach 30fps. I may have to look into moving the code from OpenGL to the QPU, but I would rather not if I can squeeze the extra 15% out of OpenGL.

Realizator
Site Admin
Posts: 900
Joined: Tue Apr 16, 2019 9:23 am

Re: Gstreamer Pipeline from Mac to Pi

Post by Realizator »

Hey phowe,

I'm glad it works now! :-)

Btw I wrote you an email. Please check your inbox.
Eugene a.k.a. Realizator

phowe
Posts: 16
Joined: Sun Feb 28, 2021 4:40 am

Re: Gstreamer Pipeline from Mac to Pi

Post by phowe »

Hey Realizator,

I figured out how to get the fps up to 30: it was as simple as forcing the Pi into turbo mode, so that the CPU runs at 1.2 GHz and the GPU at 400 MHz. I am now getting a stable 32fps with text annotation and distortion applied.
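
For reference, the usual way to force turbo mode is a single line in /boot/config.txt (it pins the CPU/GPU clocks at their maximums instead of scaling on demand):

Code: Select all

# /boot/config.txt - keep the CPU and GPU at their maximum clocks
force_turbo=1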

The last step for this project is just to integrate reading data from serial and splitting it up to display on each camera!

P.S. I read and responded to your email!
