Realizator wrote: ↑Fri Apr 03, 2020 1:13 pm
Hi Fraserbaton,
While taking still images, the Pi works in a special mode, and high FPS is a problem here.
Another point is that raspividyuv can actually give you a high frame rate over a pipe, but Python is unable to process this stream fast enough.
Can you please tell me the background of your task?
1. What is your target FPS and resolution?
2. If we get this FPS/resolution, are you sure your Python code will be able to process it in real time?
3. Is real-time processing your aim? Or do you just need to record captured images with/without flash?
JFYI: Picamera has flash support, but just for the still port. Also, here is a discussion of using it with a stereoscopic setup.
Hi Realizator,
I am using the StereoPi to detect bright spots where the light from a multi-LED flash reflects particularly intensely from curved shiny surfaces; at the moment I am using an array of evenly spaced sewing pins as the target (please see the attached image). The two cameras on the StereoPi are spaced approximately 150 mm apart and angled inwards so that their image centres cross approximately 200 mm away.
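For context, the toe-in angle per camera follows from simple geometry (these are nominal figures, not measured values):

```python
import math

# Nominal rig geometry: cameras 150 mm apart, optical axes
# crossing roughly 200 mm in front of the rig.
baseline_mm = 150
crossing_distance_mm = 200

# Each camera sits half the baseline off the centre line, so the
# toe-in angle per camera is atan((baseline / 2) / crossing distance).
toe_in_deg = math.degrees(math.atan((baseline_mm / 2) / crossing_distance_mm))
print(round(toe_in_deg, 1))  # 20.6
```

So each camera is angled inwards by roughly 20 degrees.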
For each capture cycle I take two exposures, the first with the LEDs on and the second with the LEDs off. The reason for this is that I subtract the off image from the on image to reduce noise and make an image in which the bright reflections of the LEDs on the sewing pins are the only prominent feature remaining in the image.
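A quick note on the subtraction step, since uint8 arithmetic wraps around: I cast one operand to a signed type before subtracting, then clip back to the 0-255 range. A toy example of the same idea (pixel values made up for illustration):

```python
import numpy as np

# Toy 1 x 4 strips standing in for the Y-plane data. With plain uint8
# arithmetic, pixels where "off" is brighter than "on" would wrap
# around towards 255 instead of clamping to 0.
leds_on = np.array([200, 50, 120, 10], dtype=np.uint8)
leds_off = np.array([180, 60, 30, 10], dtype=np.uint8)

# Cast one operand to a signed type first, then clip back to uint8 range.
diff = np.subtract(leds_on, leds_off.astype(np.int16)).clip(0, 255).astype(np.uint8)
print(diff.tolist())  # [20, 0, 90, 0]
```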
My analysis is fairly lightweight: I take a horizontal intensity strip from the centre of the image (i.e. for a 1280 x 720 capture I take a 1280 x 1 strip from the 360th row). I then run a simple scipy.signal.find_peaks() on the intensity strip and plot the data. This process is looped and the graph is refreshed for every capture.
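For anyone curious, this is essentially the call I make, here on a synthetic strip with two artificial spikes standing in for pin reflections (the numbers are invented for illustration):

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic intensity strip: flat background with two bright "pin" spikes.
strip = np.zeros(100, dtype=np.uint8)
strip[25] = 120
strip[70] = 200

# Same style of call as in the analysis: threshold on height, and enforce
# a minimum spacing so neighbouring reflections are not double-counted.
peaks, _ = find_peaks(strip, height=20, distance=10)
print(peaks.tolist())  # [25, 70]
```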
1. I have a preference for resolution over frame rate, as this is the limiting factor for the precision of the final measurement. I would ideally aim for an image with a horizontal resolution of 1500 and perhaps 5-10 full capture cycles (10-20 exposures) per second, if possible.
2. I have tried my python code with and without the analysis and plotting operations and have found that they are fairly negligible with respect to the time it takes to simply call the camera.capture(output, 'yuv') command. I am therefore fairly confident that my analysis can deal with a much faster capture rate.
3. Real-time processing is my aim: although the sensor will only take measurements of stationary objects, it will need to be finely positioned by hand once it is fully housed, and a reasonable capture rate makes this process considerably easier. As above, 5 full cycles (10 exposures) a second would be more than enough; anything beyond this would be a bonus.
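On the timing in point 2: nothing fancy, I just average wall-clock time over repeated calls. A stripped-down sketch of the harness (the workload here is a dummy stand-in; on the Pi it would be lambda: camera.capture(output, 'yuv')):

```python
import time

def time_call(fn, n=20):
    """Average wall-clock time of n calls to fn."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) / n

# Dummy workload standing in for a capture; on the Pi you would pass
# the capture call itself.
avg = time_call(lambda: sum(range(10000)))
print(f"{avg * 1e3:.3f} ms per call")
```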
Please see my code below (it is a work in progress and is yet to be polished, so please bear with me).
Code:
#Standard python modules
import time
import picamera
import picamera.array
import numpy as np
import matplotlib.pyplot as plt
from gpiozero import LED
#Project specific modules
import scan
#Initialise the camera
camera = scan.initialise_camera()
#Begin capturing live scans and displaying them as an intensity plot
scan.plot_scan_live(camera)
This is the scan.py module:
Code:
#Scan module
from gpiozero import LED
import numpy as np
import time
import picamera
import matplotlib.pyplot as plt
from scipy import signal
######################
#Variable declarations
######################
#Define LED GPIO pins
red_led_0 = LED(14)
red_led_1 = LED(15)
red_led_2 = LED(18)
red_led_3 = LED(3)
red_led_4 = LED(4)
#Define scan resolution
h_resolution = 1280
v_resolution = 720
h_resolution_stereo = 2 * h_resolution
#For unencoded formats:
#Horizontal resolution is rounded to nearest multiple of 32
#Vertical resolution is rounded to nearest multiple of 16
#Define number of rows to be compiled into intensity strip
strip_rows = 1
#Define minimum peak height and width for detection
minimum_peak_height = 20
minimum_peak_width = 1
minimum_peak_distance = 10
#Define matplotlib style
plt.style.use('dark_background')
#Post processing toggle
post_processing_enable = True
#Use video port toggle
video_port_enable = False
######################
#Function definitions
######################
#Enable all Red LEDs
def enable_scan_leds():
    red_led_0.on()
    red_led_1.on()
    red_led_2.on()
    red_led_3.on()
    red_led_4.on()
#Disable all Red LEDs
def disable_scan_leds():
    red_led_0.off()
    red_led_1.off()
    red_led_2.off()
    red_led_3.off()
    red_led_4.off()
#Custom output for np.frombuffer method
class MyOutput(object):
    def __init__(self):
        self.scan_output = np.empty((strip_rows, h_resolution_stereo), dtype=np.uint8)
    def write(self, buf):
        # write will be called once for each frame of output. buf is a bytes
        # object containing the frame data in YUV420 format; we can construct a
        # numpy array on top of the Y plane of this data quite easily:
        y_data = np.frombuffer(buf, dtype=np.uint8, count=h_resolution_stereo * v_resolution).reshape((v_resolution, h_resolution_stereo))
        self.scan_output = y_data[int(v_resolution / 2):int(v_resolution / 2) + strip_rows, :h_resolution_stereo]
    def flush(self):
        # this will be called at the end of the recording; do whatever you
        # want here
        pass
#Initialise camera for nparray Method
def initialise_camera():
    camera = picamera.PiCamera(stereo_mode='side-by-side', stereo_decimate=True, resolution=(h_resolution_stereo, v_resolution))
    camera.hflip = True
    camera.vflip = True
    camera.shutter_speed = 30000
    camera.awb_mode = 'off'
    camera.exposure_mode = 'off'
    #time.sleep(2)
    return camera
#Grab scan from camera
def grab_scan_buffer(camera):
    #Create a custom output object to capture the Y-plane strip
    output = MyOutput()
    #Enable LEDs
    enable_scan_leds()
    try:
        camera.capture(output, 'yuv', use_video_port=video_port_enable)
    except IOError:
        pass
    scan_leds_on = output.scan_output
    #Disable LEDs
    disable_scan_leds()
    try:
        camera.capture(output, 'yuv', use_video_port=video_port_enable)
    except IOError:
        pass
    scan_leds_off = output.scan_output
    #Subtract the LEDs-off scan from the LEDs-on scan to reduce noise
    scan_output = np.subtract(scan_leds_on, scan_leds_off.astype(np.int16)).clip(0, 255).astype(np.uint8)
    return scan_output
#A function to display the left and right scans in top-bottom format. Left is top and right is bottom
def plot_scan_live(camera):
    #Plotting efficiency can be improved using the techniques at: https://bastibe.de/2013-05-30-speeding-up-matplotlib.html
    #Define pixel array for plotting purposes
    pixel_number = np.linspace(1, h_resolution, num=h_resolution)
    #Generate figure and axes and set size of figure
    fig, axs = plt.subplots(2, 1, squeeze=True, sharex=True)
    fig.set_size_inches(12, 4)
    #Initialise left and right lines with random data for the scan profile. Left is top and right is bottom, from the perspective of the sensor.
    scan_line_left, = axs[0].plot(pixel_number, np.random.rand(h_resolution), color='red')
    scan_line_right, = axs[1].plot(pixel_number, np.random.rand(h_resolution), color='red')
    peak_markers_left, = axs[0].plot(np.random.rand(1), np.random.rand(1), 'y|')
    peak_markers_right, = axs[1].plot(np.random.rand(1), np.random.rand(1), 'y|')
    #Set axis limits on the graphs
    axs[0].set_ylim(0, 265)
    axs[0].set_xlim(0, h_resolution)
    axs[1].set_ylim(0, 265)
    axs[1].set_xlim(0, h_resolution)
    #Show the figure (block=False disables the default blocking behaviour of plt.show())
    plt.show(block=False)
    while True:
        #Grab scan data from camera
        scan_output = np.transpose(grab_scan_buffer(camera))
        scan_output_left = scan_output[:h_resolution, 0]
        scan_output_right = scan_output[h_resolution:, 0]
        #Find peaks in scan output
        scan_peaks_left, _ = signal.find_peaks(scan_output_left.flatten(), height=minimum_peak_height, width=minimum_peak_width, distance=minimum_peak_distance)
        scan_peaks_right, _ = signal.find_peaks(scan_output_right.flatten(), height=minimum_peak_height, width=minimum_peak_width, distance=minimum_peak_distance)
        #Refresh data for scan lines
        scan_line_left.set_ydata(scan_output_left)
        scan_line_right.set_ydata(scan_output_right)
        #Refresh data for left and right peaks
        peak_markers_left.set_xdata(scan_peaks_left)
        peak_markers_left.set_ydata(np.full(scan_peaks_left.shape, 260))
        peak_markers_right.set_xdata(scan_peaks_right)
        peak_markers_right.set_ydata(np.full(scan_peaks_right.shape, 260))
        #Redraw the current figure with the new data
        fig.canvas.draw()
        fig.canvas.flush_events()
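Regarding the comment in plot_scan_live about speeding up matplotlib: the idea from that article is to cache the static background once and then redraw only the line artists each frame, rather than the whole figure. A rough sketch of the technique (using the Agg backend here so it runs headless; in the live setup you would keep the interactive backend):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
x = np.arange(1280)
# animated=True keeps the line out of ordinary draws so it can be blitted
line, = ax.plot(x, np.zeros_like(x), color="red", animated=True)
ax.set_ylim(0, 265)

fig.canvas.draw()
background = fig.canvas.copy_from_bbox(ax.bbox)  # cache static parts once

for _ in range(3):  # stand-in for the capture loop
    line.set_ydata(np.random.randint(0, 255, size=1280))
    fig.canvas.restore_region(background)  # repaint cached background
    ax.draw_artist(line)                   # redraw just the line artist
    fig.canvas.blit(ax.bbox)               # push only the updated region
```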
I am aware that it is inefficient to toggle all 5 GPIOs for the LEDs; I am working on a second version of the LED board in which all the LEDs will be switched by a single GPIO.
As always I really appreciate your input.