Multiple Object Detection with Python and Raspberry Pi
This is the third entry in the Raspberry Pi and Python image processing tutorial series. In Parts I and II, the Raspberry Pi's picamera was introduced along with some edge detection routines. Those tutorials were the stepping stones needed to understand how the picamera works, how to handle images in Python, and how to identify individual objects. Click here to explore Part I (picamera introduction and simple image processing) and click here to explore Part II (edge detection using machine learning and clustering algorithms).
In this tutorial, the picamera and edge detection routines introduced in Parts I and II will be employed to identify individual objects, predict each object's color, and approximate each object's orientation (rotation). By the end of the tutorial, the user will be capable of dividing an image into multiple objects, determining each object's rotation, and drawing a bounding box around each object.
Picking up from the final code in Part II of the tutorial series (link to GitHub code here), we can start by analyzing the DBSCAN clustering algorithm provided by the scikit-learn toolbox. Using the code below, we can produce a scatter plot showing the object boundaries in an image:
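Below is a minimal sketch of this step, assuming the Part II routine leaves the detected edge points in an N x 2 array (the file name 'edge_points.npy' is a placeholder); the eps and min_samples parameters will need tuning for your image resolution and object spacing:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import DBSCAN

# edge_pts: assumed N x 2 array of (x, y) edge points from the
# Part II routine ('edge_points.npy' is a placeholder file name)
edge_pts = np.load('edge_points.npy')

# cluster the boundary points into objects; eps and min_samples
# need tuning for your image resolution and object spacing
labels = DBSCAN(eps=5.0, min_samples=10).fit(edge_pts).labels_

# plot each cluster (object) in its own color; label -1 is noise
for lbl in np.unique(labels):
    if lbl == -1:
        continue
    pts = edge_pts[labels == lbl]
    plt.scatter(pts[:, 0], pts[:, 1], s=2, label=f'object {lbl}')
plt.legend()
plt.gca().invert_yaxis()  # match image coordinates (origin top-left)
plt.show()
```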
The tools developed in Part II of this series, as the plot and code above show, allow us to analyze each individual object in the image frame. In the case above, four objects have been identified. What we want to do next is approximate the rotation of each object and then find the closest boundaries for drawing boxes around each object. Once the boundaries are approximated (after rotation), we can finally sum the contributions of each RGB channel and approximate the color of each object.
If we look closely at the clustering step, we can see that the computation time is driven largely by the number of scatter points passed to the algorithm. In the next section, I downscale the object boundaries so that we can approximate them more readily in real time.
There are many ways to scale an image; however, for our particular case, we want to use the most efficient method (for real-time analysis) while ensuring that the proper object boundaries are preserved (within some degree of error). The methods used can therefore be altered for specific applications, particularly with respect to object sizes in the image frame. The scipy.ndimage toolbox in Python facilitates image scaling quite nicely with a function called 'zoom.'
The ndimage 'zoom' function uses spline interpolation of the requested order. For a zoom factor less than 1, we can downscale our image to improve computational performance. The amount of scaling depends on the object size relative to the image frame; however, the impact is significant at any scale (and non-linear, since the scaling acts in two dimensions). Below is a simple implementation and the resulting scaled nearest-neighbor analysis. For nearest-neighbor learning in particular, scaling can reduce the computation by orders of magnitude: in my case, the DBSCAN machine learning step went from 1.95 s to 0.07 s, a 96% reduction! Depending on the application, the downscaled result may be acceptable or highly inaccurate, so test it first on each object size, shape, and frame of interest.
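Here is a minimal sketch of the downscaling step, assuming the binary edge map from Part II is available (the file name 'edge_image.npy' is a placeholder); a low spline order keeps the boundaries from smearing too badly:

```python
import time
import numpy as np
from scipy import ndimage
from sklearn.cluster import DBSCAN

# edge_img: assumed binary edge map from the Part II routine
# ('edge_image.npy' is a placeholder file name)
edge_img = np.load('edge_image.npy').astype(float)

scale = 0.25  # zoom factor < 1 downscales the image
# a low spline order (0 or 1) limits how much the boundaries blur
small = ndimage.zoom(edge_img, scale, order=1)

# re-extract the (row, col) boundary points from the downscaled map
pts = np.column_stack(np.nonzero(small > 0.5))

t0 = time.time()
labels = DBSCAN(eps=3.0, min_samples=5).fit(pts).labels_
print(f'DBSCAN on the scaled image: {time.time() - t0:.2f} s')
```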
There are several side effects that are important to note when scaling:
Object boundaries will blur more due to spline interpolation between pixels
Alterations may need to be made in DBSCAN parameters if lines become too close between objects
The histogram percent cutoff may need to be increased in order to catch more points between blurred segments
Now that we have an efficient tool for marking object boundaries, we can begin to approximate object orientations so we can accurately draw boundaries around each object. The method for determining the rotation of an object lies in the covariance of the scatter points and the eigenvector decomposition of that covariance. We can take the dominant eigenvector, compute the arctangent of the ratio of its components, and get an approximation of the orientation 'spread' of the scatter points. For a great resource on scatter covariance, orientation, and eigenvectors, see the tutorial "A geometric interpretation of the covariance matrix."
2-D scatter covariance can be computed using the following matrix:
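$$\Sigma = \begin{bmatrix} \sigma(x,x) & \sigma(x,y) \\ \sigma(y,x) & \sigma(y,y) \end{bmatrix}, \qquad \sigma(a,b) = \frac{1}{N-1}\sum_{i=1}^{N}\left(a_i-\bar{a}\right)\left(b_i-\bar{b}\right)$$

where x and y are the coordinates of the N scatter points and the bar denotes the mean.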
Now, if we use a single object and rotate it with respect to the image frame, we can see the importance of the covariance:
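As a small illustration, the sketch below rotates a synthetic elongated point cloud (standing in for one object's scatter, not actual picamera data) and prints its 2-D covariance at each angle; notice how the off-diagonal terms grow, vanish, and change sign as the object rotates:

```python
import numpy as np

# synthetic elongated point cloud standing in for one object's scatter
rng = np.random.default_rng(1)
obj = rng.normal(size=(500, 2)) * [3.0, 0.5]  # long in x, thin in y

for deg in [0, 45, 90, 135]:
    theta = np.deg2rad(deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # 2-D rotation matrix
    rotated = obj @ R.T  # rotate every point by theta
    print(f'{deg:3d} deg:\n{np.cov(rotated.T).round(2)}')
```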
We can see a few things in the covariance results:
Anti-symmetry and symmetry in the scatter
Geometric analog to the covariance vector
Relationship to rotation matrix
As a result of the observations above, we can use the eigenvectors of the covariance matrix to approximate the angle of rotation of the object scatter. Assuming the data matrix, D, represents the object with no rotation, the rotated object data matrix, D', can be unrotated using the following equation:
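$$D = R^{-1}D', \qquad R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

where θ is the rotation angle estimated from the covariance eigenvectors. Since R is orthogonal, its inverse is simply its transpose, which makes the unrotation inexpensive.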
This method will also be used to find the width and height of the objects for drawing boxes around object boundaries. The general inverse rotation multiplication can be done as shown in the code below:
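A minimal sketch of that multiplication, assuming 'obj_pts' holds one object's boundary points as an N x 2 array (the name is a placeholder):

```python
import numpy as np

# obj_pts: assumed N x 2 array of one object's boundary points
# (e.g., edge_pts[labels == 0] from the DBSCAN step above)
pts = obj_pts - obj_pts.mean(axis=0)  # center the object on its mean

# eigen-decomposition of the 2-D covariance gives the object's axes
eigvals, eigvecs = np.linalg.eigh(np.cov(pts.T))
major = eigvecs[:, np.argmax(eigvals)]   # dominant (longest) axis
theta = np.arctan2(major[1], major[0])   # approximate angle of rotation

# inverse rotation: R is orthogonal, so R^-1 = R^T
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
unrotated = pts @ R  # row-vector equivalent of D = R^-1 D'
```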
In the next section, I demonstrate how to draw boxes around the objects using the width and height of the rotated objects, then re-rotating the boxes to fit the original images.
Python's Matplotlib library also has a toolbox (patches.Rectangle) that allows shapes like ellipses and rectangles to be drawn. Two toolboxes are needed to start drawing object boundaries:
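These are presumably the two imports in question, matching the Rectangle and patch-collection usage described next:

```python
from matplotlib.patches import Rectangle
from matplotlib.collections import PatchCollection
```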
The basic usage for drawing a rectangle is as follows:
Draw the rectangle by identifying the lower-left corner coordinates, angle of rotation, and width and height of the object
Attach the rectangle to a patch collection variable
Add the patch collection variable to the plot axis
The basic process is shown below as an example:
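A minimal, self-contained example of those three steps (the corner, size, and angle values are arbitrary):

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from matplotlib.collections import PatchCollection

fig, ax = plt.subplots()

# step 1: create the rectangle from its lower-left corner (x, y),
# width, height, and angle of rotation (degrees)
rect = Rectangle((1.0, 1.0), width=3.0, height=1.5, angle=30.0)

# step 2: attach the rectangle to a patch collection
pc = PatchCollection([rect], facecolor='none', edgecolor='r', linewidth=2)

# step 3: add the patch collection to the plot axis
ax.add_collection(pc)
ax.set_xlim(0, 6)
ax.set_ylim(0, 6)
plt.show()
```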
Now, if we want to draw the rectangles around the objects, we go back to the rotation, calculate the width and height of each object, and then input the lower-left corner coordinates, width, height, and angle of rotation. This is all implemented in the code below and shown in the plot that follows:
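Below is a sketch of that full routine, assuming 'edge_pts' and 'labels' come from the DBSCAN step earlier; it estimates each object's angle, measures width and height in the unrotated frame, and draws the re-rotated box:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from matplotlib.collections import PatchCollection

fig, ax = plt.subplots()
rects = []

# edge_pts and labels are assumed from the DBSCAN step above
for lbl in np.unique(labels):
    if lbl == -1:
        continue  # skip DBSCAN noise points
    pts = edge_pts[labels == lbl].astype(float)
    mean = pts.mean(axis=0)
    centered = pts - mean

    # orientation from the covariance eigenvectors
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    major = eigvecs[:, np.argmax(eigvals)]
    theta = np.arctan2(major[1], major[0])

    # unrotate, then measure width and height in the object frame
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    unrot = centered @ R
    w, h = np.ptp(unrot[:, 0]), np.ptp(unrot[:, 1])

    # lower-left corner in the object frame, rotated back to image frame
    corner = R @ np.array([unrot[:, 0].min(), unrot[:, 1].min()]) + mean
    rects.append(Rectangle(corner, w, h, angle=np.rad2deg(theta)))

ax.scatter(edge_pts[:, 0], edge_pts[:, 1], s=2)
ax.add_collection(PatchCollection(rects, facecolor='none', edgecolor='r'))
ax.invert_yaxis()  # match image coordinates
plt.show()
```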
The routine can also be applied to images not taken by the Raspberry Pi picamera:
In this third installment of the Python and Raspberry Pi image processing tutorial series, I explored image scaling, rotation, and establishing object boundaries. First, I introduced the 'zoom' function for images to improve the processing speed required for object identification. Then, object orientation and the ideas of scatter and covariance were explored with the intention of quantifying the rotation of each object. Lastly, the object orientation was used to draw boxes around each object, giving us the ability to explore each individual object on its own. There are many uses for this type of tool in image analysis, facial recognition, machine vision, modern robotics, and more. As computer vision becomes more widespread, many of the tools discussed in this series will be used more extensively and more efficiently, enabling computers to interact with the world in ways similar to how humans experience it.
In the next and (likely) final entry, I will explore individual object recognition and real-time image analysis utilizing many of the tools introduced thus far.