Optical Flow with OpenCV

Optical Flow is a technique for tracking the motion of image objects across a scene. Its output is a set of 2D displacement vectors, one per tracked point, collectively called the flow field. It has numerous applications and has been used extensively in self-driving cars, autonomous robots, and assistive devices for the visually impaired.

The key concept behind Optical Flow is to measure the displacement of image intensity through time and space. In other words, optical flow can be thought of as a function that tracks brightness across time and space. For example, given a point p0 at time t0 with intensity i0,

i0 = I(p0, t0)

where I is the intensity function of a given point at a given time. Now assume that by time t1 a motion has occurred which moves the original point to p1, where its intensity is i1. Then,

i1 = I(p1, t1)

Optical Flow assumes that the brightness of the point is conserved during this motion, that is,

i0 = i1

In other words, it assumes that the intensity of a point stays constant as it moves, so any change in the image is attributed to motion rather than to a change in brightness. This is commonly called the brightness constancy assumption.
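
As a side note, writing the point in image coordinates and expanding this assumption with a first-order Taylor series leads to the standard optical flow constraint that algorithms such as Lucas-Kanade solve. The sketch below is the usual textbook step rather than anything specific to the code in this post:

I(x + dx, y + dy, t + dt) = I(x, y, t)

Expanding the left-hand side to first order and dividing by dt gives

Ix·u + Iy·v + It = 0

where Ix, Iy and It are the image derivatives and u = dx/dt, v = dy/dt are the two components of the flow vector. This is one equation in two unknowns, which is why Lucas-Kanade adds the further assumption that the flow is constant within a small window around the point.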

Following a tutorial in the OpenCV documentation, I managed to put together my own Python code for a simple demonstration of Optical Flow using the Lucas-Kanade algorithm [1].

[1] https://docs.opencv.org/3.2.0/d7/d8b/tutorial_py_lucas_kanade.html
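
A minimal sketch of that kind of demo is shown below: corners are picked with cv2.goodFeaturesToTrack and then followed from frame to frame with cv2.calcOpticalFlowPyrLK. The parameter values and variable names here are illustrative assumptions rather than the exact ones from my script or from the tutorial.

import numpy as np
import cv2

# Parameters for corner detection and for Lucas-Kanade tracking (illustrative values).
feature_params = dict(maxCorners=100, qualityLevel=0.3, minDistance=7, blockSize=7)
lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

cap = cv2.VideoCapture(0)
ret, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)
p0 = cv2.goodFeaturesToTrack(old_gray, **feature_params)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if p0 is not None and len(p0) > 0:
        # Track the previous points into the current frame.
        p1, status, err = cv2.calcOpticalFlowPyrLK(old_gray, gray, p0, None, **lk_params)
        good_new = p1[status.flatten() == 1]
        good_old = p0[status.flatten() == 1]

        # Draw each flow vector from the old position to the new one.
        for new, old in zip(good_new, good_old):
            a, b = new.ravel().astype(int)
            c, d = old.ravel().astype(int)
            cv2.line(frame, (c, d), (a, b), (0, 255, 0), 2)
            cv2.circle(frame, (a, b), 3, (0, 0, 255), -1)

        p0 = good_new.reshape(-1, 1, 2)
    else:
        # Re-detect features if all points have been lost.
        p0 = cv2.goodFeaturesToTrack(gray, **feature_params)

    old_gray = gray
    cv2.imshow('optical flow', frame)

    # Press 'q' (key code 113) to quit.
    if cv2.waitKey(30) & 0xFF == 113:
        break

cap.release()
cv2.destroyAllWindows()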

Color Object Tracking with OpenCV

Color tracking is a fairly simple task in OpenCV, and you can find a variety of articles on the Internet describing different mechanisms and implementations. However, the common steps are the following.

  1. Convert the source frame into the HSV color space.
  2. Define the lower and upper bounds for the color you intend to track. In my example I track the blue color, so the lower and upper bounds are (100, 150, 0) and (140, 255, 255) respectively, the lower bound being the darker blue. (One way to derive such bounds is sketched just after this list.)
  3. Compute the image mask containing the pixels that fall within the lower and upper bounds mentioned above.
  4. Find contours in the mask image.
  5. Find the largest contour in terms of contour area, so that we can ignore trivial contours and focus only on the most dominant blue object in the scene.
  6. Approximate that contour with a polygon.
  7. Find the bounding rectangle of the above polygon.
  8. Draw the rectangle from the above step on the source image.
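
Regarding step 2, one quick way to arrive at such bounds (an assumed approach, not necessarily how the values above were chosen) is to convert a single pixel of the target BGR color to HSV and widen the hue range around the result:

import numpy as np
import cv2

# A pure blue pixel in BGR; cvtColor expects an image, hence the 1x1x3 shape.
blue_bgr = np.uint8([[[255, 0, 0]]])
blue_hsv = cv2.cvtColor(blue_bgr, cv2.COLOR_BGR2HSV)
print(blue_hsv)  # roughly [[[120 255 255]]]; the hue of pure blue is 120

# Widen the hue by about +/-20 and relax saturation/value to get workable bounds.
lowerblue = np.array([100, 150, 0])
upperblue = np.array([140, 255, 255])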

The code that follows the above series of steps is provided below.

# This is an exercise to track a blue color object from the camera.

import numpy as np
import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, source = cap.read()
    if not ret:
        print('Error in video capture')
        break

    # Step 1: convert the frame to the HSV color space.
    hsv = cv2.cvtColor(source, cv2.COLOR_BGR2HSV)

    # Step 2: lower and upper bounds for blue.
    lowerblue = np.array([100, 150, 0])
    upperblue = np.array([140, 255, 255])

    # Step 3: mask of pixels that fall within the bounds.
    mask = cv2.inRange(hsv, lowerblue, upperblue)

    # Step 4: find contours in the mask (OpenCV 3.x returns three values;
    # in OpenCV 4.x drop the first return value).
    im, contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    if contours:
        # Steps 5-7: largest contour, polygon approximation, bounding rectangle.
        maxContour = max(contours, key=cv2.contourArea)
        epsilon = 0.1 * cv2.arcLength(maxContour, True)
        approxpoly = cv2.approxPolyDP(maxContour, epsilon, True)
        x, y, w, h = cv2.boundingRect(approxpoly)

        # Step 8: draw the rectangle on the source image.
        cv2.rectangle(source, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow('original', source)

    # Press 'q' (key code 113) to quit.
    if cv2.waitKey(1) & 0xFF == 113:
        break

cap.release()
cv2.destroyAllWindows()

The demonstration is shown below.