Motion Analysis in Video with OpenCV

Learn how to perform optical flow analysis on video using OpenCV. Follow our step-by-step guide with code examples to start exploring the world of motion analysis and video processing today.

Updated March 20, 2023



Welcome to this tutorial on motion analysis in video with OpenCV, one of the most widely used computer vision libraries. Motion analysis involves tracking the movement of objects in a video sequence and is an important task in many applications such as surveillance, traffic monitoring, and human-computer interaction.

In this tutorial, we will explore how to perform motion analysis in video with OpenCV. We will discuss the theory behind motion analysis and provide multiple code examples to illustrate the concept.

Theory

Motion analysis in video involves detecting and tracking moving objects. This can be achieved using various techniques such as background subtraction, optical flow, and feature tracking.

Background subtraction involves separating the moving objects from the static background. This can be done using methods such as Gaussian Mixture Models and Median Filtering. Optical flow, on the other hand, tracks the movement of pixels between consecutive frames in a video sequence. Feature tracking, as the name suggests, tracks specific features such as corners and edges in an image.
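
To make the median idea concrete, here is a rough sketch that estimates the background as the per-pixel median of a handful of frames sampled from the clip, then thresholds a frame's difference against that estimate. The sample count (25) and threshold (30) are arbitrary illustrative values, and the approach assumes a static camera and a seekable video file:

import cv2
import numpy as np

cap = cv2.VideoCapture('sample_video.mp4')

# Sample 25 frames spread evenly across the clip
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
frames = []
for idx in np.linspace(0, n_frames - 1, num=25, dtype=int):
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
    ret, frame = cap.read()
    if ret:
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

# The per-pixel median over time approximates the static background
background = np.median(np.stack(frames), axis=0).astype(np.uint8)

# Pixels that deviate strongly from the background estimate are foreground
cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
ret, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
diff = cv2.absdiff(gray, background)
_, fg_mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
cap.release()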

OpenCV provides a range of functions and algorithms to perform motion analysis in video. These include the BackgroundSubtractor classes (such as MOG2) for background subtraction, the calcOpticalFlowFarneback() and calcOpticalFlowPyrLK() functions for dense and sparse optical flow, and the goodFeaturesToTrack() function for finding features to track.

Now that we have a basic understanding of the theory, let’s move on to the code examples.

Code Examples

We will use Python for our examples, but the concept applies to other programming languages supported by OpenCV.
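
If OpenCV is not installed yet, the Python bindings are distributed on PyPI as the opencv-python package (or opencv-contrib-python if you also want the extra modules). You can verify the installation and check the version from Python:

import cv2
print(cv2.__version__)  # the examples below assume a 4.x release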

First, let’s start by importing the necessary libraries:

import cv2
import numpy as np

Next, let’s load a sample video file and read the first frame:

cap = cv2.VideoCapture('sample_video.mp4')
ret, frame = cap.read()
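
If the path is wrong or the codec is unsupported, cap.read() will quietly return ret=False, so a quick sanity check is worth adding:

if not cap.isOpened() or not ret:
    raise IOError("Could not open 'sample_video.mp4' or read its first frame")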

Background Subtraction

To perform background subtraction, we can use the following code:

fgbg = cv2.createBackgroundSubtractorMOG2()
fgmask = fgbg.apply(frame)

The createBackgroundSubtractorMOG2() function creates a background subtractor object, and the apply() method updates the background model and returns a foreground mask for the current frame. In the resulting mask fgmask, foreground pixels are white (255) and, with shadow detection enabled (the default), shadow pixels are marked in gray (127).
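
Applying the subtractor to a single frame is rarely enough, because the background model needs a number of frames to adapt. The sketch below shows one way to run it over the whole video; the constructor arguments shown are OpenCV's defaults, and the morphological opening step is an optional clean-up choice, not part of the original example:

fgbg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Update the background model and get the foreground mask for this frame
    fgmask = fgbg.apply(frame)

    # Morphological opening removes small specks of noise (optional)
    fgmask = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)

    cv2.imshow('Foreground Mask', fgmask)
    if cv2.waitKey(30) & 0xff == 27:  # press Esc to stop
        break

cap.release()
cv2.destroyAllWindows()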

Optical Flow

To perform optical flow, we can use the following code:

import cv2
import numpy as np

# Open the video file
cap = cv2.VideoCapture('sample_video.mp4')

# Get the first frame
ret, frame1 = cap.read()

# Convert the frame to grayscale
prvs = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)

while True:
    # Read the next frame
    ret, frame2 = cap.read()
    if not ret:
        break

    # Convert the frame to grayscale
    next_gray = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

    # Calculate dense optical flow with Farneback's algorithm
    flow = cv2.calcOpticalFlowFarneback(prvs, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    # Convert the optical flow to polar coordinates
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # Scale the magnitude of the optical flow between 0 and 255
    mag_scaled = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)

    # Map the flow angle to hue (OpenCV stores 8-bit hue values in the range 0-179)
    hue = ang * 180 / np.pi / 2

    # Build an HSV image: hue encodes direction, value encodes magnitude
    hsv = np.zeros_like(frame1)
    hsv[..., 0] = hue
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.convertScaleAbs(mag_scaled)

    # Convert the HSV image to BGR
    bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

    # Display the optical flow
    cv2.imshow('Optical Flow', bgr)

    # Wait for a key press
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

    # Set the current frame as the previous frame for the next iteration
    prvs = next_gray.copy()

# Release the video capture and close all windows
cap.release()
cv2.destroyAllWindows()

In this example, we first open the video file and read the first frame. We convert the first frame to grayscale so that it can serve as the previous frame for the dense Farneback optical flow calculation.

We then loop through the video frames and perform the following steps for each frame:

  1. Read the next frame.
  2. Convert the frame to grayscale.
  3. Calculate the optical flow using the calcOpticalFlowFarneback() function.
  4. Convert the optical flow to polar coordinates using the cartToPolar() function.
  5. Scale the magnitude of the optical flow to a range between 0 and 255 using the normalize() function.
  6. Map the flow angle to a hue value in OpenCV's 0-179 hue range.
  7. Build an HSV image from the hue and magnitude and convert it to BGR using the cvtColor() function.
  8. Display the optical flow.

Finally, we release the video capture using the release() method and close all windows using the destroyAllWindows() method.
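
The dense flow field is useful for more than visualization. For example, a simple motion detector can threshold the flow magnitude to get a binary mask of moving pixels. The helper below is a small sketch of that idea; the 2.0 pixel-per-frame threshold is just an illustrative value:

import cv2
import numpy as np

def motion_mask(prev_gray, next_gray, threshold=2.0):
    # Dense Farneback flow between two consecutive grayscale frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # Mark pixels that moved more than `threshold` pixels between the frames
    return (mag > threshold).astype(np.uint8) * 255

Calling motion_mask(prvs, next_gray) inside the loop above produces a white-on-black mask of moving regions, which can then be passed to cv2.findContours() to locate and box individual moving objects.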

Feature Tracking

To perform feature tracking, we can use the following code:

# Re-open the video and read the first frame
cap = cv2.VideoCapture('sample_video.mp4')
ret, frame = cap.read()

# Detect good features to track in the first frame
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
prevPts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=30)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nextPts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prevPts, None)

    goodNew = nextPts[status == 1]
    goodOld = prevPts[status == 1]

    H, _ = cv2.findHomography(goodOld, goodNew, cv2.RANSAC, 3.0)

    h, w = frame.shape[:2]
    pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(pts, H)

    img = cv2.polylines(frame, [np.int32(dst)], True, (0, 255, 0), 3, cv2.LINE_AA)

    cv2.imshow('Feature Tracking', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

    prev_gray = gray.copy()
    prevPts = goodNew.reshape(-1, 1, 2)

# Release the video capture and close all windows
cap.release()
cv2.destroyAllWindows()

In the above code, we first detect good features to track using the goodFeaturesToTrack() function. We then calculate the sparse optical flow using the calcOpticalFlowPyrLK() function, which returns the new point positions, their status, and the tracking error between the previous and current frames.

We keep only the points whose status is 1 (successfully tracked) and estimate a homography between the old and new point sets using the findHomography() function with RANSAC. The homography is then used to transform the corners of the frame, and the resulting quadrilateral is drawn with the polylines() function. Note that findHomography() needs at least four point pairs, so in a longer video you would typically re-detect features once too many tracks are lost.
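
Estimating a homography mainly captures global (camera) motion. If you instead want to see how each feature moves, you can draw the individual point tracks, which is the approach taken in OpenCV's own Lucas-Kanade tutorial. The sketch below would replace the homography and polylines block inside the loop; the mask image must be created once, before the loop, so the lines accumulate over time:

# Before the while loop: an empty image to accumulate the track lines
mask = np.zeros_like(frame)

# Inside the loop, after filtering goodNew and goodOld:
for new, old in zip(goodNew, goodOld):
    a, b = new.ravel()
    c, d = old.ravel()
    # Draw the motion segment and mark the current point position
    mask = cv2.line(mask, (int(a), int(b)), (int(c), int(d)), (0, 255, 0), 2)
    frame = cv2.circle(frame, (int(a), int(b)), 5, (0, 0, 255), -1)

img = cv2.add(frame, mask)
cv2.imshow('Feature Tracking', img)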

Conclusion

In conclusion, motion analysis in video is a powerful technique that can be used to detect and track moving objects in a video sequence. OpenCV provides a range of functions and algorithms to perform motion analysis, including background subtraction, optical flow, and feature tracking.

In this tutorial, we discussed the theory behind motion analysis and provided multiple code examples to illustrate the concept. We hope that this tutorial has been helpful and informative for beginners and those looking to explore the world of computer vision and video processing.

Feel free to explore the OpenCV documentation for more information on these techniques and other computer vision applications.