Image Stitching in OpenCV

This tutorial will guide you through the entire process of image stitching in OpenCV, from loading the input images to saving the finished panorama.

Updated March 25, 2023


Welcome to this comprehensive tutorial on image stitching with OpenCV! Whether you’re working on creating panoramic images, combining multiple photos, or developing a computer vision application that requires image stitching, this tutorial will guide you through the entire process. Let’s dive in!

What is Image Stitching?

Image stitching is the process of combining multiple overlapping images to create a seamless, high-resolution output image. This technique is commonly used to create panoramic images, virtual tours, and even some medical imaging applications.

Image stitching involves several steps:

  1. Feature detection: Identifying and extracting unique features (e.g., corners, edges) from each input image.
  2. Feature matching: Finding correspondences between features in the overlapping regions of the input images.
  3. Homography estimation: Estimating the transformation (e.g., rotation, scaling, translation) that aligns the input images.
  4. Warping: Applying the estimated transformation to the input images.
  5. Blending: Combining the warped images into a single seamless output image.
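
Before building this pipeline by hand, it’s worth knowing that OpenCV also ships a high-level Stitcher class that wraps all of these steps. Here’s a minimal sketch (the image paths are placeholders for your own overlapping photos):

import cv2

images = [cv2.imread('path/to/image1.jpg'), cv2.imread('path/to/image2.jpg')]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite('panorama.jpg', pano)
else:
    print(f'Stitching failed with status code {status}')

The rest of this tutorial builds the pipeline manually, so you can see, and tweak, what happens at each step.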

So let’s jump in and make this work!

How to Perform Image Stitching with OpenCV: A Step-by-Step Guide

Now that we have a basic understanding of image stitching, let’s see how to perform it using OpenCV. We’ll be using Python for our examples, but you can also use the OpenCV C++ API.

Step 1: Install OpenCV and Other Dependencies

First, let’s install OpenCV and NumPy. Note that opencv-python already includes the GUI functions we’ll use later (such as cv2.imshow), so you don’t need the headless variant:

pip install opencv-python numpy
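
To confirm the installation worked, you can print the OpenCV version from your terminal:

python -c "import cv2; print(cv2.__version__)"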

Step 2: Load Input Images

Let’s start by loading the input images using OpenCV:

import cv2
import numpy as np

img1 = cv2.imread('path/to/image1.jpg')
img2 = cv2.imread('path/to/image2.jpg')

if img1 is None or img2 is None:
    raise FileNotFoundError('Could not read one or both input images')

Step 3: Detect and Match Features

Next, we’ll detect and match features between the input images using OpenCV’s built-in ORB feature detector and BFMatcher:

def detect_and_match_features(img1, img2):
    # ORB is a fast, patent-free detector that produces binary descriptors
    orb = cv2.ORB_create()
    keypoints1, descriptors1 = orb.detectAndCompute(img1, None)
    keypoints2, descriptors2 = orb.detectAndCompute(img2, None)

    # Hamming distance suits ORB's binary descriptors; crossCheck keeps only mutual best matches
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(descriptors1, descriptors2)
    matches = sorted(matches, key=lambda x: x.distance)  # strongest (lowest-distance) matches first

    return keypoints1, keypoints2, matches

keypoints1, keypoints2, matches = detect_and_match_features(img1, img2)
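
Before estimating the homography, it’s often worth checking the matches visually. Here’s a quick, optional sketch using cv2.drawMatches (the cutoff of 30 matches is an arbitrary choice):

# Draw the 30 strongest matches to verify that the images really overlap
match_vis = cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches[:30], None)
cv2.imshow('Matches', match_vis)
cv2.waitKey(0)
cv2.destroyAllWindows()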

Step 4: Estimate Homography

Now, we’ll estimate the homography matrix that maps the second image onto the first image’s plane. Note the direction: we use the second image’s keypoints as the source and the first image’s as the destination, so the resulting matrix can be used directly to warp the second image in the next step:

def estimate_homography(keypoints1, keypoints2, matches, threshold=3):
    # queryIdx indexes keypoints1 (img1), trainIdx indexes keypoints2 (img2)
    src_points = np.float32([keypoints2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst_points = np.float32([keypoints1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches; `mask` marks the inliers it kept
    H, mask = cv2.findHomography(src_points, dst_points, cv2.RANSAC, threshold)
    return H, mask

H, mask = estimate_homography(keypoints1, keypoints2, matches)
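
cv2.findHomography can fail or return a weak estimate when there are too few good matches, so it’s worth adding a quick sanity check. The snippet below is just one way to do it (the error message and what you do on failure are up to you):

if H is None:
    raise RuntimeError('Homography estimation failed; check that the images overlap')

inlier_ratio = float(mask.sum()) / len(mask)
print(f'RANSAC kept {int(mask.sum())} of {len(mask)} matches (inlier ratio {inlier_ratio:.2f})')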

Step 5: Warp Images

With the estimated homography, we can now warp the second image onto a canvas large enough to hold both images, and place the first image on a matching canvas so the two can be blended in the next step:

def warp_images(img1, img2, H):
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]

    # Project the corners of img2 into img1's frame to find the required canvas size
    corners1 = np.float32([[0, 0], [0, h1], [w1, h1], [w1, 0]]).reshape(-1, 1, 2)
    corners2 = np.float32([[0, 0], [0, h2], [w2, h2], [w2, 0]]).reshape(-1, 1, 2)
    warped_corners2 = cv2.perspectiveTransform(corners2, H)

    corners = np.concatenate((corners1, warped_corners2), axis=0)
    [xmin, ymin] = np.int32(corners.min(axis=0).ravel() - 0.5)
    [xmax, ymax] = np.int32(corners.max(axis=0).ravel() + 0.5)

    # Translation that shifts everything into positive coordinates
    t = [-xmin, -ymin]
    Ht = np.array([[1, 0, t[0]], [0, 1, t[1]], [0, 0, 1]], dtype=np.float64)

    # Warp img2 onto the canvas, and place img1 on a canvas of the same size
    warped_img2 = cv2.warpPerspective(img2, Ht @ H, (xmax - xmin, ymax - ymin))
    canvas1 = np.zeros_like(warped_img2)
    canvas1[t[1]:h1 + t[1], t[0]:w1 + t[0]] = img1

    return warped_img2, canvas1

warped_img2, canvas1 = warp_images(img1, img2, H)

In this step, we first compute the size of the stitched canvas by projecting the corners of the second image into the first image’s plane with cv2.perspectiveTransform(). We then warp the second image onto that canvas using cv2.warpPerspective() and place the first image, shifted by the same translation, on a canvas of identical size so the two can be blended.

Step 6: Blend Images

Finally, we’ll blend the warped images to create a seamless output image. For simplicity, we’ll use a basic blending technique in this tutorial. However, more advanced techniques like multi-band blending can also be employed for better results:

def blend_images(img1, img2):
    # Simple binary blend: keep img1 wherever it has content, fall back to img2 elsewhere
    mask = np.any(img1 != 0, axis=2, keepdims=True).astype(np.float32)
    blended_img = img1 * mask + img2 * (1 - mask)
    return blended_img.astype(np.uint8)

output_img = blend_images(canvas1, warped_img2)
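
If the hard seam from the binary mask is visible, a slightly smoother option is feathering: weight each canvas by the distance to the edge of its valid region so the transition fades gradually. Here’s a minimal sketch (the function name and the 3x3 distance-transform mask size are illustrative choices, not part of the pipeline above):

def feather_blend(img1, img2):
    # Weight each pixel by its distance to the border of that image's valid region
    m1 = (img1.sum(axis=2) > 0).astype(np.uint8)
    m2 = (img2.sum(axis=2) > 0).astype(np.uint8)
    w1 = cv2.distanceTransform(m1, cv2.DIST_L2, 3)
    w2 = cv2.distanceTransform(m2, cv2.DIST_L2, 3)
    total = w1 + w2
    total[total == 0] = 1.0  # avoid division by zero where neither image has content
    w1 = (w1 / total)[..., None]
    w2 = (w2 / total)[..., None]
    return (img1 * w1 + img2 * w2).astype(np.uint8)

# output_img = feather_blend(canvas1, warped_img2)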

Step 7: Display and Save the Result

Now, let’s display the resulting stitched image:

cv2.imshow('Stitched Image', output_img)
cv2.waitKey(0)
cv2.destroyAllWindows()

And save it to a file:

cv2.imwrite('stitched_image.jpg', output_img)

Congratulations! You’ve now learned how to perform image stitching with OpenCV. By understanding the underlying theory and using OpenCV’s powerful functions, you can create stunning panoramic images and develop advanced computer vision applications.

Remember to experiment with different feature detectors, matching algorithms, and blending techniques to achieve the best results for your specific use case. Happy coding!