Master the Art of Training Your Own Models Using OpenCV

Master the art of training custom models with OpenCV in this comprehensive tutorial. Learn preprocessing, feature extraction, and model training with step-by-step code examples. Perfect for beginners!

Updated March 24, 2023


Hey! If you love Computer Vision and OpenCV as much as I do, let's connect on Twitter or LinkedIn. I talk about this stuff all the time and build cool projects.


Welcome to this engaging tutorial on training your own models using OpenCV! Our goal is to make this tutorial informative, easy to understand, and accessible even for beginners. We’ll delve deep into the theory behind the process, provide multiple code examples, and discuss why and how you might want to use custom-trained models. Let’s get started!

Introduction to Training Models with OpenCV

OpenCV, short for Open Source Computer Vision Library, is a powerful library that provides tools and functionalities for various computer vision tasks, including image and video processing, feature extraction, and machine learning. In this tutorial, we will focus on training your own models using OpenCV’s machine learning module.

Training your own models can be beneficial when you are working with specific datasets or unique object classes, or when you need to optimize a model for particular hardware constraints. In this tutorial, we'll train a custom model for object recognition using the Support Vector Machine (SVM) algorithm provided by OpenCV's machine learning module.

Setting Up the Environment

Before diving into the tutorial, ensure that you have OpenCV installed. If you haven’t installed it yet, follow the instructions on the official OpenCV installation guide.
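
To confirm that your installation works and that the machine learning module (cv2.ml) we rely on below is available, you can run a quick sanity check like this sketch (it assumes OpenCV was installed via pip as opencv-python, but any installation method works):

import cv2

# Print the installed OpenCV version and confirm the ML module is present.
print(cv2.__version__)
print(hasattr(cv2, 'ml'))  # should print True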

Preparing the Dataset

To train a model, we need a dataset. In this tutorial, we’ll use the Caltech 101 dataset, which contains images of objects belonging to 101 categories. Download the dataset and extract it.
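
If you want to verify that the dataset was extracted correctly before moving on, a small sketch like the one below lists the category folders it finds (the dataset path is a placeholder you should adjust):

import os
import glob

dataset_root = 'path/to/Caltech101'  # placeholder, point this at your extracted dataset
categories = sorted(os.path.basename(p) for p in glob.glob(os.path.join(dataset_root, '*')))
print(f'Found {len(categories)} category folders, e.g. {categories[:5]}')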

Step 1: Preprocessing the Images

First, we’ll preprocess the images by resizing them to a fixed size and converting them to grayscale. This ensures that all images have the same dimensions and color channels.

import cv2
import os
import glob

def preprocess_images(input_folder, output_folder, size=(128, 128)):
    if not os.path.exists(output_folder):
        os.makedirs(output_folder)

    for category_folder in glob.glob(os.path.join(input_folder, '*')):
        category = os.path.basename(category_folder)
        output_category_folder = os.path.join(output_folder, category)
        if not os.path.exists(output_category_folder):
            os.makedirs(output_category_folder)

        for image_path in glob.glob(os.path.join(category_folder, '*.jpg')):
            image = cv2.imread(image_path)
            if image is None:  # skip files OpenCV cannot decode
                continue
            gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            resized_image = cv2.resize(gray_image, size)
            output_image_path = os.path.join(output_category_folder, os.path.basename(image_path))
            cv2.imwrite(output_image_path, resized_image)

input_folder = 'path/to/Caltech101'
output_folder = 'path/to/preprocessed_dataset'
preprocess_images(input_folder, output_folder)

Step 2: Splitting the Dataset

Next, we’ll split the dataset into a training set and a test set.

import shutil
import random

def split_dataset(input_folder, train_folder, test_folder, test_ratio=0.2):
    if not os.path.exists(train_folder):
        os.makedirs(train_folder)
    if not os.path.exists(test_folder):
        os.makedirs(test_folder)

    for category_folder in glob.glob(os.path.join(input_folder, '*')):
        category = os.path.basename(category_folder)
        train_category_folder = os.path.join(train_folder, category)
        test_category_folder = os.path.join(test_folder, category)
        if not os.path.exists(train_category_folder):
            os.makedirs(train_category_folder)
        if not os.path.exists(test_category_folder):
            os.makedirs(test_category_folder)

        image_paths = glob.glob(os.path.join(category_folder, '*.jpg'))
        random.shuffle(image_paths)
        num_test_samples = int(len(image_paths) * test_ratio)

        for i, image_path in enumerate(image_paths):
            if i < num_test_samples:
                output_folder = test_category_folder
            else:
                output_folder = train_category_folder
            shutil.copy(image_path, os.path.join(output_folder, os.path.basename(image_path)))

input_folder = 'path/to/preprocessed_dataset'
train_folder = 'path/to/train_dataset'
test_folder = 'path/to/test_dataset'
split_dataset(input_folder, train_folder, test_folder)
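
As a quick check that the split worked, you can count the images that ended up in each folder (this sketch reuses the train_folder and test_folder paths from the call above):

num_train = len(glob.glob(os.path.join(train_folder, '*', '*.jpg')))
num_test = len(glob.glob(os.path.join(test_folder, '*', '*.jpg')))
print(f'Training images: {num_train}, test images: {num_test}')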

Feature Extraction

To train our SVM, we’ll first extract features from the images. In this tutorial, we’ll use the Histogram of Oriented Gradients (HOG) descriptor. You can learn more about HOG from the OpenCV documentation.

import numpy as np

def extract_hog_features(image_folder, win_size=(128, 128), block_size=(16, 16), block_stride=(8, 8), cell_size=(8, 8), nbins=9):
    hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, nbins)
    features = []
    labels = []
    # OpenCV's SVM expects numeric class labels, so map each category name to an integer index.
    categories = sorted(os.path.basename(p) for p in glob.glob(os.path.join(image_folder, '*')))

    for category_folder in glob.glob(os.path.join(image_folder, '*')):
        label = categories.index(os.path.basename(category_folder))
        for image_path in glob.glob(os.path.join(category_folder, '*.jpg')):
            image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            if image is None:
                continue
            feature = hog.compute(image)
            features.append(feature)
            labels.append(label)

    # cv2.ml requires float32 samples and int32 labels.
    features = np.array(features, dtype=np.float32).squeeze()
    labels = np.array(labels, dtype=np.int32)
    return features, labels

train_features, train_labels = extract_hog_features(train_folder)
test_features, test_labels = extract_hog_features(test_folder)
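
With the parameters used above, the length of each HOG feature vector follows from the descriptor layout: a 128x128 window with 16x16 blocks, an 8x8 block stride, 8x8 cells, and 9 bins gives 15x15 block positions, 4 cells per block, and 9 bins per cell, i.e. 15 * 15 * 4 * 9 = 8100 values. The short check below verifies this against the extracted features:

blocks_per_row = (128 - 16) // 8 + 1          # 15 block positions per dimension
cells_per_block = (16 // 8) ** 2              # 4 cells per block
expected_length = blocks_per_row ** 2 * cells_per_block * 9
print(expected_length)                        # 8100
print(train_features.shape)                   # (num_train_images, 8100)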

Training the SVM

Now that we have our features and labels, we can train the SVM using OpenCV.

def train_svm(train_features, train_labels, kernel=cv2.ml.SVM_LINEAR, C=1, gamma=1, degree=3):
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)  # C-Support Vector Classification handles multi-class problems
    svm.setKernel(kernel)
    svm.setC(C)
    svm.setGamma(gamma)    # only used by RBF, polynomial, and sigmoid kernels
    svm.setDegree(degree)  # only used by the polynomial kernel

    # Samples are rows (ROW_SAMPLE); features must be float32 and labels int32.
    train_data = cv2.ml.TrainData_create(train_features, cv2.ml.ROW_SAMPLE, train_labels)
    svm.train(train_data)

    return svm

svm = train_svm(train_features, train_labels)
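
Training can take a while on the full dataset, so it is worth persisting the model. OpenCV's SVM can be saved to and loaded back from an XML file (the file name here is an arbitrary choice):

# Save the trained model so it can be reused without retraining.
svm.save('hog_svm_caltech101.xml')

# Later, load it back:
loaded_svm = cv2.ml.SVM_load('hog_svm_caltech101.xml')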

Evaluating the Model

After training the SVM, we can evaluate its performance on the test set.

def evaluate_svm(svm, test_features, test_labels):
    # predict() returns (retval, results); results holds one predicted label per row.
    predicted_labels = svm.predict(test_features)[1].ravel()
    accuracy = np.mean(predicted_labels == test_labels)
    return accuracy

accuracy = evaluate_svm(svm, test_features, test_labels)
print(f'Test accuracy: {accuracy:.2f}')
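
To use the trained model on a single new image, repeat the same preprocessing and HOG parameters and call predict on a one-row feature matrix. The sketch below is illustrative: the image path is a placeholder, and the returned integer maps back to the sorted category folder names used during feature extraction.

def classify_image(svm, image_path, size=(128, 128)):
    # Must match the HOG parameters used for training.
    hog = cv2.HOGDescriptor((128, 128), (16, 16), (8, 8), (8, 8), 9)
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    image = cv2.resize(image, size)
    feature = hog.compute(image).reshape(1, -1).astype(np.float32)
    return int(svm.predict(feature)[1].ravel()[0])

# predicted_index = classify_image(svm, 'path/to/some_image.jpg')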

Conclusion

In this tutorial, we covered the process of training your own models using OpenCV, from preprocessing images and splitting the dataset to feature extraction and training an SVM. By training your own models, you can tailor the model to your specific requirements, improving performance and solving unique problems.

We hope you found this tutorial engaging, informative, and accessible. Continue exploring different feature extraction techniques, training other machine learning algorithms, and experimenting with various datasets to further your understanding of training custom models with OpenCV. Happy coding!

