Video Detection Demo with PytorchWildlife

This tutorial shows how to use PytorchWildlife for video detection and classification. We will walk through setting up the environment, initializing the detection and classification models, running inference, and saving the results as an annotated video.

Prerequisites

Install PytorchWildlife by running the following commands:

conda create -n pytorch_wildlife python=3.8 -y
conda activate pytorch_wildlife
pip install PytorchWildlife

Also, make sure you have a CUDA-capable GPU if you intend to run the models on a GPU; the notebook can also run entirely on a CPU.
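
If you want to confirm that PyTorch can see your GPU before continuing, a quick check such as the following can save debugging time later (a minimal sketch using only the standard torch API):

[ ]:
import torch

# True only if a CUDA-capable GPU and a matching driver are available
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.device_count(), torch.cuda.get_device_name(0))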

Importing libraries

First, let’s import the necessary libraries and modules.

[ ]:
from PIL import Image
import numpy as np
import supervision as sv
import torch
from PytorchWildlife.models import detection as pw_detection
from PytorchWildlife.models import classification as pw_classification
from PytorchWildlife.data import transforms as pw_trans
from PytorchWildlife import utils as pw_utils

Setting GPU

If you are using a GPU for this exercise, specify which GPU to use for the computations. By default, GPU number 0 is used; adjust this to match your setup. You don't need to run this cell if you are using a CPU.

[2]:
torch.cuda.set_device(0) # Use only if you are running on GPU
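
If you would rather not hard-code the device, a common PyTorch pattern (a sketch, not part of the original notebook) is to pick it based on availability; the DEVICE variable defined in the next section could be set the same way:

[ ]:
# Fall back to the CPU automatically when no GPU is visible
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on {DEVICE}")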

Model Initialization

We'll define the device on which to run the models, then initialize the models for video detection and classification.

[8]:
DEVICE = "cuda" # Use "cuda" if you are running on GPU. Use "cpu" if you are running on CPU
SOURCE_VIDEO_PATH = "./demo_data/videos/opossum_example.MP4"
TARGET_VIDEO_PATH = "./demo_data/videos/opossum_example_processed.MP4"
detection_model = pw_detection.MegaDetectorV5(device=DEVICE, pretrained=True)
classification_model = pw_classification.AI4GOpossum(device=DEVICE, pretrained=True)
Fusing layers...
Model summary: 733 layers, 140054656 parameters, 0 gradients, 208.8 GFLOPs
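
The loaded detection model exposes the input resolution and stride its preprocessing expects; the transform defined in the next section reads these attributes directly, and you can inspect them first if you are curious:

[ ]:
# Input size and stride consumed by MegaDetector_v5_Transform below
print(detection_model.IMAGE_SIZE, detection_model.STRIDE)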

Transformations

Define transformations for both detection and classification. These transformations preprocess the video frames for the models.

[4]:
trans_det = pw_trans.MegaDetector_v5_Transform(target_size=detection_model.IMAGE_SIZE,
                                               stride=detection_model.STRIDE)
trans_clf = pw_trans.Classification_Inference_Transform(target_size=224)
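
To see what the detection transform produces, you can run it on a dummy frame and inspect the result (a quick sanity-check sketch; the exact output shape depends on the model's target size and stride):

[ ]:
# A fake HWC uint8 frame with the same layout as a real video frame
dummy_frame = np.zeros((720, 1280, 3), dtype=np.uint8)
print(trans_det(dummy_frame).shape)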

Video Processing

For each frame in the video, we'll run detection, classify each detected animal crop, and annotate the frame with the results. The processed video will be saved with the annotated detections and classifications.

[ ]:
box_annotator = sv.BoxAnnotator(thickness=4, text_thickness=4, text_scale=2)

def callback(frame: np.ndarray, index: int) -> np.ndarray:
    # Run MegaDetector on the preprocessed frame
    results_det = detection_model.single_image_detection(trans_det(frame), frame.shape, index)
    labels = []
    # Crop each detected region and classify it, building one label per box
    for xyxy in results_det["detections"].xyxy:
        cropped_image = sv.crop_image(image=frame, xyxy=xyxy)
        results_clf = classification_model.single_image_classification(trans_clf(Image.fromarray(cropped_image)))
        labels.append("{} {:.2f}".format(results_clf["prediction"], results_clf["confidence"]))
    # Draw the boxes and classification labels onto the frame
    annotated_frame = box_annotator.annotate(scene=frame, detections=results_det["detections"], labels=labels)
    return annotated_frame

# Sample the video at 5 fps, annotate each sampled frame, and write the result
pw_utils.process_video(source_path=SOURCE_VIDEO_PATH, target_path=TARGET_VIDEO_PATH, callback=callback, target_fps=5)
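
If you only need bounding boxes, a detection-only callback runs faster because it skips the per-crop classifier calls. The sketch below reuses the objects defined above and assumes the detector's result dict also carries a "labels" list of strings such as "animal 0.95", as in the PytorchWildlife image demos; pass it to pw_utils.process_video exactly as before, ideally with a different target path so the video produced above is not overwritten:

[ ]:
def detection_only_callback(frame: np.ndarray, index: int) -> np.ndarray:
    results_det = detection_model.single_image_detection(trans_det(frame), frame.shape, index)
    # "labels" is assumed to hold one label string per detection
    return box_annotator.annotate(scene=frame,
                                  detections=results_det["detections"],
                                  labels=results_det["labels"])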

Licensed under the MIT License.