YOLOv8 - Object Detection Framework

YOLOv8 Overview

YOLOv8 is the latest iteration in the YOLO series of real-time object detectors, offering cutting-edge performance in both accuracy and speed. Building upon the advancements of previous YOLO versions, YOLOv8 introduces new features and optimizations that make it a strong choice for a wide range of object detection applications.

YOLOv8 Architecture

The architecture of YOLOv8 builds on previous YOLO versions. YOLOv8 is a convolutional neural network that can be divided into two main parts: the backbone and the head. The backbone is a modified version of the CSPDarknet53 architecture, a design derived from the 53-convolutional-layer Darknet-53 that employs cross-stage partial connections to enhance information flow between layers.

The head of YOLOv8 is fully convolutional and decoupled: separate branches predict bounding-box coordinates and class probabilities for the objects detected in an image. Unlike earlier YOLO versions, the YOLOv8 head is anchor-free, predicting object locations directly rather than as offsets from predefined anchor boxes, which simplifies the prediction targets and speeds up post-processing.
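Whatever the head emits, its raw box predictions are filtered with non-maximum suppression (NMS) before results are returned. A minimal pure-Python sketch of greedy IoU-based NMS (an illustration of the idea, not Ultralytics' implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]: the two overlapping boxes collapse to one
```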

Another significant feature of YOLOv8 is its ability to perform multi-scale object detection. The model employs a feature pyramid network comprising multiple layers that detect objects at different scales, allowing it to identify both large and small objects within an image.
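To make the multi-scale idea concrete: YOLO-style pyramids typically predict on grids at strides 8, 16, and 32 of the input resolution. A quick sketch of how many candidate locations each scale contributes for a 640×640 input (the stride values are a common convention assumed here, not quoted from the Ultralytics source):

```python
def grid_cells(img_size=640, strides=(8, 16, 32)):
    """Number of prediction cells per pyramid level for a square input."""
    return {s: (img_size // s) ** 2 for s in strides}

cells = grid_cells()
print(cells)                # {8: 6400, 16: 1600, 32: 400}
print(sum(cells.values()))  # 8400 candidate locations in total
```

The fine stride-8 grid handles small objects, while the coarse stride-32 grid covers large ones.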

Supported Tasks

| Model Type | Pre-trained Weights | Task |
| --- | --- | --- |
| YOLOv8 | yolov8n.pt, yolov8s.pt, yolov8m.pt, yolov8l.pt, yolov8x.pt | Detection |
| YOLOv8-seg | yolov8n-seg.pt, yolov8s-seg.pt, yolov8m-seg.pt, yolov8l-seg.pt, yolov8x-seg.pt | Instance Segmentation |
| YOLOv8-pose | yolov8n-pose.pt, yolov8s-pose.pt, yolov8m-pose.pt, yolov8l-pose.pt, yolov8x-pose.pt, yolov8x-pose-p6.pt | Pose/Keypoints |
| YOLOv8-cls | yolov8n-cls.pt, yolov8s-cls.pt, yolov8m-cls.pt, yolov8l-cls.pt, yolov8x-cls.pt | Classification |
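The naming scheme above is regular enough to generate programmatically. A hypothetical helper (the function name is ours; it only builds filenames and does not download anything):

```python
def checkpoint_name(size="n", task=None):
    """Build a YOLOv8 weight filename, e.g. ('m', 'seg') -> 'yolov8m-seg.pt'."""
    assert size in ("n", "s", "m", "l", "x"), "unknown model size"
    suffix = f"-{task}" if task else ""  # no suffix means plain detection
    return f"yolov8{size}{suffix}.pt"

print(checkpoint_name("n"))          # yolov8n.pt  (detection)
print(checkpoint_name("x", "pose"))  # yolov8x-pose.pt
print(checkpoint_name("m", "cls"))   # yolov8m-cls.pt
```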

YOLOv8 Performance Metrics

YOLOv8 delivers strong performance metrics in object detection. With a favorable balance between speed and accuracy, it stands out as a preferred choice for real-time detection tasks. The model's metrics, including mAP, inference latency, parameter count, and FLOPs, highlight its efficiency and robustness in diverse scenarios.


| Model | Size (pixels) | mAP<sup>val</sup> 50-95 | Speed, CPU ONNX (ms) | Speed, A100 TensorRT (ms) | Params (M) | FLOPs (B) |
| --- | --- | --- | --- | --- | --- | --- |
| YOLOv8n | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
| YOLOv8s | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6 |
| YOLOv8m | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9 |
| YOLOv8l | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2 |
| YOLOv8x | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |
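The latencies in the table convert directly to throughput. A quick check, assuming batch size 1 (latency figures copied from the A100 TensorRT column above):

```python
a100_ms = {"YOLOv8n": 0.99, "YOLOv8s": 1.20, "YOLOv8m": 1.83,
           "YOLOv8l": 2.39, "YOLOv8x": 3.53}

def fps(latency_ms):
    """Frames per second from per-image latency in milliseconds."""
    return 1000.0 / latency_ms

for name, ms in a100_ms.items():
    print(f"{name}: {fps(ms):.0f} FPS")  # e.g. YOLOv8n: 1010 FPS
```

Even the largest variant stays comfortably real-time on an A100, which is the speed/accuracy trade-off the table illustrates.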

Training YOLOv8 with Roboflow

  1. Setup: Ensure you have all the required libraries and dependencies installed.
  2. Clone YOLOv8 Repository: Clone the official YOLOv8 repository from GitHub.
  3. Prepare Dataset with Roboflow: Organize your dataset using Roboflow. Label the images and export them in the appropriate format.
  4. Export Dataset for YOLOv8: Once your dataset is ready in Roboflow, export it in a format compatible with YOLOv8.
  5. Train the Model: Use the YOLO command line utility to train the model on your custom dataset.
  6. Validate the Model: Periodically validate the model on a separate validation set to ensure it's performing well.
  7. Run Inference: Use the YOLO command line utility to run inference and detect objects in new images.
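Steps 5-7 map onto the Ultralytics `yolo` command line utility. The sketch below only assembles the command strings for illustration; the dataset path, epoch count, and output weight path are placeholder assumptions, not values from this guide:

```python
def yolo_cmd(mode, **kwargs):
    """Assemble a `yolo detect <mode>` command string (illustrative only)."""
    args = " ".join(f"{k}={v}" for k, v in kwargs.items())
    return f"yolo detect {mode} {args}"

# Step 5: train on the dataset exported from Roboflow (data.yaml is a placeholder)
train = yolo_cmd("train", data="data.yaml", model="yolov8n.pt",
                 epochs=100, imgsz=640)
# Step 6: validate the best checkpoint (path assumes Ultralytics' default layout)
val = yolo_cmd("val", model="runs/detect/train/weights/best.pt",
               data="data.yaml")
# Step 7: run inference on a folder of new images
predict = yolo_cmd("predict", model="runs/detect/train/weights/best.pt",
                   source="images/")

print(train)  # yolo detect train data=data.yaml model=yolov8n.pt epochs=100 imgsz=640
```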

For a detailed guide, you can refer to the Roboflow Blog.

A YOLOv8 training notebook is also available on Google Colab.