This repository provides a partially implemented package called ultrahelper, designed to extend and customize the Ultralytics YOLOv8 framework without modifying its source code. The goal is to override and extend certain modules while still leveraging the flexibility of Ultralytics’ configuration system.
Custom modules can be defined and referenced through the configuration file `ultrahelper/cfg/yolov8-pose.yaml`.
The infrastructure for this mechanism is already implemented in ultrahelper and demonstrated across multiple modules.
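As an illustration of the override mechanism (a sketch, not the actual ultrahelper code), custom classes can be exposed through a small registry that maps the module names used in the YAML config to Python classes. The `MODULE_REGISTRY`, `register_module`, and `resolve_module` names below are hypothetical helpers:

```python
# Illustrative sketch only: a registry that lets a YAML model config
# reference custom module classes by name, without editing the
# framework's own source. These helpers are hypothetical, not
# actual ultrahelper APIs.

MODULE_REGISTRY = {}

def register_module(cls):
    """Class decorator: make `cls` resolvable by its name from a YAML config."""
    MODULE_REGISTRY[cls.__name__] = cls
    return cls

@register_module
class ModifiedC2f:  # stand-in for the real ultrahelper.nn.block.ModifiedC2f
    def __init__(self, in_channels, out_channels):
        self.in_channels = in_channels
        self.out_channels = out_channels

def resolve_module(name):
    """Look up a module class named in the config, e.g. 'ModifiedC2f'."""
    return MODULE_REGISTRY[name]
```

When the config parser encounters a module name, it can consult such a registry before falling back to the framework's built-in namespace, which is what lets the YAML file reference overridden modules.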
- Identified and debugged a symbolic tracing error in the YOLOv8 model using `torch.fx`.
- Traced the root cause to runtime-dependent logic within the `C2f` module from `ultralytics.nn.modules.block`.
- Implemented a traceable version of the module (`ModifiedC2f`) in `ultrahelper.nn.block`, ensuring compatibility with PyTorch's symbolic tracer.
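To illustrate the kind of change involved (a toy sketch, not the actual `ModifiedC2f` implementation), the general pattern is to replace runtime-dependent Python control flow with tensor operations that `torch.fx` can record at trace time:

```python
import torch
import torch.nn as nn
import torch.fx

class Untraceable(nn.Module):
    # Branching on a tensor's runtime value breaks symbolic tracing:
    # fx sees a Proxy whose truth value is undefined at trace time.
    def forward(self, x):
        if x.sum() > 0:
            return x * 2
        return x

class Traceable(nn.Module):
    # The same logic expressed as a recordable tensor op.
    def forward(self, x):
        return torch.where(x.sum() > 0, x * 2, x)

traced = torch.fx.symbolic_trace(Traceable())  # succeeds; Untraceable raises
```

`ModifiedC2f` applies the same idea to the runtime-dependent logic in `C2f`, so the whole model becomes traceable.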
- Enhanced the model's flexibility by modifying the `Conv` and `SPPF` modules to support configurable activation functions (e.g., SiLU, ReLU).
- Extended the model's YAML config (`ultrahelper/cfg/yolov8-pose.yaml`) to support activation selection without altering Ultralytics' core code.
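A hedged sketch of what such an activation entry might look like in the YAML config; the exact key name used by ultrahelper is an assumption:

```yaml
# ultrahelper/cfg/yolov8-pose.yaml (illustrative fragment only).
# The `activation` key name is an assumption; it selects the activation
# function passed to the modified Conv and SPPF modules.
activation: ReLU   # e.g. SiLU (the YOLOv8 default) or ReLU
```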
- Refactored the `ModifiedPose` class in `ultrahelper.nn.pose` to separate hardware-incompatible operations.
- Created two deployable components:
  - `ModifiedPoseHead`: optimized for hardware execution, retaining all convolutional layers.
  - `ModifiedPosePostprocessor`: runs on the CPU and handles tensor reshaping and unsupported operations.
- Ensured compliance with hardware constraints (e.g., only 4D tensor operations on the device).
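The split can be pictured as follows; this is a minimal sketch with dummy layer shapes, not the actual `ModifiedPoseHead`/`ModifiedPosePostprocessor` code:

```python
import torch
import torch.nn as nn

class PoseHead(nn.Module):
    """Device-side part: only convolutions, every tensor stays 4D (N, C, H, W)."""
    def __init__(self, in_ch=64, out_ch=51):  # e.g. 17 keypoints * 3 values
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.conv(x)  # (N, out_ch, H, W): still 4D, hardware-friendly

class PosePostprocessor(nn.Module):
    """CPU-side part: reshaping and other ops the device does not support."""
    def forward(self, feat):
        n, c, h, w = feat.shape
        # Flatten the spatial grid: (N, C, H, W) -> (N, H*W, C). The result
        # is 3D, which is exactly why this step must run off-device.
        return feat.permute(0, 2, 3, 1).reshape(n, h * w, c)
```

Keeping the head purely convolutional means the device never sees anything but 4D tensors, while the postprocessor absorbs the reshapes.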
- Developed a real-time parallel inference pipeline with two decoupled components:
  - A hardware model executing on the GPU.
  - A postprocessing module running on the CPU.
- Utilized `load_hardware_model()` and `load_postprocessor()` from `ultrahelper.load`.
- Implemented real-time performance monitoring, displaying FPS and inference latency while processing video frames continuously.
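The decoupling can be sketched with two threads connected by a bounded queue. The `run_on_gpu` and `run_on_cpu` functions below are trivial placeholders for the models returned by `load_hardware_model()` and `load_postprocessor()`:

```python
import queue
import threading
import time

def run_on_gpu(frame):
    """Placeholder for the hardware model's forward pass."""
    return frame * 2

def run_on_cpu(feat):
    """Placeholder for the CPU postprocessor."""
    return feat + 1

def pipeline(frames):
    q = queue.Queue(maxsize=4)  # bounded queue decouples the two stages
    results = []

    def producer():
        for f in frames:
            q.put(run_on_gpu(f))  # stage 1: "GPU" inference
        q.put(None)               # sentinel: no more frames

    def consumer():
        start = time.perf_counter()
        n = 0
        while (feat := q.get()) is not None:
            results.append(run_on_cpu(feat))  # stage 2: CPU postprocessing
            n += 1
        elapsed = time.perf_counter() - start
        print(f"{n} frames, {n / elapsed:.1f} FPS")  # simple FPS monitor

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

Because the stages run in separate threads, the GPU stage can start on the next frame while the CPU is still postprocessing the previous one, which is where the throughput gain comes from.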
- Install the `ultralytics` package:

  ```
  pip install ultralytics
  ```

- Run the following to download the COCO8 dataset and ensure the training pipeline is functional:

  ```
  python -m ultrahelper --train
  ```

- For parallel-processing inference with FPS and CPU/GPU timing, run:

  ```
  python -m ultrahelper --pipeline
  ```

- To symbolically trace the model, run:

  ```
  python -m ultrahelper --trace
  ```

- Make sure you have PyTorch version above 2.0 in order to use symbolic tracing.