YOLOv8n -- SAR Vessel Detection

YOLOv8 nano model trained for vessel detection in Synthetic Aperture Radar (SAR) imagery. Quantized and exported for embedded edge deployment (Jetson, Google Coral).

Model Details

  • Base model: YOLOv8n
  • Task: Object Detection (vessel/ship)
  • Dataset: SAR-Ship (Roboflow)
  • Input size: 640x640
  • Target hardware: Jetson Orin Nano, Google Coral Edge TPU, RTX GPUs

Performance

Model             mAP50    mAP50-95  Precision  Recall  Size (MB)
PyTorch FP32      0.9168   0.6646    0.9183     0.8400  6.26
TorchScript FP16  0.9124   0.6271    0.9278     0.8274  12.43
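The table reports precision and recall separately; the F1 scores below are derived from those numbers using the standard harmonic-mean formula (they are not stated on the card):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Derived from the table above (not reported in the card):
#   FP32: f1(0.9183, 0.8400) ~ 0.877
#   FP16: f1(0.9278, 0.8274) ~ 0.875
```

Both variants land within about 0.002 F1 of each other, so the FP16 export trades essentially no detection quality for half-precision inference.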

Repository Structure

unquantized/
  best.pt               (PyTorch FP32 - original trained weights)

quantized/
  best.onnx             (ONNX FP32 - cross-platform inference)
  best.torchscript      (TorchScript FP16 - Jetson / RTX GPUs)
  best_int8.tflite      (TFLite INT8 - Google Coral / MCU, Linux only)

evaluation_results.json
evaluation_results.csv

Available Formats

File                        Format            Use case
unquantized/best.pt         PyTorch FP32      Training / fine-tuning / full-accuracy inference
quantized/best.onnx         ONNX FP32         Cross-platform CPU/GPU inference
quantized/best.torchscript  TorchScript FP16  Jetson Orin / RTX GPU deployment
quantized/best_int8.tflite  TFLite INT8       Google Coral Edge TPU / microcontrollers

Usage

PyTorch (unquantized)

from ultralytics import YOLO

# Load the FP32 checkpoint and run inference on a single image
model = YOLO("unquantized/best.pt")
results = model("your_sar_image.png")
results[0].show()  # visualize detections

ONNX (quantized)

from ultralytics import YOLO

model = YOLO("quantized/best.onnx")
results = model("your_sar_image.png")
results[0].show()

TorchScript FP16 (quantized, RTX / Jetson)

from ultralytics import YOLO

model = YOLO("quantized/best.torchscript")
results = model("your_sar_image.png")
results[0].show()
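TFLite INT8 (quantized, Coral / MCU)

The INT8 TFLite export has no usage example above; a minimal inference sketch with the `tflite_runtime` interpreter might look like the following (on a desktop you can substitute `tf.lite.Interpreter`; for the Coral Edge TPU you would additionally load the `libedgetpu` delegate). The NHWC input layout and the 0..1 input normalization are assumptions; the `dequantize` helper is the standard affine dequantization formula.

```python
import numpy as np

def dequantize(q, scale, zero_point):
    """Standard affine dequantization: real = scale * (q - zero_point)."""
    return scale * (q.astype(np.float32) - zero_point)

def run_tflite(image, model_path="quantized/best_int8.tflite"):
    """Run one 640x640 frame (HxWxC uint8, assumed NHWC) through the INT8 model."""
    # tflite_runtime is the lightweight interpreter used on Coral / embedded Linux.
    from tflite_runtime.interpreter import Interpreter

    interp = Interpreter(model_path=model_path)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]

    # Quantize the normalized float image into the model's int8 input range.
    scale, zero_point = inp["quantization"]
    q = np.round(image.astype(np.float32) / 255.0 / scale + zero_point)
    q = np.clip(q, -128, 127).astype(np.int8)[None, ...]

    interp.set_tensor(inp["index"], q)
    interp.invoke()

    raw = interp.get_tensor(out["index"])
    out_scale, out_zp = out["quantization"]
    return dequantize(raw, out_scale, out_zp)  # raw YOLO head output, pre-NMS
```

Note that the returned tensor is the raw detection head output; confidence filtering and NMS still have to be applied on the host.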

Training Details

  • Optimizer: AdamW
  • Epochs: 100 (early stopping, patience=20)
  • Augmentation: mosaic, rotation (±15 degrees), flip -- color augmentation disabled (SAR imagery is grayscale)
  • Quantization: Post-Training Quantization (PTQ) via TorchScript FP16 and TFLite INT8
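The quantized artifacts in quantized/ can be regenerated from the FP32 checkpoint; a sketch using the standard Ultralytics export API (the keyword arguments below are regular `model.export()` options, and the output paths are chosen by Ultralytics, not this repo):

```python
# Export settings matching the formats shipped in quantized/ (values are
# keyword arguments for Ultralytics `model.export()`).
EXPORT_CONFIGS = {
    "onnx": {},                     # ONNX FP32, cross-platform
    "torchscript": {"half": True},  # TorchScript FP16 (Jetson / RTX)
    "tflite": {"int8": True},       # TFLite INT8 PTQ (Coral, Linux only)
}

def export_all(weights="unquantized/best.pt"):
    # Import kept inside the function so the config table above can be
    # inspected without ultralytics installed.
    from ultralytics import YOLO

    model = YOLO(weights)
    for fmt, kwargs in EXPORT_CONFIGS.items():
        model.export(format=fmt, imgsz=640, **kwargs)
```

INT8 export normally also wants a calibration dataset (the `data` argument); it is omitted here because the calibration set used for this model is not stated on the card.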