
Model Compatibility

The table below shows the devices on which you can deploy each model supported by Inference.

See our Docker Getting Started guide for more information on how to deploy Inference on your device.
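As a minimal sketch of the Docker-based deployment the guide above covers, the commands below start an Inference server on a CPU-only machine and on a machine with an NVIDIA GPU. The image names match Roboflow's published images; the port and container name are illustrative defaults:

```shell
# CPU-only machine: pull and run the CPU inference server,
# exposing the API on localhost:9001
docker run -d --name inference-server -p 9001:9001 \
  roboflow/roboflow-inference-server-cpu

# Machine with an NVIDIA GPU (requires the NVIDIA Container Toolkit):
# the --gpus flag passes the GPU through to the container
docker run -d --name inference-server-gpu --gpus all -p 9001:9001 \
  roboflow/roboflow-inference-server-gpu
```

Once the container is running, the server accepts inference requests over HTTP on the mapped port.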

Table key:

  • ✅ Fully supported
  • 🟡 Not TRT accelerated
  • 🚫 Not supported
  • 🚧 On roadmap, not currently supported
| Model | CPU | GPU | TensorRT | Jetson 4.5.x | Jetson 4.6.x | Jetson 5.x | Roboflow Hosted Inference |
|-------|-----|-----|----------|--------------|--------------|------------|---------------------------|
| YOLOv8 Object Detection | ✅ | ✅ | ✅ | 🚫 | 🚫 | ✅ | ✅ |
| YOLOv8 Classification | ✅ | ✅ | ✅ | 🚫 | 🚫 | ✅ | ✅ |
| YOLOv8 Segmentation | ✅ | ✅ | ✅ | 🚫 | 🚫 | ✅ | ✅ |
| YOLOv5 Object Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLOv5 Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLOv5 Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| DocTR | ✅ | ✅ | 🟡 | ✅ | ✅ | ✅ | 🚧 |
| CLIP | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| SAM | ✅ | ✅ | 🟡 | 🚫 | 🚫 | 🚫 | 🚫 |
| ViT Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLACT | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |