What Devices Can I Use?

You can deploy Inference on the edge, in your own cloud, or using the Roboflow hosted inference option.

Supported Edge Devices

You can set up an Inference server to run computer vision models on the following devices (see the Docker sketch after this list):

  • ARM CPU (macOS, Raspberry Pi)
  • x86 CPU (macOS, Linux, Windows)
  • NVIDIA GPU
  • NVIDIA Jetson (JetPack 4.5.x, JetPack 4.6.x, JetPack 5.x, JetPack 6.x)
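
For a quick sense of what deployment looks like, the sketch below starts a local Inference server with Docker. The image names follow Roboflow's `roboflow/roboflow-inference-server-*` naming but should be treated as assumptions; confirm the exact image and tag for your device in the Docker Getting Started guide.

```bash
# Minimal sketch: start a local Inference server with Docker.
# Image names below are assumptions; confirm the right image and tag
# for your device in the Docker Getting Started guide.

# ARM or x86 CPU:
docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-cpu

# NVIDIA GPU (requires the NVIDIA Container Toolkit):
docker run -it --rm -p 9001:9001 --gpus all roboflow/roboflow-inference-server-gpu
```

Once the container is running, the server listens on port 9001 by default.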

Model Compatibility

The table below shows the devices on which you can deploy each model supported by Inference.

See our Docker Getting Started guide for more information on how to deploy Inference on your device.

Table key:

  • ✅ Fully supported
  • 🚫 Not supported
  • 🚧 On roadmap, not currently supported

| Model | CPU | GPU | Jetson 4.5.x | Jetson 4.6.x | Jetson 5.x | Roboflow Hosted Inference |
|-------|-----|-----|--------------|--------------|------------|---------------------------|
| YOLOv8 Object Detection | ✅ | ✅ | 🚫 | 🚫 | ✅ | ✅ |
| YOLOv8 Classification | ✅ | ✅ | 🚫 | 🚫 | ✅ | ✅ |
| YOLOv8 Segmentation | ✅ | ✅ | 🚫 | 🚫 | ✅ | ✅ |
| YOLOv5 Object Detection | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLOv5 Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLOv5 Segmentation | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| CLIP | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| DocTR | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Gaze | ✅ | ✅ | 🚫 | 🚫 | 🚫 | ✅ |
| SAM | ✅ | ✅ | 🚫 | 🚫 | 🚫 | 🚫 |
| ViT Classification | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLACT | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |

Cloud Platform Support

You can deploy Inference on any cloud platform, such as AWS, GCP, or Azure.

The installation and setup instructions are the same as for any edge device once you have installed the relevant drivers on your cloud platform. If you are running Inference on a GPU device, we recommend deploying with an official "Deep Learning" image from your cloud provider. These images come with the relevant drivers pre-installed, so you can set up Inference without configuring GPU drivers manually.
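
As a rough sketch, setup on a GPU VM provisioned from such a "Deep Learning" image might look like the following; the server image name is the same assumed one as above, so confirm it in the Docker Getting Started guide.

```bash
# Minimal sketch for a cloud GPU VM (AWS, GCP, or Azure) created from a
# "Deep Learning" image, where NVIDIA drivers and Docker come pre-installed.

# Confirm the driver can see the GPU before starting the server:
nvidia-smi

# Start the server in the background, exposing the default port 9001:
docker run -d --rm -p 9001:9001 --gpus all roboflow/roboflow-inference-server-gpu
```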

Use Hosted Inference from Roboflow

You can also run your models with the Roboflow hosted inference offering, which lets you deploy models in the cloud without managing your own infrastructure. Note that Roboflow's hosted solution does not support every feature available when you run Inference on your own infrastructure.
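
As a minimal sketch, hosted inference can be called over plain HTTP against Roboflow's hosted object detection endpoint. The model ID, version, and API key below are placeholders for your own values:

```bash
# Minimal sketch: run object detection against Roboflow's hosted endpoint.
# "your-model/1" and YOUR_API_KEY are placeholders; the input image is
# passed by URL here (see the Roboflow API docs for other input formats).
curl -X POST \
  "https://detect.roboflow.com/your-model/1?api_key=YOUR_API_KEY&image=https://example.com/image.jpg"
```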

To learn more about device compatibility with different models, refer to the model compatibility matrix above.