
Roboflow Inference enables you to deploy computer vision models faster than ever.

With `pip install inference` and `inference server start`, you can start a server to run a fine-tuned model on images, videos, and streams.

Inference supports running object detection, classification, instance segmentation, and foundation models (e.g. SAM and CLIP).

You can train and deploy your own custom model or use one of the 50,000+ fine-tuned models shared by the Roboflow Universe community.
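
As a minimal sketch of what that looks like in Python (assuming a recent `inference` release that exposes `get_model`; the model ID and API key below are placeholders you would swap for your own fine-tuned or Universe model):

from inference import get_model

# load a model from the Roboflow model registry (placeholder model ID and API key)
model = get_model(model_id="your-project/1", api_key="YOUR_ROBOFLOW_API_KEY")
# run inference on a local image file
results = model.infer("path/to/image.jpg")
print(results)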

You can run Inference on an edge device like an NVIDIA Jetson, or on cloud computing platforms like AWS, GCP, and Azure.

Get started with our "Run your first model" guide

Here is an example of a model running on a video using Inference:

💻 Features

Inference provides a scalable way to use computer vision models in your applications.

Inference is backed by:

  • A server, so you don't have to reinvent the wheel when it comes to serving your model to disparate parts of your application.

  • Standard APIs for computer vision tasks, so switching out the model weights and architecture can be done independently of your application code (a client sketch follows this list).

  • Model architecture implementations, which implement the tensor parsing glue between images and predictions for supervised models that you've fine-tuned to perform custom tasks.

  • A model registry, so your code can be independent of your model weights & you don't have to re-build and re-deploy every time you want to iterate on them.

  • Data management integrations, so you can collect more images of edge cases to improve your dataset & model the more it sees in the wild.

And more!
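
To make the server and standard-API points above concrete, here is a minimal client sketch. It assumes a server already running locally on port 9001 and the companion `inference-sdk` package (`pip install inference-sdk`); the API key and model ID are placeholders:

from inference_sdk import InferenceHTTPClient

# point the client at the locally running inference server
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="YOUR_ROBOFLOW_API_KEY",
)
# request predictions from a fine-tuned model by its ID (placeholder shown)
result = client.infer("path/to/image.jpg", model_id="your-project/1")
print(result)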

📌 Install: pip vs Docker

  • pip: Installs inference into your Python environment. Lightweight, good for Python-centric projects.
  • Docker: Packages inference with its environment. Ensures consistency across setups; ideal for scalable deployments.

💻 install

With ONNX CPU Runtime:

For CPU-powered inference:

pip install inference

or

pip install inference-cpu

With ONNX GPU Runtime:

If you have an NVIDIA GPU, you can accelerate your inference with:

pip install inference-gpu

Without ONNX Runtime:

Roboflow Inference uses ONNX Runtime as its core inference engine. ONNX Runtime provides an array of different execution providers that can optimize inference on different target devices. If you decide to install ONNX Runtime on your own, install inference with:

pip install inference-core
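
For example, here is a sketch of pairing `inference-core` with an ONNX Runtime build you install yourself; `onnxruntime-openvino` is just one published build, so pick the one that matches your target device:

# install an execution-provider-specific ONNX Runtime build of your choice
pip install onnxruntime-openvino
# then install the core package, which leaves the runtime choice to you
pip install inference-core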

Alternatively, you can take advantage of some advanced execution providers using one of our published docker images.

Extras:

Some functionality requires extra dependencies. These can be installed by specifying the desired extras during installation of Roboflow Inference, e.g. `pip install inference[extra]`.

| extra | description |
|-------|-------------|
| `clip` | Ability to use the core CLIP model (by OpenAI) |
| `gaze` | Ability to use the core Gaze model |
| `http` | Ability to run the HTTP interface |
| `sam` | Ability to run the core Segment Anything model (by Meta AI) |
| `doctr` | Ability to use the core doctr model (by Mindee) |

Note: Both CLIP and Segment Anything require PyTorch to run. PyTorch is included in their respective extras; however, PyTorch installs can be highly environment dependent. See the official PyTorch install page for instructions specific to your environment.

Example install with CLIP dependencies:

pip install inference[clip]
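
Several extras can be combined in one install; quoting the requirement keeps shells such as zsh from expanding the brackets. If your environment needs a particular PyTorch build, install it first (the CPU wheel index below is one example from the official PyTorch instructions):

# install multiple extras at once (quotes protect the brackets in zsh)
pip install "inference[clip,sam]"
# example: pick a CPU-only PyTorch build first, then add the extra
pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install "inference[clip]"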

๐Ÿ‹ dockerยถ

You can learn more about how to build, pull, and run the Roboflow Inference Docker images in our documentation.

  • Run on x86 CPU:
docker run -it --net=host roboflow/roboflow-inference-server-cpu:latest
  • Run on NVIDIA GPU:
docker run -it --network=host --gpus=all roboflow/roboflow-inference-server-gpu:latest

👉 More docker run options:

  • Run on arm64 CPU:
docker run -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu:latest
  • Run on NVIDIA Jetson with JetPack `4.x`:
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson:latest
  • Run on NVIDIA Jetson with JetPack `5.x`:
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson-5.1.1:latest
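
For a longer-running deployment, the same images can be started detached with a container name and a restart policy. This is a sketch using standard Docker flags, so adjust the image tag and port mapping to your target:

# run the CPU server in the background and restart it automatically
docker run -d --name inference-server --restart unless-stopped \
    -p 9001:9001 roboflow/roboflow-inference-server-cpu:latest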


📟 CLI

To use the CLI you will need Python 3.7 or higher. To check that you have a suitable version, run `python --version` in your terminal. To install Python, follow the instructions here.

After you have Python installed, install the PyPI package `inference-cli` or `inference`:

pip install inference-cli

From there you can run the inference server. See Docker quickstart via CLI for more information.

inference server start

The CLI also supports stopping the server via:

inference server stop

To use the CLI to make inferences, first find your project ID and model version number in Roboflow.

See more detailed documentation on HTTP Inference quickstart via CLI.

inference infer {image_path} \
    --project-id {project_id} \
    --model-version {model_version} \
    --api-key {api_key}
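
For illustration, a filled-in invocation might look like the following; the image path, project ID, and model version are hypothetical placeholders, and the API key is read from an environment variable:

inference infer ./example.jpg \
    --project-id my-project \
    --model-version 1 \
    --api-key $ROBOFLOW_API_KEY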

Enterprise License

With a Roboflow Inference Enterprise License, you can access additional Inference features, including:

  • Server cluster deployment
  • Device management
  • Active learning
  • YOLOv5 and YOLOv8 model sub-license

To learn more, contact the Roboflow team.

More Roboflow Open Source Projects

| Project | Description |
|---------|-------------|
| supervision | General-purpose utilities for use in computer vision projects, from predictions filtering and display to object tracking to model evaluation. |
| Autodistill | Automatically label images for use in training computer vision models. |
| Inference (this project) | An easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models. |
| Notebooks | Tutorials for computer vision tasks, from training state-of-the-art models to tracking objects to counting objects in a zone. |
| Collect | Automated, intelligent data collection powered by CLIP. |