Inference Landing Page
The Roboflow Inference server hosts a landing page. This page contains links to helpful resources, including documentation and examples.
Visit the Inference Landing Page
The Inference Server runs in Docker. Before we begin, make sure you have installed Docker on your system. To learn how to install Docker, refer to the official Docker installation guide.
The easiest way to start an inference server is with the inference CLI. Install it via pip:
pip install inference-cli
Now run the inference server start command:
inference server start
Now visit localhost:9001 in your browser to see the inference landing page. This page contains links to resources and examples related to inference.
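If you would rather confirm the server is running from a script than from a browser, a simple HTTP check works too. The snippet below is a minimal sketch, assuming the server is running locally on the default port 9001 and that the Python requests package is installed.

```python
import requests

# Request the local inference server's landing page (default port 9001).
response = requests.get("http://localhost:9001")

# A 200 status code indicates the server is up and serving the landing page.
print("Server responded with status:", response.status_code)
```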
Inference Notebook
Roboflow Inference Servers come equipped with a built-in JupyterLab environment. This environment is the fastest way to get up and running with inference for development and testing. To use it, first start an inference server. Be sure to specify the --dev flag so that the notebook environment is enabled (it is disabled by default).
inference server start --dev
Now visit localhost:9001 in your browser to see the inference landing page. From the landing page, select the button labeled "Jump Into an Inference Enabled Notebook" to open a new tab with the JupyterLab environment.
This JupyterLab environment comes preloaded with several example notebooks and all of the dependencies needed to run inference.
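From a notebook in this environment, you can send requests to the local server you just started. The snippet below is a minimal sketch, assuming the inference-sdk Python package is available; the API key is a placeholder you must replace with your own Roboflow API key, and "yolov8n-640" and the image path are example values.

```python
from inference_sdk import InferenceHTTPClient

# Point the client at the local inference server started above.
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="YOUR_ROBOFLOW_API_KEY",  # placeholder: substitute your own key
)

# Run a model against a local image; "yolov8n-640" is an example model ID.
result = client.infer("path/to/image.jpg", model_id="yolov8n-640")
print(result)
```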