
How to Create and Run a Workflow

In this example, we are going to build a Workflow from scratch that detects dogs, classifies their breeds, and visualizes the results.

Step 1: Create a Workflow

Open https://app.roboflow.com/ in your browser, navigate to the Workflows tab, and click the Create Workflows button. Select Custom Workflow to start the creation process.

Workflow start

Step 2: Add an object detection model

We need to add a block with an object detection model to the workflow we just created. We will use the yolov8n-640 model.

Add object detection model
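
In the full workflow definition included at the end of this guide, this step corresponds to the following entry in the steps list; the step name model is what later blocks reference:

{
  "type": "roboflow_core/roboflow_object_detection_model@v1",
  "name": "model",
  "images": "$inputs.image",
  "model_id": "yolov8n-640"
}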

Step 3: Crop each detected object to run breed classification

Next, we are going to add a block to our Workflow that crops the objects that our first model detects.

Add crop
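
In the workflow definition, the crop block receives both the input image and the detection model's predictions:

{
  "type": "roboflow_core/dynamic_crop@v1",
  "name": "dynamic_crop",
  "images": "$inputs.image",
  "predictions": "$steps.model.predictions"
}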

Step 4: Classify dog breeds with second stage model

We are then going to add a classification model that runs on each crop to classify its content. We will use the Roboflow Universe model dog-breed-xpaq6/1. Make sure that in the block configuration, the Image property points to the crops output of the Dynamic Crop block.

Add classification model
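
In the workflow definition, pointing the Image property at the crops output is expressed by the images selector below:

{
  "type": "roboflow_core/roboflow_classification_model@v1",
  "name": "model_1",
  "images": "$steps.dynamic_crop.crops",
  "model_id": "dog-breed-xpaq6/1"
}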

Step 5: Replace Bounding Box classes with classification model predictions

Once each crop is classified, we want to assign the class predicted for each crop (the dog breed) to the corresponding bounding box from the object detection model. To do this we use the Detections Classes Replacement block, which accepts a reference to the predictions of an object detection model, as well as a reference to the classification results on the crops.

Add Classes Replacement
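
In the workflow definition, the block takes both references as separate properties:

{
  "type": "roboflow_core/detections_classes_replacement@v1",
  "name": "detections_classes_replacement",
  "object_detection_predictions": "$steps.model.predictions",
  "classification_predictions": "$steps.model_1.predictions"
}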

Step 6: Visualize predictions

As a final step of the workflow, we would like to visualize our predictions. We will use two visualization blocks chained together: Bounding Box Visualization and Label Visualization. First, add a Bounding Box Visualization block with its Image property set to $inputs.image (the image sent as input to the workflow). The second step, Label Visualization, should instead point to the output image of the Bounding Box Visualization step. Both visualization steps should take their predictions from the Detections Classes Replacement step.

Add Visualization
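
In the workflow definition, the chaining is expressed through the image selectors: the first visualization reads $inputs.image, while the second reads the image produced by the first:

{
  "type": "roboflow_core/bounding_box_visualization@v1",
  "name": "bounding_box_visualization",
  "predictions": "$steps.detections_classes_replacement.predictions",
  "image": "$inputs.image"
},
{
  "type": "roboflow_core/label_visualization@v1",
  "name": "label_visualization",
  "predictions": "$steps.detections_classes_replacement.predictions",
  "image": "$steps.bounding_box_visualization.image"
}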

Step 7: Construct output

You now have everything you need to construct your workflow output. You can expose any intermediate step output, but in this example we will only select the bounding boxes with replaced classes (the output of the Detections Classes Replacement step) and the visualization (the output of the Label Visualization step).
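
In the workflow definition, those two selections become the outputs section; the names detections and visualisation are the keys you will see in the workflow response:

"outputs": [
  {
    "type": "JsonField",
    "name": "detections",
    "coordinates_system": "own",
    "selector": "$steps.detections_classes_replacement.predictions"
  },
  {
    "type": "JsonField",
    "name": "visualisation",
    "coordinates_system": "own",
    "selector": "$steps.label_visualization.image"
  }
]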

Step 8: Running the workflow

Now your workflow is ready. Click the Save button and move to the Run Preview panel.

We will run our workflow against the following example image: https://media.roboflow.com/inference/dog.jpeg. Here are the results:

Results

Click the Show Visual button to see the results of our visualization steps.

Different ways of running your workflow

Your workflow is now saved on the Roboflow Platform. This means you can run it in multiple different ways, including:

  • an HTTP request to the Roboflow Hosted API

  • an HTTP request to your local instance of the inference server

  • on video

To see ready-to-use code snippets, click the Deploy Workflow button.
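
As a rough sketch of the Hosted API option (the exact URL, workspace name, and workflow id are shown in the Deploy Workflow panel and take precedence over the values assumed here; <your-workspace>, <your-workflow-id>, and <your-api-key> are placeholders), the request is an HTTP POST whose JSON body supplies a value for the image input defined in the workflow:

POST https://detect.roboflow.com/infer/workflows/<your-workspace>/<your-workflow-id>

{
  "api_key": "<your-api-key>",
  "inputs": {
    "image": {
      "type": "url",
      "value": "https://media.roboflow.com/inference/dog.jpeg"
    }
  }
}

The response should contain the outputs defined in Step 7: detections and visualisation.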

Workflow definition for quick reproduction

To make it easier to reproduce the workflow, below you can find a workflow definition that you can copy and paste into the UI editor.

Workflow definition
{
  "version": "1.0",
  "inputs": [
    {
      "type": "InferenceImage",
      "name": "image"
    }
  ],
  "steps": [
    {
      "type": "roboflow_core/roboflow_object_detection_model@v1",
      "name": "model",
      "images": "$inputs.image",
      "model_id": "yolov8n-640"
    },
    {
      "type": "roboflow_core/dynamic_crop@v1",
      "name": "dynamic_crop",
      "images": "$inputs.image",
      "predictions": "$steps.model.predictions"
    },
    {
      "type": "roboflow_core/roboflow_classification_model@v1",
      "name": "model_1",
      "images": "$steps.dynamic_crop.crops",
      "model_id": "dog-breed-xpaq6/1"
    },
    {
      "type": "roboflow_core/detections_classes_replacement@v1",
      "name": "detections_classes_replacement",
      "object_detection_predictions": "$steps.model.predictions",
      "classification_predictions": "$steps.model_1.predictions"
    },
    {
      "type": "roboflow_core/bounding_box_visualization@v1",
      "name": "bounding_box_visualization",
      "predictions": "$steps.detections_classes_replacement.predictions",
      "image": "$inputs.image"
    },
    {
      "type": "roboflow_core/label_visualization@v1",
      "name": "label_visualization",
      "predictions": "$steps.detections_classes_replacement.predictions",
      "image": "$steps.bounding_box_visualization.image"
    }
  ],
  "outputs": [
    {
      "type": "JsonField",
      "name": "detections",
      "coordinates_system": "own",
      "selector": "$steps.detections_classes_replacement.predictions"
    },
    {
      "type": "JsonField",
      "name": "visualisation",
      "coordinates_system": "own",
      "selector": "$steps.label_visualization.image"
    }
  ]
}

Next Steps

Now that you have created and run your first workflow, you can explore our other supported blocks and create a more complex workflow.

Refer to our Supported Blocks documentation to learn more about what blocks are supported. We also recommend reading the Understanding workflows page.