Example Workflows - Basic Workflows

Below you can find example workflows you can use as inspiration to build your apps.

Workflow with bounding rect

This is a basic workflow that contains only a single instance segmentation model followed by bounding rectangle extraction.

Workflow definition
{
    "version": "1.0",
    "inputs": [
        {
            "type": "WorkflowImage",
            "name": "image"
        }
    ],
    "steps": [
        {
            "type": "InstanceSegmentationModel",
            "name": "detection",
            "image": "$inputs.image",
            "model_id": "yolov8n-seg-640"
        },
        {
            "type": "roboflow_core/bounding_rect@v1",
            "name": "bounding_rect",
            "predictions": "$steps.detection.predictions"
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "result",
            "selector": "$steps.bounding_rect.detections_with_rect"
        }
    ]
}
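
To try this definition end to end, you can send it to an inference server with the inference_sdk Python client. The snippet below is a minimal sketch rather than the only way to run workflows: it assumes a local inference server at http://localhost:9001, the definition above saved as bounding_rect_workflow.json, an example image dog.jpg on disk, and a call to run_workflow with an inline specification.

import json

from inference_sdk import InferenceHTTPClient

# Assumed setup: local inference server and a Roboflow API key.
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)

# The workflow definition shown above, saved to a file.
with open("bounding_rect_workflow.json") as f:
    specification = json.load(f)

# One result dictionary is returned per input image; keys match the
# JsonField outputs declared in the definition ("result" here).
results = client.run_workflow(
    specification=specification,
    images={"image": "dog.jpg"},
)
print(results[0]["result"])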

Workflow with CLIP model

This is a basic workflow that contains only a single CLIP comparison block.

Please take a look at how the batch-oriented WorkflowImage data is plugged into the comparison step via an input selector ($inputs.image), and how the non-batch parameter (the reference set of texts that each image in the batch will be compared to) is specified dynamically via the $inputs.reference selector.

Workflow definition
{
    "version": "1.0",
    "inputs": [
        {
            "type": "WorkflowImage",
            "name": "image"
        },
        {
            "type": "WorkflowParameter",
            "name": "reference"
        }
    ],
    "steps": [
        {
            "type": "ClipComparison",
            "name": "comparison",
            "images": "$inputs.image",
            "texts": "$inputs.reference"
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "similarity",
            "selector": "$steps.comparison.similarity"
        }
    ]
}
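
When running this workflow, the reference texts are supplied at request time through the parameters argument, so the same definition can be reused with different label sets. A minimal sketch under the same assumptions as above (local inference server, definition saved as clip_workflow.json, example image on disk):

import json

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)

with open("clip_workflow.json") as f:
    specification = json.load(f)

# "image" matches the WorkflowImage input, "reference" the WorkflowParameter input.
results = client.run_workflow(
    specification=specification,
    images={"image": "dog.jpg"},
    parameters={"reference": ["dog", "cat", "car"]},
)
print(results[0]["similarity"])  # similarity of the image to each reference text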

Workflow with static crop and object detection model

This is a basic workflow that contains a single transformation (static crop) followed by an object detection model. It may serve as inspiration for anyone who would like to run a specific model on only a specific part of the image. The Region of Interest does not necessarily have to be defined statically: the coordinates of the static crop are referenced via input selectors, so each time you run the workflow (for instance at a different physical location, where the RoI for the static crop is location-dependent) you may provide different RoI coordinates.

Workflow definition
{
    "version": "1.0",
    "inputs": [
        {
            "type": "WorkflowImage",
            "name": "image"
        },
        {
            "type": "WorkflowParameter",
            "name": "model_id",
            "default_value": "yolov8n-640"
        },
        {
            "type": "WorkflowParameter",
            "name": "confidence",
            "default_value": 0.7
        },
        {
            "type": "WorkflowParameter",
            "name": "x_center"
        },
        {
            "type": "WorkflowParameter",
            "name": "y_center"
        },
        {
            "type": "WorkflowParameter",
            "name": "width"
        },
        {
            "type": "WorkflowParameter",
            "name": "height"
        }
    ],
    "steps": [
        {
            "type": "AbsoluteStaticCrop",
            "name": "crop",
            "image": "$inputs.image",
            "x_center": "$inputs.x_center",
            "y_center": "$inputs.y_center",
            "width": "$inputs.width",
            "height": "$inputs.height"
        },
        {
            "type": "RoboflowObjectDetectionModel",
            "name": "detection",
            "image": "$steps.crop.crops",
            "model_id": "$inputs.model_id",
            "confidence": "$inputs.confidence"
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "crop",
            "selector": "$steps.crop.crops"
        },
        {
            "type": "JsonField",
            "name": "result",
            "selector": "$steps.detection.*"
        },
        {
            "type": "JsonField",
            "name": "result_in_own_coordinates",
            "selector": "$steps.detection.*",
            "coordinates_system": "own"
        }
    ]
}
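
Because the crop coordinates are wired to input selectors, the RoI can change on every call without editing the definition. A minimal sketch under the same assumptions as above (local inference server, definition saved as static_crop_workflow.json):

import json

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)

with open("static_crop_workflow.json") as f:
    specification = json.load(f)

# RoI for this particular camera / location; another deployment can pass different values.
roi = {"x_center": 320, "y_center": 240, "width": 200, "height": 200}

results = client.run_workflow(
    specification=specification,
    images={"image": "assembly_line.jpg"},
    parameters=roi,  # model_id and confidence fall back to their default_value entries
)
print(results[0]["result"])                     # detections in the default coordinate system
print(results[0]["result_in_own_coordinates"])  # detections in the crop's own coordinates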

Workflow with single object detection model

This is a basic workflow that contains only a single object detection model.

Please take a look at how the batch-oriented WorkflowImage data is plugged into the detection step via an input selector ($inputs.image), and how non-batch parameters are specified dynamically via the $inputs.model_id and $inputs.confidence selectors.

Workflow definition
{
    "version": "1.0",
    "inputs": [
        {
            "type": "WorkflowImage",
            "name": "image"
        },
        {
            "type": "WorkflowParameter",
            "name": "model_id",
            "default_value": "yolov8n-640"
        },
        {
            "type": "WorkflowParameter",
            "name": "confidence",
            "default_value": 0.3
        }
    ],
    "steps": [
        {
            "type": "RoboflowObjectDetectionModel",
            "name": "detection",
            "image": "$inputs.image",
            "model_id": "$inputs.model_id",
            "confidence": "$inputs.confidence"
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "result",
            "selector": "$steps.detection.*"
        }
    ]
}
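
Both WorkflowParameter inputs carry default values, so they only need to be passed when you want to override them for a particular call. A minimal sketch under the same assumptions as above (local inference server, definition saved as detection_workflow.json):

import json

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)

with open("detection_workflow.json") as f:
    specification = json.load(f)

# Run with the defaults declared in the definition (yolov8n-640, confidence 0.3) ...
results = client.run_workflow(
    specification=specification,
    images={"image": "dog.jpg"},
)

# ... or override the non-batch parameters at request time.
results = client.run_workflow(
    specification=specification,
    images={"image": "dog.jpg"},
    parameters={"model_id": "yolov8s-640", "confidence": 0.5},
)
print(results[0]["result"])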