Example Workflows - Basic Workflows

Below you can find example workflows you can use as inspiration to build your apps.

Workflow with bounding rect

This basic workflow contains only a single instance segmentation model followed by bounding rectangle extraction.

Workflow definition
{
    "version": "1.0",
    "inputs": [
        {
            "type": "WorkflowImage",
            "name": "image"
        }
    ],
    "steps": [
        {
            "type": "InstanceSegmentationModel",
            "name": "detection",
            "image": "$inputs.image",
            "model_id": "yolov8n-seg-640"
        },
        {
            "type": "roboflow_core/bounding_rect@v1",
            "name": "bounding_rect",
            "predictions": "$steps.detection.predictions"
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "result",
            "selector": "$steps.bounding_rect.detections_with_rect"
        }
    ]
}
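Conceptually, the bounding_rect block attaches the smallest rectangle enclosing each instance mask to the detections (the real block can produce a rotated rectangle). A rough plain-Python sketch of the simpler axis-aligned case, assuming a mask arrives as a list of (x, y) polygon points:

```python
def axis_aligned_rect(polygon):
    """Smallest axis-aligned rectangle enclosing a polygon of (x, y) points.

    Illustration only: the actual bounding_rect block operates on instance
    masks and may return a minimal *rotated* rectangle.
    """
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return {
        "x_min": x_min,
        "y_min": y_min,
        "x_max": x_max,
        "y_max": y_max,
        "width": x_max - x_min,
        "height": y_max - y_min,
    }
```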

Workflow with Embeddings

This workflow shows how to use an embedding model to compute the similarity between two images.

Workflow definition
{
    "version": "1.0",
    "inputs": [
        {
            "type": "InferenceImage",
            "name": "image_1"
        },
        {
            "type": "InferenceImage",
            "name": "image_2"
        }
    ],
    "steps": [
        {
            "type": "roboflow_core/clip@v1",
            "name": "embedding_1",
            "data": "$inputs.image_1",
            "version": "RN50"
        },
        {
            "type": "roboflow_core/clip@v1",
            "name": "embedding_2",
            "data": "$inputs.image_2",
            "version": "RN50"
        },
        {
            "type": "roboflow_core/cosine_similarity@v1",
            "name": "cosine_similarity",
            "embedding_1": "$steps.embedding_1.embedding",
            "embedding_2": "$steps.embedding_2.embedding"
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "similarity",
            "coordinates_system": "own",
            "selector": "$steps.cosine_similarity.similarity"
        },
        {
            "type": "JsonField",
            "name": "image_embeddings",
            "coordinates_system": "own",
            "selector": "$steps.embedding_1.embedding"
        }
    ]
}
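The cosine_similarity step compares the two CLIP embeddings produced upstream. The measure itself is just the dot product of the two vectors normalized by their magnitudes; a minimal reference implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Identical embeddings score 1.0, orthogonal embeddings score 0.0, so higher values mean more similar images.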

Workflow with CLIP Comparison

This is the basic workflow that only contains a single CLIP Comparison block.

Note how the batch-oriented WorkflowImage data is plugged into the comparison step via the input selector ($inputs.image), and how the non-batch parameter (the reference set of texts that each image in the batch will be compared against) is specified dynamically via the $inputs.reference selector.

Workflow definition
{
    "version": "1.0",
    "inputs": [
        {
            "type": "WorkflowImage",
            "name": "image"
        },
        {
            "type": "WorkflowParameter",
            "name": "reference"
        }
    ],
    "steps": [
        {
            "type": "ClipComparison",
            "name": "comparison",
            "images": "$inputs.image",
            "texts": "$inputs.reference"
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "similarity",
            "selector": "$steps.comparison.similarity"
        }
    ]
}
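To make the selector mechanics concrete, here is a hypothetical resolver showing how $inputs.* references in a step definition could be substituted with runtime values. The function name and shapes are illustrative, not the actual Workflows engine internals: the batch-oriented image input and the non-batch texts parameter both resolve through the same substitution.

```python
def resolve_selectors(step, runtime_inputs):
    """Replace '$inputs.<name>' selector strings in a step definition
    with the corresponding runtime values (illustrative sketch)."""
    resolved = {}
    for key, value in step.items():
        if isinstance(value, str) and value.startswith("$inputs."):
            name = value[len("$inputs."):]
            resolved[key] = runtime_inputs[name]
        else:
            resolved[key] = value
    return resolved

step = {
    "type": "ClipComparison",
    "name": "comparison",
    "images": "$inputs.image",
    "texts": "$inputs.reference",
}
resolved = resolve_selectors(
    step,
    {"image": ["frame_1", "frame_2"], "reference": ["cat", "dog"]},
)
```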

Workflow with static crop and object detection model

This basic workflow contains a single transformation (a static crop) followed by an object detection model. It may serve as inspiration for anyone who would like to run a specific model only on a specific part of the image. The region of interest does not have to be defined statically: the coordinates of the static crop are referenced via input selectors, so each time you run the workflow (for instance in a different physical location, where the RoI is location-dependent) you may provide different RoI coordinates.

Workflow definition
{
    "version": "1.0",
    "inputs": [
        {
            "type": "WorkflowImage",
            "name": "image"
        },
        {
            "type": "WorkflowParameter",
            "name": "model_id",
            "default_value": "yolov8n-640"
        },
        {
            "type": "WorkflowParameter",
            "name": "confidence",
            "default_value": 0.7
        },
        {
            "type": "WorkflowParameter",
            "name": "x_center"
        },
        {
            "type": "WorkflowParameter",
            "name": "y_center"
        },
        {
            "type": "WorkflowParameter",
            "name": "width"
        },
        {
            "type": "WorkflowParameter",
            "name": "height"
        }
    ],
    "steps": [
        {
            "type": "AbsoluteStaticCrop",
            "name": "crop",
            "image": "$inputs.image",
            "x_center": "$inputs.x_center",
            "y_center": "$inputs.y_center",
            "width": "$inputs.width",
            "height": "$inputs.height"
        },
        {
            "type": "RoboflowObjectDetectionModel",
            "name": "detection",
            "image": "$steps.crop.crops",
            "model_id": "$inputs.model_id",
            "confidence": "$inputs.confidence"
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "crop",
            "selector": "$steps.crop.crops"
        },
        {
            "type": "JsonField",
            "name": "result",
            "selector": "$steps.detection.*"
        },
        {
            "type": "JsonField",
            "name": "result_in_own_coordinates",
            "selector": "$steps.detection.*",
            "coordinates_system": "own"
        }
    ]
}
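AbsoluteStaticCrop takes the RoI as absolute pixel coordinates of the crop center plus a width and height. A small sketch of how those four parameters translate into a crop region:

```python
def crop_bounds(x_center, y_center, width, height):
    """Convert center-based RoI parameters into (left, top, right, bottom)
    pixel bounds of the crop region."""
    left = round(x_center - width / 2)
    top = round(y_center - height / 2)
    return left, top, left + width, top + height

# Cropping an array-backed image would then be image[top:bottom, left:right].
```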

Workflow writing data to OPC server

In this example, data is written to an OPC UA server.

To write to OPC, this block uses the asyncua package.

Writing to OPC lets workflows expose insights extracted from the camera feed to PLC controllers, allowing factory automation engineers to take advantage of machine vision when building PLC logic.

Workflow definition
{
    "version": "1.0",
    "inputs": [
        {
            "type": "InferenceParameter",
            "name": "opc_url"
        },
        {
            "type": "InferenceParameter",
            "name": "opc_namespace"
        },
        {
            "type": "InferenceParameter",
            "name": "opc_user_name"
        },
        {
            "type": "InferenceParameter",
            "name": "opc_password"
        },
        {
            "type": "InferenceParameter",
            "name": "opc_object_name"
        },
        {
            "type": "InferenceParameter",
            "name": "opc_variable_name"
        },
        {
            "type": "InferenceParameter",
            "name": "opc_value"
        },
        {
            "type": "InferenceParameter",
            "name": "opc_value_type"
        }
    ],
    "steps": [
        {
            "type": "roboflow_enterprise/opc_writer_sink@v1",
            "name": "opc_writer",
            "url": "$inputs.opc_url",
            "namespace": "$inputs.opc_namespace",
            "user_name": "$inputs.opc_user_name",
            "password": "$inputs.opc_password",
            "object_name": "$inputs.opc_object_name",
            "variable_name": "$inputs.opc_variable_name",
            "value": "$inputs.opc_value",
            "value_type": "$inputs.opc_value_type",
            "fire_and_forget": false
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "opc_writer_results",
            "selector": "$steps.opc_writer.*"
        }
    ]
}
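A hedged sketch of what such a sink does under the hood with asyncua: connect, resolve the target variable by browse path, and write the value. In asyncua, nodes are addressed by "namespace_index:BrowseName" strings; the helper below builds such a path, and the async function shows the write itself (it needs a reachable OPC UA server and the third-party asyncua package, so it is a sketch, not the block's actual implementation).

```python
def browse_path(ns_idx, object_name, variable_name):
    """Build an asyncua-style browse path from the Objects folder to a variable."""
    return ["0:Objects", f"{ns_idx}:{object_name}", f"{ns_idx}:{variable_name}"]

async def write_opc_value(url, namespace, object_name, variable_name, value):
    """Connect to an OPC UA server and write `value` to the named variable."""
    # Requires the third-party `asyncua` package and a reachable server.
    from asyncua import Client

    async with Client(url=url) as client:
        ns_idx = await client.get_namespace_index(namespace)
        node = await client.nodes.root.get_child(
            browse_path(ns_idx, object_name, variable_name)
        )
        await node.write_value(value)
```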

Workflow with single object detection model

This is the basic workflow that only contains a single object detection model.

Note how the batch-oriented WorkflowImage data is plugged into the detection step via the input selector ($inputs.image), and how non-batch parameters are specified dynamically via the $inputs.model_id and $inputs.confidence selectors.

Workflow definition
{
    "version": "1.0",
    "inputs": [
        {
            "type": "WorkflowImage",
            "name": "image"
        },
        {
            "type": "WorkflowParameter",
            "name": "model_id",
            "default_value": "yolov8n-640"
        },
        {
            "type": "WorkflowParameter",
            "name": "confidence",
            "default_value": 0.3
        }
    ],
    "steps": [
        {
            "type": "RoboflowObjectDetectionModel",
            "name": "detection",
            "image": "$inputs.image",
            "model_id": "$inputs.model_id",
            "confidence": "$inputs.confidence"
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "result",
            "selector": "$steps.detection.*"
        }
    ]
}
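WorkflowParameter inputs that declare a default_value fall back to that default when the caller supplies no runtime value, as with model_id and confidence above. A hypothetical sketch of that fallback (illustrative, not the actual engine code):

```python
def resolve_runtime_parameters(inputs_spec, runtime_values):
    """Merge caller-supplied values with default_value entries
    declared on WorkflowParameter inputs (illustrative sketch)."""
    resolved = {}
    for entry in inputs_spec:
        if entry["type"] != "WorkflowParameter":
            continue
        name = entry["name"]
        if name in runtime_values:
            resolved[name] = runtime_values[name]
        else:
            resolved[name] = entry.get("default_value")
    return resolved

inputs_spec = [
    {"type": "WorkflowImage", "name": "image"},
    {"type": "WorkflowParameter", "name": "model_id", "default_value": "yolov8n-640"},
    {"type": "WorkflowParameter", "name": "confidence", "default_value": 0.3},
]
# Only confidence is overridden; model_id keeps its declared default.
params = resolve_runtime_parameters(inputs_spec, {"confidence": 0.5})
```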