Perspective Correction

Version v1

The PerspectiveCorrectionBlock is a transformer block designed to correct the coordinates of detections based on a transformation defined by two polygons. It is best suited when the produced coordinates should be treated as if the camera had been placed directly above the scene and introduced no distortion.
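Conceptually, this kind of correction is a planar homography: one polygon describes the region as the camera sees it, the other describes the rectangle it should map onto, and detection coordinates are re-projected with the resulting matrix. The sketch below illustrates that idea using OpenCV and made-up coordinates; it is a conceptual illustration, not the block's internal implementation.

import cv2
import numpy as np

# Four corners of the ground-plane region as observed by the camera
# (clockwise, starting top-left). Values are made up for illustration.
source_polygon = np.array(
    [[220, 410], [1050, 430], [980, 700], [160, 660]], dtype=np.float32
)

# The axis-aligned rectangle the region should be mapped onto ("top-down" view).
rect_width, rect_height = 1000, 1000
target_rectangle = np.array(
    [[0, 0], [rect_width, 0], [rect_width, rect_height], [0, rect_height]],
    dtype=np.float32,
)

# Homography mapping the observed quadrilateral onto the rectangle.
matrix = cv2.getPerspectiveTransform(source_polygon, target_rectangle)

# Re-project a detection's corner points into the corrected coordinate space.
box_corners = np.array(
    [[[300, 500], [380, 500], [380, 620], [300, 620]]], dtype=np.float32
)
corrected_corners = cv2.perspectiveTransform(box_corners, matrix)
print(corrected_corners)

# Warping the whole frame (analogous to warp_image=True) would use:
# warped = cv2.warpPerspective(image, matrix, (rect_width, rect_height))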

Type identifier

Use the following identifier in the step "type" field: roboflow_core/perspective_correction@v1 to add the block as a step in your workflow.

Properties

| Name | Type | Description | Refs |
| --- | --- | --- | --- |
| name | str | The unique name of this step. | |
| perspective_polygons | List[Any] | Perspective polygons (for each batch, at least one must consist of 4 vertices). | ✅ |
| transformed_rect_width | int | Width of the transformed rectangle. | ✅ |
| transformed_rect_height | int | Height of the transformed rectangle. | ✅ |
| extend_perspective_polygon_by_detections_anchor | str | If set, perspective polygons will be extended to contain all bounding boxes. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS. | ✅ |
| warp_image | bool | If set to True, the image will be warped into the transformed rectangle. | ✅ |

The Refs column marks properties that can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
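In practice, a parametrisable property may hold either a literal value or a selector that is resolved at runtime. The fragments below (hypothetical, partial step definitions expressed as Python dicts) contrast the two forms; the selector targets mirror the example JSON at the bottom of this page.

# Hypothetical step fragments: literal values vs. runtime selectors.
step_with_literal_values = {
    "type": "roboflow_core/perspective_correction@v1",
    "name": "perspective_correction",
    "transformed_rect_width": 1000,   # fixed at design time
    "transformed_rect_height": 1000,
}

step_with_selectors = {
    "type": "roboflow_core/perspective_correction@v1",
    "name": "perspective_correction",
    # Resolved at runtime from another step's output:
    "perspective_polygons": "$steps.perspective_wrap.zones",
    # Resolved at runtime from a workflow input:
    "images": "$inputs.image",
}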

Available Connections

The blocks that can be connected to Perspective Correction in version v1 depend on its binding kinds, listed in the Bindings section below.

Bindings
  • input

    • predictions (Union[object_detection_prediction, instance_segmentation_prediction]): Predictions.
    • images (image): The input image for this step.
    • perspective_polygons (list_of_values): Perspective polygons (for each batch, at least one must consist of 4 vertices).
    • transformed_rect_width (integer): Width of the transformed rectangle.
    • transformed_rect_height (integer): Height of the transformed rectangle.
    • extend_perspective_polygon_by_detections_anchor (string): If set, perspective polygons will be extended to contain all bounding boxes. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS.
    • warp_image (boolean): If set to True, the image will be warped into the transformed rectangle.
  • output

    • corrected_coordinates (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction returned as an sv.Detections(...) object: detected bounding boxes for object_detection_prediction, and bounding boxes plus segmentation masks for instance_segmentation_prediction. See the sketch after this list for reading the object.
    • warped_image (image): The input image warped into the transformed rectangle.
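Because the corrected predictions are exposed as supervision sv.Detections objects, they can be inspected with the regular supervision API once the workflow result is in hand. A minimal sketch follows (how you obtain the object depends on your runner):

import supervision as sv

def summarize_corrected_detections(corrected: sv.Detections) -> None:
    # Bounding boxes are expressed in the transformed (top-down) coordinate space.
    for index, xyxy in enumerate(corrected.xyxy):
        print(f"detection {index}: box={xyxy.tolist()}")
    # Instance segmentation predictions additionally carry masks.
    if corrected.mask is not None:
        print(f"{len(corrected.mask)} segmentation masks present")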
Example JSON definition of step Perspective Correction in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/perspective_correction@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "images": "$inputs.image",
    "perspective_polygons": "$steps.perspective_wrap.zones",
    "transformed_rect_width": 1000,
    "transformed_rect_height": 1000,
    "extend_perspective_polygon_by_detections_anchor": "CENTER",
    "warp_image": false
}
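For context, the step only runs as part of a complete workflow definition. The sketch below embeds it in a hypothetical minimal specification, expressed as a Python dict. The detection block type, model id, input types and names, the idea of passing polygons as a workflow parameter, and the commented-out client call are assumptions for illustration; only the roboflow_core/perspective_correction@v1 step mirrors the example JSON above.

# Hypothetical minimal workflow chaining an object detection model into this block.
workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "perspective_polygons"},
    ],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed model block
            "name": "object_detection_model",
            "images": "$inputs.image",
            "model_id": "yolov8n-640",  # placeholder model id
        },
        {
            "type": "roboflow_core/perspective_correction@v1",
            "name": "perspective_correction",
            "predictions": "$steps.object_detection_model.predictions",
            "images": "$inputs.image",
            "perspective_polygons": "$inputs.perspective_polygons",
            "transformed_rect_width": 1000,
            "transformed_rect_height": 1000,
            "extend_perspective_polygon_by_detections_anchor": "CENTER",
            "warp_image": True,
        },
    ],
    "outputs": [  # assumed output field format
        {"type": "JsonField", "name": "corrected", "selector": "$steps.perspective_correction.corrected_coordinates"},
        {"type": "JsonField", "name": "warped_image", "selector": "$steps.perspective_correction.warped_image"},
    ],
}

# Running the specification is runner-specific; one assumed option is the
# inference_sdk HTTP client (check the inference_sdk docs for the exact call):
# from inference_sdk import InferenceHTTPClient
# client = InferenceHTTPClient(api_url="https://detect.roboflow.com", api_key="<YOUR_API_KEY>")
# result = client.run_workflow(specification=workflow_specification, images={"image": "frame.jpg"})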