Perspective Correction

Version v1
The PerspectiveCorrectionBlock is a transformer block that corrects the coordinates of detections using a transformation defined by two polygons. It is best suited when the produced coordinates should be treated as if the camera were placed directly above the scene, introducing no perspective distortion.
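Conceptually, this kind of correction amounts to estimating a homography from the perspective polygon to the target rectangle and pushing detection coordinates through it. The following is a minimal NumPy sketch of that idea, not the block's actual implementation; all names and coordinates are illustrative:

```python
import numpy as np

def homography_from_quad(src, dst):
    # Solve the 8-unknown linear system for the 3x3 homography H that
    # maps the 4 src vertices onto the 4 dst vertices (the same math
    # cv2.getPerspectiveTransform performs).
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def correct_points(H, points):
    # Apply H to (x, y) points, dividing out the projective scale.
    pts = np.asarray(points, dtype=float)
    projected = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return projected[:, :2] / projected[:, 2:3]

# Perspective polygon drawn on the input image (top-left, top-right,
# bottom-right, bottom-left) and the target rectangle it maps to.
polygon = [(200, 100), (800, 100), (1000, 600), (0, 600)]
rect = [(0, 0), (1000, 0), (1000, 1000), (0, 1000)]
H = homography_from_quad(polygon, rect)
print(correct_points(H, [(500, 350)]))  # a detection anchor in bird's-eye coordinates
```

Vertex order matters: the i-th polygon vertex is mapped onto the i-th rectangle corner.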
Type identifier

Use the identifier roboflow_core/perspective_correction@v1 in a step's "type" field to add the block as a step in your workflow.
Properties

Name | Type | Description | Refs |
---|---|---|---|
name | str | The unique name of this step. | ❌ |
perspective_polygons | List[Any] | Perspective polygons (for each batch, at least one must consist of 4 vertices). | ✅ |
transformed_rect_width | int | Width of the transformed rectangle. | ✅ |
transformed_rect_height | int | Height of the transformed rectangle. | ✅ |
extend_perspective_polygon_by_detections_anchor | str | If set, perspective polygons will be extended to contain all bounding boxes. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS. | ✅ |
warp_image | bool | If set to True, the image will be warped into the transformed rectangle. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections

Check which blocks you can connect to Perspective Correction in version v1.
- inputs: Color Visualization, Detections Filter, Detections Consensus, Image Contours, Path deviation, Image Preprocessing, Mask Visualization, Dot Visualization, Corner Visualization, Model Comparison Visualization, Byte Tracker, Image Slicer, Detection Offset, Image Blur, Clip Comparison, Label Visualization, Time in zone, Detections Classes Replacement, Relative Static Crop, Polygon Visualization, Camera Focus, Florence-2 Model, Dynamic Crop, YOLO-World Model, Halo Visualization, Segment Anything 2 Model, Crop Visualization, Dimension Collapse, Image Threshold, SIFT, Google Vision OCR, Clip Comparison, Path deviation, VLM as Detector, Circle Visualization, SIFT Comparison, Byte Tracker, Google Gemini, Size Measurement, Anthropic Claude, Object Detection Model, Stability AI Inpainting, Image Convert Grayscale, Detections Transformation, Perspective Correction, Line Counter Visualization, Absolute Static Crop, Detections Stitch, Background Color Visualization, OpenAI, Template Matching, Bounding Box Visualization, Polygon Zone Visualization, Ellipse Visualization, Pixelate Visualization, Dynamic Zone, Time in zone, Triangle Visualization, Instance Segmentation Model, Bounding Rectangle, Stitch Images, Blur Visualization
- outputs: Color Visualization, Detections Filter, Detections Consensus, Image Preprocessing, Model Comparison Visualization, Mask Visualization, Byte Tracker, Image Slicer, Keypoint Detection Model, OCR Model, Label Visualization, Time in zone, Detections Classes Replacement, Florence-2 Model, Polygon Visualization, Relative Static Crop, Camera Focus, LMM For Classification, Crop Visualization, Multi-Label Classification Model, Line Counter, Line Counter, Google Vision OCR, Clip Comparison, Roboflow Dataset Upload, Barcode Detection, Path deviation, Circle Visualization, SIFT Comparison, CogVLM, Anthropic Claude, Object Detection Model, Pixel Color Count, Image Convert Grayscale, OpenAI, Perspective Correction, Background Color Visualization, Line Counter Visualization, Detections Stitch, Bounding Box Visualization, Polygon Zone Visualization, Pixelate Visualization, Dynamic Zone, Triangle Visualization, Bounding Rectangle, VLM as Classifier, Blur Visualization, Image Contours, Path deviation, Dot Visualization, Corner Visualization, Detection Offset, Image Blur, Clip Comparison, Dynamic Crop, YOLO-World Model, Halo Visualization, Segment Anything 2 Model, Image Threshold, QR Code Detection, SIFT, LMM, OpenAI, VLM as Detector, Google Gemini, Byte Tracker, Size Measurement, Stability AI Inpainting, Detections Transformation, Absolute Static Crop, Distance Measurement, Property Definition, Dominant Color, Template Matching, Ellipse Visualization, Time in zone, Roboflow Dataset Upload, Instance Segmentation Model, Roboflow Custom Metadata, Single-Label Classification Model, Stitch Images
The available connections depend on the block's binding kinds. Check which binding kinds Perspective Correction in version v1 has.
Bindings

- input
    - predictions (Union[object_detection_prediction, instance_segmentation_prediction]): Predictions.
    - images (image): The input image for this step.
    - perspective_polygons (list_of_values): Perspective polygons (for each batch, at least one must consist of 4 vertices).
    - transformed_rect_width (integer): Width of the transformed rectangle.
    - transformed_rect_height (integer): Height of the transformed rectangle.
    - extend_perspective_polygon_by_detections_anchor (string): If set, perspective polygons will be extended to contain all bounding boxes. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS.
    - warp_image (boolean): If set to True, the image will be warped into the transformed rectangle.
- output
    - corrected_coordinates (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes as an sv.Detections(...) object if object_detection_prediction, or with detected bounding boxes and segmentation masks as an sv.Detections(...) object if instance_segmentation_prediction.
    - warped_image (image): Image in workflows.
Example JSON definition of step Perspective Correction in version v1:
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/perspective_correction@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "images": "$inputs.image",
    "perspective_polygons": "$steps.perspective_wrap.zones",
    "transformed_rect_width": 1000,
    "transformed_rect_height": 1000,
    "extend_perspective_polygon_by_detections_anchor": "CENTER",
    "warp_image": false
}
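The step's outputs are then referenced elsewhere in the workflow as $steps.<step_name>.corrected_coordinates and $steps.<step_name>.warped_image. A small sketch assembling the step into a larger specification, with output selectors wired up (the surrounding step names "object_detection_model" and "perspective_wrap" are illustrative placeholders, as in the example above):

```python
import json

# The step from the JSON example, as a Python dict.
step = {
    "name": "perspective_correction",
    "type": "roboflow_core/perspective_correction@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "images": "$inputs.image",
    "perspective_polygons": "$steps.perspective_wrap.zones",
    "transformed_rect_width": 1000,
    "transformed_rect_height": 1000,
    "extend_perspective_polygon_by_detections_anchor": "CENTER",
    "warp_image": True,
}

# Workflow outputs referencing this step by its "name" field.
outputs = [
    {
        "type": "JsonField",
        "name": "corrected",
        "selector": "$steps.perspective_correction.corrected_coordinates",
    },
    {
        "type": "JsonField",
        "name": "warped",
        "selector": "$steps.perspective_correction.warped_image",
    },
]

print(json.dumps({"steps": [step], "outputs": outputs}, indent=2))
```

Note that each selector embeds the step's name, so renaming the step means updating every selector that points at it.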