Perspective Correction

Class: PerspectiveCorrectionBlockV1

The PerspectiveCorrectionBlock is a transformer block designed to correct the coordinates of detections based on a transformation defined by two polygons. It is best suited when the produced coordinates should be treated as if the camera had been placed directly above the scene and introduced no distortion.
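Conceptually, the correction is a classic four-point perspective (homography) warp. The sketch below is not the block's internal implementation; it uses OpenCV directly, with made-up polygon coordinates and file names, to illustrate the idea: map a 4-vertex source polygon onto a target rectangle of transformed_rect_width × transformed_rect_height, re-project detection coordinates with the same matrix, and optionally warp the image itself (the effect of warp_image).

```python
# Illustrative sketch only (not the block's internal code): a 4-point perspective warp
# with OpenCV, mirroring what Perspective Correction does conceptually.
import cv2
import numpy as np

# 4-vertex source polygon (e.g. the corners of a court seen at an angle) - example values
src_polygon = np.array([[320, 180], [960, 200], [1100, 660], [180, 640]], dtype=np.float32)

# Target rectangle: transformed_rect_width x transformed_rect_height
width, height = 1000, 1000
dst_rect = np.array(
    [[0, 0], [width - 1, 0], [width - 1, height - 1], [0, height - 1]], dtype=np.float32
)

# Homography mapping the source polygon onto the target rectangle
matrix = cv2.getPerspectiveTransform(src_polygon, dst_rect)

# Re-project detection anchor points (e.g. bounding-box corners) into the corrected space
points = np.array([[[400, 300]], [[800, 500]]], dtype=np.float32)  # shape (N, 1, 2)
corrected_points = cv2.perspectiveTransform(points, matrix)

# Equivalent of warp_image=True: warp the whole image into the transformed rectangle
image = cv2.imread("scene.jpg")  # placeholder file name
warped = cv2.warpPerspective(image, matrix, (width, height))
```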
Type identifier

Use the following identifier in the step "type" field: roboflow_core/perspective_correction@v1 to add the block as a step in your workflow.
Properties

Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
perspective_polygons | List[Any] | Perspective polygons (for each batch, at least one must consist of 4 vertices). | ✅ |
transformed_rect_width | int | Width of the transformed rectangle. | ✅ |
transformed_rect_height | int | Height of the transformed rectangle. | ✅ |
extend_perspective_polygon_by_detections_anchor | str | If set, perspective polygons will be extended to contain all bounding boxes. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS. | ✅ |
warp_image | bool | If set to True, the image will be warped into the transformed rectangle. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
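As an illustration of such parametrisation, the hypothetical fragment below (expressed as a Python dict, with placeholder input names) binds transformed_rect_width to a workflow input instead of a literal value. The WorkflowImage/WorkflowParameter input types and the default_value field are assumptions about the workflow definition schema, not taken from this page.

```python
# Hypothetical fragment (as a Python dict): a ✅-marked property bound to a workflow
# input via an "$inputs..." selector instead of a hard-coded literal. Input names and
# the WorkflowImage/WorkflowParameter types are assumptions.
workflow_fragment = {
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "rect_width", "default_value": 1000},
    ],
    "steps": [
        {
            "type": "roboflow_core/perspective_correction@v1",
            "name": "perspective_correction",
            "predictions": "$steps.object_detection_model.predictions",
            "images": "$inputs.image",
            "perspective_polygons": "$steps.perspective_wrap.zones",
            "transformed_rect_width": "$inputs.rect_width",  # dynamic value resolved at runtime
            "transformed_rect_height": 1000,                 # literal value
        },
    ],
}
```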
Available Connections

Compatible Blocks

Check what blocks you can connect to Perspective Correction in version v1.
- inputs: Path Deviation, Stitch Images, Pixelate Visualization, Multi-Label Classification Model, LMM For Classification, Line Counter, Instance Segmentation Model, Blur Visualization, Single-Label Classification Model, Mask Visualization, Object Detection Model, OCR Model, SIFT, Line Counter, Detections Filter, YOLO-World Model, Model Monitoring Inference Aggregator, Polygon Visualization, Halo Visualization, VLM as Detector, Grid Visualization, Google Vision OCR, Model Comparison Visualization, Email Notification, Camera Focus, CogVLM, Byte Tracker, Image Threshold, Keypoint Visualization, Detections Classes Replacement, Template Matching, Image Preprocessing, Detection Offset, Roboflow Dataset Upload, Slack Notification, Stitch OCR Detections, Identify Changes, Relative Static Crop, Background Color Visualization, Clip Comparison, Bounding Box Visualization, Ellipse Visualization, Image Contours, Label Visualization, Classification Label Visualization, Byte Tracker, Line Counter Visualization, LMM, Stability AI Inpainting, Reference Path Visualization, VLM as Detector, Dynamic Crop, Byte Tracker, Triangle Visualization, Bounding Rectangle, Absolute Static Crop, Object Detection Model, Distance Measurement, Time in Zone, Detections Stitch, Florence-2 Model, SIFT Comparison, Keypoint Detection Model, Corner Visualization, Perspective Correction, Local File Sink, Polygon Zone Visualization, VLM as Classifier, Dimension Collapse, Image Slicer, Trace Visualization, Detections Consensus, Size Measurement, OpenAI, Webhook Sink, Twilio SMS Notification, Roboflow Custom Metadata, Instance Segmentation Model, Crop Visualization, Buffer, Roboflow Dataset Upload, Clip Comparison, VLM as Classifier, Anthropic Claude, Dynamic Zone, Image Blur, Circle Visualization, Image Convert Grayscale, Dot Visualization, Google Gemini, SIFT Comparison, Segment Anything 2 Model, JSON Parser, Identify Outliers, Time in Zone, Florence-2 Model, Detections Stabilizer, Path Deviation, OpenAI, Color Visualization, Pixel Color Count, CSV Formatter, Llama 3.2 Vision, Detections Transformation
- outputs: Pixelate Visualization, Gaze Detection, CLIP Embedding Model, Blur Visualization, OCR Model, Mask Visualization, Object Detection Model, SIFT, Line Counter, YOLO-World Model, Halo Visualization, Google Vision OCR, Camera Focus, Byte Tracker, Image Threshold, Template Matching, Image Preprocessing, Roboflow Dataset Upload, Relative Static Crop, Background Color Visualization, Bounding Box Visualization, Image Contours, Triangle Visualization, Bounding Rectangle, Absolute Static Crop, Distance Measurement, Time in Zone, Florence-2 Model, Detections Stitch, SIFT Comparison, Keypoint Detection Model, Roboflow Custom Metadata, Crop Visualization, Clip Comparison, Dynamic Zone, Image Convert Grayscale, Single-Label Classification Model, Time in Zone, Florence-2 Model, Path Deviation, OpenAI, Color Visualization, Pixel Color Count, Multi-Label Classification Model, Path Deviation, Multi-Label Classification Model, Stitch Images, LMM For Classification, Keypoint Detection Model, Line Counter, Instance Segmentation Model, Single-Label Classification Model, Detections Filter, Model Monitoring Inference Aggregator, Polygon Visualization, VLM as Detector, Model Comparison Visualization, CogVLM, Keypoint Visualization, Detections Classes Replacement, Detection Offset, Stitch OCR Detections, Clip Comparison, Ellipse Visualization, Label Visualization, Line Counter Visualization, Classification Label Visualization, Byte Tracker, LMM, Stability AI Inpainting, Reference Path Visualization, VLM as Detector, Dynamic Crop, Dominant Color, Byte Tracker, Object Detection Model, Barcode Detection, Corner Visualization, Perspective Correction, Polygon Zone Visualization, VLM as Classifier, Trace Visualization, Size Measurement, Detections Consensus, Image Slicer, OpenAI, Instance Segmentation Model, Buffer, Roboflow Dataset Upload, VLM as Classifier, Anthropic Claude, Image Blur, Dot Visualization, Circle Visualization, Google Gemini, QR Code Detection, Segment Anything 2 Model, Detections Stabilizer, Llama 3.2 Vision, Detections Transformation
Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Perspective Correction in version v1 has.
Bindings

- input
  - predictions (Union[instance_segmentation_prediction, object_detection_prediction]): Predictions.
  - images (image): The input image for this step.
  - perspective_polygons (list_of_values): Perspective polygons (for each batch, at least one must consist of 4 vertices).
  - transformed_rect_width (integer): Width of the transformed rectangle.
  - transformed_rect_height (integer): Height of the transformed rectangle.
  - extend_perspective_polygon_by_detections_anchor (string): If set, perspective polygons will be extended to contain all bounding boxes. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS.
  - warp_image (boolean): If set to True, the image will be warped into the transformed rectangle.
- output
  - corrected_coordinates (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes as an sv.Detections(...) object (for object_detection_prediction), or with detected bounding boxes and segmentation masks as an sv.Detections(...) object (for instance_segmentation_prediction).
  - warped_image (image): Image in workflows.
Example JSON definition of step Perspective Correction in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/perspective_correction@v1",
"predictions": "$steps.object_detection_model.predictions",
"images": "$inputs.image",
"perspective_polygons": "$steps.perspective_wrap.zones",
"transformed_rect_width": 1000,
"transformed_rect_height": 1000,
"extend_perspective_polygon_by_detections_anchor": "CENTER",
"warp_image": false
}
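The step above references predictions and polygons produced by other steps, which a complete workflow must also define. The sketch below shows one way such a definition might be submitted with the inference_sdk HTTP client. It is a sketch under several assumptions not stated on this page: the object detection block identifier and model_id, the WorkflowImage/WorkflowParameter input types, binding perspective_polygons to a workflow input, the polygon payload nesting, and the run_workflow(specification=..., images=..., parameters=...) signature; adjust to your environment.

```python
# Sketch only: submitting a workflow that contains the Perspective Correction step via
# the inference_sdk HTTP client. The object detection block identifier, model_id, input
# types, polygon payload shape and server URL are assumptions / placeholders.
from inference_sdk import InferenceHTTPClient

specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "perspective_polygons"},
    ],
    "steps": [
        {
            # Assumed identifier of an object detection block producing the predictions
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "object_detection_model",
            "images": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            "type": "roboflow_core/perspective_correction@v1",
            "name": "perspective_correction",
            "predictions": "$steps.object_detection_model.predictions",
            "images": "$inputs.image",
            "perspective_polygons": "$inputs.perspective_polygons",
            "transformed_rect_width": 1000,
            "transformed_rect_height": 1000,
            "warp_image": True,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "corrected_coordinates",
            "selector": "$steps.perspective_correction.corrected_coordinates",
        },
        {
            "type": "JsonField",
            "name": "warped_image",
            "selector": "$steps.perspective_correction.warped_image",
        },
    ],
}

client = InferenceHTTPClient(api_url="http://localhost:9001", api_key="<YOUR_API_KEY>")
result = client.run_workflow(
    specification=specification,
    images={"image": "scene.jpg"},  # path, URL or in-memory image
    # One 4-vertex polygon per image in the batch (nesting assumed, not documented here)
    parameters={"perspective_polygons": [[[320, 180], [960, 200], [1100, 660], [180, 640]]]},
)
```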