Perspective Correction¶
Class: PerspectiveCorrectionBlockV1
The PerspectiveCorrectionBlock is a transformer block that corrects the coordinates of detections based on a transformation defined by two polygons. This block is best suited when the produced coordinates should be treated as if the camera were placed directly above the scene and introduced no distortion.
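Conceptually, this kind of correction computes a homography that maps the perspective polygon onto a target rectangle and then maps detection coordinates through it. A minimal NumPy sketch of that math (the polygon coordinates and helper names below are illustrative, not the block's internals):

```python
import numpy as np

def four_point_homography(src, dst):
    """Solve for the 3x3 homography mapping 4 source points to 4 destination points
    via the direct linear transform, with h33 fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_points(H, points):
    """Apply homography H to an (N, 2) array of points (homogeneous divide included)."""
    pts = np.hstack([np.asarray(points, dtype=float), np.ones((len(points), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Polygon marking the ground plane in the camera view (clockwise from top-left)
polygon = [(100, 300), (500, 280), (620, 620), (60, 640)]
# Target rectangle: transformed_rect_width x transformed_rect_height
rect = [(0, 0), (1000, 0), (1000, 1000), (0, 1000)]
H = four_point_homography(polygon, rect)
corrected = warp_points(H, [(300, 450)])  # e.g. one detection anchor point
```

The same homography, inverted, is what a `warp_image`-style option would use to resample the input image into the transformed rectangle.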
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/perspective_correction@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
perspective_polygons | List[Any] | Perspective polygons (for each batch at least one must consist of 4 vertices). | ✅ |
transformed_rect_width | int | Width of the transformed rectangle. | ✅ |
transformed_rect_height | int | Height of the transformed rectangle. | ✅ |
extend_perspective_polygon_by_detections_anchor | str | If set, perspective polygons will be extended to contain all bounding boxes. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS. | ✅ |
warp_image | bool | If set to True, the image will be warped into the transformed rectangle. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
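For example, a property marked ✅ can be bound to a workflow input instead of a literal value (the input name here is illustrative):

```json
{
    "transformed_rect_width": "$inputs.rect_width"
}
```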
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Perspective Correction in version v1.
- inputs:
Segment Anything 2 Model
,Image Slicer
,Stability AI Inpainting
,Clip Comparison
,Perspective Correction
,Object Detection Model
,Roboflow Custom Metadata
,Object Detection Model
,SIFT Comparison
,Detection Offset
,Grid Visualization
,VLM as Detector
,Ellipse Visualization
,SIFT
,CogVLM
,Image Contours
,OpenAI
,Absolute Static Crop
,Camera Focus
,Trace Visualization
,Multi-Label Classification Model
,VLM as Detector
,Dot Visualization
,Google Vision OCR
,Clip Comparison
,Identify Outliers
,Polygon Zone Visualization
,Roboflow Dataset Upload
,Identify Changes
,VLM as Classifier
,Byte Tracker
,Classification Label Visualization
,Corner Visualization
,Llama 3.2 Vision
,Dynamic Crop
,Reference Path Visualization
,Line Counter
,Detections Stabilizer
,Label Visualization
,Mask Visualization
,Triangle Visualization
,Template Matching
,Line Counter Visualization
,Dynamic Zone
,Detections Transformation
,Time in Zone
,Model Monitoring Inference Aggregator
,Blur Visualization
,Line Counter
,Anthropic Claude
,Instance Segmentation Model
,Webhook Sink
,SIFT Comparison
,Time in Zone
,Instance Segmentation Model
,Slack Notification
,Detections Filter
,Stitch OCR Detections
,Pixelate Visualization
,Path Deviation
,OpenAI
,Relative Static Crop
,Detections Consensus
,Twilio SMS Notification
,Dimension Collapse
,VLM as Classifier
,Roboflow Dataset Upload
,Google Gemini
,Model Comparison Visualization
,Halo Visualization
,JSON Parser
,Crop Visualization
,Byte Tracker
,Image Blur
,Distance Measurement
,Velocity
,Circle Visualization
,Buffer
,Keypoint Detection Model
,Image Preprocessing
,Background Color Visualization
,Bounding Rectangle
,Pixel Color Count
,Size Measurement
,Florence-2 Model
,Bounding Box Visualization
,Byte Tracker
,Florence-2 Model
,Image Slicer
,Local File Sink
,LMM For Classification
,Stitch Images
,Stability AI Image Generation
,Image Threshold
,Detections Stitch
,OCR Model
,LMM
,Keypoint Visualization
,Email Notification
,Color Visualization
,Path Deviation
,Single-Label Classification Model
,YOLO-World Model
,CSV Formatter
,Image Convert Grayscale
,Detections Classes Replacement
,Polygon Visualization
- outputs:
Object Detection Model
,Object Detection Model
,Detection Offset
,CogVLM
,Ellipse Visualization
,SIFT
,Camera Focus
,CLIP Embedding Model
,Dot Visualization
,Google Vision OCR
,Clip Comparison
,Polygon Zone Visualization
,Gaze Detection
,Classification Label Visualization
,Corner Visualization
,Dynamic Crop
,Detections Stabilizer
,Label Visualization
,Triangle Visualization
,Dynamic Zone
,Dominant Color
,Time in Zone
,Barcode Detection
,Blur Visualization
,Line Counter
,Instance Segmentation Model
,Path Deviation
,Relative Static Crop
,Detections Consensus
,Crop Visualization
,Qwen2.5-VL
,Distance Measurement
,Circle Visualization
,Velocity
,Keypoint Detection Model
,QR Code Detection
,Size Measurement
,Single-Label Classification Model
,Bounding Box Visualization
,LMM For Classification
,Image Threshold
,Detections Stitch
,OCR Model
,Keypoint Visualization
,Single-Label Classification Model
,Detections Classes Replacement
,Polygon Visualization
,Segment Anything 2 Model
,Image Slicer
,Stability AI Inpainting
,Clip Comparison
,Perspective Correction
,Roboflow Custom Metadata
,SIFT Comparison
,VLM as Detector
,Image Contours
,Multi-Label Classification Model
,OpenAI
,Absolute Static Crop
,Trace Visualization
,Multi-Label Classification Model
,VLM as Detector
,Roboflow Dataset Upload
,VLM as Classifier
,Byte Tracker
,Llama 3.2 Vision
,Line Counter
,Reference Path Visualization
,Mask Visualization
,Template Matching
,Line Counter Visualization
,Detections Transformation
,Model Monitoring Inference Aggregator
,Anthropic Claude
,Time in Zone
,Instance Segmentation Model
,Detections Filter
,Stitch OCR Detections
,Pixelate Visualization
,VLM as Classifier
,Roboflow Dataset Upload
,Keypoint Detection Model
,Google Gemini
,Model Comparison Visualization
,Halo Visualization
,Byte Tracker
,Image Blur
,Buffer
,Image Preprocessing
,Background Color Visualization
,Bounding Rectangle
,Pixel Color Count
,Florence-2 Model
,Florence-2 Model
,Byte Tracker
,Image Slicer
,Stitch Images
,Stability AI Image Generation
,LMM
,Color Visualization
,Path Deviation
,YOLO-World Model
,Image Convert Grayscale
,OpenAI
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Perspective Correction in version v1 has.
Bindings
- input:
  - predictions (Union[instance_segmentation_prediction, object_detection_prediction]): Predictions.
  - images (image): The input image for this step.
  - perspective_polygons (list_of_values): Perspective polygons (for each batch at least one must consist of 4 vertices).
  - transformed_rect_width (integer): Width of the transformed rectangle.
  - transformed_rect_height (integer): Height of the transformed rectangle.
  - extend_perspective_polygon_by_detections_anchor (string): If set, perspective polygons will be extended to contain all bounding boxes. Allowed values: CENTER, CENTER_LEFT, CENTER_RIGHT, TOP_CENTER, TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_CENTER, BOTTOM_RIGHT, CENTER_OF_MASS.
  - warp_image (boolean): If set to True, the image will be warped into the transformed rectangle.
- output:
  - corrected_coordinates (Union[object_detection_prediction, instance_segmentation_prediction]): Prediction with detected bounding boxes as an sv.Detections(...) object (including segmentation masks for instance_segmentation_prediction).
  - warped_image (image): Image in workflows.
Example JSON definition of step Perspective Correction in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/perspective_correction@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "images": "$inputs.image",
    "perspective_polygons": "$steps.perspective_wrap.zones",
    "transformed_rect_width": 1000,
    "transformed_rect_height": 1000,
    "extend_perspective_polygon_by_detections_anchor": "CENTER",
    "warp_image": false
}
```
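Downstream steps reference this block's outputs with step selectors. A hypothetical follow-up step (the step type identifier and field names below are assumptions, not confirmed by this page) consuming the corrected detections and warped image:

```json
{
    "name": "box_visualization",
    "type": "roboflow_core/bounding_box_visualization@v1",
    "predictions": "$steps.<your_step_name_here>.corrected_coordinates",
    "image": "$steps.<your_step_name_here>.warped_image"
}
```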