Detection Offset¶
Class: DetectionOffsetBlockV1
Source: inference.core.workflows.core_steps.transformations.detection_offset.v1.DetectionOffsetBlockV1
Apply a fixed offset to the width and height of each detected bounding box.
You can use this block to add padding around detection results. This is useful when a model returns tight bounding boxes that sit within the region of an object, and you need boxes that fully surround the object instead.
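To make the effect concrete, here is a minimal sketch of the offset arithmetic. It assumes the offset is applied symmetrically about the box center, in pixels, on `[x_min, y_min, x_max, y_max]` boxes; the function name `offset_boxes` is illustrative, not part of the library's API.

```python
import numpy as np

def offset_boxes(xyxy: np.ndarray, offset_width: int, offset_height: int) -> np.ndarray:
    """Expand [x_min, y_min, x_max, y_max] boxes symmetrically:
    each box grows by offset_width in total width and offset_height
    in total height, keeping its center fixed."""
    half_w, half_h = offset_width / 2, offset_height / 2
    return xyxy + np.array([-half_w, -half_h, half_w, half_h])

boxes = np.array([[100.0, 100.0, 200.0, 150.0]])
print(offset_boxes(boxes, 10, 10))  # → [[ 95.  95. 205. 155.]]
```

With `offset_width=10` and `offset_height=10`, a 100×50 box becomes a 110×60 box centered on the same point.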
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/detection_offset@v1` to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `offset_width` | `int` | Offset for box width. | ✅ |
| `offset_height` | `int` | Offset for box height. | ✅ |
| `units` | `str` | Units for offset dimensions. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detection Offset in version v1.
- inputs:
  Template Matching, Line Counter, Line Counter, VLM as Detector, Path Deviation, Time in Zone, Byte Tracker, Velocity, Instance Segmentation Model, Object Detection Model, Detections Stabilizer, Bounding Rectangle, Google Vision OCR, Time in Zone, Seg Preview, Instance Segmentation Model, Image Contours, VLM as Detector, Object Detection Model, Overlap Filter, OCR Model, Moondream2, Dynamic Zone, Segment Anything 2 Model, Dynamic Crop, Detections Consensus, Byte Tracker, Detections Classes Replacement, Gaze Detection, Detections Combine, Detection Offset, Time in Zone, Detections Filter, Keypoint Detection Model, SIFT Comparison, Detections Merge, YOLO-World Model, SIFT Comparison, Pixel Color Count, Detections Transformation, Perspective Correction, Keypoint Detection Model, Path Deviation, EasyOCR, Distance Measurement, PTZ Tracking (ONVIF), Detections Stitch, Byte Tracker
- outputs:
  Size Measurement, Line Counter, Line Counter, Path Deviation, Velocity, Byte Tracker, Keypoint Visualization, Model Monitoring Inference Aggregator, Time in Zone, Polygon Visualization, Detections Stabilizer, Icon Visualization, Bounding Rectangle, Time in Zone, Blur Visualization, Trace Visualization, Color Visualization, Halo Visualization, Overlap Filter, Bounding Box Visualization, Dynamic Zone, Triangle Visualization, Background Color Visualization, Segment Anything 2 Model, Dynamic Crop, Dot Visualization, Pixelate Visualization, Stability AI Inpainting, Byte Tracker, Detections Consensus, Corner Visualization, Detections Classes Replacement, Ellipse Visualization, Detections Combine, Time in Zone, Detection Offset, Crop Visualization, Roboflow Custom Metadata, Detections Filter, Mask Visualization, Detections Merge, Florence-2 Model, Detections Transformation, Roboflow Dataset Upload, Stitch OCR Detections, Perspective Correction, Path Deviation, Distance Measurement, Label Visualization, PTZ Tracking (ONVIF), Model Comparison Visualization, Circle Visualization, Roboflow Dataset Upload, Detections Stitch, Florence-2 Model, Byte Tracker
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Detection Offset in version v1 has.
Bindings
- input
  - `predictions` (`Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]`): Model predictions to offset dimensions for.
  - `offset_width` (`integer`): Offset for box width.
  - `offset_height` (`integer`): Offset for box height.
- output
  - `predictions` (`Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]`): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if `object_detection_prediction`, with detected bounding boxes and segmentation masks if `instance_segmentation_prediction`, or with detected bounding boxes and detected keypoints if `keypoint_detection_prediction`.
Example JSON definition of step Detection Offset in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detection_offset@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "offset_width": 10,
    "offset_height": 10,
    "units": "Pixels"
}
```