Detection Offset¶
Class: DetectionOffsetBlockV1
Source: inference.core.workflows.core_steps.transformations.detection_offset.v1.DetectionOffsetBlockV1
Apply a fixed offset to the width and height of a detection.
You can use this block to add padding around detected bounding boxes. This is useful when a model's boxes sit tightly inside the object region rather than around it, and you want downstream steps to analyze a slightly larger area around each detection.
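For intuition, here is a minimal sketch of the padding effect on bounding boxes using supervision's sv.Detections. It assumes pixel units and that the offset is split evenly between the two sides of each box; it illustrates the idea rather than reproducing the block's exact implementation.

```python
# Illustrative sketch only: expands xyxy boxes by a width/height offset,
# assuming pixel units and an even split of the offset across both sides.
import numpy as np
import supervision as sv


def offset_detections(detections: sv.Detections, offset_width: int, offset_height: int) -> sv.Detections:
    xyxy = detections.xyxy.copy().astype(float)
    xyxy[:, 0] -= offset_width / 2   # x_min moves left
    xyxy[:, 2] += offset_width / 2   # x_max moves right
    xyxy[:, 1] -= offset_height / 2  # y_min moves up
    xyxy[:, 3] += offset_height / 2  # y_max moves down
    return sv.Detections(
        xyxy=xyxy,
        confidence=detections.confidence,
        class_id=detections.class_id,
    )


detections = sv.Detections(xyxy=np.array([[100.0, 100.0, 200.0, 180.0]]))
print(offset_detections(detections, offset_width=10, offset_height=10).xyxy)
# [[ 95.  95. 205. 185.]]
```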
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/detection_offset@v1` to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `offset_width` | `int` | Offset for box width. | ✅ |
| `offset_height` | `int` | Offset for box height. | ✅ |
| `units` | `str` | Units for offset dimensions. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
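For example, offset_width and offset_height can be bound to workflow inputs instead of literal values. The sketch below shows such a step definition as a Python dict; the input names padding_w and padding_h are hypothetical, not required by the block.

```python
# Hypothetical step definition binding the offsets to workflow inputs
# named "padding_w" and "padding_h" (names are illustrative only).
detection_offset_step = {
    "name": "detection_offset",
    "type": "roboflow_core/detection_offset@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "offset_width": "$inputs.padding_w",
    "offset_height": "$inputs.padding_h",
}
```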
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detection Offset in version v1.
- inputs: Google Vision OCR, Keypoint Detection Model, SIFT Comparison, Time in Zone, Line Counter, Detections Filter, YOLO-World Model, PTZ Tracking (ONVIF), Detection Offset, Detections Classes Replacement, Detections Transformation, SAM 3, Template Matching, Seg Preview, Byte Tracker, Overlap Filter, Distance Measurement, SAM 3, SIFT Comparison, Object Detection Model, Path Deviation, VLM as Detector, EasyOCR, Detections Combine, Dynamic Zone, Image Contours, Time in Zone, Object Detection Model, Detections Stitch, Line Counter, Velocity, Moondream2, Byte Tracker, OCR Model, Instance Segmentation Model, Path Deviation, Time in Zone, Dynamic Crop, Gaze Detection, Pixel Color Count, Detections Consensus, Detections Stabilizer, Instance Segmentation Model, Perspective Correction, Detections Merge, VLM as Detector, SAM 3, Byte Tracker, Keypoint Detection Model, Bounding Rectangle, Segment Anything 2 Model
- outputs: Label Visualization, Time in Zone, Line Counter, Blur Visualization, Background Color Visualization, Keypoint Visualization, Bounding Box Visualization, Detections Filter, Polygon Visualization, PTZ Tracking (ONVIF), Detection Offset, Pixelate Visualization, Detections Classes Replacement, Icon Visualization, Detections Transformation, Triangle Visualization, Roboflow Dataset Upload, Model Comparison Visualization, Byte Tracker, Overlap Filter, Corner Visualization, Distance Measurement, Florence-2 Model, Color Visualization, Path Deviation, Detections Combine, Halo Visualization, Size Measurement, Dynamic Zone, Circle Visualization, Time in Zone, Dot Visualization, Detections Stitch, Line Counter, Ellipse Visualization, Velocity, Model Monitoring Inference Aggregator, Byte Tracker, Path Deviation, Time in Zone, Roboflow Dataset Upload, Stability AI Inpainting, Dynamic Crop, Detections Consensus, Detections Stabilizer, Crop Visualization, Detections Merge, Florence-2 Model, Perspective Correction, Roboflow Custom Metadata, Mask Visualization, Trace Visualization, Byte Tracker, Stitch OCR Detections, Bounding Rectangle, Segment Anything 2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Detection Offset in version v1 has.
Bindings
- input
    - predictions (Union[instance_segmentation_prediction, object_detection_prediction, keypoint_detection_prediction]): Model predictions to offset dimensions for.
    - offset_width (integer): Offset for box width.
    - offset_height (integer): Offset for box height.
- output
    - predictions (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Prediction with detected bounding boxes in form of sv.Detections(...) object if object_detection_prediction, or Prediction with detected bounding boxes and segmentation masks in form of sv.Detections(...) object if instance_segmentation_prediction, or Prediction with detected bounding boxes and detected keypoints in form of sv.Detections(...) object if keypoint_detection_prediction.
Example JSON definition of step Detection Offset in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/detection_offset@v1",
"predictions": "$steps.object_detection_model.predictions",
"offset_width": 10,
"offset_height": 10,
"units": "Pixels"
}
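Putting the pieces together, the sketch below embeds the step in a workflow specification and runs it against an inference server with the inference_sdk HTTP client. It is a minimal sketch, assuming a local server, that run_workflow accepts an inline specification, and placeholder values for the model ID, image path, and the object-detection block identifier; verify these against your own setup and the other blocks' documentation.

```python
# Minimal sketch, not a verbatim recipe: assumes a running inference server at
# localhost:9001 and that run_workflow accepts an inline "specification".
# Model ID, image path and the object-detection block identifier are placeholders.
from inference_sdk import InferenceHTTPClient

specification = {
    "version": "1.0",
    "inputs": [{"type": "InferenceImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "object_detection_model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            "name": "detection_offset",
            "type": "roboflow_core/detection_offset@v1",
            "predictions": "$steps.object_detection_model.predictions",
            "offset_width": 10,
            "offset_height": 10,
            "units": "Pixels",
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "padded_predictions",
            "selector": "$steps.detection_offset.predictions",
        }
    ],
}

client = InferenceHTTPClient(api_url="http://localhost:9001", api_key="<YOUR_API_KEY>")
result = client.run_workflow(specification=specification, images={"image": "path/to/image.jpg"})
print(result[0]["padded_predictions"])
```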