Detection Offset
Class: DetectionOffsetBlockV1
Source: inference.core.workflows.core_steps.transformations.detection_offset.v1.DetectionOffsetBlockV1
Apply a fixed offset to the width and height of each detection's bounding box.
You can use this block to add padding around detections. This is useful when predicted bounding boxes fall within the region of an object rather than fully enclosing it, and you want extra margin before analyzing them.
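To make the behavior concrete, below is a minimal sketch of the offset operation using supervision. It assumes the offset is split evenly between both sides so the box center stays fixed, does not clip boxes to the image bounds, and ignores masks and keypoints; refer to the block source above for the exact logic.

```python
import numpy as np
import supervision as sv


def offset_detections(
    detections: sv.Detections, offset_width: int, offset_height: int
) -> sv.Detections:
    """Grow every bounding box by the given offsets (in pixels).

    Assumption: half of each offset is added on each side, so the total
    width/height grows by offset_width/offset_height while the box
    center stays fixed.
    """
    padded = detections.xyxy.astype(float).copy()
    padded[:, 0] -= offset_width / 2   # x_min moves left
    padded[:, 1] -= offset_height / 2  # y_min moves up
    padded[:, 2] += offset_width / 2   # x_max moves right
    padded[:, 3] += offset_height / 2  # y_max moves down
    return sv.Detections(
        xyxy=padded,
        confidence=detections.confidence,
        class_id=detections.class_id,
        data=detections.data,
    )


# A 100x100 box padded by 10 px in each dimension (5 px per side).
boxes = sv.Detections(xyxy=np.array([[50.0, 50.0, 150.0, 150.0]]))
print(offset_detections(boxes, 10, 10).xyxy)  # [[ 45.  45. 155. 155.]]
```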
Type identifier
Use the following identifier in the step "type" field: roboflow_core/detection_offset@v1 to add the block as a step in your workflow.
Properties

| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| offset_width | int | Offset for box width. | ✅ |
| offset_height | int | Offset for box height. | ✅ |
| units | str | Units for offset dimensions. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
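For example, a ✅ property such as offset_width can reference a workflow input instead of a literal value. The sketch below is illustrative; the padding input name is an assumption, not something defined by the block.

```python
# Hypothetical fragment of a workflow definition: `padding` is an
# illustrative input name, not one defined by the block itself.
workflow_inputs = [
    {"type": "WorkflowParameter", "name": "padding", "default_value": 10},
]

offset_step = {
    "name": "padded_detections",
    "type": "roboflow_core/detection_offset@v1",
    "predictions": "$steps.object_detection_model.predictions",
    # ✅ properties accept selectors, so they can be set per run:
    "offset_width": "$inputs.padding",
    "offset_height": "$inputs.padding",
    # ❌ properties (name, units) must be literal values.
    "units": "Pixels",
}
```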
Available Connections
Compatible Blocks
Check what blocks you can connect to Detection Offset in version v1.
- inputs: VLM as Detector, PTZ Tracking (ONVIF), Detections Stitch, Line Counter, Time in Zone, Bounding Rectangle, Keypoint Detection Model, Object Detection Model, Detections Merge, Distance Measurement, Path Deviation, Time in Zone, Keypoint Detection Model, Detections Classes Replacement, Image Contours, Template Matching, Time in Zone, Moondream2, Detections Combine, EasyOCR, Dynamic Crop, Byte Tracker, Object Detection Model, Detections Transformation, Detections Stabilizer, Overlap Filter, Dynamic Zone, VLM as Detector, Segment Anything 2 Model, Instance Segmentation Model, Byte Tracker, Velocity, Gaze Detection, Google Vision OCR, SIFT Comparison, Detections Consensus, SIFT Comparison, Instance Segmentation Model, OCR Model, YOLO-World Model, Detections Filter, Detection Offset, Path Deviation, Seg Preview, Perspective Correction, Line Counter, Byte Tracker, Pixel Color Count
- outputs: PTZ Tracking (ONVIF), Dot Visualization, Detections Stitch, Line Counter, Stability AI Inpainting, Time in Zone, Bounding Rectangle, Halo Visualization, Model Monitoring Inference Aggregator, Detections Merge, Distance Measurement, Path Deviation, Time in Zone, Triangle Visualization, Mask Visualization, Detections Classes Replacement, Size Measurement, Ellipse Visualization, Roboflow Custom Metadata, Model Comparison Visualization, Stitch OCR Detections, Time in Zone, Detections Combine, Polygon Visualization, Background Color Visualization, Corner Visualization, Crop Visualization, Roboflow Dataset Upload, Blur Visualization, Dynamic Crop, Byte Tracker, Overlap Filter, Dynamic Zone, Detections Transformation, Detections Stabilizer, Segment Anything 2 Model, Byte Tracker, Color Visualization, Florence-2 Model, Velocity, Label Visualization, Circle Visualization, Detections Consensus, Keypoint Visualization, Trace Visualization, Line Counter, Icon Visualization, Roboflow Dataset Upload, Bounding Box Visualization, Detections Filter, Detection Offset, Path Deviation, Pixelate Visualization, Perspective Correction, Florence-2 Model, Byte Tracker
Input and Output Bindings
The available connections depend on the block's binding kinds. Check what binding kinds Detection Offset in version v1 has.
Bindings

- input
    - predictions (Union[instance_segmentation_prediction, keypoint_detection_prediction, object_detection_prediction]): Model predictions to offset dimensions for.
    - offset_width (integer): Offset for box width.
    - offset_height (integer): Offset for box height.
- output
    - predictions (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction, or prediction with detected bounding boxes and detected keypoints in the form of an sv.Detections(...) object if keypoint_detection_prediction.
Example JSON definition of step Detection Offset in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detection_offset@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "offset_width": 10,
    "offset_height": 10,
    "units": "Pixels"
}
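Below is a sketch of a complete workflow specification that wires an object detection model into this step, followed by an illustrative call through the inference-sdk client. The model ID, API URL, and input/output names are assumptions; check the inference-sdk documentation for the current run_workflow signature.

```python
from inference_sdk import InferenceHTTPClient

# Minimal workflow specification embedding the Detection Offset step.
# Model ID and input/output names are illustrative.
WORKFLOW_SPECIFICATION = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "object_detection_model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            "type": "roboflow_core/detection_offset@v1",
            "name": "detection_offset",
            "predictions": "$steps.object_detection_model.predictions",
            "offset_width": 10,
            "offset_height": 10,
            "units": "Pixels",
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "padded_predictions",
            "selector": "$steps.detection_offset.predictions",
        }
    ],
}

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # or Roboflow's hosted API
    api_key="<YOUR_API_KEY>",
)
result = client.run_workflow(
    specification=WORKFLOW_SPECIFICATION,
    images={"image": "path/to/image.jpg"},
)
print(result)
```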