Detection Offset¶
Class: DetectionOffsetBlockV1
Source: inference.core.workflows.core_steps.transformations.detection_offset.v1.DetectionOffsetBlockV1
Apply a fixed offset to the width and height of a detection.
You can use this block to add padding around the result of a detection. This is useful when a model's bounding boxes fall inside the region of an object: padding helps ensure the boxes you analyze surround the object rather than cut into it.
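As a rough sketch of what the block does conceptually (the authoritative implementation is in the source module above): assuming pixel units, and assuming the offset is split evenly between the two sides of each box, the transformation on `xyxy` coordinates amounts to:

```python
import numpy as np

def offset_boxes(xyxy: np.ndarray, offset_width: int, offset_height: int) -> np.ndarray:
    """Grow each box by offset_width / offset_height in total,
    adding half of each offset on every side (assumed behavior)."""
    boxes = xyxy.astype(float).copy()
    boxes[:, 0] -= offset_width / 2   # x_min moves left
    boxes[:, 2] += offset_width / 2   # x_max moves right
    boxes[:, 1] -= offset_height / 2  # y_min moves up
    boxes[:, 3] += offset_height / 2  # y_max moves down
    return boxes

boxes = np.array([[100, 100, 200, 180]])
print(offset_boxes(boxes, 10, 10))
# each side grows by 5 px: [[ 95.  95. 205. 185.]]
```

In a real workflow the boxes live inside an `sv.Detections(...)` object (see the output bindings below), but the coordinate arithmetic is the same.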
Type identifier¶
Use the identifier roboflow_core/detection_offset@v1 in the step "type" field to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
offset_width | int | Offset for box width. | ✅ |
offset_height | int | Offset for box height. | ✅ |
units | str | Units for offset dimensions. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
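For example, a ✅-marked property such as offset_width can be bound to a workflow input instead of a literal value. In this sketch the input names ($inputs.offset_width, $inputs.offset_height) are illustrative, not part of the block definition:

```json
{
  "name": "detection_offset",
  "type": "roboflow_core/detection_offset@v1",
  "predictions": "$steps.object_detection_model.predictions",
  "offset_width": "$inputs.offset_width",
  "offset_height": "$inputs.offset_height",
  "units": "Pixels"
}
```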
Available Connections¶
Compatible Blocks
Check which blocks you can connect to Detection Offset in version v1.
- inputs: Keypoint Detection Model, Detections Stitch, Google Vision OCR, Gaze Detection, SIFT Comparison, Detections Classes Replacement, Detection Offset, Distance Measurement, VLM as Detector, Line Counter, YOLO-World Model, Moondream2, Detections Filter, Object Detection Model, Overlap Filter, Dynamic Zone, Instance Segmentation Model, Detections Stabilizer, Template Matching, Bounding Rectangle, Path Deviation, Time in Zone, Segment Anything 2 Model, Byte Tracker, ONVIF Control, Image Contours, Perspective Correction, Dynamic Crop, Pixel Color Count, Detections Transformation, Detections Merge, Velocity, Detections Consensus
- outputs: Detections Stitch, Detections Classes Replacement, Florence-2 Model, Detection Offset, Distance Measurement, Pixelate Visualization, Line Counter, Mask Visualization, Detections Filter, Overlap Filter, Dot Visualization, Dynamic Zone, Background Color Visualization, Model Monitoring Inference Aggregator, Detections Stabilizer, Triangle Visualization, Stitch OCR Detections, Bounding Rectangle, Path Deviation, Roboflow Dataset Upload, Time in Zone, Model Comparison Visualization, Crop Visualization, Blur Visualization, Label Visualization, Segment Anything 2 Model, Stability AI Inpainting, Byte Tracker, Ellipse Visualization, ONVIF Control, Bounding Box Visualization, Halo Visualization, Roboflow Custom Metadata, Size Measurement, Corner Visualization, Circle Visualization, Perspective Correction, Dynamic Crop, Polygon Visualization, Detections Transformation, Detections Merge, Trace Visualization, Keypoint Visualization, Velocity, Color Visualization, Detections Consensus
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds for Detection Offset in version v1 are listed below.
Bindings
- input
  - predictions (Union[instance_segmentation_prediction, object_detection_prediction, keypoint_detection_prediction]): Model predictions to offset dimensions for.
  - offset_width (integer): Offset for box width.
  - offset_height (integer): Offset for box height.
- output
  - predictions (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Prediction in the form of an sv.Detections(...) object: with detected bounding boxes if object_detection_prediction, with detected bounding boxes and segmentation masks if instance_segmentation_prediction, or with detected bounding boxes and detected keypoints if keypoint_detection_prediction.
Example JSON definition of step Detection Offset in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detection_offset@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "offset_width": 10,
    "offset_height": 10,
    "units": "Pixels"
}
```