Detection Offset¶
Class: DetectionOffsetBlockV1
Source: inference.core.workflows.core_steps.transformations.detection_offset.v1.DetectionOffsetBlockV1
Apply a fixed offset to the width and height of each detection.
You can use this block to add padding around detected bounding boxes. This is useful when a model returns boxes that sit inside the object rather than around it, and you want the enlarged boxes to cover the whole object for downstream analysis.
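Conceptually, the block grows each bounding box by the configured offsets. Below is a minimal numpy sketch of that idea, assuming pixel units and an offset split evenly across both sides of the box; the block itself operates on sv.Detections objects and its exact implementation (for example, clipping to image bounds) may differ.

```python
import numpy as np

def offset_boxes(xyxy: np.ndarray, offset_width: int, offset_height: int) -> np.ndarray:
    """Pad (x_min, y_min, x_max, y_max) boxes by fixed pixel offsets.

    Sketch only: the offset is split evenly across both sides of each box,
    mirroring the padding behaviour described above.
    """
    padded = xyxy.astype(float).copy()
    padded[:, 0] -= offset_width / 2   # move left edge left
    padded[:, 2] += offset_width / 2   # move right edge right
    padded[:, 1] -= offset_height / 2  # move top edge up
    padded[:, 3] += offset_height / 2  # move bottom edge down
    return padded

boxes = np.array([[100, 100, 200, 180]])  # one 100x80 box
print(offset_boxes(boxes, 10, 10))        # [[ 95.  95. 205. 185.]]
```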
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow:
roboflow_core/detection_offset@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
`name` | `str` | Enter a unique identifier for this step. | ❌ |
`offset_width` | `int` | Offset for box width. | ✅ |
`offset_height` | `int` | Offset for box height. | ✅ |
`units` | `str` | Units for offset dimensions. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
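For example, a property marked ✅ can be bound to a workflow input instead of a literal value. A minimal sketch, assuming workflow inputs named offset_width and offset_height are declared in the workflow definition (the input and step names are illustrative):

```json
{
    "name": "padded_detections",
    "type": "roboflow_core/detection_offset@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "offset_width": "$inputs.offset_width",
    "offset_height": "$inputs.offset_height",
    "units": "Pixels"
}
```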
Available Connections¶
Compatible Blocks

Check what blocks you can connect to Detection Offset in version v1.
- inputs: YOLO-World Model, VLM as Detector, Bounding Rectangle, Keypoint Detection Model, Detections Classes Replacement, Gaze Detection, PTZ Tracking (ONVIF), Detections Transformation, Distance Measurement, Detections Stabilizer, Detections Merge, Template Matching, Detections Stitch, Dynamic Zone, Object Detection Model, Instance Segmentation Model, Byte Tracker, Detections Consensus, Overlap Filter, Object Detection Model, Detections Filter, Perspective Correction, Google Vision OCR, Path Deviation, Detection Offset, Pixel Color Count, Keypoint Detection Model, SIFT Comparison, Dynamic Crop, Moondream2, Time in Zone, Image Contours, Segment Anything 2 Model, Byte Tracker, SIFT Comparison, Line Counter, Byte Tracker, Time in Zone, Path Deviation, Instance Segmentation Model, Line Counter, Velocity, VLM as Detector
- outputs: Bounding Rectangle, Corner Visualization, Detections Classes Replacement, Roboflow Dataset Upload, Circle Visualization, PTZ Tracking (ONVIF), Triangle Visualization, Roboflow Custom Metadata, Stability AI Inpainting, Detections Transformation, Bounding Box Visualization, Size Measurement, Florence-2 Model, Distance Measurement, Detections Stabilizer, Detections Merge, Halo Visualization, Detections Stitch, Dynamic Zone, Polygon Visualization, Ellipse Visualization, Byte Tracker, Color Visualization, Label Visualization, Dot Visualization, Detections Consensus, Crop Visualization, Overlap Filter, Detections Filter, Perspective Correction, Path Deviation, Model Monitoring Inference Aggregator, Detection Offset, Stitch OCR Detections, Model Comparison Visualization, Dynamic Crop, Mask Visualization, Time in Zone, Florence-2 Model, Segment Anything 2 Model, Pixelate Visualization, Byte Tracker, Roboflow Dataset Upload, Line Counter, Byte Tracker, Time in Zone, Path Deviation, Background Color Visualization, Line Counter, Keypoint Visualization, Blur Visualization, Velocity, Trace Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Detection Offset in version v1 has.
Bindings
- input
    - predictions (Union[object_detection_prediction, keypoint_detection_prediction, instance_segmentation_prediction]): Model predictions to offset dimensions for.
    - offset_width (integer): Offset for box width.
    - offset_height (integer): Offset for box height.
- output
    - predictions (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Prediction with detected bounding boxes in form of sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in form of sv.Detections(...) object if instance_segmentation_prediction, or prediction with detected bounding boxes and detected keypoints in form of sv.Detections(...) object if keypoint_detection_prediction.
Example JSON definition of step Detection Offset in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detection_offset@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "offset_width": 10,
    "offset_height": 10,
    "units": "Pixels"
}
```
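To run a workflow containing this step programmatically, a minimal sketch is shown below. It assumes the inference_sdk HTTP client exposes run_workflow(specification=..., images=...) and that roboflow_core/roboflow_object_detection_model@v1 is the type identifier of the upstream object detection step; check the SDK and block reference for the exact signatures and identifiers.

```python
from inference_sdk import InferenceHTTPClient

# Assumed client and method names; verify against the inference_sdk docs.
client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",  # or the URL of a self-hosted inference server
    api_key="<ROBOFLOW_API_KEY>",
)

specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            # Upstream model producing the predictions to be offset
            # (type identifier assumed; replace with the block you actually use).
            "name": "object_detection_model",
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "image": "$inputs.image",
            "model_id": "<your_model_id>",
        },
        {
            "name": "detection_offset",
            "type": "roboflow_core/detection_offset@v1",
            "predictions": "$steps.object_detection_model.predictions",
            "offset_width": 10,
            "offset_height": 10,
            "units": "Pixels",
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "padded_predictions",
            "selector": "$steps.detection_offset.predictions",
        }
    ],
}

result = client.run_workflow(
    specification=specification,
    images={"image": "<path, URL or numpy image>"},
)
```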