Detection Offset¶
Class: DetectionOffsetBlockV1
Source: inference.core.workflows.core_steps.transformations.detection_offset.v1.DetectionOffsetBlockV1
Apply a fixed offset to the width and height of a detection.
You can use this block to add padding around detected bounding boxes. This is useful when predicted boxes fall inside the region of an object rather than fully enclosing it, and downstream steps need to analyze the whole object.
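For intuition, the sketch below shows what a symmetric width/height offset does to a box in xyxy coordinates. It is an illustrative approximation, not the block's implementation: it assumes half of each offset is added on every side and omits clipping to the image bounds.

```python
import numpy as np

def offset_boxes(xyxy: np.ndarray, offset_width: int, offset_height: int) -> np.ndarray:
    """Grow each (x_min, y_min, x_max, y_max) box so its width and height
    increase by offset_width / offset_height pixels (illustrative sketch)."""
    padded = xyxy.astype(float).copy()
    padded[:, 0] -= offset_width / 2   # x_min moves left
    padded[:, 1] -= offset_height / 2  # y_min moves up
    padded[:, 2] += offset_width / 2   # x_max moves right
    padded[:, 3] += offset_height / 2  # y_max moves down
    return padded

boxes = np.array([[100, 100, 200, 180]])  # one 100x80 box
print(offset_boxes(boxes, offset_width=10, offset_height=10))
# [[ 95.  95. 205. 185.]] -> width and height each grew by 10 px
```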
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow:
roboflow_core/detection_offset@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
offset_width | int | Offset for boxes width. | ✅ |
offset_height | int | Offset for boxes height. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
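For example, a property marked ✅ can be bound to a workflow input selector instead of a literal value. A minimal sketch, written as a Python dict, where the input name offset_width is arbitrary and chosen for illustration:

```python
# Hypothetical step definition: offset_width is resolved at runtime from a
# workflow input, while offset_height stays a hard-coded literal.
detection_offset_step = {
    "name": "detection_offset",
    "type": "roboflow_core/detection_offset@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "offset_width": "$inputs.offset_width",   # dynamic value (Refs: ✅)
    "offset_height": 10,                      # static value
}
```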
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detection Offset in version v1.
- inputs: Time in Zone, Detections Stitch, Path Deviation, Keypoint Detection Model, Gaze Detection, Line Counter, Instance Segmentation Model, SIFT Comparison, Object Detection Model, Perspective Correction, Detections Filter, YOLO-World Model, VLM as Detector, Google Vision OCR, Detections Consensus, Byte Tracker, Detections Classes Replacement, Template Matching, Detection Offset, Segment Anything 2 Model, Image Contours, Detections Stabilizer, Pixel Color Count, Bounding Rectangle, Detections Transformation, Distance Measurement
- outputs: Time in Zone, Florence-2 Model, Path Deviation, Pixelate Visualization, Detections Stitch, Line Counter, Corner Visualization, Blur Visualization, Mask Visualization, Perspective Correction, Detections Filter, Model Monitoring Inference Aggregator, Polygon Visualization, Halo Visualization, Trace Visualization, Model Comparison Visualization, Size Measurement, Detections Consensus, Byte Tracker, Roboflow Custom Metadata, Keypoint Visualization, Detections Classes Replacement, Crop Visualization, Detection Offset, Roboflow Dataset Upload, Stitch OCR Detections, Dynamic Zone, Dot Visualization, Circle Visualization, Background Color Visualization, Segment Anything 2 Model, Bounding Box Visualization, Ellipse Visualization, Label Visualization, Stability AI Inpainting, Detections Stabilizer, Dynamic Crop, Triangle Visualization, Color Visualization, Bounding Rectangle, Detections Transformation, Distance Measurement
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds of Detection Offset in version v1 are listed below.
Bindings
- input
    - predictions (Union[instance_segmentation_prediction, keypoint_detection_prediction, object_detection_prediction]): Reference to detection-like predictions.
    - offset_width (integer): Offset for boxes width.
    - offset_height (integer): Offset for boxes height.
- output
    - predictions (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Prediction in the form of an sv.Detections(...) object, carrying detected bounding boxes (object_detection_prediction), bounding boxes with segmentation masks (instance_segmentation_prediction), or bounding boxes with detected keypoints (keypoint_detection_prediction); see the sketch below.
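Inside a workflow, detection-like predictions are passed between blocks as supervision sv.Detections objects. The snippet below is a minimal sketch with hand-made values, only to show the fields that such predictions typically carry; it is not output produced by this block.

```python
import numpy as np
import supervision as sv

# Hand-crafted example: two boxes with confidences and class ids, the shape
# of data that object_detection_prediction-style outputs carry.
detections = sv.Detections(
    xyxy=np.array([[95.0, 95.0, 205.0, 185.0], [10.0, 20.0, 60.0, 90.0]]),
    confidence=np.array([0.92, 0.81]),
    class_id=np.array([0, 3]),
)
print(len(detections), detections.xyxy.shape)  # 2 (2, 4)
```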
Example JSON definition of step Detection Offset in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detection_offset@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "offset_width": 10,
    "offset_height": 10
}
```
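A hedged end-to-end sketch of how such a step might be used: the workflow below chains an object detection model into Detection Offset and is executed against a locally running inference server with the inference_sdk client. The object detection step's type identifier, the model_id, the server URL, and the image path are assumptions chosen for illustration and may differ in your setup.

```python
from inference_sdk import InferenceHTTPClient

# Assumed inline workflow specification: detect objects, then pad the boxes.
WORKFLOW = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            # Assumed type identifier for the object detection block.
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "object_detection_model",
            "images": "$inputs.image",
            "model_id": "yolov8n-640",  # placeholder model
        },
        {
            "type": "roboflow_core/detection_offset@v1",
            "name": "detection_offset",
            "predictions": "$steps.object_detection_model.predictions",
            "offset_width": 10,
            "offset_height": 10,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "offset_predictions",
            "selector": "$steps.detection_offset.predictions",
        }
    ],
}

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # assumed local inference server
    api_key="<YOUR_API_KEY>",
)
result = client.run_workflow(
    specification=WORKFLOW,
    images={"image": "path/to/image.jpg"},
)
print(result)
```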