Detection Offset¶
Class: DetectionOffsetBlockV1
Source: inference.core.workflows.core_steps.transformations.detection_offset.v1.DetectionOffsetBlockV1
Expand or contract detection bounding boxes by applying fixed offsets to their width and height. Use this block to add padding around detections to include more context, adjust bounding box sizes for downstream processing, or compensate for overly tight detections. Both pixel-based and percentage-based offset units are supported for flexible bounding box adjustment.
How This Block Works¶
This block adjusts the size of detection bounding boxes by adding offsets to their dimensions, effectively expanding or contracting the boxes to include more or less context around detected objects. The block:
- Receives detection predictions (object detection, instance segmentation, or keypoint detection) containing bounding boxes
- Processes each detection's bounding box coordinates independently
- Calculates offsets based on the selected unit type:
    - Pixel-based offsets: adds a fixed number of pixels on each side (offset_width//2 pixels on left/right, offset_height//2 pixels on top/bottom)
    - Percentage-based offsets: calculates offsets as a percentage of the bounding box's own dimensions (offset_width% of the box width, offset_height% of the box height)
- Applies the offsets to expand the bounding boxes:
    - Subtracts half the width offset from x_min and adds half to x_max (expands horizontally)
    - Subtracts half the height offset from y_min and adds half to y_max (expands vertically)
- Clips the adjusted bounding boxes to image boundaries (ensures coordinates stay within image dimensions using min/max constraints)
- Updates detection metadata:
    - Sets parent_id_key to reference the original detection IDs (preserves traceability)
    - Generates new detection IDs for the offset detections (tracks that these are modified versions)
- Preserves all other detection properties (masks, keypoints, polygons, class labels, confidence scores) unchanged
- Returns the modified detections with expanded or contracted bounding boxes
The block applies offsets symmetrically around the center of each bounding box, expanding it equally on opposite sides based on the width and height offsets. Because offset values must be positive integers, the implementation always expands boxes outward (adds padding). Pixel-based mode applies a fixed pixel offset regardless of box size, which is useful for consistent padding; percentage-based mode applies offsets proportional to box size, which is useful when padding should scale with the detected object. Boxes are automatically clipped to image boundaries to prevent invalid coordinates.
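The per-box arithmetic described above can be sketched in a few lines of Python. This is an illustrative re-implementation of the documented behaviour, not the block's actual source; boxes are `(x_min, y_min, x_max, y_max)` tuples:

```python
def offset_box(box, offset_width, offset_height, units, image_w, image_h):
    """Expand a bounding box symmetrically, then clip it to the image bounds.

    Sketch of the documented logic; the real block splits pixel offsets
    as offset//2 per side, which matches this for even offsets.
    """
    x_min, y_min, x_max, y_max = box
    if units == "Percent (%)":
        # Percentage mode: offsets scale with the box's own dimensions.
        dw = (x_max - x_min) * offset_width / 100
        dh = (y_max - y_min) * offset_height / 100
    else:
        # Pixel mode: fixed offsets regardless of box size.
        dw = offset_width
        dh = offset_height
    # Half the offset on each side expands the box around its center.
    x_min, x_max = x_min - dw / 2, x_max + dw / 2
    y_min, y_max = y_min - dh / 2, y_max + dh / 2
    # Clip to image boundaries so coordinates stay valid.
    return (
        max(0, x_min),
        max(0, y_min),
        min(image_w, x_max),
        min(image_h, y_max),
    )
```

For a 50x50 box, a 10-pixel offset and a 20% offset produce the same result (5 extra units on each side); a box near the image edge is clipped rather than extended past it.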
Common Use Cases¶
- Context Padding for Analysis: Expand tight bounding boxes to include more surrounding context (e.g., add padding around detected objects for better classification, expand boxes to include object context for feature extraction, add margin around text detections for OCR), enabling improved analysis with additional context
- Detection Size Adjustment: Adjust bounding box sizes to match downstream processing requirements (e.g., expand boxes for models that need larger input regions, adjust box sizes to accommodate specific analysis needs, modify detections for compatibility with other blocks), enabling size customization for workflow compatibility
- Tight Detection Compensation: Expand overly tight bounding boxes that cut off parts of objects (e.g., add padding to tight object detections, expand boxes that miss object edges, compensate for models that produce undersized boxes), enabling better object coverage
- Multi-Stage Workflow Preparation: Prepare detections with adjusted sizes for secondary processing (e.g., expand initial detections before running secondary models, adjust box sizes for specialized analysis blocks, prepare detections with context for detailed processing), enabling optimized multi-stage workflows
- Crop Region Optimization: Adjust bounding boxes before cropping to include desired context (e.g., add padding before dynamic cropping to include surrounding context, expand boxes to capture more area for analysis, adjust crop regions for better feature extraction), enabling optimized region extraction
- Visualization and Display: Adjust bounding box sizes for better visualization or display purposes (e.g., expand boxes for clearer annotations, adjust box sizes for presentation, modify detections for visualization consistency), enabling improved visual outputs
Connecting to Other Blocks¶
This block receives detection predictions and produces adjusted detections with modified bounding boxes:
- After detection blocks (e.g., Object Detection, Instance Segmentation, Keypoint Detection) to expand or adjust bounding box sizes before further processing, enabling size-optimized detections for downstream analysis
- Before dynamic crop blocks to adjust bounding box sizes before cropping, enabling optimized crop regions with desired context or padding
- Before classification or analysis blocks that benefit from additional context around detections (e.g., classification with context, feature extraction from expanded regions, detailed analysis with padding), enabling improved analysis with context
- In multi-stage detection workflows where initial detections need size adjustments before secondary processing (e.g., expand initial detections before running specialized models, adjust box sizes for compatibility, prepare detections for optimized processing), enabling flexible multi-stage workflows
- Before visualization blocks to adjust bounding box sizes for display purposes (e.g., expand boxes for clearer annotations, adjust sizes for presentation, modify detections for visualization consistency), enabling optimized visual outputs
- Before blocks that process detection regions where bounding box size matters (e.g., OCR on text regions with padding, feature extraction from expanded regions, specialized models requiring specific box sizes), enabling size-optimized region processing
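As an illustration of the detection → offset → crop chaining described above, a workflow-definition fragment might look like the following. The step names are placeholders, and the type identifiers of the neighbouring detection and crop blocks are assumptions not taken from this page; only the Detection Offset step is documented here:

```json
{
  "steps": [
    {
      "type": "roboflow_core/roboflow_object_detection_model@v2",
      "name": "detector",
      "image": "$inputs.image",
      "model_id": "your-model/1"
    },
    {
      "type": "roboflow_core/detection_offset@v1",
      "name": "offset",
      "predictions": "$steps.detector.predictions",
      "offset_width": 40,
      "offset_height": 40,
      "units": "Pixels"
    },
    {
      "type": "roboflow_core/dynamic_crop@v1",
      "name": "crops",
      "image": "$inputs.image",
      "predictions": "$steps.offset.predictions"
    }
  ]
}
```

Here each crop includes roughly 20 extra pixels of context on every side of the original detection.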
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/detection_offset@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `offset_width` | `int` | Offset applied to the bounding box width. Must be a positive integer. If units is 'Pixels', this is the number of pixels added to the box width, split equally between the left and right sides (offset_width//2 pixels on each side). If units is 'Percent (%)', it is the percentage of the box's width to add, again split between left and right. Boxes are clipped to image boundaries automatically. | ✅ |
| `offset_height` | `int` | Offset applied to the bounding box height. Must be a positive integer. If units is 'Pixels', this is the number of pixels added to the box height, split equally between the top and bottom sides (offset_height//2 pixels on each side). If units is 'Percent (%)', it is the percentage of the box's height to add, again split between top and bottom. Boxes are clipped to image boundaries automatically. | ✅ |
| `units` | `str` | Unit type for offset values: 'Pixels' for fixed pixel offsets (the same number of pixels for every box regardless of size) or 'Percent (%)' for offsets proportional to each bounding box's dimensions. Use pixels when you need consistent absolute padding; use percent when padding should scale with the detected object's size. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
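For example, a ✅-marked property such as offset_width can be bound to a workflow input instead of a literal value (the $inputs.padding_px input name below is illustrative, not part of this block's definition):

```json
{
  "name": "offset",
  "type": "roboflow_core/detection_offset@v1",
  "predictions": "$steps.object_detection_model.predictions",
  "offset_width": "$inputs.padding_px",
  "offset_height": "$inputs.padding_px",
  "units": "Pixels"
}
```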
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detection Offset in version v1.
- inputs:
Overlap Filter, SAM 3, Perspective Correction, Object Detection Model, Time in Zone, EasyOCR, VLM As Detector, Gaze Detection, PTZ Tracking (ONVIF), Time in Zone, Bounding Rectangle, Template Matching, Dynamic Crop, Velocity, Instance Segmentation Model, SIFT Comparison, Detections Combine, Detection Event Log, Line Counter, Time in Zone, ByteTrack Tracker, Path Deviation, Detections Filter, Keypoint Detection Model, Pixel Color Count, Segment Anything 2 Model, Detections Merge, Detections Classes Replacement, Keypoint Detection Model, Detections Transformation, Detections Stabilizer, Byte Tracker, SIFT Comparison, Line Counter, SAM 3, OC-SORT Tracker, OCR Model, Detections Consensus, Motion Detection, SAM 3, Dynamic Zone, Seg Preview, Object Detection Model, Instance Segmentation Model, Image Contours, Detection Offset, Mask Area Measurement, Detections Stitch, YOLO-World Model, SORT Tracker, Byte Tracker, Detections List Roll-Up, Google Vision OCR, Byte Tracker, Distance Measurement, Path Deviation, Moondream2, VLM As Detector
- outputs:
Corner Visualization, Ellipse Visualization, Roboflow Dataset Upload, Time in Zone, Stitch OCR Detections, Time in Zone, Dynamic Crop, Velocity, Detections Combine, Trace Visualization, Detection Event Log, Halo Visualization, Dot Visualization, ByteTrack Tracker, Polygon Visualization, Model Monitoring Inference Aggregator, Roboflow Custom Metadata, Pixelate Visualization, Circle Visualization, Icon Visualization, Detections Classes Replacement, Detections Stabilizer, Halo Visualization, Camera Focus, OC-SORT Tracker, Detections Consensus, Polygon Visualization, Crop Visualization, Roboflow Dataset Upload, Mask Visualization, Detection Offset, SORT Tracker, Heatmap Visualization, Byte Tracker, Byte Tracker, Label Visualization, Detections List Roll-Up, Florence-2 Model, Segment Anything 2 Model, Florence-2 Model, Stability AI Inpainting, Overlap Filter, Perspective Correction, PTZ Tracking (ONVIF), Bounding Rectangle, Background Color Visualization, Size Measurement, Keypoint Visualization, Time in Zone, Line Counter, Path Deviation, Detections Filter, Stitch OCR Detections, Detections Merge, Detections Transformation, Byte Tracker, Line Counter, Color Visualization, Dynamic Zone, Roboflow Vision Events, Mask Area Measurement, Detections Stitch, Triangle Visualization, Blur Visualization, Bounding Box Visualization, Distance Measurement, Path Deviation, Model Comparison Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Detection Offset in version v1 has.
Bindings
-
input
predictions (Union[keypoint_detection_prediction, instance_segmentation_prediction, object_detection_prediction]): Detection predictions containing bounding boxes to adjust. Supports object detection, instance segmentation, or keypoint detection predictions. The bounding boxes in these predictions will be expanded or contracted based on the offset_width and offset_height values. All detection properties (masks, keypoints, polygons, classes, confidence) are preserved unchanged; only bounding box coordinates are modified.
offset_width (integer): Offset value to apply to bounding box width. Must be a positive integer. If units is 'Pixels', this is the number of pixels added to the box width, split equally between the left and right sides (offset_width//2 pixels on each side). If units is 'Percent (%)', it is the percentage of the box's width to add, again split between left and right. Boxes are clipped to image boundaries automatically.
offset_height (integer): Offset value to apply to bounding box height. Must be a positive integer. If units is 'Pixels', this is the number of pixels added to the box height, split equally between the top and bottom sides (offset_height//2 pixels on each side). If units is 'Percent (%)', it is the percentage of the box's height to add, again split between top and bottom. Boxes are clipped to image boundaries automatically.
-
output
predictions (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Prediction in the form of an sv.Detections(...) object, containing detected bounding boxes if object_detection_prediction, bounding boxes and segmentation masks if instance_segmentation_prediction, or bounding boxes and detected keypoints if keypoint_detection_prediction.
Example JSON definition of step Detection Offset in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detection_offset@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "offset_width": 10,
    "offset_height": 10,
    "units": "Pixels"
}
```