Detection Offset¶
Class: DetectionOffsetBlockV1
Source: inference.core.workflows.core_steps.transformations.detection_offset.v1.DetectionOffsetBlockV1
Expand or contract detection bounding boxes by applying fixed offsets to their width and height. Use it to add padding around detections to include more context, adjust bounding box sizes for downstream processing, or compensate for overly tight detections. Both pixel-based and percentage-based offset units are supported for flexible bounding box adjustment.
How This Block Works¶
This block adjusts the size of detection bounding boxes by adding offsets to their dimensions, effectively expanding or contracting the boxes to include more or less context around detected objects. The block:
- Receives detection predictions (object detection, instance segmentation, or keypoint detection) containing bounding boxes
- Processes each detection's bounding box coordinates independently
- Calculates offsets based on the selected unit type:
- Pixel-based offsets: Adds/subtracts a fixed number of pixels on each side (offset_width//2 pixels on left/right, offset_height//2 pixels on top/bottom)
- Percentage-based offsets: Calculates offsets as a percentage of the bounding box's width and height (offset_width% of box width, offset_height% of box height)
- Applies the offsets to expand the bounding boxes:
- Subtracts half the width offset from x_min and adds half to x_max (expands horizontally)
- Subtracts half the height offset from y_min and adds half to y_max (expands vertically)
- Clips the adjusted bounding boxes to image boundaries (ensures coordinates stay within image dimensions using min/max constraints)
- Updates detection metadata:
- Sets parent_id_key to reference the original detection IDs (preserves traceability)
- Generates new detection IDs for the offset detections (tracks that these are modified versions)
- Preserves all other detection properties (masks, keypoints, polygons, class labels, confidence scores) unchanged
- Returns the modified detections with expanded or contracted bounding boxes
The block applies offsets symmetrically around the center of each bounding box, splitting the width and height offsets equally between the two opposing sides. Because offset values must be positive integers, the implementation always expands boxes outward (adds padding). The pixel-based mode applies fixed pixel offsets regardless of box size, which is useful for consistent padding; the percentage-based mode applies offsets proportional to box size, which is useful when padding should scale with the detected object's size. Adjusted boxes are automatically clipped to image boundaries to prevent invalid coordinates, as illustrated in the sketch below.
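The following is a minimal sketch of the arithmetic described above, written against plain NumPy `xyxy` arrays rather than the block's actual `sv.Detections` handling; the function name and signature are illustrative, not part of the library API.

```python
import numpy as np

def offset_boxes(xyxy, offset_width, offset_height, units, image_wh):
    """Sketch of symmetric box expansion with boundary clipping.

    xyxy: (N, 4) array of [x_min, y_min, x_max, y_max] boxes.
    units: "Pixels" or "Percent (%)", mirroring the block's `units` property.
    image_wh: (width, height) of the source image, used for clipping.
    """
    boxes = xyxy.astype(float)  # copy; leaves the input untouched
    if units == "Percent (%)":
        # Percentage mode: offsets scale with each box's own dimensions.
        half_w = (boxes[:, 2] - boxes[:, 0]) * offset_width / 100 / 2
        half_h = (boxes[:, 3] - boxes[:, 1]) * offset_height / 100 / 2
    else:
        # Pixel mode: the same absolute padding for every box
        # (offset_width // 2 pixels on the left and on the right).
        half_w = offset_width // 2
        half_h = offset_height // 2
    boxes[:, 0] -= half_w  # expand left edge outward
    boxes[:, 2] += half_w  # expand right edge outward
    boxes[:, 1] -= half_h  # expand top edge outward
    boxes[:, 3] += half_h  # expand bottom edge outward
    # Clip to image boundaries so coordinates stay valid.
    boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, image_wh[0])
    boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, image_wh[1])
    return boxes
```

For example, a 100-px-wide box with `offset_width=10` in pixel mode gains 5 px on each side; in percent mode the same setting adds 5% of the box width per side, so a 100-px-wide box gains 5 px per side while a 200-px-wide box gains 10 px per side.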
Common Use Cases¶
- Context Padding for Analysis: Expand tight bounding boxes to include more surrounding context (e.g., add padding around detected objects for better classification, expand boxes to include object context for feature extraction, add margin around text detections for OCR), enabling improved analysis with additional context
- Detection Size Adjustment: Adjust bounding box sizes to match downstream processing requirements (e.g., expand boxes for models that need larger input regions, adjust box sizes to accommodate specific analysis needs, modify detections for compatibility with other blocks), enabling size customization for workflow compatibility
- Tight Detection Compensation: Expand overly tight bounding boxes that cut off parts of objects (e.g., add padding to tight object detections, expand boxes that miss object edges, compensate for models that produce undersized boxes), enabling better object coverage
- Multi-Stage Workflow Preparation: Prepare detections with adjusted sizes for secondary processing (e.g., expand initial detections before running secondary models, adjust box sizes for specialized analysis blocks, prepare detections with context for detailed processing), enabling optimized multi-stage workflows
- Crop Region Optimization: Adjust bounding boxes before cropping to include desired context (e.g., add padding before dynamic cropping to include surrounding context, expand boxes to capture more area for analysis, adjust crop regions for better feature extraction), enabling optimized region extraction
- Visualization and Display: Adjust bounding box sizes for better visualization or display purposes (e.g., expand boxes for clearer annotations, adjust box sizes for presentation, modify detections for visualization consistency), enabling improved visual outputs
Connecting to Other Blocks¶
This block receives detection predictions and produces adjusted detections with modified bounding boxes:
- After detection blocks (e.g., Object Detection, Instance Segmentation, Keypoint Detection) to expand or adjust bounding box sizes before further processing, enabling size-optimized detections for downstream analysis
- Before dynamic crop blocks to adjust bounding box sizes before cropping, enabling optimized crop regions with desired context or padding (see the workflow sketch after this list)
- Before classification or analysis blocks that benefit from additional context around detections (e.g., classification with context, feature extraction from expanded regions, detailed analysis with padding), enabling improved analysis with context
- In multi-stage detection workflows where initial detections need size adjustments before secondary processing (e.g., expand initial detections before running specialized models, adjust box sizes for compatibility, prepare detections for optimized processing), enabling flexible multi-stage workflows
- Before visualization blocks to adjust bounding box sizes for display purposes (e.g., expand boxes for clearer annotations, adjust sizes for presentation, modify detections for visualization consistency), enabling optimized visual outputs
- Before blocks that process detection regions where bounding box size matters (e.g., OCR on text regions with padding, feature extraction from expanded regions, specialized models requiring specific box sizes), enabling size-optimized region processing
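As a concrete example of the detection → offset → crop pattern referenced above, the fragment below defines a workflow specification as a Python dict. This is a hedged sketch: the object-detection and dynamic-crop step type identifiers, the model ID, and the output selector are assumptions based on common roboflow_core blocks, so verify them against your blocks catalog before use.

```python
# Illustrative workflow fragment: detect, pad detections, then crop.
workflow_specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed identifier
            "name": "detector",
            "image": "$inputs.image",
            "model_id": "your-project/1",  # placeholder model ID
        },
        {
            # Add 40 px of total horizontal/vertical padding (20 px per
            # side) so crops keep context around each object.
            "type": "roboflow_core/detection_offset@v1",
            "name": "padded",
            "predictions": "$steps.detector.predictions",
            "offset_width": 40,
            "offset_height": 40,
            "units": "Pixels",
        },
        {
            "type": "roboflow_core/dynamic_crop@v1",  # assumed identifier
            "name": "crops",
            "image": "$inputs.image",
            "predictions": "$steps.padded.predictions",
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "crops", "selector": "$steps.crops.crops"}
    ],
}
```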
Type identifier¶
Use the following identifier in the step "type" field to add the block as
a step in your workflow: `roboflow_core/detection_offset@v1`.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `offset_width` | `int` | Offset value to apply to bounding box width. Must be a positive integer. If units is 'Pixels', this is the number of pixels added to the box width, split equally between the left and right sides (offset_width//2 pixels on each side). If units is 'Percent (%)', this is the percentage of the bounding box width to add, calculated from the box's own width and divided between left and right. Positive values expand boxes horizontally. Boxes are clipped to image boundaries automatically. | ✅ |
| `offset_height` | `int` | Offset value to apply to bounding box height. Must be a positive integer. If units is 'Pixels', this is the number of pixels added to the box height, split equally between the top and bottom sides (offset_height//2 pixels on each side). If units is 'Percent (%)', this is the percentage of the bounding box height to add, calculated from the box's own height and divided between top and bottom. Positive values expand boxes vertically. Boxes are clipped to image boundaries automatically. | ✅ |
| `units` | `str` | Unit type for offset values: 'Pixels' for fixed pixel offsets (the same number of pixels for all boxes regardless of size) or 'Percent (%)' for percentage-based offsets (proportional to each bounding box's dimensions). Use pixels when you need consistent absolute padding; use percentages when padding should scale with the detected object's size. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
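For instance, the ✅ properties above can reference a workflow input rather than a literal value. In this hypothetical fragment the input name `padding` is invented for illustration:

```python
# Step definition with `offset_width`/`offset_height` bound to a workflow
# input named "padding" (hypothetical name), while the non-parametrisable
# `name` and `units` properties remain literal values.
step = {
    "name": "padded_detections",
    "type": "roboflow_core/detection_offset@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "offset_width": "$inputs.padding",
    "offset_height": "$inputs.padding",
    "units": "Pixels",
}
```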
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detection Offset in version v1.
- inputs:
Detections Consensus, Image Contours, SIFT Comparison, Detections Transformation, Time in Zone, Detections Stitch, VLM as Detector, Pixel Color Count, SIFT Comparison, Dynamic Crop, Motion Detection, YOLO-World Model, Detection Event Log, Instance Segmentation Model, Detections Classes Replacement, PTZ Tracking (ONVIF), Moondream2, Line Counter, Byte Tracker, Google Vision OCR, SAM 3, OCR Model, Detections Merge, Path Deviation, Object Detection Model, Keypoint Detection Model, Seg Preview, Distance Measurement, EasyOCR, Time in Zone, Byte Tracker, Detection Offset, Time in Zone, Detections Filter, Instance Segmentation Model, Detections Combine, Perspective Correction, Path Deviation, Overlap Filter, Keypoint Detection Model, Object Detection Model, Template Matching, Detections Stabilizer, Line Counter, Bounding Rectangle, Dynamic Zone, Detections List Roll-Up, SAM 3, Byte Tracker, Velocity, Gaze Detection, SAM 3, VLM as Detector, Segment Anything 2 Model
- outputs:
Detections Consensus, Detections Transformation, Time in Zone, Polygon Visualization, Detections Stitch, Dynamic Crop, Bounding Box Visualization, Roboflow Dataset Upload, Model Monitoring Inference Aggregator, Detection Event Log, Model Comparison Visualization, Camera Focus, Detections Classes Replacement, PTZ Tracking (ONVIF), Blur Visualization, Line Counter, Byte Tracker, Mask Visualization, Keypoint Visualization, Detections Merge, Distance Measurement, Path Deviation, Circle Visualization, Roboflow Custom Metadata, Trace Visualization, Pixelate Visualization, Stability AI Inpainting, Color Visualization, Size Measurement, Time in Zone, Byte Tracker, Dot Visualization, Detection Offset, Label Visualization, Time in Zone, Detections Filter, Florence-2 Model, Detections Combine, Perspective Correction, Crop Visualization, Ellipse Visualization, Halo Visualization, Path Deviation, Overlap Filter, Florence-2 Model, Stitch OCR Detections, Detections Stabilizer, Corner Visualization, Line Counter, Bounding Rectangle, Dynamic Zone, Detections List Roll-Up, Stitch OCR Detections, Background Color Visualization, Roboflow Dataset Upload, Byte Tracker, Icon Visualization, Velocity, Triangle Visualization, Segment Anything 2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Detection Offset in version v1 has.
Bindings
- input
    - `predictions` (`Union[instance_segmentation_prediction, keypoint_detection_prediction, object_detection_prediction]`): Detection predictions containing bounding boxes to adjust. Supports object detection, instance segmentation, or keypoint detection predictions. The bounding boxes in these predictions will be expanded or contracted based on the offset_width and offset_height values. All detection properties (masks, keypoints, polygons, classes, confidence) are preserved unchanged; only bounding box coordinates are modified.
    - `offset_width` (`integer`): Offset value to apply to bounding box width. Must be a positive integer. If units is 'Pixels', offset_width//2 pixels are added on each of the left and right sides; if units is 'Percent (%)', the offset is calculated as a percentage of the box's width and divided between left and right. Positive values expand boxes horizontally. Boxes are clipped to image boundaries automatically.
    - `offset_height` (`integer`): Offset value to apply to bounding box height. Must be a positive integer. If units is 'Pixels', offset_height//2 pixels are added on each of the top and bottom sides; if units is 'Percent (%)', the offset is calculated as a percentage of the box's height and divided between top and bottom. Positive values expand boxes vertically. Boxes are clipped to image boundaries automatically.
- output
    - `predictions` (`Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]`): Prediction in the form of an sv.Detections(...) object, containing detected bounding boxes if `object_detection_prediction`, bounding boxes and segmentation masks if `instance_segmentation_prediction`, or bounding boxes and detected keypoints if `keypoint_detection_prediction`.
Example JSON definition of step Detection Offset in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/detection_offset@v1",
    "predictions": "$steps.object_detection_model.predictions",
    "offset_width": 10,
    "offset_height": 10,
    "units": "Pixels"
}
```
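To exercise a workflow containing this step end to end, it can be executed with the `inference_sdk` HTTP client. The snippet below is a sketch under the assumption that `InferenceHTTPClient.run_workflow` accepts these keyword arguments in your installed SDK version; the URL, API key, and workflow identifiers are placeholders.

```python
from inference_sdk import InferenceHTTPClient

# Placeholder endpoint and credentials; substitute your own values.
client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="YOUR_API_KEY",
)

# Run a saved workflow that includes the detection_offset step defined above.
result = client.run_workflow(
    workspace_name="your-workspace",    # placeholder workspace
    workflow_id="your-workflow-id",     # placeholder workflow
    images={"image": "path/to/image.jpg"},
)
print(result)
```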