Detections Transformation¶
Class: DetectionsTransformationBlockV1
Apply customizable transformations to detection predictions using UQL (Universal Query Language) operation chains. Configurable operation sequences enable flexible modification of bounding boxes, filtering of detections, extraction of properties, resizing of boxes, and other detection manipulations for advanced detection-processing workflows.
How This Block Works¶
This block transforms detection predictions by applying a chain of UQL operations that can modify, filter, extract, or manipulate detection data. The block:
- Receives detection predictions (object detection, instance segmentation, or keypoint detection) and a list of UQL operations to apply
- Validates that operations_parameters doesn't contain reserved parameter names
- Builds an operations chain from the provided UQL operation definitions, creating a sequence of transformations to apply in order
- Separates operations_parameters into batch parameters (aligned with predictions) and non-batch parameters (applied to all predictions)
- Processes each prediction batch by applying the operations chain:
  - Zips predictions with batch parameters to align data per batch item
  - Combines batch and non-batch parameters into evaluation parameters for each prediction
  - Applies the operations chain to the detections with the combined parameters
  - Validates that the output is still sv.Detections (operations must preserve detection type)
- Returns the transformed detections for each input batch
The block supports a wide variety of UQL operations including filtering (DetectionsFilter), property extraction (ExtractDetectionProperty), bounding box transformations (resizing, scaling), and other detection manipulations. Operations are applied sequentially, allowing complex transformations through operation chaining. The block validates that transformations preserve the detection type, ensuring outputs remain compatible with other detection-processing blocks. Batch and non-batch parameters enable flexible operation parameterization, supporting both per-detection and global parameter values.
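For illustration of how operations chain together, the operations value below first drops low-confidence detections and then sorts the remaining ones by confidence. This is a sketch, not a confirmed schema: the "(Number) >=" comparator string, the StaticOperand type, and the SortDetections mode/ascending fields are assumptions based on common UQL conventions and should be checked against the UQL operations reference.
[
    {
        "type": "DetectionsFilter",
        "filter_operation": {
            "type": "StatementGroup",
            "statements": [
                {
                    "type": "BinaryStatement",
                    "left_operand": {
                        "type": "DynamicOperand",
                        "operations": [
                            {
                                "type": "ExtractDetectionProperty",
                                "property_name": "confidence"
                            }
                        ]
                    },
                    "comparator": {
                        "type": "(Number) >="
                    },
                    "right_operand": {
                        "type": "StaticOperand",
                        "value": 0.5
                    }
                }
            ]
        }
    },
    {
        "type": "SortDetections",
        "mode": "confidence",
        "ascending": false
    }
]
Both operations accept and return sv.Detections, so the chain satisfies the type-preservation requirement.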
Common Use Cases¶
- Advanced Detection Filtering: Apply complex filtering logic to detection predictions (e.g., filter detections by class names using conditional statements, filter by confidence thresholds with multiple conditions, apply custom filtering criteria based on detection properties), enabling sophisticated detection selection workflows
- Bounding Box Transformations: Modify bounding box sizes, positions, or properties (e.g., resize bounding boxes proportionally, scale boxes by percentage, adjust box coordinates, transform box dimensions), enabling flexible bounding box manipulation
- Property Extraction and Filtering: Extract detection properties and filter based on extracted values (e.g., extract class names and filter by class lists, extract confidence scores and filter by thresholds, extract properties for conditional processing), enabling property-based detection processing
- Multi-Conditional Processing: Apply complex conditional transformations based on multiple detection criteria (e.g., transform detections based on class and confidence combinations, apply different operations for different detection types, conditionally modify detections based on multiple properties), enabling sophisticated conditional detection processing (see the sketch after this list)
- Detection Data Enrichment: Extract and add properties to detections for downstream processing (e.g., extract class names for filtering, compute detection properties, add metadata to detections), enabling enriched detection data for complex workflows
- Custom Detection Manipulation: Apply custom transformations not available in dedicated blocks (e.g., complex multi-step detection modifications, custom filtering and transformation combinations, specialized detection processing workflows), enabling flexible custom detection processing
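For the multi-conditional use case above, a single DetectionsFilter can hold several statements in one StatementGroup. The sketch below keeps only detections whose class name appears in a runtime-supplied list and whose confidence clears a fixed threshold; the "and" operator field, the "(Number) >=" comparator, and the StaticOperand type follow UQL conventions but are assumptions to verify against the UQL reference.
[
    {
        "type": "DetectionsFilter",
        "filter_operation": {
            "type": "StatementGroup",
            "operator": "and",
            "statements": [
                {
                    "type": "BinaryStatement",
                    "left_operand": {
                        "type": "DynamicOperand",
                        "operations": [
                            {
                                "type": "ExtractDetectionProperty",
                                "property_name": "class_name"
                            }
                        ]
                    },
                    "comparator": {
                        "type": "in (Sequence)"
                    },
                    "right_operand": {
                        "type": "DynamicOperand",
                        "operand_name": "classes"
                    }
                },
                {
                    "type": "BinaryStatement",
                    "left_operand": {
                        "type": "DynamicOperand",
                        "operations": [
                            {
                                "type": "ExtractDetectionProperty",
                                "property_name": "confidence"
                            }
                        ]
                    },
                    "comparator": {
                        "type": "(Number) >="
                    },
                    "right_operand": {
                        "type": "StaticOperand",
                        "value": 0.6
                    }
                }
            ]
        }
    }
]
The classes name is resolved through operations_parameters at runtime, exactly as in the full step definition at the end of this page.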
Connecting to Other Blocks¶
This block receives detection predictions and produces transformed detections:
- After detection blocks (e.g., Object Detection, Instance Segmentation, Keypoint Detection) to apply custom transformations, filtering, or modifications to detection predictions, enabling flexible detection processing workflows
- Before dynamic crop blocks to filter or modify detections before cropping (e.g., filter detections by class before cropping, transform box sizes before cropping, extract specific detections for cropping), enabling optimized region extraction workflows (see the wiring fragment after this list)
- Before classification or analysis blocks to prepare detections with custom filtering or transformations (e.g., filter detections for specific analysis, transform boxes for compatibility, prepare detections with custom criteria), enabling customized detection preparation
- In multi-stage detection workflows where detections need custom transformations between stages (e.g., filter and transform initial detections before secondary processing, apply custom modifications between detection stages, conditionally process detections based on criteria), enabling sophisticated multi-stage workflows
- Before visualization blocks to filter or transform detections for display (e.g., filter detections for visualization, transform boxes for presentation, customize detections for display purposes), enabling optimized visual outputs
- After detection blocks and before other transformation blocks to apply custom logic between transformations (e.g., filter after detection and before cropping, transform between detection stages, apply conditional modifications), enabling complex transformation pipelines with custom logic
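As one way to wire the block into a larger workflow, the fragment below feeds the transformed detections into a downstream dynamic crop step, matching the "before dynamic crop" pattern above. The roboflow_core/dynamic_crop@v1 identifier and its images/predictions field names are recalled from the Dynamic Crop block and should be confirmed against its own documentation; detections_transformation is a hypothetical step name for this block.
{
    "name": "cropper",
    "type": "roboflow_core/dynamic_crop@v1",
    "images": "$inputs.image",
    "predictions": "$steps.detections_transformation.predictions"
}
Any block listed under outputs below can consume $steps.<this_step_name>.predictions in the same way.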
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/detections_transformation@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| operations | List[Union[ClassificationPropertyExtract, ConvertDictionaryToJSON, ConvertImageToBase64, ConvertImageToJPEG, DetectionsFilter, DetectionsOffset, DetectionsPropertyExtract, DetectionsRename, DetectionsSelection, DetectionsShift, DetectionsToDictionary, Divide, ExtractDetectionProperty, ExtractFrameMetadata, ExtractImageProperty, LookupTable, Multiply, NumberRound, NumericSequenceAggregate, PickDetectionsByParentClass, RandomNumber, SequenceAggregate, SequenceApply, SequenceElementsCount, SequenceLength, SequenceMap, SortDetections, StringMatches, StringSubSequence, StringToLowerCase, StringToUpperCase, TimestampToISOFormat, ToBoolean, ToNumber, ToString]] | List of UQL (Universal Query Language) operations to apply sequentially to the detections. Operations are executed in order, with each operation receiving the output of the previous operation. Supported operations include DetectionsFilter (filtering detections by conditions), ExtractDetectionProperty (extracting properties from detections), bounding box transformations (resizing, scaling), and other UQL operations that accept and return sv.Detections. Operations can be parameterized using operations_parameters. The operations chain must transform sv.Detections to sv.Detections (the type must be preserved). | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detections Transformation in version v1.
- inputs: Mask Visualization,Classification Label Visualization,Detections Consensus,Detections Merge,Instance Segmentation Model,Webhook Sink,Multi-Label Classification Model,Email Notification,QR Code Generator,VLM As Detector,LMM,Multi-Label Classification Model,SAM 3,Detection Offset,Image Convert Grayscale,Corner Visualization,Stability AI Outpainting,Segment Anything 2 Model,Halo Visualization,Object Detection Model,JSON Parser,Single-Label Classification Model,Trace Visualization,Google Vision OCR,Instance Segmentation Model,Clip Comparison,CSV Formatter,Text Display,Stitch Images,Google Gemini,Local File Sink,Slack Notification,VLM As Classifier,PTZ Tracking (ONVIF),Roboflow Dataset Upload,Color Visualization,Dot Visualization,Polygon Visualization,Object Detection Model,Anthropic Claude,Buffer,Byte Tracker,Contrast Equalization,Identify Changes,Detections Classes Replacement,Dimension Collapse,Perception Encoder Embedding Model,First Non Empty Or Default,Velocity,Continue If,Environment Secrets Store,Expression,Moondream2,SIFT Comparison,Halo Visualization,Florence-2 Model,Blur Visualization,Label Visualization,Twilio SMS/MMS Notification,Ellipse Visualization,OpenAI,SIFT,Model Monitoring Inference Aggregator,Single-Label Classification Model,Detections List Roll-Up,OpenAI,Image Threshold,Background Color Visualization,Model Comparison Visualization,Size Measurement,OpenAI,Keypoint Detection Model,Gaze Detection,SAM 3,Polygon Visualization,Twilio SMS Notification,Bounding Box Visualization,OCR Model,Overlap Filter,Time in Zone,Icon Visualization,Google Gemini,Florence-2 Model,Roboflow Dataset Upload,Anthropic Claude,Rate Limiter,Dynamic Zone,Dynamic Crop,CLIP Embedding Model,VLM As Detector,Google Gemini,Path Deviation,Image Blur,Line Counter,Byte Tracker,Cache Set,SmolVLM2,Stability AI Inpainting,Template Matching,Image Contours,Path Deviation,Morphological Transformation,Bounding Rectangle,Triangle Visualization,Detections Stitch,Relative Static Crop,Property Definition,Detections Filter,Camera Calibration,Grid Visualization,Detections Stabilizer,Delta Filter,Camera Focus,Detections Combine,Image Slicer,LMM For Classification,Line Counter Visualization,Keypoint Detection Model,Llama 3.2 Vision,Distance Measurement,SIFT Comparison,Camera Focus,Dominant Color,Time in Zone,Background Subtraction,Image Slicer,Circle Visualization,Seg Preview,Identify Outliers,Qwen3-VL,Barcode Detection,Clip Comparison,Email Notification,QR Code Detection,Byte Tracker,Image Preprocessing,SAM 3,Depth Estimation,Cosine Similarity,Cache Get,Time in Zone,Line Counter,CogVLM,Absolute Static Crop,Roboflow Custom Metadata,EasyOCR,Stitch OCR Detections,Perspective Correction,Qwen2.5-VL,Anthropic Claude,Pixelate Visualization,Data Aggregator,Stability AI Image Generation,Reference Path Visualization,Keypoint Visualization,VLM As Classifier,Detection Event Log,Polygon Zone Visualization,YOLO-World Model,Stitch OCR Detections,Crop Visualization,Pixel Color Count,Motion Detection,OpenAI,Detections Transformation
- outputs: Mask Visualization,Circle Visualization,Detections Consensus,Detections Merge,Halo Visualization,Dynamic Zone,Florence-2 Model,Blur Visualization,Dynamic Crop,Label Visualization,Path Deviation,Detection Offset,Corner Visualization,Ellipse Visualization,Byte Tracker,Byte Tracker,Line Counter,Model Monitoring Inference Aggregator,Segment Anything 2 Model,Halo Visualization,Stability AI Inpainting,Detections List Roll-Up,Model Comparison Visualization,Background Color Visualization,Path Deviation,Trace Visualization,Size Measurement,Time in Zone,Line Counter,Triangle Visualization,Bounding Rectangle,Detections Stitch,Detections Filter,Roboflow Custom Metadata,Detections Stabilizer,Roboflow Dataset Upload,PTZ Tracking (ONVIF),Camera Focus,Stitch OCR Detections,Perspective Correction,Color Visualization,Detections Combine,Dot Visualization,Polygon Visualization,Pixelate Visualization,Keypoint Visualization,Polygon Visualization,Bounding Box Visualization,Byte Tracker,Detection Event Log,Distance Measurement,Detections Transformation,Detections Classes Replacement,Overlap Filter,Icon Visualization,Crop Visualization,Time in Zone,Stitch OCR Detections,Time in Zone,Velocity,Florence-2 Model,Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Detections Transformation in version v1 has.
Bindings
- input
  - predictions (Union[instance_segmentation_prediction, object_detection_prediction, keypoint_detection_prediction]): Detection predictions to transform using UQL operations. Supports object detection, instance segmentation, or keypoint detection predictions. The detections will be transformed by the operations chain defined in the operations field. All transformations must preserve the detection type (output must remain sv.Detections). The block processes batch inputs and applies transformations per batch item.
  - operations_parameters (*): Dictionary mapping parameter names (used in operations) to workflow data sources or values. Parameters are referenced in operations (e.g., in conditional statements, filter operations) and provided at runtime. Supports both batch parameters (aligned with predictions, one value per batch item) and non-batch parameters (same value for all batch items). Parameters are automatically separated into batch and non-batch based on their data structure. Cannot use reserved parameter names. Use this to parameterize operations dynamically (e.g., provide class lists for filtering, provide thresholds for conditions, supply values for operations that need runtime parameters).
- output
  - predictions (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, with detected bounding boxes and segmentation masks if instance_segmentation_prediction, or with detected bounding boxes and keypoints if keypoint_detection_prediction.
Example JSON definition of step Detections Transformation in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/detections_transformation@v1",
"predictions": "$steps.object_detection_model.predictions",
"operations": [
{
"filter_operation": {
"statements": [
{
"comparator": {
"type": "in (Sequence)"
},
"left_operand": {
"operations": [
{
"property_name": "class_name",
"type": "ExtractDetectionProperty"
}
],
"type": "DynamicOperand"
},
"right_operand": {
"operand_name": "classes",
"type": "DynamicOperand"
},
"type": "BinaryStatement"
}
],
"type": "StatementGroup"
},
"type": "DetectionsFilter"
}
],
"operations_parameters": {
"classes": "$inputs.classes"
}
}