Detections Transformation¶
Class: DetectionsTransformationBlockV1
Apply customizable transformations to detection predictions using UQL (Query Language) operation chains. Configurable operation sequences make it possible to filter detections, extract properties, resize or otherwise modify bounding boxes, and perform other detection manipulations in advanced detection-processing workflows.
How This Block Works¶
This block transforms detection predictions by applying a chain of UQL operations that can modify, filter, extract, or manipulate detection data. The block:
- Receives detection predictions (object detection, instance segmentation, or keypoint detection) and a list of UQL operations to apply
- Validates that operations_parameters doesn't contain reserved parameter names
- Builds an operations chain from the provided UQL operation definitions, creating a sequence of transformations to apply in order
- Separates operations_parameters into batch parameters (aligned with predictions) and non-batch parameters (applied to all predictions)
- Processes each prediction batch by applying the operations chain:
    - Zips predictions with batch parameters to align data per batch item
    - Combines batch and non-batch parameters into evaluation parameters for each prediction
    - Applies the operations chain to the detections with the combined parameters
    - Validates that the output is still sv.Detections (operations must preserve detection type)
- Returns the transformed detections for each input batch
The block supports a wide variety of UQL operations including filtering (DetectionsFilter), property extraction (ExtractDetectionProperty), bounding box transformations (resizing, scaling), and other detection manipulations. Operations are applied sequentially, allowing complex transformations through operation chaining. The block validates that transformations preserve the detection type, ensuring outputs remain compatible with other detection-processing blocks. Batch and non-batch parameters enable flexible operation parameterization, supporting both per-detection and global parameter values.
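Conceptually, a DetectionsFilter operation behaves like a boolean mask applied to an sv.Detections object. The sketch below is a minimal illustration of that idea using the supervision library directly; it is not the block's internal implementation, and the filter_by_class helper and keep_classes parameter are hypothetical names chosen for this example.

```python
import numpy as np
import supervision as sv


def filter_by_class(detections: sv.Detections, keep_classes: list[str]) -> sv.Detections:
    # Roughly what a DetectionsFilter operation with an "in (Sequence)"
    # comparator on class_name does: keep only detections whose class
    # name appears in keep_classes.
    class_names = detections.data.get("class_name")
    if class_names is None:
        return detections  # nothing to filter on
    mask = np.isin(np.asarray(class_names), keep_classes)
    return detections[mask]  # sv.Detections supports boolean-mask indexing


# Example: keep only "car" and "truck" detections produced by an upstream model.
# filtered = filter_by_class(model_predictions, ["car", "truck"])
```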
Common Use Cases¶
- Advanced Detection Filtering: Apply complex filtering logic to detection predictions (e.g., filter detections by class names using conditional statements, filter by confidence thresholds with multiple conditions, apply custom filtering criteria based on detection properties), enabling sophisticated detection selection workflows (a filtering sketch follows this list)
- Bounding Box Transformations: Modify bounding box sizes, positions, or properties (e.g., resize bounding boxes proportionally, scale boxes by percentage, adjust box coordinates, transform box dimensions), enabling flexible bounding box manipulation
- Property Extraction and Filtering: Extract detection properties and filter based on extracted values (e.g., extract class names and filter by class lists, extract confidence scores and filter by thresholds, extract properties for conditional processing), enabling property-based detection processing
- Multi-Conditional Processing: Apply complex conditional transformations based on multiple detection criteria (e.g., transform detections based on class and confidence combinations, apply different operations for different detection types, conditionally modify detections based on multiple properties), enabling sophisticated conditional detection processing
- Detection Data Enrichment: Extract and add properties to detections for downstream processing (e.g., extract class names for filtering, compute detection properties, add metadata to detections), enabling enriched detection data for complex workflows
- Custom Detection Manipulation: Apply custom transformations not available in dedicated blocks (e.g., complex multi-step detection modifications, custom filtering and transformation combinations, specialized detection processing workflows), enabling flexible custom detection processing
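As one concrete instance of the filtering use case, the class-list filter shown in the example at the bottom of this page can be adapted into a confidence-threshold filter. The sketch below expresses the operations value as a Python dict rather than JSON; the "(Number) >=" comparator, the "confidence" property name, and the StaticOperand type are assumptions extrapolated from the class-filtering example and should be checked against the current UQL reference.

```python
# Hypothetical operations chain: keep only detections with confidence >= 0.5.
# Field names mirror the class-filtering example at the end of this page;
# the comparator, property name, and StaticOperand shown here are assumptions.
operations = [
    {
        "type": "DetectionsFilter",
        "filter_operation": {
            "type": "StatementGroup",
            "statements": [
                {
                    "type": "BinaryStatement",
                    "left_operand": {
                        "type": "DynamicOperand",
                        "operations": [
                            {"type": "ExtractDetectionProperty", "property_name": "confidence"}
                        ],
                    },
                    "comparator": {"type": "(Number) >="},
                    "right_operand": {"type": "StaticOperand", "value": 0.5},
                }
            ],
        },
    }
]
```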
Connecting to Other Blocks¶
This block receives detection predictions and produces transformed detections:
- After detection blocks (e.g., Object Detection, Instance Segmentation, Keypoint Detection) to apply custom transformations, filtering, or modifications to detection predictions, enabling flexible detection processing workflows
- Before dynamic crop blocks to filter or modify detections before cropping (e.g., filter detections by class before cropping, transform box sizes before cropping, extract specific detections for cropping), enabling optimized region extraction workflows (a full workflow sketch follows this list)
- Before classification or analysis blocks to prepare detections with custom filtering or transformations (e.g., filter detections for specific analysis, transform boxes for compatibility, prepare detections with custom criteria), enabling customized detection preparation
- In multi-stage detection workflows where detections need custom transformations between stages (e.g., filter and transform initial detections before secondary processing, apply custom modifications between detection stages, conditionally process detections based on criteria), enabling sophisticated multi-stage workflows
- Before visualization blocks to filter or transform detections for display (e.g., filter detections for visualization, transform boxes for presentation, customize detections for display purposes), enabling optimized visual outputs
- After detection blocks and before other transformation blocks to apply custom logic between transformations (e.g., filter after detection and before cropping, transform between detection stages, apply conditional modifications), enabling complex transformation pipelines with custom logic
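The sketch below shows one way such a pipeline could be wired together: an object detection model, followed by this block filtering detections to a runtime-provided class list, followed by a dynamic crop. It is written as a Python dict rather than JSON, and the surrounding fields (version, inputs, outputs, the roboflow_core/roboflow_object_detection_model@v1 and roboflow_core/dynamic_crop@v1 step identifiers, and the model_id placeholder) are assumptions about the general workflow specification format rather than content taken from this page.

```python
# A minimal workflow sketch (assumed specification format): detect, filter, crop.
workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "classes"},
    ],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed identifier
            "name": "object_detection_model",
            "image": "$inputs.image",
            "model_id": "your-project/1",  # placeholder model id
        },
        {
            "type": "roboflow_core/detections_transformation@v1",
            "name": "detections_transformation",
            "predictions": "$steps.object_detection_model.predictions",
            "operations": [
                # Same class-filtering operation as in the example at the bottom of this page.
                {
                    "type": "DetectionsFilter",
                    "filter_operation": {
                        "type": "StatementGroup",
                        "statements": [
                            {
                                "type": "BinaryStatement",
                                "left_operand": {
                                    "type": "DynamicOperand",
                                    "operations": [
                                        {"type": "ExtractDetectionProperty", "property_name": "class_name"}
                                    ],
                                },
                                "comparator": {"type": "in (Sequence)"},
                                "right_operand": {"type": "DynamicOperand", "operand_name": "classes"},
                            }
                        ],
                    },
                }
            ],
            "operations_parameters": {"classes": "$inputs.classes"},
        },
        {
            "type": "roboflow_core/dynamic_crop@v1",  # assumed identifier for the Dynamic Crop block
            "name": "dynamic_crop",
            "image": "$inputs.image",
            "predictions": "$steps.detections_transformation.predictions",
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "crops", "selector": "$steps.dynamic_crop.crops"},
        {"type": "JsonField", "name": "predictions", "selector": "$steps.detections_transformation.predictions"},
    ],
}
```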
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/detections_transformation@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| operations | List[Union[ClassificationPropertyExtract, ConvertDictionaryToJSON, ConvertImageToBase64, ConvertImageToJPEG, DetectionsFilter, DetectionsOffset, DetectionsPropertyExtract, DetectionsRename, DetectionsSelection, DetectionsShift, DetectionsToDictionary, Divide, ExtractDetectionProperty, ExtractFrameMetadata, ExtractImageProperty, LookupTable, Multiply, NumberRound, NumericSequenceAggregate, PickDetectionsByParentClass, RandomNumber, SequenceAggregate, SequenceApply, SequenceElementsCount, SequenceLength, SequenceMap, SortDetections, StringMatches, StringSubSequence, StringToLowerCase, StringToUpperCase, TimestampToISOFormat, ToBoolean, ToNumber, ToString]] | List of UQL (Query Language) operations to apply sequentially to the detections. Operations are executed in order, with each operation receiving the output of the previous operation. Supported operations include DetectionsFilter (filtering detections by conditions), ExtractDetectionProperty (extracting properties from detections), bounding box transformations (resizing, scaling), and other UQL operations that accept and return sv.Detections. Operations can be parameterized using operations_parameters. The operations chain must transform sv.Detections to sv.Detections (type must be preserved). | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detections Transformation in version v1.
- inputs:
Icon Visualization,Image Preprocessing,LMM,Blur Visualization,Detections Classes Replacement,Detections Merge,Color Visualization,Contrast Equalization,Cache Set,Llama 3.2 Vision,Velocity,Reference Path Visualization,SIFT,Buffer,OpenAI,SAM 3,Halo Visualization,Roboflow Dataset Upload,Dimension Collapse,Trace Visualization,Twilio SMS/MMS Notification,Detections Transformation,VLM as Detector,Single-Label Classification Model,Image Convert Grayscale,Path Deviation,Background Color Visualization,Multi-Label Classification Model,Qwen2.5-VL,Single-Label Classification Model,Camera Calibration,VLM as Detector,Triangle Visualization,Dynamic Zone,Ellipse Visualization,Seg Preview,Slack Notification,Absolute Static Crop,Time in Zone,Google Gemini,Webhook Sink,Line Counter Visualization,Florence-2 Model,Dominant Color,Detections Consensus,Property Definition,QR Code Generator,Object Detection Model,Anthropic Claude,Keypoint Detection Model,Byte Tracker,Image Slicer,Pixel Color Count,Image Contours,Stability AI Inpainting,VLM as Classifier,Line Counter,Google Gemini,Motion Detection,SIFT Comparison,Line Counter,Byte Tracker,Keypoint Detection Model,Dot Visualization,YOLO-World Model,Camera Focus,Bounding Box Visualization,OCR Model,Background Subtraction,OpenAI,SAM 3,Dynamic Crop,Qwen3-VL,Distance Measurement,Keypoint Visualization,Email Notification,Image Threshold,Anthropic Claude,Corner Visualization,Rate Limiter,Pixelate Visualization,SmolVLM2,SIFT Comparison,Twilio SMS Notification,Moondream2,Morphological Transformation,Roboflow Custom Metadata,Stitch Images,Detections Filter,Circle Visualization,Template Matching,Stability AI Image Generation,Image Blur,Detections List Roll-Up,Bounding Rectangle,Email Notification,Continue If,Perception Encoder Embedding Model,Detection Event Log,EasyOCR,Google Gemini,Environment Secrets Store,Instance Segmentation Model,Delta Filter,Detection Offset,Classification Label Visualization,Clip Comparison,Google Vision OCR,CogVLM,Data Aggregator,Stitch OCR Detections,QR Code Detection,Segment Anything 2 Model,LMM For Classification,Text Display,CLIP Embedding Model,SAM 3,Multi-Label Classification Model,Barcode Detection,Mask Visualization,OpenAI,Local File Sink,Anthropic Claude,Polygon Visualization,Polygon Zone Visualization,Model Comparison Visualization,JSON Parser,Label Visualization,Perspective Correction,Overlap Filter,Image Slicer,Identify Changes,Instance Segmentation Model,Cosine Similarity,Stability AI Outpainting,VLM as Classifier,Grid Visualization,Relative Static Crop,CSV Formatter,Size Measurement,Path Deviation,Camera Focus,Time in Zone,Florence-2 Model,Identify Outliers,Object Detection Model,Detections Stitch,Cache Get,First Non Empty Or Default,Crop Visualization,Clip Comparison,Expression,Byte Tracker,Detections Stabilizer,Roboflow Dataset Upload,PTZ Tracking (ONVIF),Model Monitoring Inference Aggregator,Time in Zone,Detections Combine,Depth Estimation,OpenAI,Gaze Detection
- outputs:
Icon Visualization,Perspective Correction,Overlap Filter,Blur Visualization,Florence-2 Model,Roboflow Custom Metadata,Detections Consensus,Pixelate Visualization,Detections Filter,Detections Classes Replacement,Color Visualization,Detections Merge,Circle Visualization,Velocity,Byte Tracker,Detections List Roll-Up,Size Measurement,Bounding Rectangle,Stability AI Inpainting,Path Deviation,Line Counter,Time in Zone,Halo Visualization,Detection Event Log,Florence-2 Model,Trace Visualization,Roboflow Dataset Upload,Detections Stitch,Detections Transformation,Line Counter,Byte Tracker,Detection Offset,Dot Visualization,Camera Focus,Crop Visualization,Path Deviation,Background Color Visualization,Bounding Box Visualization,Stitch OCR Detections,Byte Tracker,Detections Stabilizer,Segment Anything 2 Model,Roboflow Dataset Upload,Dynamic Crop,Triangle Visualization,Distance Measurement,Keypoint Visualization,PTZ Tracking (ONVIF),Model Monitoring Inference Aggregator,Dynamic Zone,Ellipse Visualization,Time in Zone,Detections Combine,Mask Visualization,Corner Visualization,Polygon Visualization,Time in Zone,Model Comparison Visualization,Label Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Detections Transformation in version v1 has.
Bindings
- input
    - predictions (Union[object_detection_prediction, keypoint_detection_prediction, instance_segmentation_prediction]): Detection predictions to transform using UQL operations. Supports object detection, instance segmentation, or keypoint detection predictions. The detections will be transformed by the operations chain defined in the operations field. All transformations must preserve the detection type (output must remain sv.Detections). The block processes batch inputs and applies transformations per batch item.
    - operations_parameters (*): Dictionary mapping parameter names (used in operations) to workflow data sources or values. Parameters are referenced in operations (e.g., in conditional statements, filter operations) and provided at runtime. Supports both batch parameters (aligned with predictions, one value per batch item) and non-batch parameters (same value for all batch items). Parameters are automatically separated into batch and non-batch based on their data structure. Cannot use reserved parameter names. Use this to parameterize operations dynamically (e.g., provide class lists for filtering, provide thresholds for conditions, supply values for operations that need runtime parameters).
- output
    - predictions (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Prediction with detected bounding boxes in the form of an sv.Detections(...) object if object_detection_prediction, or prediction with detected bounding boxes and segmentation masks in the form of an sv.Detections(...) object if instance_segmentation_prediction, or prediction with detected bounding boxes and detected keypoints in the form of an sv.Detections(...) object if keypoint_detection_prediction.
Example JSON definition of step Detections Transformation in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/detections_transformation@v1",
"predictions": "$steps.object_detection_model.predictions",
"operations": [
{
"filter_operation": {
"statements": [
{
"comparator": {
"type": "in (Sequence)"
},
"left_operand": {
"operations": [
{
"property_name": "class_name",
"type": "ExtractDetectionProperty"
}
],
"type": "DynamicOperand"
},
"right_operand": {
"operand_name": "classes",
"type": "DynamicOperand"
},
"type": "BinaryStatement"
}
],
"type": "StatementGroup"
},
"type": "DetectionsFilter"
}
],
"operations_parameters": {
"classes": "$inputs.classes"
}
}
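Assuming the step above is embedded in a complete workflow specification (such as the sketch in the Connecting to Other Blocks section), one way to execute it is through the inference_sdk HTTP client pointed at a running inference server. This is a hedged sketch: the run_workflow arguments shown (specification, images, parameters), the local server URL, and the workflow_specification variable are assumptions about the client API and deployment, not content taken from this page.

```python
from inference_sdk import InferenceHTTPClient

# Assumed client usage: connect to a running inference server and execute
# a custom workflow specification with runtime parameters.
client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # assumed local inference server
    api_key="YOUR_ROBOFLOW_API_KEY",
)

result = client.run_workflow(
    specification=workflow_specification,      # the spec dict from the earlier sketch
    images={"image": "path/to/image.jpg"},     # bound to the WorkflowImage input
    parameters={"classes": ["car", "truck"]},  # bound to $inputs.classes
)

# The transformed detections are returned under the output name defined
# in the workflow's "outputs" section (e.g., "predictions").
print(result)
```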