Property Definition¶
Class: PropertyDefinitionBlockV1
Source: inference.core.workflows.core_steps.formatters.property_definition.v1.PropertyDefinitionBlockV1
Extract specific properties or fields from workflow step outputs using configurable operation chains. Typical targets include class names, confidences, counts, coordinates, OCR text, and other metadata pulled from model predictions or workflow data, supporting data transformation, property extraction, metadata access, and value extraction workflows.
How This Block Works¶
This block extracts specific properties from data by applying a chain of operations that navigate and extract values from complex data structures. The block:
- Receives input data from any workflow step (detections, classifications, OCR results, images, or other data types)
- Applies a chain of operations defined in the operations parameter:
- Each operation performs a specific extraction or transformation task
- Operations are executed sequentially, with each operation working on the result of the previous one
- Operations can extract properties, filter data, transform formats, or combine values
- Extracts properties based on operation type:
- For Detection Properties:
  - Extracts properties from object detection, instance segmentation, or keypoint detection predictions
  - Can extract: class names, confidences, counts, bounding box coordinates (x_min, y_min, x_max, y_max), centers, sizes, tracker IDs, velocities, speeds, path deviations, time in zone, polygons, and more
  - Returns lists of values (one per detection) or aggregated values
- For Classification Properties:
  - Extracts properties from classification predictions
  - Can extract: predicted class, confidence scores, all classes, all confidences
  - Returns single values or lists depending on the property
- For OCR Properties:
  - Extracts text, coordinates, and metadata from OCR results
  - Can extract: recognized text, bounding box information, confidence scores
- For Image Properties:
  - Extracts metadata and properties from images
  - Can extract: dimensions, format information, and other image metadata
- Supports compound operations for complex extractions:
- Operations can be chained to perform multi-step extractions
- Can filter detections before extracting properties
- Can select specific detections, transform formats, or combine multiple properties
- Returns the extracted property value:
- Output type depends on the property extracted (list, string, number, dictionary, etc.)
- Returns a single output value containing the extracted property
The block uses a flexible operation system that allows extracting virtually any property from workflow data. Operations can be simple (extract a single property) or compound (filter, transform, then extract). This makes the block highly versatile for accessing specific fields from complex data structures without needing custom code.
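For example, a chain that turns detections into a count could first extract one class_name per detection and then measure the length of the resulting list. The step definition below is a minimal sketch: the step name and data selector are placeholders, and the pairing of DetectionsPropertyExtract with SequenceLength is an illustrative assumption (both operation types are listed under the operations property further down).
{
    "name": "detection_counter",
    "type": "roboflow_core/property_definition@v1",
    "data": "$steps.object_detection_model.predictions",
    "operations": [
        {
            "type": "DetectionsPropertyExtract",
            "property_name": "class_name"
        },
        {
            "type": "SequenceLength"
        }
    ]
}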
Common Use Cases¶
- Property Extraction: Extract specific fields from model predictions (e.g., extract class names from detections, get confidence scores, extract OCR text, get detection counts), enabling property extraction workflows
- Metadata Access: Access metadata and computed properties from workflow steps (e.g., extract tracker IDs, get velocity values, access time in zone, retrieve path deviations), enabling metadata access workflows
- Data Transformation: Transform complex data structures into simpler values for downstream use (e.g., convert detections to lists, extract coordinates, get bounding box centers, extract class lists), enabling data transformation workflows
- Conditional Logic: Extract values for use in conditional logic or decision making (e.g., extract counts for thresholds, get confidences for filtering, extract class names for classification, get coordinates for calculations), enabling conditional logic workflows
- Data Formatting: Format data for storage, display, or API responses (e.g., extract values for JSON output, format data for storage, prepare data for visualization, extract for API responses), enabling data formatting workflows
- Analytics Extraction: Extract metrics and measurements for analysis (e.g., extract detection counts, get confidence statistics, extract measurement values, retrieve analytics metrics), enabling analytics extraction workflows
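For data-formatting and conditional-logic cases like those above, a chain often ends with a simple transformation. The sketch below assumes ClassificationPropertyExtract accepts top_class as its property_name and that a classification step named classification_model exists; verify both against the operations reference.
{
    "name": "normalized_class",
    "type": "roboflow_core/property_definition@v1",
    "data": "$steps.classification_model.predictions",
    "operations": [
        {
            "type": "ClassificationPropertyExtract",
            "property_name": "top_class"
        },
        {
            "type": "StringToLowerCase"
        }
    ]
}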
Connecting to Other Blocks¶
This block receives data from any workflow step and produces extracted property values:
- After model blocks (detection, classification, OCR, etc.) to extract properties from predictions (e.g., extract class names from detections, get classification results, extract OCR text), enabling model-to-property workflows
- After analytics blocks to extract computed metrics and measurements (e.g., extract velocity values, get time in zone, retrieve path deviations, access tracking information), enabling analytics-to-property workflows
- Before logic blocks like Continue If to use extracted values in conditions (e.g., continue if count exceeds threshold, filter based on extracted confidence, make decisions using extracted values), enabling property-based decision workflows
- Before data storage blocks to format extracted values for storage (e.g., store extracted properties, format values for logging, prepare data for storage), enabling property-to-storage workflows
- Before visualization blocks to provide extracted values for display (e.g., display extracted counts, show extracted text, visualize extracted metrics), enabling property visualization workflows
- Before notification blocks to use extracted values in notifications (e.g., include extracted counts in alerts, send extracted text in messages, use extracted values in notifications), enabling property-based notification workflows
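Whichever block consumes the result, it references the step's single output with a selector of the form $steps.<step_name>.output. As an illustration, a workflow-level output exposing the value from the hypothetical detection_counter step sketched earlier might look like this (the JsonField output schema is a general workflow convention, not something specific to this block):
{
    "type": "JsonField",
    "name": "extracted_value",
    "selector": "$steps.detection_counter.output"
}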
Requirements¶
This block works with any data type from workflow steps. The operations parameter defines a list of operations to perform on the input data. Each operation must be compatible with the data type and previous operation outputs. Common operations include DetectionsPropertyExtract (for detection properties), ClassificationPropertyExtract (for classification properties), and other extraction operations. The block supports compound operations (operations that can contain other operations) for complex extractions. The output type depends on the operations performed and the properties extracted - it can be a list, string, number, dictionary, or other types depending on what is extracted.
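As a sketch of a compound operation, a DetectionsFilter can wrap a nested filtering statement before a property is extracted. The example below assumes the workflow query-language statement schema (StatementGroup, BinaryStatement, DynamicOperand, StaticOperand) and an "in (Sequence)" comparator; treat these field names and comparator identifiers as assumptions to check against the operations reference.
{
    "name": "vehicle_classes",
    "type": "roboflow_core/property_definition@v1",
    "data": "$steps.object_detection_model.predictions",
    "operations": [
        {
            "type": "DetectionsFilter",
            "filter_operation": {
                "type": "StatementGroup",
                "statements": [
                    {
                        "type": "BinaryStatement",
                        "left_operand": {
                            "type": "DynamicOperand",
                            "operations": [
                                {
                                    "type": "ExtractDetectionProperty",
                                    "property_name": "class_name"
                                }
                            ]
                        },
                        "comparator": {
                            "type": "in (Sequence)"
                        },
                        "right_operand": {
                            "type": "StaticOperand",
                            "value": ["car", "truck"]
                        }
                    }
                ]
            }
        },
        {
            "type": "DetectionsPropertyExtract",
            "property_name": "class_name"
        }
    ]
}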
Type identifier¶
Use the following identifier in step "type" field: roboflow_core/property_definition@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| operations | List[Union[ClassificationPropertyExtract, ConvertDictionaryToJSON, ConvertImageToBase64, ConvertImageToJPEG, DetectionsFilter, DetectionsOffset, DetectionsPropertyExtract, DetectionsRename, DetectionsSelection, DetectionsShift, DetectionsToDictionary, Divide, ExtractDetectionProperty, ExtractFrameMetadata, ExtractImageProperty, LookupTable, Multiply, NumberRound, NumericSequenceAggregate, PickDetectionsByParentClass, RandomNumber, SequenceAggregate, SequenceApply, SequenceElementsCount, SequenceLength, SequenceMap, SortDetections, StringMatches, StringSubSequence, StringToLowerCase, StringToUpperCase, TimestampToISOFormat, ToBoolean, ToNumber, ToString]] | List of operations to perform sequentially on the input data. Each operation performs extraction, filtering, transformation, or combination. Operations execute in order, with each operation working on the previous result. Common operations: DetectionsPropertyExtract (extract properties like class_name, confidence, count, coordinates from detections), ClassificationPropertyExtract (extract class, confidence from classifications), DetectionsFilter (filter detections before extraction), DetectionsSelection (select specific detections). Can include single or compound operations for complex extractions. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Property Definition in version v1.
- inputs:
Image Convert Grayscale,Image Slicer,SmolVLM2,Image Blur,Ellipse Visualization,Halo Visualization,Perception Encoder Embedding Model,Camera Focus,Detection Offset,Line Counter,Detection Event Log,Stability AI Inpainting,Reference Path Visualization,OpenAI,Slack Notification,Circle Visualization,Background Subtraction,Roboflow Dataset Upload,Stability AI Image Generation,LMM For Classification,VLM as Classifier,YOLO-World Model,Pixel Color Count,Cache Get,Clip Comparison,Detections Merge,Barcode Detection,Rate Limiter,Anthropic Claude,Buffer,Line Counter,Pixelate Visualization,Byte Tracker,CLIP Embedding Model,Email Notification,Image Contours,JSON Parser,Stitch OCR Detections,VLM as Detector,Relative Static Crop,Detections Consensus,Delta Filter,Camera Focus,Byte Tracker,Multi-Label Classification Model,LMM,Anthropic Claude,Dot Visualization,Stitch OCR Detections,Dimension Collapse,Dominant Color,Qwen2.5-VL,Keypoint Visualization,Anthropic Claude,Detections Transformation,Trace Visualization,Crop Visualization,First Non Empty Or Default,Absolute Static Crop,Google Gemini,Expression,Segment Anything 2 Model,Byte Tracker,Overlap Filter,Image Preprocessing,Gaze Detection,Instance Segmentation Model,Identify Changes,Perspective Correction,Email Notification,Cosine Similarity,Motion Detection,Halo Visualization,VLM as Classifier,SIFT Comparison,Path Deviation,Local File Sink,EasyOCR,Depth Estimation,CogVLM,Continue If,OpenAI,Polygon Visualization,QR Code Generator,Cache Set,QR Code Detection,Property Definition,Bounding Box Visualization,Size Measurement,Corner Visualization,Data Aggregator,Label Visualization,Clip Comparison,CSV Formatter,Florence-2 Model,SIFT Comparison,Google Gemini,OCR Model,Single-Label Classification Model,Webhook Sink,Contrast Equalization,Stability AI Outpainting,Environment Secrets Store,Qwen3-VL,Stitch Images,Detections Filter,Model Comparison Visualization,Distance Measurement,Polygon Visualization,Object Detection Model,Detections Stabilizer,OpenAI,Detections List Roll-Up,Path Deviation,Icon Visualization,Twilio SMS/MMS Notification,Model Monitoring Inference Aggregator,Object Detection Model,Multi-Label Classification Model,Color Visualization,SAM 3,Mask Visualization,Roboflow Dataset Upload,Time in Zone,Detections Classes Replacement,Image Slicer,Template Matching,OpenAI,Bounding Rectangle,Instance Segmentation Model,Keypoint Detection Model,Dynamic Zone,Google Gemini,Text Display,Blur Visualization,Roboflow Custom Metadata,Triangle Visualization,Google Vision OCR,Identify Outliers,SAM 3,Classification Label Visualization,Detections Combine,Image Threshold,PTZ Tracking (ONVIF),Camera Calibration,Time in Zone,Background Color Visualization,Seg Preview,Polygon Zone Visualization,Grid Visualization,Dynamic Crop,Keypoint Detection Model,SAM 3,Line Counter Visualization,Florence-2 Model,Time in Zone,Detections Stitch,Moondream2,Twilio SMS Notification,SIFT,Morphological Transformation,Single-Label Classification Model,Velocity,Llama 3.2 Vision,VLM as Detector
- outputs:
Image Convert Grayscale,Image Slicer,SmolVLM2,Image Blur,Ellipse Visualization,Halo Visualization,Perception Encoder Embedding Model,Camera Focus,Line Counter,Detection Offset,Detection Event Log,Stability AI Inpainting,Reference Path Visualization,OpenAI,Slack Notification,Circle Visualization,Stability AI Image Generation,Roboflow Dataset Upload,Background Subtraction,LMM For Classification,VLM as Classifier,YOLO-World Model,Pixel Color Count,Cache Get,Clip Comparison,Detections Merge,Barcode Detection,Rate Limiter,Anthropic Claude,Line Counter,Pixelate Visualization,Buffer,Byte Tracker,CLIP Embedding Model,Email Notification,Image Contours,JSON Parser,Stitch OCR Detections,VLM as Detector,Relative Static Crop,Detections Consensus,Delta Filter,Camera Focus,Byte Tracker,Multi-Label Classification Model,LMM,Dot Visualization,Anthropic Claude,Stitch OCR Detections,Dimension Collapse,Dominant Color,Qwen2.5-VL,Keypoint Visualization,Anthropic Claude,Trace Visualization,Detections Transformation,Crop Visualization,First Non Empty Or Default,Absolute Static Crop,Google Gemini,Expression,Segment Anything 2 Model,Byte Tracker,Overlap Filter,Image Preprocessing,Gaze Detection,Instance Segmentation Model,Identify Changes,Perspective Correction,Email Notification,Motion Detection,Cosine Similarity,Halo Visualization,VLM as Classifier,SIFT Comparison,Path Deviation,Local File Sink,EasyOCR,Depth Estimation,CogVLM,OpenAI,Polygon Visualization,QR Code Generator,Continue If,Cache Set,QR Code Detection,Property Definition,Bounding Box Visualization,Size Measurement,Corner Visualization,Data Aggregator,Label Visualization,Clip Comparison,CSV Formatter,SIFT Comparison,Florence-2 Model,Google Gemini,OCR Model,Webhook Sink,Single-Label Classification Model,Stability AI Outpainting,Contrast Equalization,Qwen3-VL,Stitch Images,Model Comparison Visualization,Detections Filter,Distance Measurement,Polygon Visualization,Object Detection Model,Detections Stabilizer,OpenAI,Detections List Roll-Up,Path Deviation,Icon Visualization,Twilio SMS/MMS Notification,Model Monitoring Inference Aggregator,Object Detection Model,Multi-Label Classification Model,Color Visualization,SAM 3,Mask Visualization,Roboflow Dataset Upload,Time in Zone,Detections Classes Replacement,Image Slicer,Template Matching,OpenAI,Bounding Rectangle,Instance Segmentation Model,Keypoint Detection Model,Dynamic Zone,Google Gemini,Text Display,Blur Visualization,Roboflow Custom Metadata,Triangle Visualization,Google Vision OCR,Identify Outliers,SAM 3,Classification Label Visualization,Detections Combine,Image Threshold,PTZ Tracking (ONVIF),Camera Calibration,Time in Zone,Background Color Visualization,Seg Preview,Polygon Zone Visualization,Grid Visualization,Dynamic Crop,Keypoint Detection Model,SAM 3,Line Counter Visualization,Florence-2 Model,Time in Zone,Detections Stitch,Moondream2,Twilio SMS Notification,SIFT,Morphological Transformation,Single-Label Classification Model,Velocity,Llama 3.2 Vision,VLM as Detector
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Property Definition in version v1 has.
Bindings
- input
  - data(*): Input data from any workflow step to extract properties from. Can be detections, classifications, OCR results, images, or any other workflow output. The data type determines which operations are applicable. Examples: detection predictions for extracting class names, classification results for extracting predicted class, OCR results for extracting text.
- output
  - output(*): Equivalent of any element.
Example JSON definition of step Property Definition in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/property_definition@v1",
"data": "$steps.object_detection_model.predictions",
"operations": [
{
"property_name": "class_name",
"type": "DetectionsPropertyExtract"
}
]
}
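The result of this step can then be referenced by downstream steps and workflow outputs through the selector $steps.<your_step_name_here>.output, matching the single output listed in the bindings above.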