Property Definition¶
Class: PropertyDefinitionBlockV1
Source: inference.core.workflows.core_steps.formatters.property_definition.v1.PropertyDefinitionBlockV1
Extract specific properties or fields from workflow step outputs using configurable operation chains. The block can extract class names, confidences, counts, coordinates, OCR text, metadata, and other properties from model predictions or workflow data, supporting data transformation, property extraction, metadata access, and value extraction workflows.
How This Block Works¶
This block extracts specific properties from data by applying a chain of operations that navigate and extract values from complex data structures. The block:
- Receives input data from any workflow step (detections, classifications, OCR results, images, or other data types)
- Applies a chain of operations defined in the operations parameter:
  - Each operation performs a specific extraction or transformation task
  - Operations are executed sequentially, with each operation working on the result of the previous one
  - Operations can extract properties, filter data, transform formats, or combine values
- Extracts properties based on operation type:
For Detection Properties:
  - Extracts properties from object detection, instance segmentation, or keypoint detection predictions
  - Can extract: class names, confidences, counts, bounding box coordinates (x_min, y_min, x_max, y_max), centers, sizes, tracker IDs, velocities, speeds, path deviations, time in zone, polygons, and more
  - Returns lists of values (one per detection) or aggregated values

For Classification Properties:
  - Extracts properties from classification predictions
  - Can extract: predicted class, confidence scores, all classes, all confidences
  - Returns single values or lists depending on the property

For OCR Properties:
  - Extracts text, coordinates, and metadata from OCR results
  - Can extract: recognized text, bounding box information, confidence scores

For Image Properties:
  - Extracts metadata and properties from images
  - Can extract: dimensions, format information, and other image metadata
- Supports compound operations for complex extractions:
  - Operations can be chained to perform multi-step extractions
  - Can filter detections before extracting properties
  - Can select specific detections, transform formats, or combine multiple properties
- Returns the extracted property value:
  - Output type depends on the property extracted (list, string, number, dictionary, etc.)
  - Returns a single output value containing the extracted property
The block uses a flexible operation system that allows extracting virtually any property from workflow data. Operations can be simple (extract a single property) or compound (filter, transform, then extract). This makes the block highly versatile for accessing specific fields from complex data structures without needing custom code.
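The sequential chaining described above can be illustrated with a short sketch. This is a conceptual illustration only, not the actual inference implementation; the payload shape and the two example operations (a confidence filter followed by a class-name extraction) are hypothetical stand-ins for the block's real operations.

```python
# Conceptual sketch of the block's operation chaining: each operation is a
# function applied to the result of the previous one.
def apply_operations(data, operations):
    result = data
    for operation in operations:
        result = operation(result)
    return result

# Hypothetical detections payload (real predictions are richer objects):
detections = [
    {"class_name": "car", "confidence": 0.91},
    {"class_name": "person", "confidence": 0.42},
]

# A compound chain: filter detections first, then extract a property.
filter_low_confidence = lambda dets: [d for d in dets if d["confidence"] >= 0.5]
extract_class_names = lambda dets: [d["class_name"] for d in dets]

result = apply_operations(detections, [filter_low_confidence, extract_class_names])
# result == ["car"]
```

The real block works the same way: the output of each operation in the `operations` list becomes the input of the next.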
Common Use Cases¶
- Property Extraction: Extract specific fields from model predictions (e.g., extract class names from detections, get confidence scores, extract OCR text, get detection counts), enabling property extraction workflows
- Metadata Access: Access metadata and computed properties from workflow steps (e.g., extract tracker IDs, get velocity values, access time in zone, retrieve path deviations), enabling metadata access workflows
- Data Transformation: Transform complex data structures into simpler values for downstream use (e.g., convert detections to lists, extract coordinates, get bounding box centers, extract class lists), enabling data transformation workflows
- Conditional Logic: Extract values for use in conditional logic or decision making (e.g., extract counts for thresholds, get confidences for filtering, extract class names for classification, get coordinates for calculations), enabling conditional logic workflows
- Data Formatting: Format data for storage, display, or API responses (e.g., extract values for JSON output, format data for storage, prepare data for visualization, extract for API responses), enabling data formatting workflows
- Analytics Extraction: Extract metrics and measurements for analysis (e.g., extract detection counts, get confidence statistics, extract measurement values, retrieve analytics metrics), enabling analytics extraction workflows
Connecting to Other Blocks¶
This block receives data from any workflow step and produces extracted property values:
- After model blocks (detection, classification, OCR, etc.) to extract properties from predictions (e.g., extract class names from detections, get classification results, extract OCR text), enabling model-to-property workflows
- After analytics blocks to extract computed metrics and measurements (e.g., extract velocity values, get time in zone, retrieve path deviations, access tracking information), enabling analytics-to-property workflows
- Before logic blocks like Continue If to use extracted values in conditions (e.g., continue if count exceeds threshold, filter based on extracted confidence, make decisions using extracted values), enabling property-based decision workflows
- Before data storage blocks to format extracted values for storage (e.g., store extracted properties, format values for logging, prepare data for storage), enabling property-to-storage workflows
- Before visualization blocks to provide extracted values for display (e.g., display extracted counts, show extracted text, visualize extracted metrics), enabling property visualization workflows
- Before notification blocks to use extracted values in notifications (e.g., include extracted counts in alerts, send extracted text in messages, use extracted values in notifications), enabling property-based notification workflows
Requirements¶
This block works with any data type from workflow steps. The operations parameter defines a list of operations to perform on the input data. Each operation must be compatible with the data type and the output of the previous operation. Common operations include DetectionsPropertyExtract (for detection properties), ClassificationPropertyExtract (for classification properties), and other extraction operations. The block supports compound operations (operations that can contain other operations) for complex extractions. The output type depends on the operations performed and the properties extracted: it can be a list, string, number, dictionary, or another type depending on what is extracted.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/property_definition@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| operations | List[Union[ClassificationPropertyExtract, ConvertDictionaryToJSON, ConvertImageToBase64, ConvertImageToJPEG, DetectionsFilter, DetectionsOffset, DetectionsPropertyExtract, DetectionsRename, DetectionsSelection, DetectionsShift, DetectionsToDictionary, Divide, ExtractDetectionProperty, ExtractFrameMetadata, ExtractImageProperty, LookupTable, Multiply, NumberRound, NumericSequenceAggregate, PickDetectionsByParentClass, RandomNumber, SequenceAggregate, SequenceApply, SequenceElementsCount, SequenceLength, SequenceMap, SortDetections, StringMatches, StringSubSequence, StringToLowerCase, StringToUpperCase, TimestampToISOFormat, ToBoolean, ToNumber, ToString]] | List of operations to perform sequentially on the input data. Each operation performs extraction, filtering, transformation, or combination. Operations execute in order, with each operation working on the previous result. Common operations: DetectionsPropertyExtract (extract properties like class_name, confidence, count, coordinates from detections), ClassificationPropertyExtract (extract class, confidence from classifications), DetectionsFilter (filter detections before extraction), DetectionsSelection (select specific detections). Can include single or compound operations for complex extractions. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
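As an illustration of chaining operations from the list above, the sketch below first extracts class names and then reduces the resulting list to its length, yielding a detection count. The step name and input selector are placeholders, and SequenceLength's exact schema should be confirmed against the operation reference before use.

```json
{
  "name": "detection_count",
  "type": "roboflow_core/property_definition@v1",
  "data": "$steps.object_detection_model.predictions",
  "operations": [
    {
      "type": "DetectionsPropertyExtract",
      "property_name": "class_name"
    },
    {
      "type": "SequenceLength"
    }
  ]
}
```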
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Property Definition in version v1.
- inputs:
Icon Visualization,Moondream2,Slack Notification,Label Visualization,Instance Segmentation Model,Multi-Label Classification Model,Dot Visualization,Camera Calibration,Trace Visualization,SAM 3,Time in Zone,Roboflow Custom Metadata,Dynamic Zone,Semantic Segmentation Model,Delta Filter,Relative Static Crop,Image Threshold,Keypoint Visualization,Overlap Filter,PTZ Tracking (ONVIF),Template Matching,Single-Label Classification Model,Path Deviation,Blur Visualization,Circle Visualization,Keypoint Detection Model,Crop Visualization,Detections Merge,Classification Label Visualization,OpenAI,Email Notification,Google Gemini,OpenAI,Identify Outliers,Twilio SMS Notification,YOLO-World Model,CSV Formatter,Twilio SMS/MMS Notification,Model Monitoring Inference Aggregator,Keypoint Detection Model,Anthropic Claude,Google Gemini,Image Convert Grayscale,Path Deviation,Detections Filter,Time in Zone,Distance Measurement,Stability AI Inpainting,Depth Estimation,S3 Sink,Inner Workflow,Model Comparison Visualization,Byte Tracker,SAM2 Video Tracker,Motion Detection,Google Gemini,Detections Classes Replacement,Detections List Roll-Up,CLIP Embedding Model,Florence-2 Model,Size Measurement,LMM,Detection Offset,Dynamic Crop,SAM 3,Barcode Detection,Clip Comparison,Perception Encoder Embedding Model,SIFT Comparison,Detections Stabilizer,Byte Tracker,Multi-Label Classification Model,Property Definition,VLM As Detector,Grid Visualization,Polygon Visualization,Mask Visualization,Webhook Sink,Semantic Segmentation Model,Seg Preview,Image Slicer,Bounding Rectangle,SORT Tracker,OC-SORT Tracker,OCR Model,Detection Event Log,Qwen3-VL,Object Detection Model,Object Detection Model,GLM-OCR,Roboflow Dataset Upload,Object Detection Model,Detections Stitch,Polygon Zone Visualization,SIFT,Morphological Transformation,Perspective Correction,Instance Segmentation Model,Detections Combine,Cache Get,Anthropic Claude,Rate Limiter,Image Slicer,Absolute Static Crop,SmolVLM2,Cache Set,Data Aggregator,Email 
Notification,Camera Focus,EasyOCR,SAM 3,Gaze Detection,Polygon Visualization,OpenAI,VLM As Classifier,Stitch OCR Detections,Color Visualization,Continue If,Byte Tracker,Line Counter,Local File Sink,Image Contours,Mask Area Measurement,Roboflow Vision Events,OpenAI,Single-Label Classification Model,Llama 3.2 Vision,First Non Empty Or Default,Clip Comparison,Instance Segmentation Model,VLM As Detector,Cosine Similarity,JSON Parser,Time in Zone,LMM For Classification,Pixel Color Count,Triangle Visualization,Background Color Visualization,Stitch OCR Detections,Expression,Identify Changes,Line Counter,Qwen3.5-VL,CogVLM,Qwen2.5-VL,Anthropic Claude,Image Blur,Stitch Images,Dominant Color,Contrast Equalization,Corner Visualization,Velocity,Halo Visualization,Stability AI Image Generation,Detections Consensus,Reference Path Visualization,Buffer,QR Code Detection,Line Counter Visualization,ByteTrack Tracker,Multi-Label Classification Model,Keypoint Detection Model,Roboflow Dataset Upload,Heatmap Visualization,Text Display,VLM As Classifier,Segment Anything 2 Model,Camera Focus,Single-Label Classification Model,Detections Transformation,Image Preprocessing,SIFT Comparison,Environment Secrets Store,Bounding Box Visualization,Stability AI Outpainting,Halo Visualization,Background Subtraction,Dimension Collapse,QR Code Generator,Pixelate Visualization,Ellipse Visualization,Google Vision OCR,Florence-2 Model
- outputs:
Moondream2,Icon Visualization,Slack Notification,Instance Segmentation Model,Label Visualization,Multi-Label Classification Model,Dot Visualization,Trace Visualization,Camera Calibration,SAM 3,Time in Zone,Roboflow Custom Metadata,Dynamic Zone,Semantic Segmentation Model,Delta Filter,Relative Static Crop,Image Threshold,Keypoint Visualization,PTZ Tracking (ONVIF),Overlap Filter,Template Matching,Single-Label Classification Model,Path Deviation,Blur Visualization,Circle Visualization,Keypoint Detection Model,Crop Visualization,Email Notification,Classification Label Visualization,OpenAI,Detections Merge,OpenAI,Google Gemini,Identify Outliers,Twilio SMS Notification,YOLO-World Model,Twilio SMS/MMS Notification,CSV Formatter,Keypoint Detection Model,Model Monitoring Inference Aggregator,Anthropic Claude,Path Deviation,Google Gemini,Image Convert Grayscale,Time in Zone,Detections Filter,Distance Measurement,Stability AI Inpainting,S3 Sink,Depth Estimation,Inner Workflow,Model Comparison Visualization,SAM2 Video Tracker,Byte Tracker,Motion Detection,Google Gemini,Detections Classes Replacement,Detections List Roll-Up,CLIP Embedding Model,Florence-2 Model,Size Measurement,LMM,Dynamic Crop,Detection Offset,SAM 3,Perception Encoder Embedding Model,Clip Comparison,Barcode Detection,SIFT Comparison,Detections Stabilizer,Byte Tracker,VLM As Detector,Multi-Label Classification Model,Property Definition,Grid Visualization,Polygon Visualization,Mask Visualization,Webhook Sink,Semantic Segmentation Model,Seg Preview,Image Slicer,Bounding Rectangle,SORT Tracker,OC-SORT Tracker,OCR Model,Detection Event Log,Qwen3-VL,Object Detection Model,Object Detection Model,GLM-OCR,Roboflow Dataset Upload,Object Detection Model,Polygon Zone Visualization,Detections Stitch,SIFT,Morphological Transformation,Instance Segmentation Model,Perspective Correction,Detections Combine,Cache Get,Anthropic Claude,Rate Limiter,Image Slicer,Absolute Static Crop,SmolVLM2,Cache Set,Data Aggregator,Email 
Notification,EasyOCR,Camera Focus,SAM 3,Gaze Detection,Polygon Visualization,OpenAI,VLM As Classifier,Stitch OCR Detections,Color Visualization,Line Counter,Local File Sink,Byte Tracker,Continue If,Image Contours,Mask Area Measurement,Roboflow Vision Events,OpenAI,Single-Label Classification Model,Llama 3.2 Vision,First Non Empty Or Default,VLM As Detector,Instance Segmentation Model,Clip Comparison,Cosine Similarity,Time in Zone,JSON Parser,Triangle Visualization,Pixel Color Count,LMM For Classification,Background Color Visualization,Identify Changes,Stitch OCR Detections,Expression,Line Counter,Qwen3.5-VL,CogVLM,Qwen2.5-VL,Anthropic Claude,Image Blur,Stitch Images,Dominant Color,Contrast Equalization,Corner Visualization,Velocity,Stability AI Image Generation,Halo Visualization,Detections Consensus,Reference Path Visualization,QR Code Detection,Buffer,Line Counter Visualization,ByteTrack Tracker,Multi-Label Classification Model,Keypoint Detection Model,Roboflow Dataset Upload,Heatmap Visualization,Text Display,VLM As Classifier,Segment Anything 2 Model,Camera Focus,Single-Label Classification Model,Detections Transformation,Image Preprocessing,SIFT Comparison,Bounding Box Visualization,Stability AI Outpainting,Halo Visualization,Background Subtraction,Dimension Collapse,QR Code Generator,Pixelate Visualization,Ellipse Visualization,Google Vision OCR,Florence-2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Property Definition in version v1 has.
Bindings
- input
  - data(*): Input data from any workflow step to extract properties from. Can be detections, classifications, OCR results, images, or any other workflow output. The data type determines which operations are applicable. Examples: detection predictions for extracting class names, classification results for extracting the predicted class, OCR results for extracting text.
- output
  - output(*): Equivalent of any element.
Example JSON definition of step Property Definition in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/property_definition@v1",
"data": "$steps.object_detection_model.predictions",
"operations": [
{
"property_name": "class_name",
"type": "DetectionsPropertyExtract"
}
]
}
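For classification predictions, an analogous sketch uses ClassificationPropertyExtract instead. The step name and input selector are placeholders, and the property_name value shown is an assumption; consult the operation's schema for the exact accepted values.

```json
{
  "name": "predicted_class",
  "type": "roboflow_core/property_definition@v1",
  "data": "$steps.classification_model.predictions",
  "operations": [
    {
      "type": "ClassificationPropertyExtract",
      "property_name": "top"
    }
  ]
}
```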