Property Definition¶
Class: PropertyDefinitionBlockV1
Source: inference.core.workflows.core_steps.formatters.property_definition.v1.PropertyDefinitionBlockV1
Extract specific properties or fields from workflow step outputs using configurable operation chains. Typical targets include class names, confidences, counts, coordinates, OCR text, and other metadata from model predictions or workflow data, supporting data transformation, property extraction, metadata access, and value extraction workflows.
How This Block Works¶
This block extracts specific properties from data by applying a chain of operations that navigate and extract values from complex data structures. The block:
- Receives input data from any workflow step (detections, classifications, OCR results, images, or other data types)
- Applies a chain of operations defined in the operations parameter:
- Each operation performs a specific extraction or transformation task
- Operations are executed sequentially, with each operation working on the result of the previous one
- Operations can extract properties, filter data, transform formats, or combine values
- Extracts properties based on operation type:
For Detection Properties:
- Extracts properties from object detection, instance segmentation, or keypoint detection predictions
- Can extract: class names, confidences, counts, bounding box coordinates (x_min, y_min, x_max, y_max), centers, sizes, tracker IDs, velocities, speeds, path deviations, time in zone, polygons, and more
- Returns lists of values (one per detection) or aggregated values
For Classification Properties:
- Extracts properties from classification predictions
- Can extract: predicted class, confidence scores, all classes, all confidences
- Returns single values or lists depending on the property
For OCR Properties:
- Extracts text, coordinates, and metadata from OCR results
- Can extract: recognized text, bounding box information, confidence scores
For Image Properties:
- Extracts metadata and properties from images
- Can extract: dimensions, format information, and other image metadata
- Supports compound operations for complex extractions:
- Operations can be chained to perform multi-step extractions
- Can filter detections before extracting properties
- Can select specific detections, transform formats, or combine multiple properties
- Returns the extracted property value:
- Output type depends on the property extracted (list, string, number, dictionary, etc.)
- Returns a single output value containing the extracted property
The block uses a flexible operation system that allows extracting virtually any property from workflow data. Operations can be simple (extract a single property) or compound (filter, transform, then extract). This makes the block highly versatile for accessing specific fields from complex data structures without needing custom code.
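For example, the following is a minimal sketch of a two-operation chain that serializes detections into a JSON string for storage or notification. The step name is illustrative, and it is assumed that DetectionsToDictionary and ConvertDictionaryToJSON take no parameters beyond type:
{
    "name": "detections_as_json",
    "type": "roboflow_core/property_definition@v1",
    "data": "$steps.object_detection_model.predictions",
    "operations": [
        {
            "type": "DetectionsToDictionary"
        },
        {
            "type": "ConvertDictionaryToJSON"
        }
    ]
}
The first operation turns the detections into a dictionary; the second turns that dictionary into a JSON string, illustrating how each operation consumes the previous operation's result.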
Common Use Cases¶
- Property Extraction: Extract specific fields from model predictions (e.g., extract class names from detections, get confidence scores, extract OCR text, get detection counts), enabling property extraction workflows
- Metadata Access: Access metadata and computed properties from workflow steps (e.g., extract tracker IDs, get velocity values, access time in zone, retrieve path deviations), enabling metadata access workflows
- Data Transformation: Transform complex data structures into simpler values for downstream use (e.g., convert detections to lists, extract coordinates, get bounding box centers, extract class lists), enabling data transformation workflows
- Conditional Logic: Extract values for use in conditional logic or decision making (e.g., extract counts for thresholds, get confidences for filtering, extract class names for classification, get coordinates for calculations), enabling conditional logic workflows
- Data Formatting: Format data for storage, display, or API responses (e.g., extract values for JSON output, format data for storage, prepare data for visualization, extract for API responses), enabling data formatting workflows
- Analytics Extraction: Extract metrics and measurements for analysis (e.g., extract detection counts, get confidence statistics, extract measurement values, retrieve analytics metrics), enabling analytics extraction workflows
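As a sketch of the Conditional Logic use case above, the chain below extracts one class name per detection and then reduces that list to a count that a downstream threshold check can consume. The step name is illustrative, and it is assumed that SequenceLength requires no parameters beyond type:
{
    "name": "detection_count",
    "type": "roboflow_core/property_definition@v1",
    "data": "$steps.object_detection_model.predictions",
    "operations": [
        {
            "type": "DetectionsPropertyExtract",
            "property_name": "class_name"
        },
        {
            "type": "SequenceLength"
        }
    ]
}
DetectionsPropertyExtract returns one class name per detection, and SequenceLength collapses that list into a single number.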
Connecting to Other Blocks¶
This block receives data from any workflow step and produces extracted property values:
- After model blocks (detection, classification, OCR, etc.) to extract properties from predictions (e.g., extract class names from detections, get classification results, extract OCR text), enabling model-to-property workflows
- After analytics blocks to extract computed metrics and measurements (e.g., extract velocity values, get time in zone, retrieve path deviations, access tracking information), enabling analytics-to-property workflows
- Before logic blocks like Continue If to use extracted values in conditions (e.g., continue if count exceeds threshold, filter based on extracted confidence, make decisions using extracted values), enabling property-based decision workflows
- Before data storage blocks to format extracted values for storage (e.g., store extracted properties, format values for logging, prepare data for storage), enabling property-to-storage workflows
- Before visualization blocks to provide extracted values for display (e.g., display extracted counts, show extracted text, visualize extracted metrics), enabling property visualization workflows
- Before notification blocks to use extracted values in notifications (e.g., include extracted counts in alerts, send extracted text in messages, use extracted values in notifications), enabling property-based notification workflows
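Because Property Definition appears in both the inputs and outputs lists below, one Property Definition step can consume another's result through the $steps.<step_name>.output selector. A hypothetical sketch that reuses the detection_count step above and assumes ToString needs no parameters beyond type:
{
    "name": "count_as_text",
    "type": "roboflow_core/property_definition@v1",
    "data": "$steps.detection_count.output",
    "operations": [
        {
            "type": "ToString"
        }
    ]
}
The same $steps.<step_name>.output selector is how Continue If, data storage, visualization, and notification blocks reference the extracted value.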
Requirements¶
This block works with any data type from workflow steps. The operations parameter defines a list of operations to perform on the input data. Each operation must be compatible with the data type and previous operation outputs. Common operations include DetectionsPropertyExtract (for detection properties), ClassificationPropertyExtract (for classification properties), and other extraction operations. The block supports compound operations (operations that can contain other operations) for complex extractions. The output type depends on the operations performed and the properties extracted - it can be a list, string, number, dictionary, or other types depending on what is extracted.
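The output type tracks what the operations produce: detection-oriented extractions typically return lists (one value per detection), while classification-oriented extractions can return a single value. A sketch assuming the illustrative step name predicted_class, a classification step exposing predictions, and that top_class is a valid property_name for ClassificationPropertyExtract:
{
    "name": "predicted_class",
    "type": "roboflow_core/property_definition@v1",
    "data": "$steps.classification_model.predictions",
    "operations": [
        {
            "type": "ClassificationPropertyExtract",
            "property_name": "top_class"
        }
    ]
}
Here the result is a single string, whereas the class_name extraction shown in the example further down returns a list with one entry per detection.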
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/property_definition@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| operations | List[Union[ClassificationPropertyExtract, ConvertDictionaryToJSON, ConvertImageToBase64, ConvertImageToJPEG, DetectionsFilter, DetectionsOffset, DetectionsPropertyExtract, DetectionsRename, DetectionsSelection, DetectionsShift, DetectionsToDictionary, Divide, ExtractDetectionProperty, ExtractFrameMetadata, ExtractImageProperty, LookupTable, Multiply, NumberRound, NumericSequenceAggregate, PickDetectionsByParentClass, RandomNumber, SequenceAggregate, SequenceApply, SequenceElementsCount, SequenceLength, SequenceMap, SortDetections, StringMatches, StringSubSequence, StringToLowerCase, StringToUpperCase, TimestampToISOFormat, ToBoolean, ToNumber, ToString]] | List of operations to perform sequentially on the input data. Each operation performs extraction, filtering, transformation, or combination. Operations execute in order, with each operation working on the previous result. Common operations: DetectionsPropertyExtract (extract properties like class_name, confidence, count, coordinates from detections), ClassificationPropertyExtract (extract class, confidence from classifications), DetectionsFilter (filter detections before extraction), DetectionsSelection (select specific detections). Can include single or compound operations for complex extractions. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Property Definition in version v1.
- inputs:
Clip Comparison,Morphological Transformation,Motion Detection,Email Notification,Detections Stitch,Anthropic Claude,Detections Merge,Pixel Color Count,Keypoint Detection Model,Reference Path Visualization,Stitch OCR Detections,Camera Focus,Expression,Stability AI Image Generation,Rate Limiter,Stability AI Outpainting,Stitch Images,Time in Zone,Bounding Rectangle,Roboflow Dataset Upload,Detections Transformation,Depth Estimation,CogVLM,JSON Parser,Local File Sink,Identify Outliers,SAM 3,Dynamic Crop,Time in Zone,Perception Encoder Embedding Model,Moondream2,Dot Visualization,Triangle Visualization,Cosine Similarity,Crop Visualization,PTZ Tracking (ONVIF),Twilio SMS Notification,Perspective Correction,Twilio SMS/MMS Notification,EasyOCR,Dimension Collapse,First Non Empty Or Default,Pixelate Visualization,Detections Consensus,OpenAI,Roboflow Dataset Upload,Buffer,Barcode Detection,Single-Label Classification Model,Object Detection Model,SIFT Comparison,Cache Set,Contrast Equalization,Byte Tracker,Halo Visualization,Model Comparison Visualization,Byte Tracker,Slack Notification,Dynamic Zone,Cache Get,Qwen2.5-VL,Image Contours,Background Color Visualization,Image Blur,Mask Visualization,Google Vision OCR,Color Visualization,Corner Visualization,Path Deviation,Clip Comparison,Template Matching,Line Counter Visualization,Ellipse Visualization,Icon Visualization,Velocity,Image Slicer,Detections Stabilizer,Absolute Static Crop,Stability AI Inpainting,SAM 3,Distance Measurement,Relative Static Crop,SIFT,CSV Formatter,Detections Filter,Blur Visualization,Multi-Label Classification Model,Instance Segmentation Model,Florence-2 Model,Google Gemini,LMM,Instance Segmentation Model,Polygon Zone Visualization,Keypoint Visualization,Roboflow Custom Metadata,Camera Focus,Multi-Label Classification Model,Detection Offset,Image Threshold,LMM For Classification,Anthropic Claude,Delta Filter,Email Notification,Gaze Detection,Overlap Filter,Property Definition,Image Slicer,SmolVLM2,OpenAI,Detection Event Log,YOLO-World Model,Google Gemini,Image Preprocessing,Florence-2 Model,VLM as Detector,Image Convert Grayscale,Time in Zone,Byte Tracker,OCR Model,Seg Preview,Path Deviation,Continue If,SAM 3,Detections List Roll-Up,Grid Visualization,Google Gemini,Line Counter,Object Detection Model,Trace Visualization,QR Code Generator,CLIP Embedding Model,Camera Calibration,Webhook Sink,QR Code Detection,VLM as Detector,Data Aggregator,Background Subtraction,Bounding Box Visualization,Label Visualization,OpenAI,Circle Visualization,Dominant Color,VLM as Classifier,Size Measurement,Llama 3.2 Vision,Classification Label Visualization,Single-Label Classification Model,OpenAI,Segment Anything 2 Model,Detections Combine,Detections Classes Replacement,Environment Secrets Store,Model Monitoring Inference Aggregator,Line Counter,VLM as Classifier,Polygon Visualization,SIFT Comparison,Keypoint Detection Model,Qwen3-VL,Identify Changes,Text Display
- outputs:
Clip Comparison,Morphological Transformation,Motion Detection,Email Notification,Detections Stitch,Anthropic Claude,Pixel Color Count,Detections Merge,Keypoint Detection Model,Reference Path Visualization,Stitch OCR Detections,Camera Focus,Expression,Stability AI Image Generation,Stitch Images,Stability AI Outpainting,Time in Zone,Rate Limiter,Bounding Rectangle,Roboflow Dataset Upload,Depth Estimation,Detections Transformation,CogVLM,Identify Outliers,JSON Parser,Local File Sink,SAM 3,Dynamic Crop,Time in Zone,Perception Encoder Embedding Model,Moondream2,Triangle Visualization,Dot Visualization,Cosine Similarity,Crop Visualization,PTZ Tracking (ONVIF),Twilio SMS Notification,Twilio SMS/MMS Notification,Perspective Correction,EasyOCR,Dimension Collapse,First Non Empty Or Default,Pixelate Visualization,Detections Consensus,OpenAI,Roboflow Dataset Upload,Buffer,Single-Label Classification Model,Object Detection Model,Barcode Detection,SIFT Comparison,Cache Set,Contrast Equalization,Byte Tracker,Halo Visualization,Model Comparison Visualization,Byte Tracker,Slack Notification,Dynamic Zone,Qwen2.5-VL,Cache Get,Image Contours,Image Blur,Background Color Visualization,Mask Visualization,Google Vision OCR,Color Visualization,Corner Visualization,Path Deviation,Clip Comparison,Template Matching,Line Counter Visualization,Ellipse Visualization,Icon Visualization,Velocity,Image Slicer,Detections Stabilizer,Absolute Static Crop,Stability AI Inpainting,SAM 3,Distance Measurement,Relative Static Crop,SIFT,CSV Formatter,Detections Filter,Blur Visualization,Multi-Label Classification Model,Instance Segmentation Model,Florence-2 Model,Google Gemini,LMM,Instance Segmentation Model,Polygon Zone Visualization,Keypoint Visualization,Roboflow Custom Metadata,Camera Focus,Multi-Label Classification Model,Detection Offset,Image Threshold,LMM For Classification,Anthropic Claude,Delta Filter,Email Notification,Gaze Detection,Overlap Filter,Property Definition,Image Slicer,SmolVLM2,OpenAI,Detection Event Log,YOLO-World Model,Google Gemini,Image Preprocessing,Florence-2 Model,VLM as Detector,Image Convert Grayscale,Time in Zone,Byte Tracker,OCR Model,Seg Preview,Path Deviation,Continue If,SAM 3,Detections List Roll-Up,Grid Visualization,Google Gemini,Line Counter,Object Detection Model,Trace Visualization,QR Code Generator,CLIP Embedding Model,Camera Calibration,Webhook Sink,QR Code Detection,VLM as Detector,Data Aggregator,Background Subtraction,Bounding Box Visualization,Label Visualization,OpenAI,Circle Visualization,VLM as Classifier,Dominant Color,Size Measurement,Llama 3.2 Vision,Classification Label Visualization,Single-Label Classification Model,OpenAI,Detections Combine,Segment Anything 2 Model,Detections Classes Replacement,Model Monitoring Inference Aggregator,Line Counter,VLM as Classifier,Polygon Visualization,SIFT Comparison,Keypoint Detection Model,Qwen3-VL,Identify Changes,Text Display
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Property Definition in version v1 has.
Bindings
- input:
  - data (*): Input data from any workflow step to extract properties from. Can be detections, classifications, OCR results, images, or any other workflow output. The data type determines which operations are applicable. Examples: detection predictions for extracting class names, classification results for extracting the predicted class, OCR results for extracting text.
- output:
  - output (*): Equivalent of any element.
Example JSON definition of step Property Definition in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/property_definition@v1",
"data": "$steps.object_detection_model.predictions",
"operations": [
{
"property_name": "class_name",
"type": "DetectionsPropertyExtract"
}
]
}
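To put the step in context, here is a minimal sketch of a complete workflow definition that wires the block between a detection model and the workflow outputs. The model_id, the detection step's type identifier, and the WorkflowImage/JsonField declarations are illustrative assumptions rather than part of this block's specification:
{
    "version": "1.0",
    "inputs": [
        {
            "type": "WorkflowImage",
            "name": "image"
        }
    ],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v1",
            "name": "object_detection_model",
            "images": "$inputs.image",
            "model_id": "yolov8n-640"
        },
        {
            "type": "roboflow_core/property_definition@v1",
            "name": "class_names",
            "data": "$steps.object_detection_model.predictions",
            "operations": [
                {
                    "type": "DetectionsPropertyExtract",
                    "property_name": "class_name"
                }
            ]
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "class_names",
            "selector": "$steps.class_names.output"
        }
    ]
}
Running this workflow would be expected to return the list of detected class names under the class_names key.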