Distance Measurement¶
Class: DistanceMeasurementBlockV1
Source: inference.core.workflows.core_steps.classical_cv.distance_measurement.v1.DistanceMeasurementBlockV1
Calculate the distance between two bounding boxes on a 2D plane, leveraging a perpendicular camera view and either a reference object or a pixel-to-unit scaling ratio for precise measurements.
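To make the pixel-to-unit calibration concrete, the sketch below illustrates the underlying idea: measure the separation between two bounding boxes in pixels along the chosen axis, then divide by a pixels-per-centimeter ratio. This is a simplified illustration of the concept, not the block's actual implementation; the box format and the center-to-center convention are assumptions.

```python
# Minimal sketch of pixel-ratio calibration (not the block's exact logic).
# Boxes are (x_min, y_min, x_max, y_max) in pixels -- an assumed format.

def distance_cm_with_pixel_ratio(box_1, box_2, reference_axis="vertical", pixel_ratio=100.0):
    """Distance between box centers along one axis, converted to centimeters."""
    cx1, cy1 = (box_1[0] + box_1[2]) / 2, (box_1[1] + box_1[3]) / 2
    cx2, cy2 = (box_2[0] + box_2[2]) / 2, (box_2[1] + box_2[3]) / 2
    distance_px = abs(cy2 - cy1) if reference_axis == "vertical" else abs(cx2 - cx1)
    # pixel_ratio is "pixels per centimeter", e.g. 100 pixels == 1 cm.
    return distance_px / pixel_ratio


# Two boxes whose centers sit 400 px apart vertically -> 4.0 cm at 100 px/cm.
print(distance_cm_with_pixel_ratio((10, 10, 60, 60), (10, 410, 60, 460), "vertical", 100.0))
```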
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow:
roboflow_core/distance_measurement@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
object_1_class_name | str | The class name of the first object. | ❌ |
object_2_class_name | str | The class name of the second object. | ❌ |
reference_axis | str | The axis along which the distance will be measured. | ❌ |
calibration_method | str | Select how to calibrate the measurement of distance between objects. | ❌ |
reference_object_class_name | str | The class name of the reference object. | ✅ |
reference_width | float | Width of the reference object in centimeters. | ✅ |
reference_height | float | Height of the reference object in centimeters. | ✅ |
pixel_ratio | float | The pixel-to-centimeter ratio of the input image, e.g. 1 centimeter = 100 pixels. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info. The reference-object calibration method is sketched below.
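When calibration is done with a reference object rather than a fixed ratio, the scale is derived from an object of known real-world size that appears in the frame. The sketch below shows one plausible way such a scale could be computed from a detected reference box and the reference_width / reference_height values above; it is an illustration of the idea, not the block's internal code, and the box format and averaging step are assumptions.

```python
# Illustrative sketch: derive a pixels-per-centimeter scale from a detected
# reference object of known size (not the block's exact implementation).

def pixels_per_cm_from_reference(reference_box, reference_width_cm, reference_height_cm):
    """reference_box is (x_min, y_min, x_max, y_max) in pixels -- an assumed format."""
    width_px = reference_box[2] - reference_box[0]
    height_px = reference_box[3] - reference_box[1]
    # Average the horizontal and vertical scales to smooth out detection noise.
    scale_x = width_px / reference_width_cm
    scale_y = height_px / reference_height_cm
    return (scale_x + scale_y) / 2


# A 2.5 cm x 2.5 cm marker detected as a 250 x 250 px box -> 100 px per cm.
print(pixels_per_cm_from_reference((0, 0, 250, 250), 2.5, 2.5))
```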
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Distance Measurement in version v1.
- inputs: Roboflow Custom Metadata, CogVLM, Webhook Sink, Email Notification, Detections Filter, Gaze Detection, Dynamic Crop, VLM as Classifier, VLM as Detector, Twilio SMS Notification, Moondream2, Byte Tracker, Google Gemini, Segment Anything 2 Model, Detections Consensus, Instance Segmentation Model, Google Vision OCR, Time in Zone, OpenAI, Detections Combine, OpenAI, Roboflow Dataset Upload, Florence-2 Model, Time in Zone, Single-Label Classification Model, PTZ Tracking (ONVIF), Identify Changes, Byte Tracker, VLM as Detector, OpenAI, Perspective Correction, Path Deviation, Overlap Filter, Bounding Rectangle, Multi-Label Classification Model, Line Counter, Roboflow Dataset Upload, Keypoint Detection Model, Dynamic Zone, Llama 3.2 Vision, CSV Formatter, Detections Merge, YOLO-World Model, Detections Classes Replacement, Stitch OCR Detections, OCR Model, Detections Stabilizer, Local File Sink, Slack Notification, Model Monitoring Inference Aggregator, Camera Focus, Clip Comparison, Object Detection Model, Florence-2 Model, LMM, Detections Stitch, LMM For Classification, Cosine Similarity, Instance Segmentation Model, Detections Transformation, Path Deviation, Detection Offset, Velocity, Object Detection Model, EasyOCR, Byte Tracker, Time in Zone, Anthropic Claude, Template Matching
- outputs: Stability AI Outpainting, Image Slicer, Pixelate Visualization, Webhook Sink, SIFT Comparison, Image Threshold, Blur Visualization, Twilio SMS Notification, Image Slicer, Image Blur, Byte Tracker, Detections Consensus, Morphological Transformation, Polygon Visualization, Dot Visualization, PTZ Tracking (ONVIF), Identify Changes, Corner Visualization, Halo Visualization, Mask Visualization, Detections Classes Replacement, Trace Visualization, Color Visualization, Identify Outliers, Instance Segmentation Model, Ellipse Visualization, Reference Path Visualization, Object Detection Model, Triangle Visualization, Image Preprocessing, Anthropic Claude, Keypoint Detection Model, Email Notification, Line Counter Visualization, Crop Visualization, Image Contours, Grid Visualization, Instance Segmentation Model, SIFT Comparison, Classification Label Visualization, Stitch Images, Keypoint Visualization, Byte Tracker, Absolute Static Crop, Pixel Color Count, Perspective Correction, Keypoint Detection Model, Dynamic Zone, Stitch OCR Detections, QR Code Generator, Detections Stabilizer, Slack Notification, Circle Visualization, Icon Visualization, Object Detection Model, Stability AI Inpainting, Bounding Box Visualization, Detection Offset, Dominant Color, Label Visualization, Byte Tracker
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Distance Measurement in version v1 has.
Bindings
- input
  - predictions (Union[instance_segmentation_prediction, object_detection_prediction]): The output of a detection model describing the bounding boxes that will be used to measure the objects.
  - reference_object_class_name (string): The class name of the reference object.
  - reference_width (float): Width of the reference object in centimeters.
  - reference_height (float): Height of the reference object in centimeters.
  - pixel_ratio (float): The pixel-to-centimeter ratio of the input image, e.g. 1 centimeter = 100 pixels.
- output
Example JSON definition of step Distance Measurement in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/distance_measurement@v1",
    "predictions": "$steps.model.predictions",
    "object_1_class_name": "car",
    "object_2_class_name": "person",
    "reference_axis": "vertical",
    "calibration_method": "<block_does_not_provide_example>",
    "reference_object_class_name": "reference-object",
    "reference_width": 2.5,
    "reference_height": 2.5,
    "pixel_ratio": 100
}
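As a usage illustration, a step definition like the one above can be embedded in a complete workflow specification alongside an object detection step that supplies the predictions selector. The sketch below is hedged: the object detection block identifier, model_id, step names, the calibration_method value, and the wildcard output selector are assumptions or placeholders you would adapt, and it assumes the inference_sdk client's run_workflow method against a locally running inference server.

```python
# Hedged sketch: embedding the step in a full workflow specification and running
# it with inference_sdk. Identifiers and values marked below are placeholders/assumptions.
from inference_sdk import InferenceHTTPClient

WORKFLOW = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            # Assumed object detection block identifier; use the one from your workflow.
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "name": "model",
            "images": "$inputs.image",
            "model_id": "your-project/1",  # placeholder model
        },
        {
            "type": "roboflow_core/distance_measurement@v1",
            "name": "distance_measurement",
            "predictions": "$steps.model.predictions",
            "object_1_class_name": "car",
            "object_2_class_name": "person",
            "reference_axis": "vertical",
            "calibration_method": "pixel to centimeter",  # assumed value -- check the block's accepted options
            "pixel_ratio": 100,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "distance",
            "selector": "$steps.distance_measurement.*",  # wildcard keeps all of the step's outputs
        }
    ],
}

client = InferenceHTTPClient(api_url="http://localhost:9001", api_key="YOUR_API_KEY")
result = client.run_workflow(specification=WORKFLOW, images={"image": "path/to/frame.jpg"})
print(result)
```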