Depth Estimation¶
Class: DepthEstimationBlockV1
Source: inference.core.workflows.core_steps.models.foundation.depth_estimation.v1.DepthEstimationBlockV1
This workflow block performs depth estimation on images using Apple's DepthPro model. It analyzes the spatial relationships
and depth information in images to create a depth map where:
- Each pixel's value represents its relative distance from the camera
- Lower values (darker colors) indicate closer objects
- Higher values (lighter colors) indicate further objects
The model outputs:
1. A depth map showing the relative distances of objects in the scene
2. The camera's field of view (in degrees)
3. The camera's focal length
This is particularly useful for:
- Understanding 3D structure from 2D images
- Creating depth-aware visualizations
- Analyzing spatial relationships in scenes
- Applications in augmented reality and 3D reconstruction
The model runs efficiently on Apple Silicon (M1-M4) using Metal Performance Shaders (MPS) for accelerated inference.
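The depth map convention above (lower values are closer and render darker) can be sketched with plain NumPy. This is an illustrative helper, not part of the block's API; the input array here is a hypothetical depth map, and the function only shows how a relative depth map maps onto a grayscale image.

```python
import numpy as np

def depth_to_grayscale(depth: np.ndarray) -> np.ndarray:
    """Normalize a relative depth map to 0-255 grayscale.

    Closer pixels (lower depth values) come out darker,
    matching the convention described above.
    """
    d = depth.astype(np.float32)
    d_min, d_max = d.min(), d.max()
    # Scale to [0, 1]; epsilon guards against a constant-depth map.
    norm = (d - d_min) / (d_max - d_min + 1e-8)
    return (norm * 255).astype(np.uint8)

# Hypothetical depth map: the top row is closer than the bottom row.
depth = np.array([[0.5, 0.5],
                  [2.0, 2.0]])
gray = depth_to_grayscale(depth)
# The closer top row ends up darker (smaller grayscale values).
```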
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/depth_estimation@v1 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `model_version` | `str` | The Depth Estimation model to be used for inference. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Depth Estimation in version v1.
- inputs:
Email Notification,Contrast Equalization,Instance Segmentation Model,Google Vision OCR,Grid Visualization,S3 Sink,Stability AI Image Generation,Model Comparison Visualization,Absolute Static Crop,Keypoint Visualization,SIFT,Trace Visualization,Roboflow Dataset Upload,Twilio SMS/MMS Notification,QR Code Generator,Model Monitoring Inference Aggregator,GLM-OCR,Reference Path Visualization,Halo Visualization,OCR Model,SIFT Comparison,VLM As Classifier,Image Preprocessing,Crop Visualization,OpenAI,OpenAI,Label Visualization,Classification Label Visualization,Pixelate Visualization,Local File Sink,Twilio SMS Notification,Email Notification,Qwen3.5-VL,Stitch OCR Detections,Corner Visualization,Stitch Images,Background Subtraction,Stitch OCR Detections,LMM For Classification,EasyOCR,Morphological Transformation,CSV Formatter,OpenAI,Clip Comparison,Image Threshold,Background Color Visualization,Anthropic Claude,Google Gemini,Camera Calibration,Halo Visualization,Stability AI Outpainting,Roboflow Custom Metadata,CogVLM,OpenAI,Single-Label Classification Model,Ellipse Visualization,Heatmap Visualization,Image Convert Grayscale,Triangle Visualization,Image Blur,Depth Estimation,Color Visualization,Camera Focus,Text Display,Anthropic Claude,Dot Visualization,Image Slicer,Keypoint Detection Model,Polygon Visualization,Florence-2 Model,Circle Visualization,Blur Visualization,Multi-Label Classification Model,Google Gemini,LMM,Slack Notification,Icon Visualization,Camera Focus,Stability AI Inpainting,Polygon Visualization,Webhook Sink,Polygon Zone Visualization,Perspective Correction,Florence-2 Model,Anthropic Claude,Mask Visualization,Google Gemini,Image Contours,Dynamic Crop,Roboflow Dataset Upload,Llama 3.2 Vision,VLM As Detector,Object Detection Model,Image Slicer,Line Counter Visualization,Relative Static Crop,Bounding Box Visualization
- outputs:
Dominant Color,Email Notification,Instance Segmentation Model,ByteTrack Tracker,Google Vision OCR,Contrast Equalization,Stability AI Image Generation,Model Comparison Visualization,Absolute Static Crop,Keypoint Visualization,Trace Visualization,SIFT,Roboflow Dataset Upload,Twilio SMS/MMS Notification,CLIP Embedding Model,Buffer,SAM 3,GLM-OCR,Reference Path Visualization,Halo Visualization,OCR Model,SIFT Comparison,VLM As Classifier,Time in Zone,Detections Stabilizer,VLM As Detector,Image Preprocessing,SORT Tracker,Perception Encoder Embedding Model,Crop Visualization,OpenAI,OpenAI,Label Visualization,Pixelate Visualization,Classification Label Visualization,Qwen3.5-VL,Corner Visualization,Stitch Images,Background Subtraction,LMM For Classification,EasyOCR,Morphological Transformation,Qwen2.5-VL,SAM 3,OpenAI,Clip Comparison,Single-Label Classification Model,Background Color Visualization,Anthropic Claude,Image Threshold,Google Gemini,Camera Calibration,Stability AI Outpainting,Halo Visualization,CogVLM,OpenAI,Single-Label Classification Model,Ellipse Visualization,Detections Stitch,VLM As Classifier,Heatmap Visualization,Image Convert Grayscale,Triangle Visualization,Semantic Segmentation Model,Image Blur,Depth Estimation,Barcode Detection,Color Visualization,SIFT Comparison,Camera Focus,Text Display,Anthropic Claude,Dot Visualization,Image Slicer,SmolVLM2,SAM 3,Template Matching,Keypoint Detection Model,Polygon Visualization,Florence-2 Model,Motion Detection,Circle Visualization,Blur Visualization,Qwen3-VL,Multi-Label Classification Model,Google Gemini,LMM,Icon Visualization,Camera Focus,Stability AI Inpainting,Polygon Visualization,Polygon Zone Visualization,Florence-2 Model,Gaze Detection,Perspective Correction,Anthropic Claude,Instance Segmentation Model,Mask Visualization,Moondream2,QR Code Detection,Google Gemini,Image Contours,Clip Comparison,YOLO-World Model,Dynamic Crop,Roboflow Dataset Upload,Llama 3.2 Vision,Seg Preview,Keypoint Detection Model,Object Detection Model,Pixel Color Count,VLM As Detector,Object Detection Model,Image Slicer,Byte Tracker,Line Counter Visualization,Relative Static Crop,Multi-Label Classification Model,OC-SORT Tracker,Bounding Box Visualization,Segment Anything 2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Depth Estimation in version v1 has.
Bindings
- input
    - `image` (`image`): Image in workflows.
- output
    - `normalized_depth` (`numpy_array`): Numpy array.
Example JSON definition of step Depth Estimation in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/depth_estimation@v1",
    "images": "$inputs.image",
    "model_version": "depth-anything-v2/small"
}
```
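In practice the step definition above sits inside a full workflow specification alongside inputs and outputs. The sketch below shows one plausible way to assemble such a specification in Python, wiring the block's `normalized_depth` output (listed in the bindings above) to a workflow output. The step name `depth` and output name `depth_map` are placeholders chosen for illustration, and the surrounding specification fields are an assumption about the Workflows definition schema rather than a verbatim example from this page.

```python
# A minimal sketch of a workflow specification containing the
# Depth Estimation block. Field names outside the step itself
# (version, inputs, outputs) reflect the general Workflows
# definition schema and should be checked against current docs.
workflow_definition = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            # Step fields match the JSON example above.
            "name": "depth",  # placeholder step name
            "type": "roboflow_core/depth_estimation@v1",
            "images": "$inputs.image",
            "model_version": "depth-anything-v2/small",
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "depth_map",  # placeholder output name
            # Selector referencing the block's normalized_depth binding.
            "selector": "$steps.depth.normalized_depth",
        }
    ],
}
```

The `$inputs.` and `$steps.` selector prefixes are how workflow definitions reference runtime inputs and upstream step outputs, respectively.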