Depth Estimation¶
Class: DepthEstimationBlockV1
Source: inference.core.workflows.core_steps.models.foundation.depth_estimation.v1.DepthEstimationBlockV1
This workflow block performs depth estimation on images using Apple's DepthPro model. It analyzes the spatial relationships and depth information in images to create a depth map where:

- Each pixel's value represents its relative distance from the camera
- Lower values (darker colors) indicate closer objects
- Higher values (lighter colors) indicate further objects

The model outputs:

1. A depth map showing the relative distances of objects in the scene
2. The camera's field of view (in degrees)
3. The camera's focal length

This is particularly useful for:

- Understanding 3D structure from 2D images
- Creating depth-aware visualizations
- Analyzing spatial relationships in scenes
- Applications in augmented reality and 3D reconstruction

The model runs efficiently on Apple Silicon (M1-M4) using Metal Performance Shaders (MPS) for accelerated inference.
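For a quick sense of how the depth map is meant to be read, the sketch below renders a normalized depth array as a color image. It is illustrative only: it assumes the block's normalized_depth output is a float NumPy array scaled to [0, 1], and the helper name is made up for this example.

```python
import numpy as np
import cv2

def depth_to_visualization(normalized_depth: np.ndarray) -> np.ndarray:
    """Convert a [0, 1] depth map into an 8-bit color image for inspection."""
    # Lower values (closer objects) map to darker tones before colormapping,
    # higher values (farther objects) to lighter ones.
    depth_8bit = (np.clip(normalized_depth, 0.0, 1.0) * 255).astype(np.uint8)
    return cv2.applyColorMap(depth_8bit, cv2.COLORMAP_JET)

# Synthetic example: a horizontal gradient from near (0.0) to far (1.0).
fake_depth = np.tile(np.linspace(0.0, 1.0, 640, dtype=np.float32), (480, 1))
cv2.imwrite("depth_visualization.png", depth_to_visualization(fake_depth))
```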
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/depth_estimation@v1 to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
model_version | str | The Depth Estimation model to be used for inference. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Depth Estimation in version v1.
- inputs: Corner Visualization, Camera Focus, Polygon Zone Visualization, Circle Visualization, Triangle Visualization, Stability AI Inpainting, Classification Label Visualization, Bounding Box Visualization, Depth Estimation, SIFT, Image Convert Grayscale, Halo Visualization, Grid Visualization, Polygon Visualization, Absolute Static Crop, Dot Visualization, Color Visualization, Label Visualization, Stability AI Outpainting, Crop Visualization, Perspective Correction, Stability AI Image Generation, Image Slicer, Image Threshold, Image Preprocessing, Model Comparison Visualization, SIFT Comparison, Dynamic Crop, Stitch Images, Image Contours, Mask Visualization, Pixelate Visualization, Camera Calibration, Reference Path Visualization, Image Blur, Line Counter Visualization, Background Color Visualization, Image Slicer, Keypoint Visualization, Blur Visualization, Ellipse Visualization, Trace Visualization, Relative Static Crop
- outputs: YOLO-World Model, OpenAI, VLM as Detector, Keypoint Detection Model, Circle Visualization, Gaze Detection, Roboflow Dataset Upload, Perception Encoder Embedding Model, Depth Estimation, SIFT, Florence-2 Model, Buffer, Single-Label Classification Model, Template Matching, Detections Stitch, Instance Segmentation Model, Color Visualization, Object Detection Model, Perspective Correction, Image Slicer, OpenAI, Keypoint Detection Model, Model Comparison Visualization, Clip Comparison, Stitch Images, Dynamic Crop, Moondream2, Image Contours, Pixelate Visualization, Llama 3.2 Vision, Byte Tracker, SIFT Comparison, Camera Calibration, Reference Path Visualization, Time in Zone, Image Blur, CLIP Embedding Model, Blur Visualization, OCR Model, Ellipse Visualization, Trace Visualization, SmolVLM2, Polygon Zone Visualization, Corner Visualization, Google Gemini, Camera Focus, OpenAI, Triangle Visualization, Stability AI Inpainting, Classification Label Visualization, Single-Label Classification Model, Barcode Detection, Bounding Box Visualization, Detections Stabilizer, CogVLM, Image Convert Grayscale, Halo Visualization, LMM, Polygon Visualization, Absolute Static Crop, Object Detection Model, Dot Visualization, Label Visualization, Stability AI Outpainting, Crop Visualization, Google Vision OCR, Stability AI Image Generation, Pixel Color Count, Image Threshold, Image Preprocessing, VLM as Classifier, SIFT Comparison, Mask Visualization, Dominant Color, Florence-2 Model, Segment Anything 2 Model, Clip Comparison, Roboflow Dataset Upload, QR Code Detection, Line Counter Visualization, VLM as Classifier, Instance Segmentation Model, Background Color Visualization, Anthropic Claude, LMM For Classification, Multi-Label Classification Model, Image Slicer, Keypoint Visualization, Qwen2.5-VL, Multi-Label Classification Model, VLM as Detector, Relative Static Crop
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Depth Estimation in version v1 has.
Bindings
- input images (image): The image to infer on.
- output image (image): Image in workflows.
- output normalized_depth (numpy_array): Numpy array.
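Downstream steps and workflow outputs refer to these bindings through selectors of the form $steps.<step_name>.<output_name>. As a non-authoritative sketch (the step name depth_estimation and the output field names are examples chosen here), a workflow's outputs section could expose both outputs like this:

```python
# Illustrative only: exposing this step's outputs from a workflow definition.
workflow_outputs = [
    {
        "type": "JsonField",
        "name": "depth_image",
        "selector": "$steps.depth_estimation.image",
    },
    {
        "type": "JsonField",
        "name": "normalized_depth",
        "selector": "$steps.depth_estimation.normalized_depth",
    },
]
```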
Example JSON definition of step Depth Estimation in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/depth_estimation@v1",
    "images": "$inputs.image",
    "model_version": "depth-anything-v2/small"
}
```
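To see the step in context, here is a rough end-to-end sketch that embeds the example definition above in a minimal workflow specification and runs it against an inference server with the inference-sdk client. Treat it as an approximation: the server URL, API key placeholder, and run_workflow parameter names reflect the SDK at the time of writing and may differ in your setup.

```python
from inference_sdk import InferenceHTTPClient

# Minimal workflow wrapping the example step definition above.
WORKFLOW_SPECIFICATION = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "depth_estimation",
            "type": "roboflow_core/depth_estimation@v1",
            "images": "$inputs.image",
            "model_version": "depth-anything-v2/small",
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "normalized_depth",
            "selector": "$steps.depth_estimation.normalized_depth",
        }
    ],
}

# Assumes an inference server is reachable at this URL.
client = InferenceHTTPClient(api_url="http://localhost:9001", api_key="<YOUR_API_KEY>")
result = client.run_workflow(
    specification=WORKFLOW_SPECIFICATION,
    images={"image": "path/to/image.jpg"},
)
print(result[0]["normalized_depth"])  # one result dict per input image
```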