Gaze Detection¶
Class: GazeBlockV1
Source: inference.core.workflows.core_steps.models.foundation.gaze.v1.GazeBlockV1
Run the L2CS gaze detection model on faces in images.

This block can:

1. Detect faces in images and estimate their gaze direction.
2. Estimate gaze direction on pre-cropped face images.

The gaze direction is represented by yaw and pitch angles in degrees.
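Yaw and pitch angles can be converted into a 3D gaze direction vector for downstream use (e.g. checking whether someone is looking at the camera). A minimal sketch, assuming a common convention in which yaw rotates about the vertical axis, pitch about the horizontal axis, and (0, 0) points straight at the camera along -z; this convention is an assumption for illustration, not taken from this page:

```python
import math

def gaze_vector(yaw_degrees: float, pitch_degrees: float) -> tuple:
    """Convert yaw/pitch angles (degrees) into a 3D unit gaze vector.

    Assumed convention: yaw about the vertical axis, pitch about the
    horizontal axis; (0, 0) points toward the camera along -z.
    """
    yaw = math.radians(yaw_degrees)
    pitch = math.radians(pitch_degrees)
    x = -math.cos(pitch) * math.sin(yaw)
    y = -math.sin(pitch)
    z = -math.cos(pitch) * math.cos(yaw)
    return (x, y, z)
```

By construction the result is always a unit vector, so comparing gaze directions reduces to a dot product between vectors.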
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: `roboflow_core/gaze@v1`
Properties¶
Name | Type | Description | Refs
---|---|---|---
`name` | `str` | Enter a unique identifier for this step. | ❌
`do_run_face_detection` | `bool` | Whether to run face detection. Set to False if input images are pre-cropped face images. | ✅
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
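Since `do_run_face_detection` is marked ✅ in the Refs column, it can be bound to a workflow input via a selector string instead of a literal value. A minimal sketch of such a step definition (the step name `gaze` and the input name `do_run_face_detection` are illustrative choices, not prescribed by this page):

```python
# Hypothetical step definition parametrising `do_run_face_detection`
# at runtime: the "$inputs..." selector binds the property to a workflow
# input, so its value can be supplied per request rather than hard-coded.
step = {
    "name": "gaze",
    "type": "roboflow_core/gaze@v1",
    "images": "$inputs.image",
    "do_run_face_detection": "$inputs.do_run_face_detection",
}
```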
Available Connections¶
Compatible Blocks

Check which blocks you can connect to Gaze Detection in version v1.
- inputs: Reference Path Visualization, Identify Outliers, Blur Visualization, Pixelate Visualization, Email Notification, Classification Label Visualization, Background Color Visualization, VLM as Detector, Dynamic Crop, Keypoint Visualization, Camera Focus, Mask Visualization, Webhook Sink, Twilio SMS Notification, Image Slicer, Absolute Static Crop, Model Monitoring Inference Aggregator, Stability AI Image Generation, Image Blur, Roboflow Dataset Upload, Identify Changes, Circle Visualization, Grid Visualization, Crop Visualization, Image Convert Grayscale, Image Threshold, Trace Visualization, VLM as Detector, JSON Parser, SIFT Comparison, Polygon Visualization, Triangle Visualization, Stability AI Inpainting, Detections Consensus, Halo Visualization, Dot Visualization, Polygon Zone Visualization, Local File Sink, Roboflow Custom Metadata, VLM as Classifier, Camera Calibration, VLM as Classifier, Slack Notification, SIFT, Corner Visualization, Image Contours, Model Comparison Visualization, Stitch Images, Bounding Box Visualization, Line Counter Visualization, Image Slicer, Roboflow Dataset Upload, Perspective Correction, Image Preprocessing, SIFT Comparison, Label Visualization, Relative Static Crop, Color Visualization, Ellipse Visualization
- outputs: Blur Visualization, Pixelate Visualization, Anthropic Claude, Llama 3.2 Vision, Background Color Visualization, Webhook Sink, Dynamic Crop, Keypoint Visualization, Segment Anything 2 Model, Detection Offset, Model Monitoring Inference Aggregator, Florence-2 Model, Florence-2 Model, Detections Filter, Detections Transformation, Roboflow Dataset Upload, Circle Visualization, Crop Visualization, Trace Visualization, Template Matching, Triangle Visualization, Detections Consensus, Dot Visualization, Google Gemini, Detections Merge, Velocity, Dynamic Zone, Roboflow Custom Metadata, OpenAI, Detections Classes Replacement, Camera Calibration, Corner Visualization, Model Comparison Visualization, Roboflow Dataset Upload, Bounding Box Visualization, Line Counter Visualization, Label Visualization, Distance Measurement, Color Visualization, Ellipse Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Gaze Detection in version v1 has.
Bindings
- input
- output:
    - `face_predictions` (`keypoint_detection_prediction`): prediction with detected bounding boxes and detected keypoints, in the form of an `sv.Detections(...)` object.
    - `yaw_degrees` (`float`): float value.
    - `pitch_degrees` (`float`): float value.
Example JSON definition of step Gaze Detection in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/gaze@v1",
    "images": "$inputs.image",
    "do_run_face_detection": "<block_does_not_provide_example>"
}
```
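To use the step, it must be embedded in a full workflow definition whose outputs reference the step's declared output fields (`yaw_degrees`, `pitch_degrees`, `face_predictions`). A hedged sketch follows: the step fields come from the example above, but the surrounding envelope (`version`, `inputs`, `outputs`, the `WorkflowImage`/`JsonField` types) follows common Roboflow workflow examples and is an assumption, not taken from this page:

```python
# Hypothetical minimal workflow definition embedding the Gaze Detection step.
# Envelope field names are assumptions; the step body and output field names
# match this block's documented schema.
workflow = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "gaze",
            "type": "roboflow_core/gaze@v1",
            "images": "$inputs.image",
            "do_run_face_detection": True,
        }
    ],
    "outputs": [
        # Each output binds to one of the step's declared output fields.
        {"type": "JsonField", "name": "yaw", "selector": "$steps.gaze.yaw_degrees"},
        {"type": "JsonField", "name": "pitch", "selector": "$steps.gaze.pitch_degrees"},
        {"type": "JsonField", "name": "faces", "selector": "$steps.gaze.face_predictions"},
    ],
}
```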