Gaze Detection¶
Class: GazeBlockV1
Source: inference.core.workflows.core_steps.models.foundation.gaze.v1.GazeBlockV1
Runs the L2CS gaze detection model on faces in images.

This block can:

1. Detect faces in images and estimate their gaze direction
2. Estimate gaze direction on pre-cropped face images

The gaze direction is represented by yaw and pitch angles in degrees.
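Since the block reports gaze as yaw and pitch angles, a common follow-up step is converting those angles into a 3D unit direction vector. The sketch below assumes one particular axis convention (x right, y down, z away from the camera, with yaw = pitch = 0 meaning "looking straight at the camera"); the block's documentation does not pin down the convention, so verify against your own outputs before relying on signs.

```python
import math

def gaze_angles_to_vector(yaw_degrees: float, pitch_degrees: float) -> tuple:
    """Convert yaw/pitch gaze angles (in degrees) into a 3D unit vector.

    Assumed convention (not specified by the block docs): x points right,
    y points down, z points away from the camera, so yaw = 0 and
    pitch = 0 yields (0, 0, -1), i.e. looking straight at the camera.
    """
    yaw = math.radians(yaw_degrees)
    pitch = math.radians(pitch_degrees)
    x = -math.sin(yaw) * math.cos(pitch)
    y = -math.sin(pitch)
    z = -math.cos(yaw) * math.cos(pitch)
    return (x, y, z)
```

The resulting vector always has unit length, which makes it convenient for computing angles between two gazes or intersecting a gaze ray with a plane.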
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/gaze@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `do_run_face_detection` | `bool` | Whether to run face detection. Set to False if input images are pre-cropped face images. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
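For example, since `do_run_face_detection` accepts dynamic references (✅ in the Refs column), it can be bound to a workflow input instead of a hard-coded value. The snippet below is a sketch; the input name `run_detection` is a hypothetical placeholder you would declare among your workflow's inputs.

```json
{
  "name": "gaze",
  "type": "roboflow_core/gaze@v1",
  "images": "$inputs.image",
  "do_run_face_detection": "$inputs.run_detection"
}
```

This lets the same workflow serve both full frames (detection on) and pre-cropped faces (detection off) without editing the definition.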
Available Connections¶
Compatible Blocks

Check what blocks you can connect to Gaze Detection in version v1.
- inputs: Identify Changes, Blur Visualization, Camera Focus, SIFT Comparison, Image Threshold, Polygon Zone Visualization, Stability AI Inpainting, VLM as Detector, Relative Static Crop, Image Preprocessing, Slack Notification, Keypoint Visualization, Background Color Visualization, Grid Visualization, Local File Sink, Image Convert Grayscale, Trace Visualization, Absolute Static Crop, Roboflow Custom Metadata, Color Visualization, Perspective Correction, Twilio SMS Notification, Classification Label Visualization, Circle Visualization, Camera Calibration, Pixelate Visualization, Image Slicer, Label Visualization, Halo Visualization, Triangle Visualization, Reference Path Visualization, Image Slicer, JSON Parser, Webhook Sink, Roboflow Dataset Upload, Line Counter Visualization, Image Blur, Corner Visualization, SIFT Comparison, Email Notification, Detections Consensus, Image Contours, Roboflow Dataset Upload, Dynamic Crop, Polygon Visualization, VLM as Classifier, Depth Estimation, SIFT, Ellipse Visualization, Mask Visualization, VLM as Detector, Model Monitoring Inference Aggregator, Stitch Images, Bounding Box Visualization, Dot Visualization, Stability AI Image Generation, Identify Outliers, Model Comparison Visualization, VLM as Classifier, Crop Visualization
- outputs: Detection Offset, Blur Visualization, Detections Filter, Keypoint Visualization, Background Color Visualization, Trace Visualization, Roboflow Custom Metadata, Color Visualization, Distance Measurement, Circle Visualization, Pixelate Visualization, Camera Calibration, OpenAI, Label Visualization, Triangle Visualization, Webhook Sink, Google Gemini, Roboflow Dataset Upload, Line Counter Visualization, Byte Tracker, Detections Transformation, Corner Visualization, Florence-2 Model, Detections Classes Replacement, Template Matching, Anthropic Claude, Detections Consensus, Roboflow Dataset Upload, Dynamic Crop, Florence-2 Model, Detections Merge, Ellipse Visualization, Dynamic Zone, Velocity, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Dot Visualization, Bounding Box Visualization, Llama 3.2 Vision, Model Comparison Visualization, Crop Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Gaze Detection in version v1 has.
Bindings

- input
- output:
    - `face_predictions` (`keypoint_detection_prediction`): Prediction with detected bounding boxes and detected keypoints in form of sv.Detections(...) object.
    - `yaw_degrees` (`float`): Float value.
    - `pitch_degrees` (`float`): Float value.
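A typical way to visualise these outputs is to draw an arrow from the centre of each detected face in the direction of gaze. The helper below is a minimal sketch using only the yaw and pitch outputs; the sign convention (positive yaw pointing the arrow toward -x, positive pitch toward -y in image coordinates, where y grows downward) is an assumption to verify against your results, and `length` controls the arrow size in pixels.

```python
import math

def gaze_arrow_endpoint(face_center, yaw_degrees, pitch_degrees, length=100.0):
    """Return the 2D endpoint of a gaze arrow starting at face_center.

    Assumed sign convention (verify against your outputs): positive yaw
    shifts the arrow toward -x, positive pitch toward -y, with image
    y growing downward. With yaw = pitch = 0 the endpoint coincides
    with the face centre (the subject looks straight at the camera).
    """
    cx, cy = face_center
    dx = -length * math.sin(math.radians(yaw_degrees))
    dy = -length * math.sin(math.radians(pitch_degrees))
    return (cx + dx, cy + dy)
```

The returned point can be passed, together with the face centre, to any drawing primitive (e.g. an arrowed-line call in your image library) to overlay gaze on the frame.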
Example JSON definition of step Gaze Detection in version v1:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/gaze@v1",
  "images": "$inputs.image",
  "do_run_face_detection": "<block_does_not_provide_example>"
}
```