Gaze Detection¶
Class: GazeBlockV1
Source: inference.core.workflows.core_steps.models.foundation.gaze.v1.GazeBlockV1
Runs the L2CS gaze detection model on faces in images.

This block can:

1. Detect faces in images and estimate their gaze direction
2. Estimate gaze direction on pre-cropped face images

The gaze direction is represented by yaw and pitch angles in degrees.
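The yaw/pitch pair can be converted into a 3D gaze direction vector for downstream geometry. A minimal sketch, assuming one common camera-coordinate convention (+x right, +y down, -z into the scene) — the convention is an assumption here, not taken from the block's source:

```python
import math

def gaze_vector(yaw_degrees: float, pitch_degrees: float) -> tuple[float, float, float]:
    """Convert yaw/pitch angles in degrees into a unit gaze vector.

    Coordinate convention (assumed, not from the block's source):
    +x right, +y down, -z pointing from the camera into the scene,
    so yaw=0, pitch=0 means looking straight ahead at (0, 0, -1).
    """
    yaw = math.radians(yaw_degrees)
    pitch = math.radians(pitch_degrees)
    x = -math.sin(yaw) * math.cos(pitch)
    y = -math.sin(pitch)
    z = -math.cos(yaw) * math.cos(pitch)
    return (x, y, z)
```

The result is always a unit vector, so thresholding the angle between it and the camera axis is a simple way to decide whether a face is looking toward the camera.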
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/gaze@v1 to add the block
as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| do_run_face_detection | bool | Whether to run face detection. Set to False if input images are pre-cropped face images. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
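For a parametrisable property such as do_run_face_detection, the step field can hold a workflow input selector instead of a literal value. A sketch, mirroring the "$inputs.image" selector style used in the example JSON at the end of this page (the input name "run_detection" is hypothetical):

```python
# Hypothetical step configuration binding do_run_face_detection to a
# workflow input named "run_detection" via a selector, instead of a
# literal boolean. Selector syntax follows the "$inputs.<name>" pattern
# shown in this page's example JSON.
step = {
    "name": "gaze_estimation",
    "type": "roboflow_core/gaze@v1",
    "images": "$inputs.image",
    "do_run_face_detection": "$inputs.run_detection",
}
```

With this binding, the same workflow definition can serve both full frames and pre-cropped faces by flipping one runtime input.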
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Gaze Detection in version v1.
- inputs:
Roboflow Dataset Upload,Line Counter Visualization,Stability AI Outpainting,Email Notification,Image Slicer,Identify Outliers,Image Preprocessing,Color Visualization,Ellipse Visualization,Polygon Visualization,Relative Static Crop,Detections Consensus,Webhook Sink,Model Comparison Visualization,Trace Visualization,Camera Focus,Roboflow Custom Metadata,VLM As Classifier,Image Threshold,Stitch Images,Heatmap Visualization,SIFT Comparison,Morphological Transformation,Halo Visualization,Crop Visualization,Camera Calibration,Dot Visualization,S3 Sink,Twilio SMS Notification,Icon Visualization,Model Monitoring Inference Aggregator,Local File Sink,Roboflow Dataset Upload,Dynamic Zone,Image Contours,VLM As Classifier,JSON Parser,Pixelate Visualization,Twilio SMS/MMS Notification,Polygon Zone Visualization,Reference Path Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,VLM As Detector,Stability AI Image Generation,Perspective Correction,Bounding Box Visualization,Depth Estimation,Identify Changes,Classification Label Visualization,Image Slicer,Absolute Static Crop,Image Blur,Stability AI Inpainting,Polygon Visualization,Image Convert Grayscale,SIFT,Roboflow Vision Events,VLM As Detector,Label Visualization,Corner Visualization,Grid Visualization,Dynamic Crop,Contrast Equalization,Keypoint Visualization,Triangle Visualization,QR Code Generator,Halo Visualization,Circle Visualization,Camera Focus,Mask Visualization,Morphological Transformation,Contrast Enhancement,Background Color Visualization,Email Notification,PTZ Tracking (ONVIF),Slack Notification,SIFT Comparison
- outputs:
Roboflow Dataset Upload,Line Counter Visualization,Google Gemma API,Mask Edge Snap,Distance Measurement,Google Gemini,Color Visualization,SAM2 Video Tracker,OpenAI,Ellipse Visualization,ByteTrack Tracker,Byte Tracker,Detection Event Log,Anthropic Claude,Detections Consensus,Detections Classes Replacement,Webhook Sink,Model Comparison Visualization,Continue If,Trace Visualization,Roboflow Custom Metadata,Qwen 3.5 API,Detection Offset,SAM 3,Detections List Roll-Up,Template Matching,Mask Area Measurement,Heatmap Visualization,SORT Tracker,Qwen 3.6 API,Florence-2 Model,Detections Transformation,Crop Visualization,Florence-2 Model,Camera Calibration,Dot Visualization,OC-SORT Tracker,SAM 3,Seg Preview,Model Monitoring Inference Aggregator,Detections Filter,Icon Visualization,Roboflow Dataset Upload,Google Gemini,Dynamic Zone,Pixelate Visualization,Blur Visualization,Anthropic Claude,Text Display,Detections Merge,Anthropic Claude,Velocity,Bounding Box Visualization,SAM 3,Roboflow Vision Events,OpenAI,Google Gemini,Label Visualization,Corner Visualization,Dynamic Crop,Per-Class Confidence Filter,Keypoint Visualization,Triangle Visualization,Circle Visualization,Segment Anything 2 Model,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Background Color Visualization,PTZ Tracking (ONVIF),Stitch OCR Detections
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Gaze Detection in version v1 has.
Bindings

- input
    - images: The input image(s), provided directly or via a selector such as $inputs.image.
    - do_run_face_detection: Whether to run face detection on the input images.
- output
    - face_predictions (keypoint_detection_prediction): Prediction with detected bounding boxes and detected keypoints in form of sv.Detections(...) object.
    - yaw_degrees (float): Float value.
    - pitch_degrees (float): Float value.
Example JSON definition of step Gaze Detection in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/gaze@v1",
    "images": "$inputs.image",
    "do_run_face_detection": "<block_does_not_provide_example>"
}
```
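The step definition above only makes sense inside a full workflow specification. A minimal sketch of one: the step fields follow the example JSON, while the surrounding skeleton ("version", "inputs", "outputs", and the selector strings) is an assumption about the usual Workflows specification shape, not taken from this page:

```python
# Hypothetical minimal workflow definition embedding the Gaze Detection
# step. The skeleton keys and selector strings are assumptions about the
# typical Workflows specification shape.
gaze_step = {
    "name": "gaze_estimation",
    "type": "roboflow_core/gaze@v1",
    "images": "$inputs.image",
    "do_run_face_detection": True,  # False for pre-cropped face images
}

workflow_definition = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [gaze_step],
    "outputs": [
        {
            "type": "JsonField",
            "name": "yaw",
            "selector": "$steps.gaze_estimation.yaw_degrees",
        },
        {
            "type": "JsonField",
            "name": "pitch",
            "selector": "$steps.gaze_estimation.pitch_degrees",
        },
    ],
}
```

The "$steps.gaze_estimation.*" selectors expose the block's yaw_degrees and pitch_degrees outputs as workflow results.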