Gaze Detection¶
Class: GazeBlockV1
Source: inference.core.workflows.core_steps.models.foundation.gaze.v1.GazeBlockV1
Runs the L2CS gaze detection model on faces in images.
This block can:
1. Detect faces in images and estimate their gaze direction.
2. Estimate gaze direction on pre-cropped face images.
The gaze direction is represented by yaw and pitch angles in degrees.
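Since yaw and pitch are reported in degrees, a common follow-up step is converting them into a 3D direction vector for visualization or geometry. The sketch below assumes a typical convention (yaw about the vertical axis, pitch about the horizontal axis, looking straight ahead along −z at yaw = pitch = 0); the exact axis convention used by L2CS may differ, so verify against your rendering setup.

```python
import math

def gaze_vector(yaw_deg: float, pitch_deg: float) -> tuple:
    """Convert yaw/pitch angles in degrees into a unit 3D direction vector.

    Assumed convention (not confirmed by the block docs): (0, 0) means
    looking straight ahead along -z, positive yaw turns left, positive
    pitch looks up.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = -math.cos(pitch) * math.sin(yaw)
    y = -math.sin(pitch)
    z = -math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# Straight-ahead gaze points along -z.
v = gaze_vector(0.0, 0.0)
```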
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/gaze@v1 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| do_run_face_detection | bool | Whether to run face detection. Set to False if input images are pre-cropped face images. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Gaze Detection in version v1.
- inputs:
- inputs:
Mask Visualization, Circle Visualization, Classification Label Visualization, Detections Consensus, SIFT Comparison, Halo Visualization, Identify Outliers, Webhook Sink, Dynamic Zone, Email Notification, Blur Visualization, QR Code Generator, Dynamic Crop, VLM As Detector, VLM As Detector, Label Visualization, Twilio SMS/MMS Notification, Image Blur, Corner Visualization, Image Convert Grayscale, Email Notification, Ellipse Visualization, SIFT, Image Preprocessing, Stability AI Outpainting, Model Monitoring Inference Aggregator, Halo Visualization, Stability AI Inpainting, JSON Parser, Image Threshold, Background Color Visualization, Image Contours, Depth Estimation, Model Comparison Visualization, Trace Visualization, Morphological Transformation, Triangle Visualization, Absolute Static Crop, Relative Static Crop, Text Display, Stitch Images, Roboflow Custom Metadata, Camera Calibration, Grid Visualization, Local File Sink, Slack Notification, VLM As Classifier, Roboflow Dataset Upload, Camera Focus, PTZ Tracking (ONVIF), Perspective Correction, Color Visualization, Dot Visualization, Image Slicer, Pixelate Visualization, Polygon Visualization, Stability AI Image Generation, Reference Path Visualization, Keypoint Visualization, Polygon Visualization, Twilio SMS Notification, VLM As Classifier, Line Counter Visualization, Bounding Box Visualization, Contrast Equalization, Polygon Zone Visualization, Identify Changes, SIFT Comparison, Camera Focus, Icon Visualization, Crop Visualization, Motion Detection, Background Subtraction, Roboflow Dataset Upload, Image Slicer
- outputs:
Anthropic Claude, Detections Consensus, Detections Merge, Webhook Sink, Dynamic Zone, Dynamic Crop, Google Gemini, SAM 3, Detection Offset, Corner Visualization, Byte Tracker, Segment Anything 2 Model, Template Matching, Trace Visualization, Triangle Visualization, Text Display, Detections Filter, Google Gemini, Camera Calibration, Roboflow Dataset Upload, PTZ Tracking (ONVIF), Color Visualization, Dot Visualization, Anthropic Claude, Llama 3.2 Vision, Line Counter Visualization, Distance Measurement, Detections Classes Replacement, Velocity, Continue If, Circle Visualization, Seg Preview, Florence-2 Model, Blur Visualization, Florence-2 Model, Label Visualization, Ellipse Visualization, OpenAI, Model Monitoring Inference Aggregator, SAM 3, Detections List Roll-Up, OpenAI, Model Comparison Visualization, Background Color Visualization, Roboflow Custom Metadata, Anthropic Claude, Pixelate Visualization, Keypoint Visualization, SAM 3, Bounding Box Visualization, Detection Event Log, Icon Visualization, Crop Visualization, Stitch OCR Detections, Google Gemini, OpenAI, Detections Transformation, Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Gaze Detection in version v1 has.
Bindings
- input
- output
  - face_predictions (keypoint_detection_prediction): Prediction with detected bounding boxes and detected keypoints in form of sv.Detections(...) object.
  - yaw_degrees (float): Float value.
  - pitch_degrees (float): Float value.
Example JSON definition of step Gaze Detection in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/gaze@v1",
"images": "$inputs.image",
"do_run_face_detection": "<block_does_not_provide_example>"
}
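To see how the step fits into a full workflow, the sketch below assembles a minimal workflow definition around it in Python. The surrounding fields ("version", "inputs", "steps", "outputs") follow the common Roboflow Workflows definition shape, but the exact spec may vary by version, so treat this as an illustrative assumption and verify against the current Workflows documentation.

```python
import json

# Hypothetical minimal workflow definition embedding the gaze step.
# Only the step's own fields ("type", "images", "do_run_face_detection")
# come from this page; the wrapper shape is an assumption.
workflow_definition = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "gaze",
            "type": "roboflow_core/gaze@v1",
            "images": "$inputs.image",
            "do_run_face_detection": True,
        }
    ],
    "outputs": [
        {"type": "JsonField", "name": "yaw", "selector": "$steps.gaze.yaw_degrees"},
        {"type": "JsonField", "name": "pitch", "selector": "$steps.gaze.pitch_degrees"},
    ],
}

# Serialize for submission to a workflow runner or for storage.
serialized = json.dumps(workflow_definition, indent=2)
```

The "$inputs.image" and "$steps.gaze.…" selectors are the binding mechanism the Bindings section above refers to: they wire runtime values into step properties and step outputs into workflow outputs.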