Gaze Detection¶
Class: GazeBlockV1
Source: inference.core.workflows.core_steps.models.foundation.gaze.v1.GazeBlockV1
Runs the L2CS gaze detection model on faces in images.

This block can:

1. Detect faces in images and estimate their gaze direction
2. Estimate gaze direction on pre-cropped face images
The gaze direction is represented by yaw and pitch angles in degrees.
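Since the block reports gaze as yaw and pitch angles in degrees, a common follow-up step is converting them into a 3D direction vector for drawing gaze rays. A minimal sketch of that conversion; the axis convention used here (x right, y down, z away from the camera) is an assumption and should be checked against your visualization code:

```python
import math

def gaze_vector(yaw_degrees: float, pitch_degrees: float) -> tuple:
    """Convert yaw/pitch angles (degrees) into a unit 3D gaze direction.

    Axis convention is an assumption (x: right, y: down, z: away from the
    camera); verify it against your rendering code before relying on it.
    """
    yaw = math.radians(yaw_degrees)
    pitch = math.radians(pitch_degrees)
    x = -math.cos(pitch) * math.sin(yaw)
    y = -math.sin(pitch)
    z = -math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# A face looking straight at the camera (yaw=0, pitch=0) points along -z.
direction = gaze_vector(0.0, 0.0)
```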
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/gaze@v1 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| do_run_face_detection | bool | Whether to run face detection. Set to False if input images are pre-cropped face images. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
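Because do_run_face_detection is marked ✅ in the Refs column, it can be bound to a workflow input instead of being hard-coded. A hedged sketch of what that looks like; the input name run_face_detection is illustrative, not part of the block definition:

```python
# Step definition where "do_run_face_detection" is bound to a workflow input.
# The input name "run_face_detection" is a hypothetical example.
step = {
    "name": "gaze",
    "type": "roboflow_core/gaze@v1",
    "images": "$inputs.image",
    "do_run_face_detection": "$inputs.run_face_detection",
}

# The workflow's inputs section would then declare the referenced names:
inputs = [
    {"type": "WorkflowImage", "name": "image"},
    {"type": "WorkflowParameter", "name": "run_face_detection"},
]
```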
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Gaze Detection in version v1.
- inputs:
- inputs: Detections Consensus, Model Monitoring Inference Aggregator, Roboflow Dataset Upload, Blur Visualization, Dot Visualization, Image Contours, Twilio SMS Notification, Perspective Correction, Polygon Zone Visualization, Bounding Box Visualization, QR Code Generator, VLM as Classifier, Pixelate Visualization, Reference Path Visualization, Morphological Transformation, Trace Visualization, Motion Detection, Roboflow Custom Metadata, Webhook Sink, PTZ Tracking (ONVIF), Image Threshold, Polygon Visualization, Dynamic Crop, Icon Visualization, Image Slicer, Identify Outliers, Stability AI Outpainting, Model Comparison Visualization, Dynamic Zone, Contrast Equalization, Classification Label Visualization, Camera Focus, Stitch Images, Mask Visualization, Stability AI Inpainting, JSON Parser, Relative Static Crop, Absolute Static Crop, Line Counter Visualization, SIFT Comparison, Identify Changes, Circle Visualization, Ellipse Visualization, Image Convert Grayscale, Email Notification, Crop Visualization, Grid Visualization, Color Visualization, Image Blur, Image Preprocessing, Stability AI Image Generation, VLM as Classifier, Keypoint Visualization, Camera Calibration, Local File Sink, SIFT, VLM as Detector, Image Slicer, Depth Estimation, Background Subtraction, Email Notification, Label Visualization, VLM as Detector, Roboflow Dataset Upload, Background Color Visualization, Triangle Visualization, Slack Notification, SIFT Comparison, Halo Visualization, Corner Visualization
- outputs: Google Gemini, Detections Consensus, Model Monitoring Inference Aggregator, Llama 3.2 Vision, Roboflow Dataset Upload, Blur Visualization, Dot Visualization, SAM 3, Detections Merge, Bounding Box Visualization, Seg Preview, Pixelate Visualization, Distance Measurement, Roboflow Custom Metadata, Trace Visualization, Velocity, OpenAI, Detections Transformation, Segment Anything 2 Model, Webhook Sink, PTZ Tracking (ONVIF), Dynamic Crop, Icon Visualization, Detections Classes Replacement, Model Comparison Visualization, Dynamic Zone, OpenAI, Florence-2 Model, SAM 3, Line Counter Visualization, Google Gemini, Circle Visualization, Florence-2 Model, Template Matching, Ellipse Visualization, Crop Visualization, Color Visualization, Anthropic Claude, Keypoint Visualization, Camera Calibration, Label Visualization, SAM 3, Anthropic Claude, Byte Tracker, Roboflow Dataset Upload, Detections Filter, Triangle Visualization, Background Color Visualization, Detection Offset, OpenAI, Corner Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Gaze Detection in version v1 has.
Bindings
- input
- output
    - face_predictions (keypoint_detection_prediction): Prediction with detected bounding boxes and detected keypoints in form of sv.Detections(...) object.
    - yaw_degrees (float): Float value.
    - pitch_degrees (float): Float value.
Example JSON definition of step Gaze Detection in version v1
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/gaze@v1",
  "images": "$inputs.image",
  "do_run_face_detection": "<block_does_not_provide_example>"
}
```
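The step definition above can be embedded in a complete workflow specification that exposes the block's declared outputs (face_predictions, yaw_degrees, pitch_degrees). A minimal sketch; the surrounding schema ("version", "JsonField" outputs) follows common Roboflow workflows conventions and should be checked against the workflows documentation for your inference version:

```python
# Minimal workflow specification embedding the gaze step. Output field names
# come from the block's bindings; the overall schema shape is an assumption
# based on typical Roboflow workflows definitions.
workflow = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "gaze",
            "type": "roboflow_core/gaze@v1",
            "images": "$inputs.image",
            "do_run_face_detection": True,
        }
    ],
    "outputs": [
        {"type": "JsonField", "name": "faces", "selector": "$steps.gaze.face_predictions"},
        {"type": "JsonField", "name": "yaw", "selector": "$steps.gaze.yaw_degrees"},
        {"type": "JsonField", "name": "pitch", "selector": "$steps.gaze.pitch_degrees"},
    ],
}
```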