Gaze Detection¶
Class: GazeBlockV1
Source: inference.core.workflows.core_steps.models.foundation.gaze.v1.GazeBlockV1
Runs the L2CS gaze detection model on faces in images.
This block can:

1. Detect faces in images and estimate their gaze direction
2. Estimate gaze direction on pre-cropped face images
The gaze direction is represented by yaw and pitch angles in degrees.
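Yaw and pitch angles can be converted into a 3D gaze direction vector for downstream geometry. A minimal sketch, assuming a camera-facing convention in which yaw = 0, pitch = 0 means looking straight at the camera (along -z); the exact axis convention is an illustrative assumption, not part of the block's contract:

```python
import math

def gaze_angles_to_vector(yaw_degrees: float, pitch_degrees: float) -> tuple:
    """Convert yaw/pitch angles (degrees) to a 3D unit gaze vector.

    Axis convention (assumed for illustration): +x right, +y down,
    -z toward the camera; yaw rotates left/right, pitch up/down.
    """
    yaw = math.radians(yaw_degrees)
    pitch = math.radians(pitch_degrees)
    x = -math.cos(pitch) * math.sin(yaw)
    y = -math.sin(pitch)
    z = -math.cos(pitch) * math.cos(yaw)
    return (x, y, z)
```

The result is always a unit vector, since cos²(pitch)·(sin²(yaw) + cos²(yaw)) + sin²(pitch) = 1.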
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/gaze@v1 to add the block
as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| do_run_face_detection | bool | Whether to run face detection. Set to False if input images are pre-cropped face images. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Gaze Detection in version v1.
- inputs: Absolute Static Crop, VLM as Detector, Relative Static Crop, JSON Parser, Polygon Visualization, Keypoint Visualization, Model Monitoring Inference Aggregator, Icon Visualization, VLM as Classifier, Local File Sink, Slack Notification, Blur Visualization, Trace Visualization, Color Visualization, Image Contours, Polygon Zone Visualization, Identify Outliers, Camera Focus, Halo Visualization, VLM as Detector, Identify Changes, Bounding Box Visualization, SIFT, Camera Calibration, Twilio SMS Notification, Dynamic Zone, Triangle Visualization, Classification Label Visualization, Background Color Visualization, VLM as Classifier, Webhook Sink, Dynamic Crop, Dot Visualization, Pixelate Visualization, Stability AI Inpainting, Image Threshold, Reference Path Visualization, Detections Consensus, Corner Visualization, Email Notification, Ellipse Visualization, Image Slicer, Stitch Images, Crop Visualization, Morphological Transformation, Roboflow Custom Metadata, Grid Visualization, Image Preprocessing, Mask Visualization, Line Counter Visualization, SIFT Comparison, SIFT Comparison, QR Code Generator, Depth Estimation, Roboflow Dataset Upload, Image Slicer, Perspective Correction, Image Convert Grayscale, Stability AI Image Generation, Contrast Equalization, Label Visualization, PTZ Tracking (ONVIF), Image Blur, Model Comparison Visualization, Circle Visualization, Stability AI Outpainting, Roboflow Dataset Upload
- outputs: Template Matching, Model Monitoring Inference Aggregator, Velocity, Keypoint Visualization, Llama 3.2 Vision, Icon Visualization, Blur Visualization, Trace Visualization, Color Visualization, Seg Preview, Bounding Box Visualization, Camera Calibration, Dynamic Zone, Triangle Visualization, Background Color Visualization, Segment Anything 2 Model, Webhook Sink, Dynamic Crop, Dot Visualization, Pixelate Visualization, Detections Consensus, Byte Tracker, Corner Visualization, Detections Classes Replacement, Ellipse Visualization, OpenAI, Detection Offset, Crop Visualization, Detections Filter, Roboflow Custom Metadata, Detections Merge, Line Counter Visualization, OpenAI, Florence-2 Model, Detections Transformation, Roboflow Dataset Upload, Google Gemini, Distance Measurement, Label Visualization, Anthropic Claude, PTZ Tracking (ONVIF), Model Comparison Visualization, Circle Visualization, Roboflow Dataset Upload, Florence-2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check the binding kinds
of Gaze Detection in version v1 below.
Bindings

- input
    - images (image): The image to infer on.
    - do_run_face_detection (boolean): Whether to run face detection. Set to False if input images are pre-cropped face images.
- output
    - face_predictions (keypoint_detection_prediction): Prediction with detected bounding boxes and detected keypoints in form of sv.Detections(...) object.
    - yaw_degrees (float): Float value.
    - pitch_degrees (float): Float value.
Example JSON definition of step Gaze Detection in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/gaze@v1",
"images": "$inputs.image",
"do_run_face_detection": "<block_does_not_provide_example>"
}
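The yaw_degrees and pitch_degrees outputs can be consumed directly by downstream logic. A minimal sketch of one such post-processing step, treating gaze close to (0, 0) as eye contact with the camera; the helper name and the tolerance value are illustrative assumptions, not part of the block:

```python
def is_looking_at_camera(
    yaw_degrees: float,
    pitch_degrees: float,
    tolerance_degrees: float = 15.0,
) -> bool:
    """Return True if gaze angles fall within a tolerance of straight-ahead.

    The 15-degree default tolerance is an arbitrary assumption; tune it
    for your camera placement and use case.
    """
    return (
        abs(yaw_degrees) <= tolerance_degrees
        and abs(pitch_degrees) <= tolerance_degrees
    )
```

For example, a workflow could feed this block's outputs into such a check to count how many detected faces are attending to the camera.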