Gaze Detection¶
Class: GazeBlockV1
Source: inference.core.workflows.core_steps.models.foundation.gaze.v1.GazeBlockV1
Run the L2CS gaze detection model on faces in images.

This block can:
1. Detect faces in images and estimate their gaze direction
2. Estimate gaze direction on pre-cropped face images
The gaze direction is represented by yaw and pitch angles in degrees.
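For downstream geometry it can help to turn the two angles into a 3D direction vector. The sketch below assumes a common convention (yaw rotates about the vertical axis, pitch up/down, and (0, 0) means looking straight toward the camera along -z); this convention is an illustrative assumption, so verify the axes against your camera frame before relying on them:

```python
import math

def gaze_vector(yaw_deg: float, pitch_deg: float) -> tuple[float, float, float]:
    """Convert yaw/pitch angles (degrees) into a 3D unit gaze vector.

    Convention assumed here (illustrative, not guaranteed by the block):
    yaw rotates left/right about the vertical axis, pitch rotates up/down,
    and (0, 0) corresponds to looking straight at the camera (-z axis).
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = -math.cos(pitch) * math.sin(yaw)
    y = -math.sin(pitch)
    z = -math.cos(pitch) * math.cos(yaw)
    return (x, y, z)
```

The result is always a unit vector, so it can be fed directly into ray-based reasoning such as intersecting the gaze with a plane or screen.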
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/gaze@v1` to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `do_run_face_detection` | `bool` | Whether to run face detection. Set to False if input images are pre-cropped face images. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Gaze Detection in version v1.
- inputs: Grid Visualization, Line Counter Visualization, SIFT, Icon Visualization, Stability AI Inpainting, VLM as Detector, Circle Visualization, Polygon Zone Visualization, Model Monitoring Inference Aggregator, Roboflow Dataset Upload, QR Code Generator, Image Slicer, Dot Visualization, Webhook Sink, Blur Visualization, Slack Notification, Camera Calibration, Perspective Correction, Roboflow Dataset Upload, Background Color Visualization, Mask Visualization, Camera Focus, Twilio SMS Notification, Detections Consensus, SIFT Comparison, Dynamic Crop, Crop Visualization, Classification Label Visualization, Stability AI Outpainting, Image Preprocessing, Trace Visualization, Color Visualization, Morphological Transformation, JSON Parser, Image Threshold, Depth Estimation, Relative Static Crop, Triangle Visualization, Reference Path Visualization, Ellipse Visualization, Stability AI Image Generation, Model Comparison Visualization, VLM as Classifier, VLM as Detector, Polygon Visualization, Email Notification, Identify Changes, Corner Visualization, Halo Visualization, Image Slicer, Stitch Images, Image Blur, Absolute Static Crop, SIFT Comparison, Local File Sink, Bounding Box Visualization, Roboflow Custom Metadata, VLM as Classifier, Image Contours, Pixelate Visualization, PTZ Tracking (ONVIF), Dynamic Zone, Label Visualization, Keypoint Visualization, Identify Outliers, Contrast Equalization, Image Convert Grayscale
- outputs: Detections Filter, Line Counter Visualization, Distance Measurement, Icon Visualization, Detections Classes Replacement, Circle Visualization, Model Monitoring Inference Aggregator, OpenAI, Roboflow Dataset Upload, Template Matching, Dot Visualization, Webhook Sink, Blur Visualization, Camera Calibration, Roboflow Dataset Upload, Anthropic Claude, Background Color Visualization, OpenAI, Florence-2 Model, Llama 3.2 Vision, Detections Consensus, Dynamic Crop, Crop Visualization, Trace Visualization, Color Visualization, Velocity, Triangle Visualization, Ellipse Visualization, Model Comparison Visualization, Segment Anything 2 Model, Detection Offset, Corner Visualization, Florence-2 Model, Roboflow Custom Metadata, Bounding Box Visualization, Google Gemini, Pixelate Visualization, PTZ Tracking (ONVIF), Dynamic Zone, Label Visualization, Detections Merge, Keypoint Visualization, Detections Transformation, Byte Tracker
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Gaze Detection in version v1 has.
Bindings

- input
    - `images` (image): The image to infer on.
    - `do_run_face_detection` (boolean): Whether to run face detection. Set to False if input images are pre-cropped face images.
- output
    - `face_predictions` (`keypoint_detection_prediction`): Prediction with detected bounding boxes and detected keypoints in form of `sv.Detections(...)` object.
    - `yaw_degrees` (`float`): Float value.
    - `pitch_degrees` (`float`): Float value.
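A common way to consume the scalar outputs is to threshold them, for example flagging faces whose gaze is roughly toward the camera. The helper below is a hypothetical sketch: the function name and the 15-degree default threshold are illustrative choices, not part of the block's API.

```python
def is_looking_at_camera(yaw_degrees: float,
                         pitch_degrees: float,
                         threshold_degrees: float = 15.0) -> bool:
    # Hypothetical downstream helper: treat small yaw and pitch magnitudes
    # (both within the threshold) as "attention on the camera".
    return (abs(yaw_degrees) <= threshold_degrees
            and abs(pitch_degrees) <= threshold_degrees)
```

A tighter threshold yields fewer false positives but may miss glances that are only slightly off-axis; tune it against your own footage.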
Example JSON definition of step Gaze Detection in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/gaze@v1",
"images": "$inputs.image",
"do_run_face_detection": "<block_does_not_provide_example>"
}
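For context, a step like the one above lives inside a full workflow definition. The sketch below assembles a minimal definition as a Python dict; the input name `image`, the output names `yaw`/`pitch`, and the exact selector strings are illustrative assumptions based on the bindings listed above, not a verified end-to-end example.

```python
import json

# Hypothetical minimal workflow definition embedding the gaze step.
# Input/output names ("image", "yaw", "pitch") are illustrative.
workflow_definition = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "gaze",
            "type": "roboflow_core/gaze@v1",
            "images": "$inputs.image",
            "do_run_face_detection": True,
        }
    ],
    "outputs": [
        {"type": "JsonField", "name": "yaw", "selector": "$steps.gaze.yaw_degrees"},
        {"type": "JsonField", "name": "pitch", "selector": "$steps.gaze.pitch_degrees"},
    ],
}

# Serialize to JSON, e.g. for saving the definition to a file.
serialized = json.dumps(workflow_definition, indent=2)
```

The `$inputs.image` and `$steps.gaze.*` selectors wire the step's image input and scalar outputs to the workflow's declared inputs and outputs.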