Gaze Detection¶
Class: GazeBlockV1
Source: inference.core.workflows.core_steps.models.foundation.gaze.v1.GazeBlockV1
Run the L2CS gaze detection model on faces in images.

This block can:

1. Detect faces in images and estimate their gaze direction
2. Estimate gaze direction on pre-cropped face images

The gaze direction is represented by yaw and pitch angles in degrees.
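A yaw/pitch pair can be turned into a 3D gaze direction vector for downstream geometry. A minimal sketch, assuming a common convention (yaw about the vertical axis, pitch about the horizontal axis, (0°, 0°) looking straight down the camera's -z axis) that may differ from the model's exact frame:

```python
import math

def gaze_vector(yaw_degrees: float, pitch_degrees: float) -> tuple:
    """Convert yaw/pitch angles (degrees) into a unit 3D direction vector.

    Assumed convention (illustrative only, not taken from L2CS internals):
    yaw rotates about the vertical axis, pitch about the horizontal axis,
    and (0, 0) points straight ahead along -z.
    """
    yaw = math.radians(yaw_degrees)
    pitch = math.radians(pitch_degrees)
    x = -math.cos(pitch) * math.sin(yaw)
    y = -math.sin(pitch)
    z = -math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# A gaze of (0°, 0°) points straight ahead; any input yields a unit vector.
print(gaze_vector(0.0, 0.0))
```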
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/gaze@v1` to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `do_run_face_detection` | `bool` | Whether to run face detection. Set to `False` if input images are pre-cropped face images. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Gaze Detection in version v1.
- inputs: Circle Visualization, Background Color Visualization, Corner Visualization, Bounding Box Visualization, JSON Parser, Twilio SMS Notification, Line Counter Visualization, Image Preprocessing, Slack Notification, VLM as Detector, VLM as Classifier, Trace Visualization, Label Visualization, Polygon Zone Visualization, Camera Focus, Local File Sink, Image Slicer, Image Blur, Crop Visualization, Identify Outliers, Dot Visualization, Relative Static Crop, Model Comparison Visualization, Stability AI Inpainting, Roboflow Dataset Upload, Pixelate Visualization, Perspective Correction, Detections Consensus, Image Convert Grayscale, Absolute Static Crop, Mask Visualization, Stability AI Image Generation, Webhook Sink, Color Visualization, Image Threshold, Dynamic Crop, Halo Visualization, Polygon Visualization, Image Contours, Camera Calibration, Email Notification, SIFT, SIFT Comparison, Reference Path Visualization, Classification Label Visualization, Triangle Visualization, Model Monitoring Inference Aggregator, Keypoint Visualization, Identify Changes, Roboflow Custom Metadata, Grid Visualization, Ellipse Visualization, Stitch Images, Blur Visualization
- outputs: Circle Visualization, Background Color Visualization, Corner Visualization, Bounding Box Visualization, Line Counter Visualization, Trace Visualization, Label Visualization, Detections Transformation, Anthropic Claude, Crop Visualization, Dot Visualization, Detections Merge, Google Gemini, Detection Offset, Model Comparison Visualization, Roboflow Dataset Upload, Pixelate Visualization, OpenAI, Detections Consensus, Distance Measurement, Detections Filter, Webhook Sink, Color Visualization, Dynamic Crop, Template Matching, Florence-2 Model, Detections Classes Replacement, Dynamic Zone, Camera Calibration, Triangle Visualization, Model Monitoring Inference Aggregator, Velocity, Llama 3.2 Vision, Keypoint Visualization, Roboflow Custom Metadata, Segment Anything 2 Model, Ellipse Visualization, Blur Visualization
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Gaze Detection
in version v1
has.
Bindings

- input
- output
    - face_predictions (keypoint_detection_prediction): Prediction with detected bounding boxes and detected keypoints in the form of an sv.Detections(...) object.
    - yaw_degrees (float): Float value.
    - pitch_degrees (float): Float value.
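Since yaw and pitch are reported per detected face, downstream steps can pair them up positionally. A minimal sketch of consuming these outputs (the sample values and the "looking at camera" threshold are illustrative assumptions, not part of the block's spec):

```python
# Hypothetical downstream consumption of the block's outputs: one
# yaw/pitch pair (in degrees) per detected face, as the bindings suggest.

def is_looking_at_camera(yaw_degrees: float, pitch_degrees: float,
                         tolerance_degrees: float = 10.0) -> bool:
    """Treat a face as looking at the camera when both angles are small."""
    return (abs(yaw_degrees) < tolerance_degrees
            and abs(pitch_degrees) < tolerance_degrees)

# Illustrative values for two detected faces (not real model output).
yaw_degrees = [12.5, -4.0]
pitch_degrees = [-3.2, 8.1]

for face_idx, (yaw, pitch) in enumerate(zip(yaw_degrees, pitch_degrees)):
    print(f"face {face_idx}: yaw={yaw:.1f}, pitch={pitch:.1f}, "
          f"towards camera: {is_looking_at_camera(yaw, pitch)}")
```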
Example JSON definition of step Gaze Detection in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/gaze@v1",
    "images": "$inputs.image",
    "do_run_face_detection": "<block_does_not_provide_example>"
}
```
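For illustration, a step definition like the one above can be assembled and sanity-checked in Python before submitting the workflow. The surrounding workflow structure (version string, input and output entries, selector names) is a hedged assumption for the sketch, not part of this block's specification:

```python
import json

# Hypothetical workflow wrapper around the gaze step; everything outside
# the step definition itself is an illustrative assumption.
workflow = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "gaze_estimation",
            "type": "roboflow_core/gaze@v1",
            "images": "$inputs.image",
            "do_run_face_detection": True,
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "yaw",
            "selector": "$steps.gaze_estimation.yaw_degrees",
        }
    ],
}

# Round-trip through JSON to confirm the definition is serialisable.
serialised = json.dumps(workflow)
assert json.loads(serialised)["steps"][0]["type"] == "roboflow_core/gaze@v1"
print("workflow definition OK")
```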