Gaze Detection¶
Class: GazeBlockV1
Source: inference.core.workflows.core_steps.models.foundation.gaze.v1.GazeBlockV1
Runs the L2CS gaze detection model on faces in images.

This block can:

1. Detect faces in images and estimate their gaze direction.
2. Estimate gaze direction on pre-cropped face images.

The gaze direction is represented by yaw and pitch angles in degrees.
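Since the block reports gaze as yaw and pitch angles in degrees, a common follow-up step is converting those angles into a 3D unit direction vector, e.g. for drawing a gaze ray on the image. The sketch below is not part of the block; the axis convention (x right, y down, z toward the camera) is an assumption you should check against your rendering setup:

```python
import math

def gaze_vector(yaw_degrees: float, pitch_degrees: float) -> tuple:
    """Convert yaw/pitch (degrees) into a 3D unit gaze vector.

    Assumed convention: x points right, y points down, z points
    away from the camera; (0, 0) yields a vector looking straight
    at the camera, (0, 0, -1).
    """
    yaw = math.radians(yaw_degrees)
    pitch = math.radians(pitch_degrees)
    x = -math.cos(pitch) * math.sin(yaw)
    y = -math.sin(pitch)
    z = -math.cos(pitch) * math.cos(yaw)
    return (x, y, z)
```

The result is always unit length, so it can be scaled directly to a pixel length when overlaying a gaze arrow.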
Type identifier¶
Use the following identifier in the step `"type"` field to add the block as a step in your workflow: `roboflow_core/gaze@v1`
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
`name` | `str` | Enter a unique identifier for this step. | ❌ |
`do_run_face_detection` | `bool` | Whether to run face detection. Set to `False` if input images are pre-cropped face images. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
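As a sketch of what such parametrisation looks like, the `do_run_face_detection` property (marked ✅ above) can be bound to a workflow input instead of a literal; the input name `run_face_detection` here is an assumption, not something the block defines:

```json
{
  "name": "gaze",
  "type": "roboflow_core/gaze@v1",
  "images": "$inputs.image",
  "do_run_face_detection": "$inputs.run_face_detection"
}
```

The value is then supplied per execution as a workflow runtime parameter rather than fixed in the workflow definition.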
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Gaze Detection in version v1.
- inputs: Image Preprocessing, Identify Changes, Relative Static Crop, Background Color Visualization, Line Counter Visualization, SIFT, Twilio SMS Notification, Dynamic Zone, Reference Path Visualization, Grid Visualization, Roboflow Dataset Upload, Blur Visualization, Image Contours, Roboflow Custom Metadata, VLM as Detector, Pixelate Visualization, Camera Focus, Color Visualization, Roboflow Dataset Upload, Ellipse Visualization, Label Visualization, Image Slicer, Crop Visualization, Stitch Images, Polygon Zone Visualization, Dot Visualization, Bounding Box Visualization, Image Slicer, Detections Consensus, Webhook Sink, Perspective Correction, Email Notification, JSON Parser, Image Blur, Stability AI Image Generation, Dynamic Crop, Image Convert Grayscale, Stability AI Inpainting, Model Monitoring Inference Aggregator, Identify Outliers, Model Comparison Visualization, Triangle Visualization, Classification Label Visualization, Trace Visualization, VLM as Classifier, Depth Estimation, Corner Visualization, SIFT Comparison, SIFT Comparison, Camera Calibration, Absolute Static Crop, Keypoint Visualization, Mask Visualization, Image Threshold, VLM as Detector, Halo Visualization, VLM as Classifier, Circle Visualization, Slack Notification, Local File Sink, Polygon Visualization
- outputs: Line Counter Visualization, Background Color Visualization, Dynamic Zone, OpenAI, Roboflow Dataset Upload, Roboflow Custom Metadata, Blur Visualization, Detections Classes Replacement, Pixelate Visualization, Detections Transformation, Llama 3.2 Vision, Roboflow Dataset Upload, Color Visualization, Ellipse Visualization, Label Visualization, Crop Visualization, Detections Filter, Dot Visualization, Bounding Box Visualization, Florence-2 Model, Detections Consensus, Google Gemini, Template Matching, Webhook Sink, Velocity, Dynamic Crop, Model Monitoring Inference Aggregator, Model Comparison Visualization, Triangle Visualization, Trace Visualization, Anthropic Claude, Distance Measurement, Byte Tracker, Corner Visualization, Segment Anything 2 Model, Camera Calibration, Detection Offset, Keypoint Visualization, Florence-2 Model, Circle Visualization, Detections Merge
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Gaze Detection in version v1 has.

Bindings

- input
- output:
    - `face_predictions` (`keypoint_detection_prediction`): Prediction with detected bounding boxes and detected keypoints in the form of an `sv.Detections(...)` object.
    - `yaw_degrees` (`float`): Float value.
    - `pitch_degrees` (`float`): Float value.
Example JSON definition of step Gaze Detection in version v1:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/gaze@v1",
  "images": "$inputs.image",
  "do_run_face_detection": "<block_does_not_provide_example>"
}
```
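To show where such a step sits in context, here is a minimal sketch of a full workflow specification embedding it. The surrounding structure (`version`, `inputs`, `steps`, `outputs`, `JsonField` selectors) follows the standard Roboflow workflow schema; the step and output names are illustrative assumptions:

```json
{
  "version": "1.0",
  "inputs": [
    { "type": "WorkflowImage", "name": "image" }
  ],
  "steps": [
    {
      "name": "gaze",
      "type": "roboflow_core/gaze@v1",
      "images": "$inputs.image",
      "do_run_face_detection": true
    }
  ],
  "outputs": [
    { "type": "JsonField", "name": "faces", "selector": "$steps.gaze.face_predictions" },
    { "type": "JsonField", "name": "yaw", "selector": "$steps.gaze.yaw_degrees" },
    { "type": "JsonField", "name": "pitch", "selector": "$steps.gaze.pitch_degrees" }
  ]
}
```

Each output selector references one of the block's output bindings listed above, exposing it in the workflow's results.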