Gaze Detection¶
Class: GazeBlockV1
Source: inference.core.workflows.core_steps.models.foundation.gaze.v1.GazeBlockV1
Runs the L2CS gaze detection model on faces in images.

This block can:

1. Detect faces in images and estimate their gaze direction.
2. Estimate gaze direction on pre-cropped face images.

The gaze direction is represented by yaw and pitch angles in degrees.
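To turn the yaw/pitch angles into something geometrically usable, they can be converted to a 3D unit gaze vector. The sketch below is an illustration, not part of the block: sign and axis conventions vary between gaze models, and the mapping used here (camera looking down the negative z-axis) is an assumption rather than the documented L2CS convention.

```python
import math


def gaze_vector(yaw_deg: float, pitch_deg: float) -> tuple[float, float, float]:
    """Convert yaw/pitch angles (degrees) to a 3D unit gaze vector.

    Assumed convention (verify against your model's output):
    yaw rotates the gaze left/right, pitch rotates it up/down,
    and (0, 0) means looking straight into the camera along -z.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = -math.cos(pitch) * math.sin(yaw)
    y = -math.sin(pitch)
    z = -math.cos(pitch) * math.cos(yaw)
    return (x, y, z)
```

The result is always a unit vector, so it can be compared against other directions with a plain dot product.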
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/gaze@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
do_run_face_detection | bool | Whether to run face detection. Set to False if input images are pre-cropped face images. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
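As an illustration of such parametrisation, a hedged sketch of a step definition that binds do_run_face_detection to a workflow input instead of a literal value (the input name run_face_detection is hypothetical and must be declared in the workflow's inputs):

```json
{
    "name": "gaze",
    "type": "roboflow_core/gaze@v1",
    "images": "$inputs.image",
    "do_run_face_detection": "$inputs.run_face_detection"
}
```

With this binding, the same workflow can switch between full images and pre-cropped faces at request time without editing the workflow definition.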
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Gaze Detection in version v1.
- inputs: VLM as Detector, Corner Visualization, Camera Focus, Polygon Zone Visualization, Circle Visualization, Roboflow Dataset Upload, PTZ Tracking (ONVIF), Triangle Visualization, Stability AI Inpainting, Roboflow Custom Metadata, Classification Label Visualization, Bounding Box Visualization, Depth Estimation, SIFT, Image Convert Grayscale, Halo Visualization, Grid Visualization, Dynamic Zone, Email Notification, Polygon Visualization, Absolute Static Crop, Slack Notification, Dot Visualization, Color Visualization, Label Visualization, Stability AI Outpainting, Detections Consensus, Crop Visualization, Identify Outliers, Perspective Correction, Stability AI Image Generation, Image Slicer, Model Monitoring Inference Aggregator, Image Threshold, Image Preprocessing, Model Comparison Visualization, VLM as Classifier, SIFT Comparison, Dynamic Crop, Stitch Images, Image Contours, Mask Visualization, Identify Changes, JSON Parser, Webhook Sink, Twilio SMS Notification, Pixelate Visualization, Roboflow Dataset Upload, SIFT Comparison, Camera Calibration, Reference Path Visualization, Image Blur, Line Counter Visualization, Local File Sink, VLM as Classifier, Background Color Visualization, Image Slicer, Keypoint Visualization, Blur Visualization, Ellipse Visualization, Trace Visualization, VLM as Detector, Relative Static Crop
- outputs: Corner Visualization, Google Gemini, Detections Classes Replacement, OpenAI, Roboflow Dataset Upload, Circle Visualization, PTZ Tracking (ONVIF), Triangle Visualization, Roboflow Custom Metadata, Detections Transformation, Bounding Box Visualization, Florence-2 Model, Detections Merge, Distance Measurement, Template Matching, Dynamic Zone, Dot Visualization, Label Visualization, Color Visualization, Detections Consensus, Crop Visualization, Detections Filter, Model Monitoring Inference Aggregator, Detection Offset, OpenAI, Model Comparison Visualization, Dynamic Crop, Webhook Sink, Florence-2 Model, Segment Anything 2 Model, Pixelate Visualization, Llama 3.2 Vision, Roboflow Dataset Upload, Byte Tracker, Camera Calibration, Line Counter Visualization, Anthropic Claude, Background Color Visualization, Velocity, Keypoint Visualization, Blur Visualization, Ellipse Visualization, Trace Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Gaze Detection in version v1 has.
Bindings

- input
- output:
    - face_predictions (keypoint_detection_prediction): Prediction with detected bounding boxes and detected keypoints, in the form of an sv.Detections(...) object.
    - yaw_degrees (float): Float value.
    - pitch_degrees (float): Float value.
Example JSON definition of step Gaze Detection in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/gaze@v1",
    "images": "$inputs.image",
    "do_run_face_detection": "<block_does_not_provide_example>"
}
```
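As a quick illustration of consuming the yaw_degrees and pitch_degrees outputs downstream, the helper below (hypothetical, not part of the block) flags faces whose gaze is roughly toward the camera by thresholding both angles around zero:

```python
def is_looking_at_camera(yaw_deg: float, pitch_deg: float,
                         tolerance_deg: float = 15.0) -> bool:
    """Heuristic: treat the gaze as 'at the camera' when both the yaw
    and pitch angles (degrees) fall within a small tolerance of zero.

    The 15-degree default is an arbitrary illustrative choice; tune it
    for your camera geometry and use case.
    """
    return abs(yaw_deg) <= tolerance_deg and abs(pitch_deg) <= tolerance_deg
```

A check like this could feed, for example, an attention counter or a filter step placed after the gaze block in the workflow.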