Gaze Detection¶
Class: GazeBlockV1
Source: inference.core.workflows.core_steps.models.foundation.gaze.v1.GazeBlockV1
Runs the L2CS gaze estimation model on faces in images.

This block can:

1. Detect faces in images and estimate their gaze direction.
2. Estimate gaze direction on pre-cropped face images.

The gaze direction is represented by yaw and pitch angles in degrees.
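The yaw and pitch outputs can be converted into a 3D direction vector for downstream geometry (e.g. intersecting the gaze ray with a screen plane). A minimal sketch, assuming a common convention where yaw is the horizontal angle, pitch the vertical angle, and (0, 0) means looking straight along the forward axis; the block's exact axis and sign convention is not stated here, so verify before relying on it:

```python
import math

def gaze_vector(yaw_deg: float, pitch_deg: float) -> tuple:
    """Convert gaze yaw/pitch angles (degrees) into a unit direction vector.

    Assumed convention (verify against your setup): yaw rotates left/right,
    pitch rotates up/down, and (0, 0) points straight ahead along +z.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)   # left/right component
    y = math.sin(pitch)                   # up/down component
    z = math.cos(pitch) * math.cos(yaw)   # forward component
    return (x, y, z)

# Looking straight ahead maps to the forward axis.
print(gaze_vector(0.0, 0.0))  # → (0.0, 0.0, 1.0)
```

The result is always a unit vector, so it can be scaled directly to trace the gaze ray.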
Type identifier¶
Use the following identifier in the step's "type" field to add the block as a step in your workflow: roboflow_core/gaze@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
do_run_face_detection | bool | Whether to run face detection. Set to False if input images are pre-cropped face images. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
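For example, since do_run_face_detection is parametrisable (✅), it can be bound to a workflow input selector instead of a literal value. A sketch of such a step definition (the input name run_detection below is illustrative, not part of the block):

```json
{
    "name": "gaze",
    "type": "roboflow_core/gaze@v1",
    "images": "$inputs.image",
    "do_run_face_detection": "$inputs.run_detection"
}
```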
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Gaze Detection in version v1.
- inputs: Crop Visualization, SIFT, Stability AI Image Generation, Triangle Visualization, Blur Visualization, PTZ Tracking (ONVIF), Background Color Visualization, Relative Static Crop, Color Visualization, Image Contours, Camera Focus, Corner Visualization, Line Counter Visualization, Slack Notification, Icon Visualization, Mask Visualization, Image Convert Grayscale, Circle Visualization, Image Blur, Pixelate Visualization, SIFT Comparison, Absolute Static Crop, Model Comparison Visualization, Webhook Sink, VLM as Detector, VLM as Classifier, Dynamic Zone, Image Threshold, Reference Path Visualization, Detections Consensus, Image Slicer, Roboflow Dataset Upload, Stitch Images, Identify Outliers, Depth Estimation, Trace Visualization, Image Preprocessing, Classification Label Visualization, Polygon Visualization, Roboflow Custom Metadata, Stability AI Outpainting, Keypoint Visualization, Dot Visualization, Email Notification, Grid Visualization, Local File Sink, JSON Parser, Bounding Box Visualization, Camera Calibration, Polygon Zone Visualization, Ellipse Visualization, QR Code Generator, Halo Visualization, Perspective Correction, Stability AI Inpainting, Model Monitoring Inference Aggregator, Twilio SMS Notification, Label Visualization, Identify Changes, Dynamic Crop
- outputs: Anthropic Claude, Segment Anything 2 Model, Crop Visualization, Triangle Visualization, Blur Visualization, PTZ Tracking (ONVIF), Background Color Visualization, Color Visualization, Line Counter Visualization, Corner Visualization, Icon Visualization, Pixelate Visualization, Circle Visualization, Google Gemini, Webhook Sink, Model Comparison Visualization, Llama 3.2 Vision, Dynamic Zone, Detections Consensus, Roboflow Dataset Upload, Byte Tracker, Detection Offset, Detections Filter, Trace Visualization, OpenAI, Roboflow Custom Metadata, Keypoint Visualization, Dot Visualization, Detections Transformation, Bounding Box Visualization, Camera Calibration, Detections Classes Replacement, Ellipse Visualization, Detections Merge, Florence-2 Model, Model Monitoring Inference Aggregator, Template Matching, Velocity, Label Visualization, Distance Measurement, Dynamic Crop
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Gaze Detection
in version v1
has.
Bindings

- input
- output
  - face_predictions (keypoint_detection_prediction): Prediction with detected bounding boxes and detected keypoints in the form of an sv.Detections(...) object.
  - yaw_degrees (float): Float value.
  - pitch_degrees (float): Float value.
Example JSON definition of step Gaze Detection in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/gaze@v1",
    "images": "$inputs.image",
    "do_run_face_detection": "<block_does_not_provide_example>"
}
```
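To show where this step sits in context, below is a sketch of a complete workflow definition wrapping the step above. The InferenceImage input type, JsonField output type, and the output names yaw/pitch follow common Roboflow Workflows conventions; treat them as assumptions rather than part of this block's spec:

```json
{
    "version": "1.0",
    "inputs": [
        {"type": "InferenceImage", "name": "image"}
    ],
    "steps": [
        {
            "name": "gaze",
            "type": "roboflow_core/gaze@v1",
            "images": "$inputs.image",
            "do_run_face_detection": true
        }
    ],
    "outputs": [
        {"type": "JsonField", "name": "yaw", "selector": "$steps.gaze.yaw_degrees"},
        {"type": "JsonField", "name": "pitch", "selector": "$steps.gaze.pitch_degrees"}
    ]
}
```

The output selectors reference the block's yaw_degrees and pitch_degrees bindings described above.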