Gaze Detection

Run L2CS Gaze detection model on faces in images.

This block can:

1. Detect faces in images and estimate their gaze direction
2. Estimate gaze direction on pre-cropped face images

The gaze direction is represented by yaw and pitch angles in degrees.
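Since the block reports gaze as yaw and pitch in degrees, downstream code often needs to turn those angles into a 3D direction vector (for example, to draw a gaze ray on the image). A minimal sketch of that conversion is below; note the axis convention used here (x right, y down, z toward the camera's view direction) is an assumption, not one specified by the block itself, so verify it against your rendering setup.

```python
import math


def gaze_vector(yaw_deg: float, pitch_deg: float) -> tuple:
    """Convert yaw/pitch angles in degrees into a 3D unit gaze vector.

    Axis convention (assumed, not defined by the block docs):
    x points right, y points down, z points forward.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)


# A zero yaw and zero pitch corresponds to looking straight ahead.
straight_ahead = gaze_vector(0.0, 0.0)
```

The result is always a unit vector, so it can be scaled directly to the desired arrow length when overlaying gaze direction on a frame.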

Type identifier

Use the following identifier in the step "type" field: roboflow_core/gaze@v1 to add the block as a step in your workflow.

Properties

| Name | Type | Description | Refs |
|------|------|-------------|------|
| `name` | `str` | Enter a unique identifier for this step. | |
| `do_run_face_detection` | `bool` | Whether to run face detection. Set to `False` if input images are pre-cropped face images. | |

The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.

Available Connections

Check what blocks you can connect to Gaze Detection in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Gaze Detection in version v1 has.

Bindings
  • input

    • images (image): The image to infer on.
    • do_run_face_detection (boolean): Whether to run face detection. Set to False if input images are pre-cropped face images.
  • output

    • face_predictions (keypoint_detection_prediction): Prediction with detected bounding boxes and detected keypoints in the form of an sv.Detections(...) object.
    • yaw_degrees (float): Gaze yaw angle in degrees.
    • pitch_degrees (float): Gaze pitch angle in degrees.
Example JSON definition of step Gaze Detection in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/gaze@v1",
    "images": "$inputs.image",
    "do_run_face_detection": "<block_does_not_provide_example>"
}
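For context, the step above sits inside a full workflow definition alongside inputs and outputs. The sketch below shows one plausible minimal layout; the surrounding schema (the version string, the WorkflowImage input, the JsonField outputs) follows the common Roboflow Workflows definition shape, and the step name "gaze" and output names are placeholders chosen for illustration, so adapt them to your own workflow.

```json
{
    "version": "1.0",
    "inputs": [
        { "type": "WorkflowImage", "name": "image" }
    ],
    "steps": [
        {
            "name": "gaze",
            "type": "roboflow_core/gaze@v1",
            "images": "$inputs.image",
            "do_run_face_detection": true
        }
    ],
    "outputs": [
        { "type": "JsonField", "name": "yaw_degrees", "selector": "$steps.gaze.yaw_degrees" },
        { "type": "JsonField", "name": "pitch_degrees", "selector": "$steps.gaze.pitch_degrees" }
    ]
}
```

The `$steps.gaze.*` selectors wire the block's output bindings listed above into the workflow's declared outputs.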