To use L2CS-Net with Inference, you will need a Roboflow API key. If you don't already have one, sign up for a free Roboflow account, then retrieve your API key from the Roboflow dashboard. Run the following command to set your API key in your coding environment:

```bash
export ROBOFLOW_API_KEY=<your api key>
```
L2CS-Net accepts an image and returns pitch and yaw values that you can use to:

- Figure out the direction in which someone is looking, and
- Estimate, roughly, where someone is looking.
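For the first use case, the pitch and yaw pair can be turned into a 3D gaze direction. The sketch below is illustrative, not part of the SDK: it assumes angles in radians and a camera-facing coordinate convention (+x right, +y up, +z toward the camera), so check the conventions of your own pipeline before reusing it.

```python
import math

def gaze_direction(pitch: float, yaw: float) -> tuple:
    """Convert pitch/yaw angles (radians) into a unit 3D direction vector.

    Assumed convention: +x right, +y up, +z toward the camera;
    pitch rotates the gaze up/down, yaw rotates it left/right.
    """
    x = -math.sin(yaw) * math.cos(pitch)
    y = -math.sin(pitch)
    z = -math.cos(yaw) * math.cos(pitch)
    return (x, y, z)

# A subject looking straight at the camera (pitch = yaw = 0)
# produces a vector pointing directly away from the camera axis.
print(gaze_direction(0.0, 0.0))
```

With a vector like this, comparing gaze directions or intersecting the gaze ray with a plane (such as a screen) becomes straightforward geometry.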
We recommend using L2CS-Net paired with the Inference HTTP API. It is easy to set up with our inference-cli tool. Run the following commands to set up your environment and run the API at http://localhost:9001:

```bash
pip install inference inference-cli inference-sdk
inference server start # this starts the server at http://localhost:9001
```
```python
import os

from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(
    api_url="http://localhost:9001",  # only local hosting supported
    api_key=os.environ["ROBOFLOW_API_KEY"],
)

CLIENT.detect_gazes(inference_input="./image.jpg")  # single image request
```
Above, replace `image.jpg` with the path to the image in which you want to detect gazes.
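Once a request succeeds, you will want to pull the pitch and yaw values out of the response. The helper below is a sketch: the field names it reads (`predictions`, `pitch`, `yaw`) reflect the response shape at the time of writing, so inspect the printed result of your own request and adjust if the schema differs.

```python
def extract_gazes(results: list) -> list:
    """Collect (pitch, yaw) pairs from a detect_gazes-style response.

    Field names ("predictions", "pitch", "yaw") are assumptions based
    on the response shape at the time of writing; verify them against
    your own output.
    """
    gazes = []
    for frame in results:
        for pred in frame.get("predictions", []):
            gazes.append((pred["pitch"], pred["yaw"]))
    return gazes

# Illustrative response with made-up values, for demonstration only:
sample = [{"predictions": [{"pitch": 0.1, "yaw": -0.25}]}]
print(extract_gazes(sample))
```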
The code above makes two assumptions:

- Faces are roughly one meter away from the camera.
- Faces are roughly 250 mm tall.
These assumptions are a good starting point if you are using a computer webcam with L2CS-Net, where people in the frame are likely to be sitting at a desk.
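These two assumptions can be put to work to roughly estimate where in the frame someone is looking. The helper below is a hypothetical sketch, not part of the SDK: it uses the detected face height in pixels together with the assumed real face height and camera distance to convert the gaze angles into a pixel offset, and its argument names are our own.

```python
import math

def gaze_point_px(eye_x, eye_y, pitch, yaw, face_height_px,
                  distance_mm=1000.0, face_height_mm=250.0):
    """Roughly project a gaze ray onto the image plane, in pixels.

    Relies on the two assumptions above: the face is ~distance_mm from
    the camera and ~face_height_mm tall. Angles are in radians. This
    helper and its argument names are illustrative, not part of the SDK.
    """
    # Image scale at the subject's depth, derived from the face bbox.
    px_per_mm = face_height_px / face_height_mm
    dx_px = distance_mm * math.tan(yaw) * px_per_mm
    dy_px = distance_mm * math.tan(pitch) * px_per_mm
    # Image y grows downward, so looking up (positive pitch here)
    # moves the estimated point toward smaller y values.
    return (eye_x + dx_px, eye_y - dy_px)
```

For example, with pitch and yaw both zero the estimated gaze point is simply the eye position itself; larger angles push the point further across the frame in proportion to the assumed distance.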
On the first run, the model will be downloaded, which takes a few moments. On subsequent runs, the model is loaded from a local cache.
The results of L2CS-Net will appear in your terminal: