Perception Encoder Embedding Model

Class: PerceptionEncoderModelBlockV1

Source: inference.core.workflows.core_steps.models.foundation.perception_encoder.v1.PerceptionEncoderModelBlockV1

Use the Meta Perception Encoder model to create semantic embeddings of text and images.

This block accepts an image or string and returns an embedding. The embedding can be used to compare similarity between different images or between images and text.
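The usual similarity measure between two embedding vectors is the cosine of the angle between them. The sketch below is a minimal illustration in plain Python/NumPy, assuming two embeddings already produced by this block; the variable names and values are hypothetical placeholders:

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means orthogonal.
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings returned by two runs of this block,
# e.g. one for an image and one for a text query.
image_embedding = [0.12, -0.03, 0.54]  # placeholder values
text_embedding = [0.10, -0.01, 0.49]   # placeholder values

print(cosine_similarity(image_embedding, text_embedding))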

Type identifier

Use the following identifier in the step "type" field: roboflow_core/perception_encoder@v1 to add the block as a step in your workflow.

Properties

Name     Type  Description                                         Refs
name     str   Unique name of step in workflows.                   —
data     str   The string or image to generate an embedding for.   ✅
version  str   Variant of Perception Encoder model.                ✅

The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
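For example, a parametrisable property can be bound to a runtime input instead of a literal value. The snippet below is a sketch, where pe_version is a hypothetical workflow input name:

# Sketch: binding the "version" property to a hypothetical runtime
# input named "pe_version" instead of a hardcoded model variant.
step = {
    "name": "embedder",
    "type": "roboflow_core/perception_encoder@v1",
    "data": "$inputs.image",
    "version": "$inputs.pe_version",
}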

Available Connections

Compatible Blocks

Check what blocks you can connect to Perception Encoder Embedding Model in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Perception Encoder Embedding Model in version v1 has.

Bindings
  • input

    • data (Union[string, image]): The string or image to generate an embedding for.
    • version (string): Variant of Perception Encoder model.
  • output

    • embedding (embedding): A list of floating point numbers representing a vector embedding.
Example JSON definition of step Perception Encoder Embedding Model in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/perception_encoder@v1",
    "data": "$inputs.image",
    "version": "PE-Core-B16-224"
}
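
For context, the sketch below embeds the step in a complete workflow specification and runs it against a local inference server via the inference_sdk client. This is a minimal sketch, not a definitive recipe: the server URL, API key, and image path are placeholders, and the exact run_workflow signature and response shape may vary between inference_sdk releases.

from inference_sdk import InferenceHTTPClient

# Minimal workflow: one image input, one Perception Encoder step,
# one output exposing the step's embedding.
WORKFLOW_SPECIFICATION = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "embedder",
            "type": "roboflow_core/perception_encoder@v1",
            "data": "$inputs.image",
            "version": "PE-Core-B16-224",
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "embedding",
            "selector": "$steps.embedder.embedding",
        }
    ],
}

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # placeholder server URL
    api_key="<YOUR_API_KEY>",         # placeholder API key
)
result = client.run_workflow(
    specification=WORKFLOW_SPECIFICATION,
    images={"image": "path/to/image.jpg"},  # placeholder image path
)
# One result per input image (assumed response shape).
print(result[0]["embedding"])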