
CLIP Embedding Model

Class: ClipModelBlockV1

Source: inference.core.workflows.core_steps.models.foundation.clip.v1.ClipModelBlockV1

Use a CLIP model to create semantic embeddings of text and images.

This block accepts an image or string and returns an embedding. The embedding can be used to compare the similarity between different images or between images and text.
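For example, once the block has produced embeddings for an image and a piece of text, their similarity can be scored with cosine similarity. The snippet below is a minimal sketch assuming the embeddings arrive as plain Python lists of floats (as in the block's embedding output); the placeholder vectors are illustrative and far shorter than real CLIP embeddings.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder values standing in for the block's "embedding" outputs.
image_embedding = [0.12, -0.03, 0.88, 0.41]
text_embedding = [0.10, 0.01, 0.79, 0.35]

print(f"Similarity: {cosine_similarity(image_embedding, text_embedding):.3f}")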

Type identifier

Use the following identifier in the step "type" field: roboflow_core/clip@v1 to add the block as a step in your workflow.

Properties

Name      Type   Description                                          Refs
name      str    Unique name of step in workflows.                    ❌
data      str    The string or image to generate an embedding for.    ✅
version   str    Variant of CLIP model.                               ✅

The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
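As a concrete illustration, a parametrisable property is bound with a selector instead of a literal value. The sketch below is written as a Python dict and assumes a workflow input named clip_version has been declared; that input name is illustrative, not part of the block definition.

clip_step = {
    "name": "clip_embedding",
    "type": "roboflow_core/clip@v1",
    "data": "$inputs.image",
    # "version" is parametrised with a runtime input instead of a hard-coded
    # literal; "clip_version" is an illustrative input name.
    "version": "$inputs.clip_version",
}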

Available Connections

Compatible Blocks

Check what blocks you can connect to CLIP Embedding Model in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check which binding kinds CLIP Embedding Model in version v1 has.

Bindings
  • input

    • data (Union[string, image]): The string or image to generate an embedding for.
    • version (string): Variant of CLIP model.
  • output

    • embedding (embedding): A list of floating point numbers representing a vector embedding.
Example JSON definition of step CLIP Embedding Model in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/clip@v1",
    "data": "$inputs.image",
    "version": "ViT-B-16"
}
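
The step above only runs as part of a full workflow specification. The sketch below, written as a Python dict, shows one way the bindings could be wired end to end; it assumes the standard Workflows layout of inputs, steps, and outputs with a WorkflowImage input and a JsonField output, and all names other than the block's documented properties are illustrative.

workflow_specification = {
    "version": "1.0",
    "inputs": [
        # Image provided at runtime; bound to the step's "data" property below.
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            "name": "clip_embedding",
            "type": "roboflow_core/clip@v1",
            "data": "$inputs.image",   # input binding: Union[string, image]
            "version": "ViT-B-16",     # input binding: string
        },
    ],
    "outputs": [
        # Output binding: the step's "embedding" field (a list of floats).
        {
            "type": "JsonField",
            "name": "image_embedding",
            "selector": "$steps.clip_embedding.embedding",
        },
    ],
}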