CLIP Embedding Model¶
Class: ClipModelBlockV1
Source: inference.core.workflows.core_steps.models.foundation.clip.v1.ClipModelBlockV1
Use a CLIP model to create semantic embeddings of text and images.
This block accepts an image or string and returns an embedding. The embedding can be used to compare the similarity between different images or between images and text.
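The sketch below is not part of the block definition; it only illustrates how two embeddings returned by this block could be compared with cosine similarity outside of a workflow. Inside a workflow, the Cosine Similarity block listed under compatible outputs provides the same comparison.

```python
# Minimal sketch: comparing two embeddings produced by CLIP Embedding Model steps.
# The vectors below are illustrative placeholders, not real CLIP outputs.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D embedding vectors."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

image_embedding = [0.12, -0.03, 0.44]  # placeholder for an image embedding
text_embedding = [0.10, -0.01, 0.40]   # placeholder for a text embedding
print(cosine_similarity(image_embedding, text_embedding))
```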
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/clip@v1 to add the block as a step in your workflow.
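For context, here is a minimal sketch of a complete workflow specification containing this step, expressed as a Python dict. The input name image, the step name clip and the embedding output selector are assumptions for illustration, not taken from this page.

```python
# Illustrative workflow specification containing a CLIP Embedding Model step.
# Only the "type" identifier comes from this page; other names are assumptions.
workflow_specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "clip",
            "type": "roboflow_core/clip@v1",
            "data": "$inputs.image",
            "version": "ViT-B-16",
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "clip_embedding",
            # "embedding" as the step's output field name is assumed here.
            "selector": "$steps.clip.embedding",
        }
    ],
}
```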
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Unique name of step in workflows. | ❌ |
| data | str | The string or image to generate an embedding for. | ✅ |
| version | str | Variant of CLIP model. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime, as in the sketch below. See Bindings for more info.
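For illustration, the sketch below binds both ✅-marked properties to workflow inputs instead of hard-coded values; the input names image and clip_version are hypothetical.

```python
# Both "data" and "version" are marked ✅ above, so they can be bound to
# workflow inputs. Input names here are hypothetical.
clip_step = {
    "name": "embedder",
    "type": "roboflow_core/clip@v1",
    "data": "$inputs.image",            # image (or string) supplied at runtime
    "version": "$inputs.clip_version",  # e.g. "ViT-B-16", chosen at runtime
}
```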
Available Connections¶
Compatible Blocks
Check what blocks you can connect to CLIP Embedding Model in version v1.
- inputs: Multi-Label Classification Model, Single-Label Classification Model, Classification Label Visualization, Background Color Visualization, Webhook Sink, Dynamic Crop, Mask Visualization, Clip Comparison, Google Vision OCR, Twilio SMS Notification, Absolute Static Crop, Model Monitoring Inference Aggregator, Stability AI Image Generation, Florence-2 Model, Image Blur, LMM For Classification, Roboflow Dataset Upload, CogVLM, Circle Visualization, OCR Model, Crop Visualization, OpenAI, Stitch OCR Detections, OpenAI, Image Preprocessing, Model Comparison Visualization, Stitch Images, Bounding Box Visualization, Keypoint Detection Model, Perspective Correction, SIFT Comparison, Relative Static Crop, Slack Notification, Color Visualization, Ellipse Visualization, Reference Path Visualization, Blur Visualization, Pixelate Visualization, Anthropic Claude, Email Notification, LMM, Llama 3.2 Vision, CSV Formatter, VLM as Detector, Keypoint Visualization, Camera Focus, Florence-2 Model, Grid Visualization, Image Convert Grayscale, Image Threshold, Trace Visualization, Polygon Visualization, Triangle Visualization, Stability AI Inpainting, Halo Visualization, Dot Visualization, Polygon Zone Visualization, Google Gemini, Local File Sink, Instance Segmentation Model, Roboflow Custom Metadata, VLM as Classifier, Camera Calibration, Object Detection Model, SIFT, Corner Visualization, Image Contours, Line Counter Visualization, Roboflow Dataset Upload, Image Slicer, Image Slicer, Label Visualization
- outputs: Cosine Similarity, Identify Outliers, Identify Changes
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds CLIP Embedding Model in version v1 has.
Bindings
Example JSON definition of step CLIP Embedding Model in version v1:
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/clip@v1",
    "data": "$inputs.image",
    "version": "ViT-B-16"
}
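A hedged usage sketch follows, assuming the inference_sdk HTTP client exposes run_workflow with an inline specification and an images mapping; the URL, API key and input name are placeholders, and the exact signature should be checked against the SDK documentation.

```python
# Hedged sketch: executing a workflow that contains this step via the
# inference_sdk HTTP client. Placeholders throughout; verify run_workflow's
# parameters against the SDK docs before use.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="<YOUR_API_KEY>",
)

results = client.run_workflow(
    specification=workflow_specification,  # e.g. the dict sketched earlier
    images={"image": "path/to/image.jpg"},
)
print(results)
```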