CLIP Embedding Model¶
Class: ClipModelBlockV1
Source: inference.core.workflows.core_steps.models.foundation.clip.v1.ClipModelBlockV1
Use a CLIP model to create semantic embeddings of text and images.
This block accepts an image or string and returns an embedding. The embedding can be used to compare the similarity between different images or between images and text.
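For example, a downstream consumer of the workflow output can score two returned embeddings with cosine similarity. A minimal sketch, assuming the embeddings are plain float vectors (the random vectors below are placeholders standing in for values produced by this block):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder vectors standing in for CLIP embeddings returned by the block.
# The dimensionality depends on the CLIP variant selected in the "version" field.
image_embedding = np.random.rand(512)
text_embedding = np.random.rand(512)

print(cosine_similarity(image_embedding, text_embedding))
```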
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/clip@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Unique name of step in workflows. | ❌ |
data | str | The string or image to generate an embedding for. | ✅ |
version | str | Variant of CLIP model. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to CLIP Embedding Model in version v1.
- inputs: LMM, Roboflow Custom Metadata, Image Convert Grayscale, Absolute Static Crop, Multi-Label Classification Model, Relative Static Crop, Line Counter Visualization, Background Color Visualization, OCR Model, Camera Focus, Image Contours, Image Slicer, Keypoint Detection Model, Instance Segmentation Model, Reference Path Visualization, SIFT Comparison, Object Detection Model, Triangle Visualization, Depth Estimation, Google Vision OCR, Roboflow Dataset Upload, Llama 3.2 Vision, Clip Comparison, Perspective Correction, Crop Visualization, Webhook Sink, Dot Visualization, Email Notification, Model Comparison Visualization, Classification Label Visualization, Camera Calibration, Slack Notification, Stability AI Image Generation, Trace Visualization, Corner Visualization, Image Threshold, Local File Sink, Blur Visualization, CogVLM, Stability AI Inpainting, SIFT, Circle Visualization, OpenAI, Florence-2 Model, Twilio SMS Notification, Label Visualization, Stitch Images, Image Preprocessing, Grid Visualization, Polygon Zone Visualization, Keypoint Visualization, LMM For Classification, Stitch OCR Detections, Bounding Box Visualization, Image Blur, OpenAI, Halo Visualization, Google Gemini, Ellipse Visualization, Color Visualization, Pixelate Visualization, VLM as Detector, Roboflow Dataset Upload, Polygon Visualization, Single-Label Classification Model, VLM as Classifier, CSV Formatter, Model Monitoring Inference Aggregator, Image Slicer, Mask Visualization, Anthropic Claude, Florence-2 Model, Dynamic Crop
- outputs: Identify Changes, Cosine Similarity (see the sketch below), Identify Outliers
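As an illustration of the output side, the fragment below wires two CLIP steps into a Cosine Similarity step inside a workflow specification, written as a Python dict. The Cosine Similarity type identifier, its property names, and the embedding output field name are assumptions for the sake of the sketch, not taken from this page; check the Cosine Similarity block's own documentation for the exact names.

```python
# Hypothetical fragment of a workflow specification: two CLIP embedding steps
# feeding a Cosine Similarity step. The similarity block's type identifier,
# its property names, and the "embedding" output field are assumptions.
steps = [
    {
        "type": "roboflow_core/clip@v1",
        "name": "image_embedding",
        "data": "$inputs.image",
        "version": "ViT-B-16",
    },
    {
        "type": "roboflow_core/clip@v1",
        "name": "prompt_embedding",
        "data": "$inputs.prompt",
        "version": "ViT-B-16",
    },
    {
        "type": "roboflow_core/cosine_similarity@v1",  # assumed identifier
        "name": "similarity",
        "embedding_1": "$steps.image_embedding.embedding",  # assumed field names
        "embedding_2": "$steps.prompt_embedding.embedding",
    },
]
```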
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds CLIP Embedding Model in version v1 has.
Bindings
Example JSON definition of step CLIP Embedding Model in version v1:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/clip@v1",
"data": "$inputs.image",
"version": "ViT-B-16"
}
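Once a workflow containing this step is saved to your Roboflow workspace, it can be executed remotely with the inference_sdk client. A minimal sketch, assuming a hosted workflow whose image input is named "image"; the workspace name, workflow id, and output structure below are placeholders, not values from this page:

```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",  # hosted inference endpoint
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)

result = client.run_workflow(
    workspace_name="<your_workspace>",      # placeholder
    workflow_id="<your_workflow_id>",       # placeholder
    images={"image": "path/to/image.jpg"},  # matches the workflow's image input
)

# The structure of `result` depends on the outputs declared in the workflow;
# the CLIP embedding appears under whatever output name the workflow assigns it.
print(result)
```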