CLIP Embedding Model¶
Class: ClipModelBlockV1
Source: inference.core.workflows.core_steps.models.foundation.clip.v1.ClipModelBlockV1
Use a CLIP model to create semantic embeddings of text and images.
This block accepts an image or string and returns an embedding. The embedding can be used to compare the similarity between different images or between images and text.
Type identifier¶
Use the following identifier in the step's "type" field to add the block as a step in your workflow: `roboflow_core/clip@v1`.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Unique name of step in workflows. | ❌ |
| `data` | `str` | The string or image to generate an embedding for. | ✅ |
| `version` | `str` | Variant of CLIP model. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
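For example, a ✅-marked property can be bound to a workflow input or to another step's output instead of a literal value. A minimal sketch, assuming a workflow input named `clip_version` (a hypothetical name) is declared elsewhere in the specification:

```json
{
  "name": "embedder",
  "type": "roboflow_core/clip@v1",
  "data": "$inputs.image",
  "version": "$inputs.clip_version"
}
```

At runtime, the value supplied for `clip_version` selects the CLIP variant without editing the workflow definition.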
Available Connections¶
Compatible Blocks
Check what blocks you can connect to CLIP Embedding Model in version v1.
- inputs: LMM, Depth Estimation, Classification Label Visualization, Camera Calibration, Stitch OCR Detections, LMM For Classification, Clip Comparison, CSV Formatter, Instance Segmentation Model, Stitch Images, CogVLM, Image Slicer, Roboflow Dataset Upload, Absolute Static Crop, Twilio SMS Notification, Florence-2 Model, OpenAI, Label Visualization, Single-Label Classification Model, Roboflow Dataset Upload, Bounding Box Visualization, Llama 3.2 Vision, Model Comparison Visualization, Slack Notification, Object Detection Model, VLM as Classifier, Grid Visualization, Image Convert Grayscale, Halo Visualization, Triangle Visualization, Model Monitoring Inference Aggregator, Keypoint Detection Model, Reference Path Visualization, Perspective Correction, Dynamic Crop, Camera Focus, VLM as Detector, Florence-2 Model, Local File Sink, OpenAI, Google Vision OCR, Stability AI Image Generation, Ellipse Visualization, SIFT, Blur Visualization, Circle Visualization, Dot Visualization, Image Blur, Background Color Visualization, Multi-Label Classification Model, Color Visualization, Pixelate Visualization, Google Gemini, Stability AI Inpainting, Polygon Zone Visualization, Relative Static Crop, OCR Model, Keypoint Visualization, Mask Visualization, Image Preprocessing, Line Counter Visualization, Roboflow Custom Metadata, Anthropic Claude, Webhook Sink, SIFT Comparison, Trace Visualization, Corner Visualization, Polygon Visualization, Crop Visualization, Email Notification, Image Contours, Image Slicer, Image Threshold
- outputs: Identify Changes, Identify Outliers, Cosine Similarity
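The Cosine Similarity output is the typical way to turn two CLIP embeddings into a comparable score. The sketch below chains two CLIP steps into it; the `roboflow_core/cosine_similarity@v1` type identifier, the `embedding_1`/`embedding_2` property names, and the `embedding` output field of this block are assumptions to confirm against those blocks' own documentation.

```json
{
  "steps": [
    {
      "name": "image_embedding",
      "type": "roboflow_core/clip@v1",
      "data": "$inputs.image",
      "version": "ViT-B-16"
    },
    {
      "name": "prompt_embedding",
      "type": "roboflow_core/clip@v1",
      "data": "$inputs.prompt",
      "version": "ViT-B-16"
    },
    {
      "name": "similarity",
      "type": "roboflow_core/cosine_similarity@v1",
      "embedding_1": "$steps.image_embedding.embedding",
      "embedding_2": "$steps.prompt_embedding.embedding"
    }
  ]
}
```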
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds CLIP Embedding Model in version v1 has.
Example JSON definition of step CLIP Embedding Model in version v1:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/clip@v1",
"data": "$inputs.image",
"version": "ViT-B-16"
}
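A step definition like the one above sits inside a full workflow specification together with the workflow's inputs and outputs. A minimal sketch is shown below; the `embedding` field referenced by the output selector is an assumption to verify against the block's output bindings.

```json
{
  "version": "1.0",
  "inputs": [
    { "type": "WorkflowImage", "name": "image" }
  ],
  "steps": [
    {
      "name": "clip_embedding",
      "type": "roboflow_core/clip@v1",
      "data": "$inputs.image",
      "version": "ViT-B-16"
    }
  ],
  "outputs": [
    {
      "type": "JsonField",
      "name": "embedding",
      "selector": "$steps.clip_embedding.embedding"
    }
  ]
}
```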