CLIP Embedding Model¶
Class: ClipModelBlockV1
Source: inference.core.workflows.core_steps.models.foundation.clip.v1.ClipModelBlockV1
Use a CLIP model to create semantic embeddings of text and images.
This block accepts an image or string and returns an embedding. The embedding can be used to compare the similarity between different images or between images and text.
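For example, the similarity between two embeddings is commonly measured with cosine similarity. A minimal sketch (the embedding values below are illustrative placeholders, not real CLIP outputs, which are much higher-dimensional):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Placeholder embeddings; real CLIP embeddings are e.g. 512-dimensional.
image_embedding = [0.1, 0.3, 0.5]
text_embedding = [0.1, 0.28, 0.52]

print(cosine_similarity(image_embedding, text_embedding))  # close to 1.0
```

A higher score means the image and text (or two images) are semantically closer; identical directions score 1.0 and orthogonal vectors score 0.0.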
Type identifier¶
Use the following identifier in the step's "type" field: `roboflow_core/clip@v1` to add the block
as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Unique name of step in workflows. | ❌ |
| `data` | `str` | The string or image to generate an embedding for. | ✅ |
| `version` | `str` | Variant of CLIP model. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to CLIP Embedding Model in version v1.
- inputs:
Absolute Static Crop, Anthropic Claude, Background Color Visualization, Background Subtraction, Blur Visualization, Bounding Box Visualization, CSV Formatter, Camera Calibration, Camera Focus, Circle Visualization, Classification Label Visualization, Clip Comparison, CogVLM, Color Visualization, Contrast Equalization, Corner Visualization, Crop Visualization, Depth Estimation, Dot Visualization, Dynamic Crop, EasyOCR, Ellipse Visualization, Email Notification, Florence-2 Model, Google Gemini, Google Vision OCR, Grid Visualization, Halo Visualization, Heatmap Visualization, Icon Visualization, Image Blur, Image Contours, Image Convert Grayscale, Image Preprocessing, Image Slicer, Image Threshold, Instance Segmentation Model, Keypoint Detection Model, Keypoint Visualization, LMM, LMM For Classification, Label Visualization, Line Counter Visualization, Llama 3.2 Vision, Local File Sink, Mask Visualization, Model Comparison Visualization, Model Monitoring Inference Aggregator, Morphological Transformation, Multi-Label Classification Model, OCR Model, Object Detection Model, OpenAI, Perspective Correction, Pixelate Visualization, Polygon Visualization, Polygon Zone Visualization, QR Code Generator, Reference Path Visualization, Relative Static Crop, Roboflow Custom Metadata, Roboflow Dataset Upload, SIFT, SIFT Comparison, Single-Label Classification Model, Slack Notification, Stability AI Image Generation, Stability AI Inpainting, Stability AI Outpainting, Stitch Images, Stitch OCR Detections, Text Display, Trace Visualization, Triangle Visualization, Twilio SMS Notification, Twilio SMS/MMS Notification, VLM As Classifier, VLM As Detector, Webhook Sink
- outputs:
Identify Changes, Identify Outliers, Cosine Similarity
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
CLIP Embedding Model in version v1 has.
Bindings
Example JSON definition of step CLIP Embedding Model in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/clip@v1",
    "data": "$inputs.image",
    "version": "ViT-B-16"
}
```
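For context, such a step sits inside a full workflow definition with inputs, steps, and outputs. The sketch below is a hypothetical surrounding definition: the `WorkflowImage` and `JsonField` wrappers follow the general workflows schema, and the `embedding` output field name is an assumption that should be verified against the block's declared outputs in your version of inference.

```json
{
    "version": "1.0",
    "inputs": [
        { "type": "WorkflowImage", "name": "image" }
    ],
    "steps": [
        {
            "name": "clip",
            "type": "roboflow_core/clip@v1",
            "data": "$inputs.image",
            "version": "ViT-B-16"
        }
    ],
    "outputs": [
        { "type": "JsonField", "name": "embedding", "selector": "$steps.clip.embedding" }
    ]
}
```

The `$inputs.image` and `$steps.clip.embedding` selectors are the dynamic bindings marked ✅ in the properties table above.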