CLIP Embedding Model¶
Class: ClipModelBlockV1
Source: inference.core.workflows.core_steps.models.foundation.clip.v1.ClipModelBlockV1
Use a CLIP model to create semantic embeddings of text and images.
This block accepts an image or string and returns an embedding. The embedding can be used to compare the similarity between different images or between images and text.
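The embeddings this block returns are typically compared with cosine similarity (a score near 1.0 means the two inputs are semantically close). A minimal sketch in plain Python; the helper name and the toy low-dimensional vectors are illustrative, not part of the block's API — real CLIP embeddings have hundreds of dimensions:

```python
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for an image embedding and a text embedding.
image_embedding = [0.2, 0.8, 0.1]
text_embedding = [0.25, 0.75, 0.05]
print(cosine_similarity(image_embedding, text_embedding))
```

In a workflow, this comparison is what the downstream Cosine Similarity block performs on the embeddings this step outputs.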
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/clip@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Unique name of step in workflows. | ❌ |
data | str | The string or image to generate an embedding for. | ✅ |
version | str | Variant of CLIP model. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to CLIP Embedding Model in version v1.
- inputs: Florence-2 Model, Model Monitoring Inference Aggregator, Label Visualization, Depth Estimation, Triangle Visualization, CogVLM, Image Blur, OCR Model, Model Comparison Visualization, Line Counter Visualization, Circle Visualization, Relative Static Crop, Trace Visualization, Multi-Label Classification Model, Stitch Images, Reference Path Visualization, Llama 3.2 Vision, Polygon Visualization, Roboflow Dataset Upload, Roboflow Custom Metadata, SIFT, Single-Label Classification Model, Image Threshold, Keypoint Visualization, Ellipse Visualization, Crop Visualization, Color Visualization, Image Slicer, VLM as Classifier, Local File Sink, Dynamic Crop, Google Gemini, OpenAI, Dot Visualization, Instance Segmentation Model, Keypoint Detection Model, Stability AI Inpainting, Google Vision OCR, Corner Visualization, Background Color Visualization, Polygon Zone Visualization, Camera Focus, Grid Visualization, Perspective Correction, Stability AI Image Generation, VLM as Detector, CSV Formatter, Clip Comparison, Blur Visualization, Classification Label Visualization, Image Convert Grayscale, Image Preprocessing, Slack Notification, SIFT Comparison, Stability AI Outpainting, Anthropic Claude, Webhook Sink, Camera Calibration, Mask Visualization, Bounding Box Visualization, Pixelate Visualization, Twilio SMS Notification, Email Notification, Stitch OCR Detections, Image Contours, Object Detection Model, Absolute Static Crop, Halo Visualization, LMM For Classification, LMM
- outputs: Identify Changes, Identify Outliers, Cosine Similarity
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds CLIP Embedding Model in version v1 has.
Bindings
Example JSON definition of step CLIP Embedding Model in version v1:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/clip@v1",
"data": "$inputs.image",
"version": "ViT-B-16"
}
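Because `data` accepts either an image or a string, one workflow can run this block twice — once on an image and once on a text prompt — and feed both embeddings to a downstream Cosine Similarity step. A sketch of the two CLIP steps, using only the fields documented above; the step names and the `$inputs.prompt` input are illustrative:

```json
[
  {
    "name": "image_embedding",
    "type": "roboflow_core/clip@v1",
    "data": "$inputs.image",
    "version": "ViT-B-16"
  },
  {
    "name": "text_embedding",
    "type": "roboflow_core/clip@v1",
    "data": "$inputs.prompt",
    "version": "ViT-B-16"
  }
]
```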