Clip Comparison¶
v2¶
Class: ClipComparisonBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.models.foundation.clip_comparison.v2.ClipComparisonBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Use the OpenAI CLIP zero-shot classification model to classify images.
This block accepts an image and a list of text prompts. The block then returns the similarity of each text label to the provided image.
This block is useful for classifying images without having to train a fine-tuned classification model. For example, you could use CLIP to classify the type of vehicle in an image, or to check whether an image contains NSFW material.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/clip_comparison@v2
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Unique name of step in workflows. | ❌ |
| classes | List[str] | List of classes to calculate similarity against each input image. | ✅ |
| version | str | Variant of CLIP model. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
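For example, because classes and version are marked as parametrisable, they can be bound to workflow inputs rather than hard-coded. The sketch below is illustrative only: the input names (image, classes) and the WorkflowImage / WorkflowParameter input declarations are assumptions following common Workflows input conventions, not part of this block's definition.

```json
{
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "classes"}
    ],
    "steps": [
        {
            "name": "clip_comparison",
            "type": "roboflow_core/clip_comparison@v2",
            "images": "$inputs.image",
            "classes": "$inputs.classes"
        }
    ]
}
```

With this shape, the list of classes can be supplied at execution time instead of being fixed in the workflow definition.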
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Clip Comparison in version v2.
- inputs: Keypoint Detection Model, CogVLM, Image Convert Grayscale, Anthropic Claude, OpenAI, Google Vision OCR, Florence-2 Model, Pixelate Visualization, Dimension Collapse, Single-Label Classification Model, SIFT, OpenAI, Stability AI Image Generation, Mask Visualization, Object Detection Model, Image Slicer, Triangle Visualization, VLM as Detector, CSV Formatter, Polygon Zone Visualization, Model Comparison Visualization, Crop Visualization, Classification Label Visualization, LMM, Reference Path Visualization, Multi-Label Classification Model, Twilio SMS Notification, Google Gemini, Bounding Box Visualization, Image Contours, Roboflow Custom Metadata, Size Measurement, Circle Visualization, Perspective Correction, Slack Notification, Polygon Visualization, VLM as Classifier, Trace Visualization, Webhook Sink, Color Visualization, LMM For Classification, Image Threshold, SIFT Comparison, Absolute Static Crop, Clip Comparison, Line Counter Visualization, Stitch Images, Dot Visualization, Dynamic Zone, Instance Segmentation Model, Background Color Visualization, Florence-2 Model, Model Monitoring Inference Aggregator, Stitch OCR Detections, Image Slicer, Roboflow Dataset Upload, Roboflow Dataset Upload, Image Blur, Email Notification, Camera Focus, Grid Visualization, Blur Visualization, Label Visualization, Stability AI Inpainting, Depth Estimation, Image Preprocessing, Llama 3.2 Vision, Ellipse Visualization, Halo Visualization, Corner Visualization, Camera Calibration, Clip Comparison, Buffer, Dynamic Crop, Relative Static Crop, OpenAI, Keypoint Visualization, OCR Model, Local File Sink
- outputs: CogVLM, OpenAI, Cache Set, Anthropic Claude, Google Vision OCR, Detections Classes Replacement, Florence-2 Model, Single-Label Classification Model, YOLO-World Model, Stability AI Image Generation, Object Detection Model, Triangle Visualization, Model Comparison Visualization, LMM, Segment Anything 2 Model, Keypoint Detection Model, Line Counter, Google Gemini, Byte Tracker, Roboflow Custom Metadata, Circle Visualization, Trace Visualization, Webhook Sink, LMM For Classification, Detections Stitch, SIFT Comparison, Clip Comparison, Line Counter Visualization, Identify Outliers, Dynamic Zone, Florence-2 Model, Background Color Visualization, Image Slicer, Roboflow Dataset Upload, Email Notification, Path Deviation, Label Visualization, Stability AI Inpainting, Image Preprocessing, Ellipse Visualization, ONVIF Control, VLM as Classifier, Clip Comparison, Time in Zone, Dynamic Crop, Single-Label Classification Model, OpenAI, Keypoint Visualization, Keypoint Detection Model, Distance Measurement, VLM as Detector, OpenAI, Mask Visualization, Image Slicer, Path Deviation, VLM as Detector, Polygon Zone Visualization, Crop Visualization, Classification Label Visualization, Reference Path Visualization, Multi-Label Classification Model, Twilio SMS Notification, Bounding Box Visualization, Size Measurement, Perspective Correction, Slack Notification, Pixel Color Count, Polygon Visualization, Instance Segmentation Model, VLM as Classifier, Color Visualization, Identify Changes, Detections Consensus, Image Threshold, Stitch Images, Multi-Label Classification Model, Line Counter, Dot Visualization, Instance Segmentation Model, Detections Stabilizer, Model Monitoring Inference Aggregator, Template Matching, Roboflow Dataset Upload, Time in Zone, Image Blur, Grid Visualization, CLIP Embedding Model, Object Detection Model, Llama 3.2 Vision, Byte Tracker, Byte Tracker, Halo Visualization, Corner Visualization, Cache Get, Buffer, Relative Static Crop, Local File Sink
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Clip Comparison in version v2 has.
Bindings
- input
    - images (image): The image to infer on.
    - classes (list_of_values): List of classes to calculate similarity against each input image.
    - version (string): Variant of CLIP model.
- output
    - similarities (list_of_values): List of values of any type.
    - max_similarity (float_zero_to_one): float value in range [0.0, 1.0].
    - most_similar_class (string): String value.
    - min_similarity (float_zero_to_one): float value in range [0.0, 1.0].
    - least_similar_class (string): String value.
    - classification_predictions (classification_prediction): Predictions from classifier.
    - parent_id (parent_id): Identifier of parent for step output.
    - root_parent_id (parent_id): Identifier of parent for step output.
Example JSON definition of step Clip Comparison in version v2

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/clip_comparison@v2",
    "images": "$inputs.image",
    "classes": [
        "a",
        "b",
        "c"
    ],
    "version": "ViT-B-16"
}
```
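The step's outputs can be consumed by downstream blocks or exposed as workflow outputs via selectors of the form $steps.<step_name>.<output_name>. The snippet below is a sketch: the JsonField output declarations and the workflow-level output names (top_class, similarities) are illustrative assumptions; only the selectors refer to outputs actually defined by this block.

```json
{
    "outputs": [
        {
            "type": "JsonField",
            "name": "top_class",
            "selector": "$steps.<your_step_name_here>.most_similar_class"
        },
        {
            "type": "JsonField",
            "name": "similarities",
            "selector": "$steps.<your_step_name_here>.similarities"
        }
    ]
}
```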
v1¶
Class: ClipComparisonBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.models.foundation.clip_comparison.v1.ClipComparisonBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Use the OpenAI CLIP zero-shot classification model to classify images.
This block accepts an image and a list of text prompts. The block then returns the similarity of each text label to the provided image.
This block is useful for classifying images without having to train a fine-tuned classification model. For example, you could use CLIP to classify the type of vehicle in an image, or to check whether an image contains NSFW material.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/clip_comparison@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Unique name of step in workflows. | ❌ |
| texts | List[str] | List of texts to calculate similarity against each input image. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Clip Comparison in version v1.
- inputs: Image Threshold, Image Convert Grayscale, Anthropic Claude, OpenAI, SIFT Comparison, Florence-2 Model, Absolute Static Crop, Pixelate Visualization, Clip Comparison, Dimension Collapse, SIFT, Line Counter Visualization, Stitch Images, Stability AI Image Generation, Dot Visualization, Mask Visualization, Image Slicer, Dynamic Zone, Background Color Visualization, Florence-2 Model, Triangle Visualization, Image Slicer, Image Blur, Camera Focus, Grid Visualization, Polygon Zone Visualization, Blur Visualization, Crop Visualization, Label Visualization, Classification Label Visualization, Depth Estimation, Image Preprocessing, Model Comparison Visualization, Stability AI Inpainting, Llama 3.2 Vision, Reference Path Visualization, Ellipse Visualization, Google Gemini, Bounding Box Visualization, Halo Visualization, Image Contours, Corner Visualization, Camera Calibration, Size Measurement, Clip Comparison, Buffer, Circle Visualization, Perspective Correction, Dynamic Crop, Polygon Visualization, Relative Static Crop, Trace Visualization, Keypoint Visualization, OpenAI, Color Visualization
- outputs: Keypoint Detection Model, OpenAI, Cache Set, Anthropic Claude, Florence-2 Model, Clip Comparison, Line Counter Visualization, VLM as Detector, Line Counter, YOLO-World Model, Dot Visualization, Object Detection Model, Mask Visualization, Detections Consensus, Instance Segmentation Model, Florence-2 Model, Triangle Visualization, Path Deviation, VLM as Detector, Time in Zone, Email Notification, Grid Visualization, Polygon Zone Visualization, Object Detection Model, Label Visualization, Crop Visualization, Classification Label Visualization, Path Deviation, Llama 3.2 Vision, Keypoint Detection Model, Reference Path Visualization, Ellipse Visualization, VLM as Classifier, Line Counter, Google Gemini, Bounding Box Visualization, Halo Visualization, Corner Visualization, Size Measurement, Clip Comparison, Time in Zone, Buffer, Circle Visualization, Perspective Correction, Polygon Visualization, Instance Segmentation Model, VLM as Classifier, Trace Visualization, Keypoint Visualization, OpenAI, Webhook Sink, Color Visualization, LMM For Classification
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Clip Comparison in version v1 has.
Bindings
- input
    - images (image): The image to infer on.
    - texts (list_of_values): List of texts to calculate similarity against each input image.
- output
    - similarity (list_of_values): List of values of any type.
    - parent_id (parent_id): Identifier of parent for step output.
    - root_parent_id (parent_id): Identifier of parent for step output.
    - prediction_type (prediction_type): String value with type of prediction.
Example JSON definition of step Clip Comparison in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/clip_comparison@v1",
    "images": "$inputs.image",
    "texts": [
        "a",
        "b",
        "c"
    ]
}
```
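To see the v1 step in context, the sketch below assembles a minimal workflow specification around it. This is an illustrative assumption rather than a canonical definition: the "version": "1.0" header, the WorkflowImage input, the step name and the JsonField output are placeholders; only the step's "type", "images" and "texts" fields come from the definition above.

```json
{
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"}
    ],
    "steps": [
        {
            "name": "clip_comparison",
            "type": "roboflow_core/clip_comparison@v1",
            "images": "$inputs.image",
            "texts": ["a", "b", "c"]
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "similarity",
            "selector": "$steps.clip_comparison.similarity"
        }
    ]
}
```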