Clip Comparison¶
v2¶
Class: ClipComparisonBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.models.foundation.clip_comparison.v2.ClipComparisonBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Use the OpenAI CLIP zero-shot classification model to classify images.
This block accepts an image and a list of text prompts. The block then returns the similarity of each text label to the provided image.
This block is useful for classifying images without having to train a fine-tuned classification model. For example, you could use CLIP to classify the type of vehicle in an image, or to check whether an image contains NSFW material.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/clip_comparison@v2
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Unique name of step in workflows. | ❌ |
classes | List[str] | List of classes to calculate similarity against each input image. | ✅ |
version | str | Variant of CLIP model. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
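For the properties marked ✅, the value can be a selector instead of a literal. Below is a minimal sketch of a parametrised step, assuming classes_to_check and clip_version are declared elsewhere in the workflow as WorkflowParameter inputs (both names are illustrative, not part of this block's reference):

```json
{
    "name": "clip_comparison",
    "type": "roboflow_core/clip_comparison@v2",
    "images": "$inputs.image",
    "classes": "$inputs.classes_to_check",
    "version": "$inputs.clip_version"
}
```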
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Clip Comparison in version v2.
- inputs:
Multi-Label Classification Model
,Single-Label Classification Model
,Classification Label Visualization
,Dimension Collapse
,Background Color Visualization
,Webhook Sink
,Dynamic Crop
,Mask Visualization
,Clip Comparison
,Google Vision OCR
,Twilio SMS Notification
,Buffer
,Absolute Static Crop
,Model Monitoring Inference Aggregator
,Stability AI Image Generation
,Florence-2 Model
,Image Blur
,LMM For Classification
,Roboflow Dataset Upload
,CogVLM
,Circle Visualization
,OCR Model
,Clip Comparison
,Crop Visualization
,OpenAI
,Stitch OCR Detections
,OpenAI
,Image Preprocessing
,Model Comparison Visualization
,Stitch Images
,Bounding Box Visualization
,Keypoint Detection Model
,Perspective Correction
,SIFT Comparison
,Relative Static Crop
,Color Visualization
,Slack Notification
,Ellipse Visualization
,Reference Path Visualization
,Blur Visualization
,Pixelate Visualization
,Anthropic Claude
,Email Notification
,LMM
,Llama 3.2 Vision
,CSV Formatter
,VLM as Detector
,Keypoint Visualization
,Camera Focus
,Florence-2 Model
,Grid Visualization
,Image Convert Grayscale
,Image Threshold
,Trace Visualization
,Polygon Visualization
,Triangle Visualization
,Stability AI Inpainting
,Halo Visualization
,Dot Visualization
,Polygon Zone Visualization
,Google Gemini
,Local File Sink
,Dynamic Zone
,Size Measurement
,Instance Segmentation Model
,Roboflow Custom Metadata
,VLM as Classifier
,Camera Calibration
,Object Detection Model
,SIFT
,Corner Visualization
,Image Contours
,Line Counter Visualization
,Roboflow Dataset Upload
,Image Slicer
,Image Slicer
,Label Visualization
- outputs:
Multi-Label Classification Model
,Classification Label Visualization
,Background Color Visualization
,Dynamic Crop
,Clip Comparison
,Segment Anything 2 Model
,LMM For Classification
,Image Blur
,Roboflow Dataset Upload
,CogVLM
,Circle Visualization
,Clip Comparison
,Template Matching
,Multi-Label Classification Model
,Path Deviation
,OpenAI
,Detections Stitch
,Pixel Color Count
,Detections Stabilizer
,Path Deviation
,Line Counter
,VLM as Classifier
,Time in Zone
,Model Comparison Visualization
,Stitch Images
,Keypoint Detection Model
,Bounding Box Visualization
,Color Visualization
,Slack Notification
,LMM
,Llama 3.2 Vision
,Instance Segmentation Model
,VLM as Detector
,Time in Zone
,Byte Tracker
,YOLO-World Model
,Grid Visualization
,Stability AI Inpainting
,Keypoint Detection Model
,Dot Visualization
,Google Gemini
,CLIP Embedding Model
,Size Measurement
,Roboflow Custom Metadata
,VLM as Classifier
,Detections Classes Replacement
,Object Detection Model
,Corner Visualization
,Roboflow Dataset Upload
,Image Preprocessing
,Image Slicer
,Byte Tracker
,Label Visualization
,Line Counter
,Identify Outliers
,Single-Label Classification Model
,Webhook Sink
,Cache Get
,Mask Visualization
,Twilio SMS Notification
,Google Vision OCR
,Buffer
,Model Monitoring Inference Aggregator
,Florence-2 Model
,Stability AI Image Generation
,Cache Set
,Identify Changes
,Crop Visualization
,VLM as Detector
,OpenAI
,Perspective Correction
,SIFT Comparison
,Relative Static Crop
,Ellipse Visualization
,Reference Path Visualization
,Anthropic Claude
,Email Notification
,Keypoint Visualization
,Single-Label Classification Model
,Florence-2 Model
,Trace Visualization
,Image Threshold
,Polygon Visualization
,Detections Consensus
,Triangle Visualization
,Halo Visualization
,Polygon Zone Visualization
,Local File Sink
,Instance Segmentation Model
,Line Counter Visualization
,Image Slicer
,Byte Tracker
,Distance Measurement
,Object Detection Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Clip Comparison in version v2 has.
Bindings

- input
  - images (image): The image to infer on.
  - classes (list_of_values): List of classes to calculate similarity against each input image.
  - version (string): Variant of CLIP model.
- output
  - similarities (list_of_values): List of values of any type.
  - max_similarity (float_zero_to_one): float value in range [0.0, 1.0].
  - most_similar_class (string): String value.
  - min_similarity (float_zero_to_one): float value in range [0.0, 1.0].
  - least_similar_class (string): String value.
  - classification_predictions (classification_prediction): Predictions from classifier.
  - parent_id (parent_id): Identifier of parent for step output.
  - root_parent_id (parent_id): Identifier of parent for step output.
Example JSON definition of step Clip Comparison in version v2

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/clip_comparison@v2",
    "images": "$inputs.image",
    "classes": [
        "a",
        "b",
        "c"
    ],
    "version": "ViT-B-16"
}
```
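To see how the step fits into a complete workflow definition, here is a minimal sketch. The input name, class list, and output mappings are illustrative assumptions, not part of this block's reference:

```json
{
    "version": "1.0",
    "inputs": [
        { "type": "WorkflowImage", "name": "image" }
    ],
    "steps": [
        {
            "type": "roboflow_core/clip_comparison@v2",
            "name": "clip_comparison",
            "images": "$inputs.image",
            "classes": ["car", "truck", "bus"],
            "version": "ViT-B-16"
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "most_similar_class",
            "selector": "$steps.clip_comparison.most_similar_class"
        },
        {
            "type": "JsonField",
            "name": "similarities",
            "selector": "$steps.clip_comparison.similarities"
        }
    ]
}
```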
v1¶
Class: ClipComparisonBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.models.foundation.clip_comparison.v1.ClipComparisonBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Use the OpenAI CLIP zero-shot classification model to classify images.
This block accepts an image and a list of text prompts. The block then returns the similarity of each text label to the provided image.
This block is useful for classifying images without having to train a fine-tuned classification model. For example, you could use CLIP to classify the type of vehicle in an image, or to check whether an image contains NSFW material.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/clip_comparison@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Unique name of step in workflows. | ❌ |
texts | List[str] | List of texts to calculate similarity against each input image. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Clip Comparison in version v1.
- inputs:
Reference Path Visualization
,Blur Visualization
,Pixelate Visualization
,Anthropic Claude
,Classification Label Visualization
,Dimension Collapse
,Llama 3.2 Vision
,Background Color Visualization
,Dynamic Crop
,Keypoint Visualization
,Camera Focus
,Mask Visualization
,Clip Comparison
,Image Slicer
,Buffer
,Absolute Static Crop
,Stability AI Image Generation
,Florence-2 Model
,Image Blur
,Florence-2 Model
,Circle Visualization
,Grid Visualization
,Clip Comparison
,Crop Visualization
,Image Convert Grayscale
,Image Threshold
,Trace Visualization
,Polygon Visualization
,Triangle Visualization
,Stability AI Inpainting
,Halo Visualization
,Dot Visualization
,Polygon Zone Visualization
,Google Gemini
,Dynamic Zone
,Size Measurement
,OpenAI
,Camera Calibration
,SIFT
,Corner Visualization
,Image Contours
,Model Comparison Visualization
,Stitch Images
,Bounding Box Visualization
,Line Counter Visualization
,Image Slicer
,Perspective Correction
,Image Preprocessing
,SIFT Comparison
,Label Visualization
,Relative Static Crop
,Color Visualization
,Ellipse Visualization
- outputs:
Reference Path Visualization
,Anthropic Claude
,Email Notification
,Classification Label Visualization
,Llama 3.2 Vision
,Instance Segmentation Model
,Webhook Sink
,VLM as Detector
,Time in Zone
,Mask Visualization
,Clip Comparison
,Buffer
,Florence-2 Model
,LMM For Classification
,Florence-2 Model
,Cache Set
,YOLO-World Model
,Line Counter
,Circle Visualization
,Grid Visualization
,Clip Comparison
,Crop Visualization
,Trace Visualization
,VLM as Detector
,Path Deviation
,Object Detection Model
,Polygon Visualization
,Detections Consensus
,Triangle Visualization
,Keypoint Detection Model
,Halo Visualization
,Dot Visualization
,Google Gemini
,Polygon Zone Visualization
,Size Measurement
,Instance Segmentation Model
,OpenAI
,VLM as Classifier
,Path Deviation
,Line Counter
,Object Detection Model
,VLM as Classifier
,Corner Visualization
,Time in Zone
,Keypoint Detection Model
,Line Counter Visualization
,Bounding Box Visualization
,Perspective Correction
,Label Visualization
,Color Visualization
,Ellipse Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Clip Comparison in version v1 has.
Bindings

- input
  - images (image): The image to infer on.
  - texts (list_of_values): List of texts to calculate similarity against each input image.
- output
  - similarity (list_of_values): List of values of any type.
  - parent_id (parent_id): Identifier of parent for step output.
  - root_parent_id (parent_id): Identifier of parent for step output.
  - prediction_type (prediction_type): String value with type of prediction.
Example JSON definition of step Clip Comparison in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/clip_comparison@v1",
    "images": "$inputs.image",
    "texts": [
        "a",
        "b",
        "c"
    ]
}
```
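As with v2, the step can be dropped into a full workflow definition. A minimal sketch follows (the input name and text prompts are illustrative assumptions); note that v1 takes texts rather than classes and exposes a similarity output:

```json
{
    "version": "1.0",
    "inputs": [
        { "type": "WorkflowImage", "name": "image" }
    ],
    "steps": [
        {
            "type": "roboflow_core/clip_comparison@v1",
            "name": "clip_comparison",
            "images": "$inputs.image",
            "texts": ["car", "truck", "bus"]
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "similarity",
            "selector": "$steps.clip_comparison.similarity"
        }
    ]
}
```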