VLM As Classifier¶
v2¶
Class: VLMAsClassifierBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.formatters.vlm_as_classifier.v2.VLMAsClassifierBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Parses JSON strings from Visual Language Models (VLMs) and Large Language Models (LLMs) into a standardized classification prediction format. The block extracts class predictions, maps class names to class IDs, handles both single-class and multi-label formats, and converts VLM/LLM text outputs into workflow-compatible classification results for VLM-based classification, LLM classification parsing, and text-to-classification conversion workflows.
How This Block Works¶
This block converts VLM/LLM text outputs containing classification predictions into standardized classification prediction format. The block:
1. Receives image and VLM output string containing classification results in JSON format
2. Parses JSON content from VLM output:
Handles Markdown-wrapped JSON:
- Searches for JSON wrapped in Markdown code blocks fenced with `` ```json ``
- This format is common in LLM/VLM responses
- If multiple markdown JSON blocks are found, only the first block is parsed
- Extracts JSON content from within markdown tags
Handles raw JSON strings:
- If no markdown blocks are found, attempts to parse the entire string as JSON
- Supports standard JSON format strings
3. Detects classification format and parses accordingly:
Single-Class Classification Format:
- Detects format containing "class_name" and "confidence" fields
- Extracts the predicted class name and confidence score
- Creates classification prediction with single top class
- Maps class name to class ID using provided classes list
Multi-Label Classification Format:
- Detects format containing "predicted_classes" array
- Extracts all predicted classes with their confidence scores
- Handles duplicate classes by taking maximum confidence
- Maps all class names to class IDs using provided classes list
4. Creates class name to class ID mapping:
- Uses the provided classes list to create index mapping (class_name → class_id)
- Maps classes in order (first class = ID 0, second = ID 1, etc.)
- Classes not in the provided list get class_id = -1
5. Normalizes confidence scores:
- Scales confidence values to valid range [0.0, 1.0]
- Clamps values outside the range to 0.0 or 1.0
6. Constructs classification prediction:
- Includes image dimensions (width, height) from input image
- For single-class: includes "top" class, confidence, and predictions array
- For multi-label: includes "predicted_classes" list and predictions dictionary
- Includes inference_id and parent_id for tracking
- Formats prediction in standard classification prediction format
7. Handles errors:
- Sets error_status to True if JSON parsing fails
- Sets error_status to True if classification format cannot be determined
- Returns None for predictions when errors occur
- Always includes inference_id for tracking
8. Returns classification prediction:
- Outputs predictions in standard classification format (compatible with classification blocks)
- Outputs error_status indicating parsing success/failure
- Outputs inference_id with specific type for tracking and lineage
The block enables using VLMs/LLMs for classification by converting their text-based JSON outputs into standardized classification predictions that can be used in workflows like any other classification model output.
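The parsing flow described above can be sketched in Python. This is an illustrative approximation, not the block's actual source; in particular, the shape of multi-label entries (`{"class": ..., "confidence": ...}`) is an assumption based on the description above:

```python
import json
import re


def parse_vlm_classification(vlm_output: str, classes: list) -> dict:
    """Illustrative sketch of the block's parsing logic (not the real source)."""
    # Prefer JSON inside a Markdown ```json fence; fall back to the raw string.
    match = re.search(r"```json\s*(.*?)```", vlm_output, re.DOTALL)
    payload = match.group(1) if match else vlm_output
    try:
        data = json.loads(payload)
    except json.JSONDecodeError:
        return {"error_status": True, "predictions": None}

    # Map class names to IDs by index; names outside the list get -1.
    class_to_id = {name: idx for idx, name in enumerate(classes)}

    def clamp(value) -> float:
        # Normalize confidence into the [0.0, 1.0] range.
        return min(max(float(value), 0.0), 1.0)

    if "class_name" in data and "confidence" in data:  # single-class format
        name, conf = data["class_name"], clamp(data["confidence"])
        return {
            "error_status": False,
            "predictions": {
                "top": name,
                "confidence": conf,
                "predictions": [
                    {"class": name, "class_id": class_to_id.get(name, -1), "confidence": conf}
                ],
            },
        }
    if "predicted_classes" in data:  # multi-label format (entry shape assumed)
        merged: dict = {}
        for entry in data["predicted_classes"]:
            name, conf = entry["class"], clamp(entry["confidence"])
            merged[name] = max(merged.get(name, 0.0), conf)  # duplicates keep max confidence
        return {
            "error_status": False,
            "predictions": {
                "predicted_classes": list(merged),
                "predictions": {
                    name: {"class_id": class_to_id.get(name, -1), "confidence": conf}
                    for name, conf in merged.items()
                },
            },
        }
    return {"error_status": True, "predictions": None}  # unrecognized format
```
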
Common Use Cases¶
- VLM-Based Classification: Use Visual Language Models for image classification by parsing VLM outputs into classification predictions (e.g., classify images with VLMs, use GPT-4V for classification, parse Claude Vision classifications), enabling VLM classification workflows
- LLM Classification Parsing: Parse LLM text outputs containing classification results into standardized format (e.g., parse GPT classification outputs, convert LLM predictions to classification format, use LLMs for classification), enabling LLM classification workflows
- Text-to-Classification Conversion: Convert text-based classification outputs from models into workflow-compatible classification predictions (e.g., convert text predictions to classification format, parse text-based classifications, convert model outputs to classifications), enabling text-to-classification workflows
- Multi-Format Classification Support: Handle both single-class and multi-label classification formats from VLM/LLM outputs (e.g., support single-label VLM classifications, support multi-label VLM classifications, handle different classification formats), enabling flexible classification workflows
- VLM Integration: Integrate VLM outputs into classification workflows (e.g., use VLMs in classification pipelines, integrate VLM predictions with classification blocks, combine VLM and traditional classification), enabling VLM integration workflows
- Flexible Classification Sources: Enable classification from various model types that output text/JSON (e.g., use any text-output model for classification, convert model outputs to classifications, parse various classification formats), enabling flexible classification workflows
Connecting to Other Blocks¶
This block receives images and VLM outputs and produces classification predictions:
- After VLM/LLM blocks to parse classification outputs into standard format (e.g., VLM output to classification, LLM output to classification, parse model outputs), enabling VLM-to-classification workflows
- Before classification-based blocks to use parsed classifications (e.g., use parsed classifications in workflows, provide classifications to downstream blocks, use VLM classifications with classification blocks), enabling classification-to-workflow workflows
- Before filtering blocks to filter based on VLM classifications (e.g., filter by VLM classification results, use parsed classifications for filtering, apply filters to VLM predictions), enabling classification-to-filter workflows
- Before analytics blocks to analyze VLM classification results (e.g., analyze VLM classifications, perform analytics on parsed classifications, track VLM classification metrics), enabling classification analytics workflows
- Before visualization blocks to display VLM classification results (e.g., visualize VLM classifications, display parsed classification predictions, show VLM classification outputs), enabling classification visualization workflows
- In workflow outputs to provide VLM classifications as final output (e.g., VLM classification outputs, parsed classification results, VLM-based classification outputs), enabling classification output workflows
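When feeding the outputs into downstream blocks, a consumer typically branches on error_status first, since predictions is None whenever parsing failed. A minimal, hypothetical sketch:

```python
def route_prediction(result: dict):
    """Hypothetical downstream handling of this block's outputs.

    Returns None when parsing failed (so filtering/visualization can be
    skipped); otherwise passes the top class on for further processing.
    """
    if result["error_status"] or result["predictions"] is None:
        return None
    return result["predictions"].get("top")
```
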
Version Differences¶
This version (v2) includes the following enhancements over v1:
- Improved Type System: The `inference_id` output now uses `INFERENCE_ID_KIND` instead of the generic `STRING_KIND`, providing better type safety and semantic clarity for inference ID values in the workflow type system
Requirements¶
This block requires an image input (for metadata and dimensions) and a VLM output string containing JSON classification data. The JSON can be raw JSON or wrapped in Markdown code blocks fenced with `` ```json ``. The block supports two JSON formats: single-class (with "class_name" and "confidence" fields) and multi-label (with "predicted_classes" array). The classes parameter must contain a list of all class names used by the model to generate class_id mappings. Classes are mapped to IDs by index (first class = 0, second = 1, etc.). Classes not in the list get class_id = -1. Confidence scores are normalized to the [0.0, 1.0] range. The block outputs classification predictions in standard format (compatible with classification blocks), error_status (boolean), and inference_id (INFERENCE_ID_KIND) for tracking.
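For reference, the two accepted payload shapes look like this (the field names inside multi-label entries are an assumption based on the description above). Single-class:

```json
{"class_name": "dog", "confidence": 0.95}
```

Multi-label:

```json
{"predicted_classes": [{"class": "dog", "confidence": 0.9}, {"class": "cat", "confidence": 0.4}]}
```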
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/vlm_as_classifier@v2` to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
name |
str |
Enter a unique identifier for this step. | ❌ |
classes |
List[str] |
List of all class names used by the classification model, in order. Required to generate mapping between class names (from VLM output) and class IDs (for classification format). Classes are mapped to IDs by index: first class = ID 0, second = ID 1, etc. Classes from VLM output that are not in this list get class_id = -1. Should match the classes the VLM was asked to classify. | ✅ |
The Refs column marks possibility to parametrise the property with dynamic values available
in workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to VLM As Classifier in version v2.
- inputs:
Line Counter Visualization,Stability AI Outpainting,Google Gemma API,Image Slicer,Image Preprocessing,Google Gemini,Color Visualization,OpenAI,Ellipse Visualization,Polygon Visualization,Anthropic Claude,Relative Static Crop,Model Comparison Visualization,Trace Visualization,Camera Focus,Qwen 3.5 API,Buffer,Detections List Roll-Up,Size Measurement,Image Threshold,Stitch Images,Qwen 3.6 API,Heatmap Visualization,SIFT Comparison,Morphological Transformation,Florence-2 Model,Halo Visualization,Crop Visualization,Camera Calibration,Florence-2 Model,GLM-OCR,Dot Visualization,Icon Visualization,Google Gemini,Dynamic Zone,Clip Comparison,Image Contours,Pixelate Visualization,Polygon Zone Visualization,Reference Path Visualization,Dimension Collapse,Motion Detection,Blur Visualization,Anthropic Claude,Background Subtraction,Text Display,Clip Comparison,Stability AI Image Generation,Perspective Correction,Anthropic Claude,Bounding Box Visualization,Depth Estimation,Classification Label Visualization,Image Slicer,Absolute Static Crop,Image Blur,Stability AI Inpainting,Polygon Visualization,Image Convert Grayscale,SIFT,OpenAI,Google Gemini,Label Visualization,Corner Visualization,Grid Visualization,Dynamic Crop,Contrast Equalization,Keypoint Visualization,Triangle Visualization,QR Code Generator,Halo Visualization,Circle Visualization,Camera Focus,Mask Visualization,Morphological Transformation,OpenAI,Contrast Enhancement,MoonshotAI Kimi,Llama 3.2 Vision,Background Color Visualization - outputs:
Roboflow Dataset Upload,Line Counter Visualization,Email Notification,Object Detection Model,Gaze Detection,Instance Segmentation Model,Color Visualization,Object Detection Model,Multi-Label Classification Model,Ellipse Visualization,Polygon Visualization,Single-Label Classification Model,Detections Consensus,Time in Zone,Detections Classes Replacement,Webhook Sink,Model Comparison Visualization,Trace Visualization,Object Detection Model,Roboflow Custom Metadata,Instance Segmentation Model,Single-Label Classification Model,Template Matching,Heatmap Visualization,SIFT Comparison,Halo Visualization,Instance Segmentation Model,Crop Visualization,Camera Calibration,Multi-Label Classification Model,Time in Zone,Dot Visualization,SAM 3,Twilio SMS Notification,Model Monitoring Inference Aggregator,Icon Visualization,Roboflow Dataset Upload,Dynamic Zone,Keypoint Detection Model,Pixelate Visualization,Twilio SMS/MMS Notification,Time in Zone,Polygon Zone Visualization,Reference Path Visualization,Motion Detection,Blur Visualization,Text Display,Perspective Correction,Bounding Box Visualization,Multi-Label Classification Model,Classification Label Visualization,Stability AI Inpainting,Polygon Visualization,SAM 3,Single-Label Classification Model,Roboflow Vision Events,Google Gemini,Label Visualization,Corner Visualization,Keypoint Detection Model,Keypoint Visualization,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,Keypoint Detection Model,Background Color Visualization,Email Notification,PTZ Tracking (ONVIF),Slack Notification
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
VLM As Classifier in version v2 has.
Bindings
-
input
- `image` (image): Input image that was used to generate the VLM prediction. Used to extract image dimensions (width, height) and metadata (parent_id) for the classification prediction. The same image that was provided to the VLM/LLM block should be used here to maintain consistency.
- `vlm_output` (language_model_output): String output from a VLM or LLM block containing a classification prediction in JSON format. Can be a raw JSON string (e.g., '{"class_name": "dog", "confidence": 0.95}') or JSON wrapped in a Markdown `` ```json `` block. Supports two formats: single-class (with 'class_name' and 'confidence' fields) or multi-label (with 'predicted_classes' array). If multiple markdown blocks exist, only the first is parsed.
- `classes` (list_of_values): List of all class names used by the classification model, in order. Required to generate the mapping between class names (from VLM output) and class IDs (for classification format). Classes are mapped to IDs by index: first class = ID 0, second = ID 1, etc. Classes from VLM output that are not in this list get class_id = -1. Should match the classes the VLM was asked to classify.
-
output
- `error_status` (boolean): Boolean flag
- `predictions` (classification_prediction): Predictions from classifier
- `inference_id` (inference_id): Inference identifier
Example JSON definition of step VLM As Classifier in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/vlm_as_classifier@v2",
"image": "$inputs.image",
"vlm_output": "$steps.lmm.output",
"classes": [
"$steps.lmm.classes",
"$inputs.classes",
[
"dog",
"cat",
"bird"
],
[
"class_a",
"class_b"
]
]
}
v1¶
Class: VLMAsClassifierBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.formatters.vlm_as_classifier.v1.VLMAsClassifierBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Parses JSON strings from Visual Language Models (VLMs) and Large Language Models (LLMs) into a standardized classification prediction format. The block extracts class predictions, maps class names to class IDs, handles both single-class and multi-label formats, and converts VLM/LLM text outputs into workflow-compatible classification results for VLM-based classification, LLM classification parsing, and text-to-classification conversion workflows.
How This Block Works¶
This block converts VLM/LLM text outputs containing classification predictions into standardized classification prediction format. The block:
1. Receives image and VLM output string containing classification results in JSON format
2. Parses JSON content from VLM output:
Handles Markdown-wrapped JSON:
- Searches for JSON wrapped in Markdown code blocks fenced with `` ```json ``
- This format is common in LLM/VLM responses
- If multiple markdown JSON blocks are found, only the first block is parsed
- Extracts JSON content from within markdown tags
Handles raw JSON strings:
- If no markdown blocks are found, attempts to parse the entire string as JSON
- Supports standard JSON format strings
3. Detects classification format and parses accordingly:
Single-Class Classification Format:
- Detects format containing "class_name" and "confidence" fields
- Extracts the predicted class name and confidence score
- Creates classification prediction with single top class
- Maps class name to class ID using provided classes list
Multi-Label Classification Format:
- Detects format containing "predicted_classes" array
- Extracts all predicted classes with their confidence scores
- Handles duplicate classes by taking maximum confidence
- Maps all class names to class IDs using provided classes list
4. Creates class name to class ID mapping:
- Uses the provided classes list to create index mapping (class_name → class_id)
- Maps classes in order (first class = ID 0, second = ID 1, etc.)
- Classes not in the provided list get class_id = -1
5. Normalizes confidence scores:
- Scales confidence values to valid range [0.0, 1.0]
- Clamps values outside the range to 0.0 or 1.0
6. Constructs classification prediction:
- Includes image dimensions (width, height) from input image
- For single-class: includes "top" class, confidence, and predictions array
- For multi-label: includes "predicted_classes" list and predictions dictionary
- Includes inference_id and parent_id for tracking
- Formats prediction in standard classification prediction format
7. Handles errors:
- Sets error_status to True if JSON parsing fails
- Sets error_status to True if classification format cannot be determined
- Returns None for predictions when errors occur
- Always includes inference_id for tracking
8. Returns classification prediction:
- Outputs predictions in standard classification format (compatible with classification blocks)
- Outputs error_status indicating parsing success/failure
- Outputs inference_id for tracking and lineage
The block enables using VLMs/LLMs for classification by converting their text-based JSON outputs into standardized classification predictions that can be used in workflows like any other classification model output.
Common Use Cases¶
- VLM-Based Classification: Use Visual Language Models for image classification by parsing VLM outputs into classification predictions (e.g., classify images with VLMs, use GPT-4V for classification, parse Claude Vision classifications), enabling VLM classification workflows
- LLM Classification Parsing: Parse LLM text outputs containing classification results into standardized format (e.g., parse GPT classification outputs, convert LLM predictions to classification format, use LLMs for classification), enabling LLM classification workflows
- Text-to-Classification Conversion: Convert text-based classification outputs from models into workflow-compatible classification predictions (e.g., convert text predictions to classification format, parse text-based classifications, convert model outputs to classifications), enabling text-to-classification workflows
- Multi-Format Classification Support: Handle both single-class and multi-label classification formats from VLM/LLM outputs (e.g., support single-label VLM classifications, support multi-label VLM classifications, handle different classification formats), enabling flexible classification workflows
- VLM Integration: Integrate VLM outputs into classification workflows (e.g., use VLMs in classification pipelines, integrate VLM predictions with classification blocks, combine VLM and traditional classification), enabling VLM integration workflows
- Flexible Classification Sources: Enable classification from various model types that output text/JSON (e.g., use any text-output model for classification, convert model outputs to classifications, parse various classification formats), enabling flexible classification workflows
Connecting to Other Blocks¶
This block receives images and VLM outputs and produces classification predictions:
- After VLM/LLM blocks to parse classification outputs into standard format (e.g., VLM output to classification, LLM output to classification, parse model outputs), enabling VLM-to-classification workflows
- Before classification-based blocks to use parsed classifications (e.g., use parsed classifications in workflows, provide classifications to downstream blocks, use VLM classifications with classification blocks), enabling classification-to-workflow workflows
- Before filtering blocks to filter based on VLM classifications (e.g., filter by VLM classification results, use parsed classifications for filtering, apply filters to VLM predictions), enabling classification-to-filter workflows
- Before analytics blocks to analyze VLM classification results (e.g., analyze VLM classifications, perform analytics on parsed classifications, track VLM classification metrics), enabling classification analytics workflows
- Before visualization blocks to display VLM classification results (e.g., visualize VLM classifications, display parsed classification predictions, show VLM classification outputs), enabling classification visualization workflows
- In workflow outputs to provide VLM classifications as final output (e.g., VLM classification outputs, parsed classification results, VLM-based classification outputs), enabling classification output workflows
Requirements¶
This block requires an image input (for metadata and dimensions) and a VLM output string containing JSON classification data. The JSON can be raw JSON or wrapped in Markdown code blocks fenced with `` ```json ``. The block supports two JSON formats: single-class (with "class_name" and "confidence" fields) and multi-label (with "predicted_classes" array). The classes parameter must contain a list of all class names used by the model to generate class_id mappings. Classes are mapped to IDs by index (first class = 0, second = 1, etc.). Classes not in the list get class_id = -1. Confidence scores are normalized to the [0.0, 1.0] range. The block outputs classification predictions in standard format (compatible with classification blocks), error_status (boolean), and inference_id (string) for tracking.
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/vlm_as_classifier@v1` to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
name |
str |
Enter a unique identifier for this step. | ❌ |
classes |
List[str] |
List of all class names used by the classification model, in order. Required to generate mapping between class names (from VLM output) and class IDs (for classification format). Classes are mapped to IDs by index: first class = ID 0, second = ID 1, etc. Classes from VLM output that are not in this list get class_id = -1. Should match the classes the VLM was asked to classify. | ✅ |
The Refs column marks possibility to parametrise the property with dynamic values available
in workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to VLM As Classifier in version v1.
- inputs:
Line Counter Visualization,Stability AI Outpainting,Google Gemma API,Image Slicer,Image Preprocessing,Google Gemini,Color Visualization,OpenAI,Ellipse Visualization,Polygon Visualization,Anthropic Claude,Relative Static Crop,Model Comparison Visualization,Trace Visualization,Camera Focus,Qwen 3.5 API,Buffer,Detections List Roll-Up,Size Measurement,Image Threshold,Stitch Images,Qwen 3.6 API,Heatmap Visualization,SIFT Comparison,Morphological Transformation,Florence-2 Model,Halo Visualization,Crop Visualization,Camera Calibration,Florence-2 Model,GLM-OCR,Dot Visualization,Icon Visualization,Google Gemini,Dynamic Zone,Clip Comparison,Image Contours,Pixelate Visualization,Polygon Zone Visualization,Reference Path Visualization,Dimension Collapse,Motion Detection,Blur Visualization,Anthropic Claude,Background Subtraction,Text Display,Clip Comparison,Stability AI Image Generation,Perspective Correction,Anthropic Claude,Bounding Box Visualization,Depth Estimation,Classification Label Visualization,Image Slicer,Absolute Static Crop,Image Blur,Stability AI Inpainting,Polygon Visualization,Image Convert Grayscale,SIFT,OpenAI,Google Gemini,Label Visualization,Corner Visualization,Grid Visualization,Dynamic Crop,Contrast Equalization,Keypoint Visualization,Triangle Visualization,QR Code Generator,Halo Visualization,Circle Visualization,Camera Focus,Mask Visualization,Morphological Transformation,OpenAI,Contrast Enhancement,MoonshotAI Kimi,Llama 3.2 Vision,Background Color Visualization - outputs:
Roboflow Dataset Upload,Line Counter Visualization,Gaze Detection,Instance Segmentation Model,Distance Measurement,Color Visualization,Multi-Label Classification Model,Ellipse Visualization,Polygon Visualization,Single-Label Classification Model,Detections Consensus,Detections Classes Replacement,Cache Set,Webhook Sink,Trace Visualization,Object Detection Model,Stitch OCR Detections,Qwen 3.5 API,OpenAI,SAM 3,Size Measurement,Image Threshold,Heatmap Visualization,Florence-2 Model,Halo Visualization,Path Deviation,GLM-OCR,Dot Visualization,S3 Sink,Path Deviation,Semantic Segmentation Model,Twilio SMS Notification,Seg Preview,Model Monitoring Inference Aggregator,Google Gemini,Roboflow Dataset Upload,Dynamic Zone,Pixelate Visualization,Line Counter,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Text Display,Stability AI Image Generation,Perspective Correction,Anthropic Claude,Line Counter,Bounding Box Visualization,Depth Estimation,Stability AI Inpainting,Polygon Visualization,Roboflow Vision Events,Google Gemini,Label Visualization,Contrast Equalization,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,Slack Notification,CLIP Embedding Model,Detections Stitch,Object Detection Model,Email Notification,Google Gemma API,Stability AI Outpainting,Google Vision OCR,Google Gemini,Image Preprocessing,Object Detection Model,OpenAI,Anthropic Claude,Time in Zone,Model Comparison Visualization,Roboflow Custom Metadata,YOLO-World Model,Instance Segmentation Model,Perception Encoder Embedding Model,Single-Label Classification Model,Template Matching,Qwen 3.6 API,SIFT Comparison,Morphological Transformation,Instance Segmentation Model,CogVLM,Crop Visualization,Camera Calibration,Florence-2 Model,Multi-Label Classification Model,Time in Zone,SAM 3,Icon Visualization,Local File Sink,Keypoint Detection Model,Time in 
Zone,Reference Path Visualization,Anthropic Claude,Clip Comparison,LMM,Pixel Color Count,Multi-Label Classification Model,Classification Label Visualization,Image Blur,SAM 3,Single-Label Classification Model,OpenAI,Corner Visualization,Keypoint Detection Model,Dynamic Crop,Keypoint Visualization,Moondream2,QR Code Generator,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Background Color Visualization,PTZ Tracking (ONVIF),Stitch OCR Detections,Cache Get
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
VLM As Classifier in version v1 has.
Bindings
-
input
- `image` (image): Input image that was used to generate the VLM prediction. Used to extract image dimensions (width, height) and metadata (parent_id) for the classification prediction. The same image that was provided to the VLM/LLM block should be used here to maintain consistency.
- `vlm_output` (language_model_output): String output from a VLM or LLM block containing a classification prediction in JSON format. Can be a raw JSON string (e.g., '{"class_name": "dog", "confidence": 0.95}') or JSON wrapped in a Markdown `` ```json `` block. Supports two formats: single-class (with 'class_name' and 'confidence' fields) or multi-label (with 'predicted_classes' array). If multiple markdown blocks exist, only the first is parsed.
- `classes` (list_of_values): List of all class names used by the classification model, in order. Required to generate the mapping between class names (from VLM output) and class IDs (for classification format). Classes are mapped to IDs by index: first class = ID 0, second = ID 1, etc. Classes from VLM output that are not in this list get class_id = -1. Should match the classes the VLM was asked to classify.
-
output
- `error_status` (boolean): Boolean flag
- `predictions` (classification_prediction): Predictions from classifier
- `inference_id` (string): String value
Example JSON definition of step VLM As Classifier in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/vlm_as_classifier@v1",
"image": "$inputs.image",
"vlm_output": "$steps.lmm.output",
"classes": [
"$steps.lmm.classes",
"$inputs.classes",
[
"dog",
"cat",
"bird"
],
[
"class_a",
"class_b"
]
]
}