LMM For Classification¶
Class: LMMForClassificationBlockV1
Source: inference.core.workflows.core_steps.models.foundation.lmm_classifier.v1.LMMForClassificationBlockV1
Classify an image into one or more categories using a Large Multimodal Model (LMM). You can specify arbitrary classes for the block to classify against.
The block supports two LMMs:
- OpenAI's GPT-4 with Vision (you need to provide your OpenAI API key to use this model).
- CogVLM.
Type identifier¶
Use the following identifier in the step "type" field: `roboflow_core/lmm_for_classification@v1` to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `lmm_type` | `str` | Type of LMM to be used. | ✅ |
| `classes` | `List[str]` | List of classes that the LMM shall classify against. | ✅ |
| `lmm_config` | `LMMConfig` | Configuration of the LMM. | ❌ |
| `remote_api_key` | `str` | API key required to call the LMM model. In the current state of development, an OpenAI key is required when `lmm_type=gpt_4v`. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
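For instance, properties marked ✅ can be bound to workflow inputs instead of hard-coded literals, so values such as the class list or the API key are supplied at runtime. A minimal sketch, assuming hypothetical workflow input names (`classes_param`, `api_key_param`) chosen purely for illustration:

```python
# Illustrative step definition: parametrisable properties (marked with a check
# in the Refs column) are bound to workflow inputs via "$inputs.<name>"
# selectors. The input names "classes_param" and "api_key_param" are made up
# for this example; any valid workflow input name works.
step = {
    "name": "lmm_classifier",
    "type": "roboflow_core/lmm_for_classification@v1",
    "images": "$inputs.image",
    "lmm_type": "gpt_4v",                       # could also be bound dynamically
    "classes": "$inputs.classes_param",         # resolved at workflow runtime
    "remote_api_key": "$inputs.api_key_param",  # keeps the secret out of the definition
}

# Properties marked with a cross (name, lmm_config) must stay static.
assert step["classes"].startswith("$inputs.")
```

Binding the API key this way also avoids committing credentials into the workflow definition itself.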
Available Connections¶
Compatible Blocks
Check what blocks you can connect to LMM For Classification in version v1.
- inputs:
Google Vision OCR,Label Visualization,LMM For Classification,Blur Visualization,Background Color Visualization,Contrast Equalization,Bounding Box Visualization,Keypoint Visualization,Stability AI Outpainting,Reference Path Visualization,Image Slicer,Pixelate Visualization,Single-Label Classification Model,Clip Comparison,CSV Formatter,Image Preprocessing,Color Visualization,SIFT Comparison,Object Detection Model,Email Notification,Anthropic Claude,Circle Visualization,Image Contours,Polygon Zone Visualization,Ellipse Visualization,Clip Comparison,Email Notification,VLM as Classifier,Model Monitoring Inference Aggregator,OCR Model,Absolute Static Crop,Depth Estimation,LMM,Morphological Transformation,Roboflow Dataset Upload,Crop Visualization,OpenAI,Image Convert Grayscale,Florence-2 Model,CogVLM,Roboflow Custom Metadata,VLM as Detector,Classification Label Visualization,Buffer,Stitch OCR Detections,Keypoint Detection Model,Camera Calibration,Polygon Visualization,Icon Visualization,Triangle Visualization,Roboflow Dataset Upload,Anthropic Claude,Model Comparison Visualization,Corner Visualization,Florence-2 Model,Google Gemini,Google Gemini,EasyOCR,Line Counter Visualization,Grid Visualization,Halo Visualization,Size Measurement,Stability AI Image Generation,QR Code Generator,Dynamic Zone,Twilio SMS Notification,Relative Static Crop,Dot Visualization,Llama 3.2 Vision,Image Blur,Slack Notification,Dimension Collapse,OpenAI,Local File Sink,Multi-Label Classification Model,Image Slicer,OpenAI,Stability AI Inpainting,Dynamic Crop,Camera Focus,Webhook Sink,Image Threshold,Instance Segmentation Model,Perspective Correction,Mask Visualization,Trace Visualization,OpenAI,Stitch Images,SIFT
- outputs:
Google Vision OCR,Label Visualization,LMM For Classification,Background Color Visualization,Contrast Equalization,Reference Path Visualization,Keypoint Visualization,Stability AI Outpainting,Bounding Box Visualization,SAM 3,Perception Encoder Embedding Model,Seg Preview,Image Preprocessing,SAM 3,Color Visualization,SIFT Comparison,Path Deviation,Email Notification,Cache Set,Anthropic Claude,Circle Visualization,Polygon Zone Visualization,Ellipse Visualization,Line Counter,Email Notification,Clip Comparison,Moondream2,Model Monitoring Inference Aggregator,Path Deviation,LMM,Time in Zone,Morphological Transformation,Roboflow Dataset Upload,Crop Visualization,OpenAI,Florence-2 Model,SAM 3,CogVLM,Roboflow Custom Metadata,Classification Label Visualization,Stitch OCR Detections,Segment Anything 2 Model,Time in Zone,Line Counter,YOLO-World Model,Polygon Visualization,PTZ Tracking (ONVIF),CLIP Embedding Model,Detections Classes Replacement,Icon Visualization,Cache Get,Triangle Visualization,Roboflow Dataset Upload,Anthropic Claude,Model Comparison Visualization,Florence-2 Model,Distance Measurement,Corner Visualization,Google Gemini,Google Gemini,Line Counter Visualization,Halo Visualization,Size Measurement,Stability AI Image Generation,QR Code Generator,Twilio SMS Notification,Time in Zone,Dot Visualization,Detections Stitch,Llama 3.2 Vision,Image Blur,Slack Notification,OpenAI,Local File Sink,Instance Segmentation Model,OpenAI,Stability AI Inpainting,Dynamic Crop,Pixel Color Count,Webhook Sink,Instance Segmentation Model,Image Threshold,Mask Visualization,OpenAI,Perspective Correction,Trace Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
LMM For Classification in version v1 has.
Bindings
- input
    - `images` (`image`): The image to infer on.
    - `lmm_type` (`string`): Type of LMM to be used.
    - `classes` (`list_of_values`): List of classes that the LMM shall classify against.
    - `remote_api_key` (`Union[string, secret]`): API key required to call the LMM model. In the current state of development, an OpenAI key is required when `lmm_type=gpt_4v`.
- output
    - `raw_output` (`string`): String value.
    - `top` (`top_class`): String value representing the top class predicted by the classification model.
    - `parent_id` (`parent_id`): Identifier of parent for step output.
    - `root_parent_id` (`parent_id`): Identifier of parent for step output.
    - `image` (`image_metadata`): Dictionary with image metadata required by supervision.
    - `prediction_type` (`prediction_type`): String value with type of prediction.
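To make the output bindings concrete, here is a hedged sketch of what a single result from this step might look like and how a downstream consumer could read the predicted class. The field values are illustrative placeholders, not a real model response:

```python
# Hypothetical result shaped after the output bindings listed above; the
# concrete values ("cat", the metadata dict, etc.) are invented for
# illustration only.
result = {
    "raw_output": "cat",                 # raw string returned by the LMM
    "top": "cat",                        # top class predicted for the image
    "parent_id": "image.[0]",            # identifier of the parent step output
    "root_parent_id": "image.[0]",       # identifier of the root parent
    "image": {"width": 640, "height": 480},  # image metadata for supervision
    "prediction_type": "classification", # type of prediction produced
}

# A downstream consumer would typically read the "top" field.
top_class = result["top"]
```

The `raw_output` field preserves the LMM's unparsed answer, which can be useful for debugging when `top` does not match any of the requested classes.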
Example JSON definition of step LMM For Classification in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/lmm_for_classification@v1",
"images": "$inputs.image",
"lmm_type": "gpt_4v",
"classes": [
"a",
"b"
],
"lmm_config": {
"gpt_image_detail": "low",
"gpt_model_version": "gpt-4o",
"max_tokens": 200
},
"remote_api_key": "xxx-xxx"
}
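For context, the step above might sit inside a complete workflow specification roughly as follows. This is a sketch, not an authoritative schema reference: the surrounding `version`/`inputs`/`steps`/`outputs` scaffolding follows the general Workflows definition format, and the input and output names (`image`, `open_ai_key`, `classification`) are arbitrary choices for this example:

```python
# Illustrative workflow specification embedding the example step from above.
# Input/output names are arbitrary; the "$inputs." and "$steps." selectors
# wire the pieces together.
workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "open_ai_key"},
    ],
    "steps": [
        {
            "name": "lmm_classifier",
            "type": "roboflow_core/lmm_for_classification@v1",
            "images": "$inputs.image",
            "lmm_type": "gpt_4v",
            "classes": ["a", "b"],
            "lmm_config": {
                "gpt_image_detail": "low",
                "gpt_model_version": "gpt-4o",
                "max_tokens": 200,
            },
            # Binding the key to a workflow input keeps it out of the definition.
            "remote_api_key": "$inputs.open_ai_key",
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "classification",
            # Expose the step's "top" output binding as the workflow result.
            "selector": "$steps.lmm_classifier.top",
        }
    ],
}
```

Such a specification could then be submitted to a running inference server for execution, with the image and the OpenAI key passed as runtime inputs.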