LMM For Classification¶
Class: LMMForClassificationBlockV1
Source: inference.core.workflows.core_steps.models.foundation.lmm_classifier.v1.LMMForClassificationBlockV1
Classify an image into one or more categories using a Large Multimodal Model (LMM).
You can specify arbitrary classes for the block to classify against.
The block supports two LMMs:
- OpenAI's GPT-4 with Vision (`lmm_type=gpt_4v`). You need to provide your OpenAI API key to use this model.
- CogVLM (`lmm_type=cogvlm`).
Type identifier¶
Use the identifier `roboflow_core/lmm_for_classification@v1` in the step `"type"` field to add the block
as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `lmm_type` | `str` | Type of LMM to be used. | ✅ |
| `classes` | `List[str]` | List of classes that the LMM shall classify against. | ✅ |
| `lmm_config` | `LMMConfig` | Configuration of the LMM. | ❌ |
| `remote_api_key` | `str` | API key required to call the LMM; in the current state of development, an OpenAI key is required when `lmm_type=gpt_4v`. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
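For example, a parametrisable property such as `remote_api_key` can hold a selector that is resolved from workflow inputs at runtime. The sketch below builds such a step definition; the input name `open_ai_key` is an illustrative choice, not a required name:

```python
# Sketch: parametrising `remote_api_key` with a workflow input selector.
# The input name `open_ai_key` is illustrative; any workflow input name works.
step = {
    "name": "lmm_classifier",
    "type": "roboflow_core/lmm_for_classification@v1",
    "images": "$inputs.image",
    "lmm_type": "gpt_4v",
    "classes": ["cat", "dog"],
    # Selector: resolved from workflow inputs at runtime instead of
    # hard-coding the secret into the workflow definition.
    "remote_api_key": "$inputs.open_ai_key",
}

print(step["remote_api_key"])  # → $inputs.open_ai_key
```

Properties marked ✅ in the Refs column (`lmm_type`, `classes`, `remote_api_key`) may hold selectors like this; properties marked ❌ (`name`, `lmm_config`) must be given literal values.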
Available Connections¶
Compatible Blocks
Check what blocks you can connect to LMM For Classification in version v1.
- inputs: Icon Visualization, Image Preprocessing, LMM, Blur Visualization, Twilio SMS Notification, Morphological Transformation, Roboflow Custom Metadata, Stitch Images, Color Visualization, Contrast Equalization, Llama 3.2 Vision, Circle Visualization, Stability AI Image Generation, Image Blur, Reference Path Visualization, SIFT, Detections List Roll-Up, OpenAI, Buffer, Email Notification, Halo Visualization, EasyOCR, Google Gemini, Trace Visualization, Roboflow Dataset Upload, Dimension Collapse, Twilio SMS/MMS Notification, Single-Label Classification Model, Classification Label Visualization, Clip Comparison, Image Convert Grayscale, CogVLM, Google Vision OCR, Background Color Visualization, Stitch OCR Detections, Multi-Label Classification Model, Camera Calibration, VLM as Detector, LMM For Classification, Triangle Visualization, Text Display, Dynamic Zone, Ellipse Visualization, Slack Notification, Mask Visualization, OpenAI, Local File Sink, Anthropic Claude, Polygon Zone Visualization, Polygon Visualization, Absolute Static Crop, Model Comparison Visualization, Label Visualization, Google Gemini, Webhook Sink, Line Counter Visualization, Perspective Correction, Florence-2 Model, Image Slicer, QR Code Generator, Instance Segmentation Model, Stability AI Outpainting, Anthropic Claude, Object Detection Model, VLM as Classifier, Grid Visualization, Relative Static Crop, CSV Formatter, Image Slicer, Size Measurement, Image Contours, Stability AI Inpainting, Camera Focus, Google Gemini, Motion Detection, Florence-2 Model, SIFT Comparison, Keypoint Detection Model, Dot Visualization, Camera Focus, Crop Visualization, Clip Comparison, Bounding Box Visualization, OCR Model, Background Subtraction, OpenAI, Roboflow Dataset Upload, Dynamic Crop, Keypoint Visualization, Email Notification, Model Monitoring Inference Aggregator, Image Threshold, Anthropic Claude, Corner Visualization, Depth Estimation, OpenAI, Pixelate Visualization
- outputs: Icon Visualization, LMM, Image Preprocessing, Twilio SMS Notification, Moondream2, Morphological Transformation, Roboflow Custom Metadata, Detections Classes Replacement, Color Visualization, Contrast Equalization, Cache Set, Llama 3.2 Vision, Stability AI Image Generation, Circle Visualization, Image Blur, Reference Path Visualization, OpenAI, Email Notification, SAM 3, Perception Encoder Embedding Model, Halo Visualization, Google Gemini, Roboflow Dataset Upload, Trace Visualization, Twilio SMS/MMS Notification, Instance Segmentation Model, Classification Label Visualization, Clip Comparison, Google Vision OCR, CogVLM, Path Deviation, Background Color Visualization, Stitch OCR Detections, Segment Anything 2 Model, LMM For Classification, Triangle Visualization, Text Display, CLIP Embedding Model, SAM 3, Ellipse Visualization, Seg Preview, Slack Notification, Local File Sink, OpenAI, Mask Visualization, Anthropic Claude, Polygon Zone Visualization, Time in Zone, Google Gemini, Polygon Visualization, Model Comparison Visualization, Label Visualization, Webhook Sink, Line Counter Visualization, Perspective Correction, Florence-2 Model, QR Code Generator, Instance Segmentation Model, Stability AI Outpainting, Anthropic Claude, Size Measurement, Pixel Color Count, Stability AI Inpainting, Path Deviation, Line Counter, Time in Zone, Google Gemini, Florence-2 Model, SIFT Comparison, Detections Stitch, Line Counter, Cache Get, Dot Visualization, YOLO-World Model, Crop Visualization, Bounding Box Visualization, OpenAI, SAM 3, Roboflow Dataset Upload, Dynamic Crop, Distance Measurement, Keypoint Visualization, Email Notification, PTZ Tracking (ONVIF), Model Monitoring Inference Aggregator, Image Threshold, Time in Zone, Anthropic Claude, Corner Visualization, Depth Estimation, OpenAI
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check the binding kinds
that LMM For Classification in version v1 has.
Bindings

- input
    - `images` (`image`): The image to infer on.
    - `lmm_type` (`string`): Type of LMM to be used.
    - `classes` (`list_of_values`): List of classes that the LMM shall classify against.
    - `remote_api_key` (`Union[string, secret]`): API key required to call the LMM; in the current state of development, an OpenAI key is required when `lmm_type=gpt_4v`.
- output
    - `raw_output` (`string`): String value.
    - `top` (`top_class`): String value representing the top class predicted by the classification model.
    - `parent_id` (`parent_id`): Identifier of parent for step output.
    - `root_parent_id` (`parent_id`): Identifier of parent for step output.
    - `image` (`image_metadata`): Dictionary with image metadata required by supervision.
    - `prediction_type` (`prediction_type`): String value with type of prediction.
Example JSON definition of step LMM For Classification in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/lmm_for_classification@v1",
    "images": "$inputs.image",
    "lmm_type": "gpt_4v",
    "classes": [
        "a",
        "b"
    ],
    "lmm_config": {
        "gpt_image_detail": "low",
        "gpt_model_version": "gpt-4o",
        "max_tokens": 200
    },
    "remote_api_key": "xxx-xxx"
}
```
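A step definition like the one above only runs as part of a complete workflow specification, which also declares inputs and maps step outputs to workflow outputs. The sketch below is a minimal example of such a specification, with a helper that submits it to an inference server; it assumes the `inference-sdk` package and a server reachable at `http://localhost:9001`, and all input/output names are illustrative:

```python
# Sketch: embedding the step in a full workflow specification.
# Assumes the `inference-sdk` package and a locally running inference
# server; input/output names here are illustrative choices.
WORKFLOW_SPECIFICATION = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "open_ai_key"},
    ],
    "steps": [
        {
            "name": "lmm_classifier",
            "type": "roboflow_core/lmm_for_classification@v1",
            "images": "$inputs.image",
            "lmm_type": "gpt_4v",
            "classes": ["cat", "dog"],
            "remote_api_key": "$inputs.open_ai_key",
        }
    ],
    "outputs": [
        # Expose the `top` binding (top_class kind) of the step.
        {
            "type": "JsonField",
            "name": "top_class",
            "selector": "$steps.lmm_classifier.top",
        }
    ],
}


def run(image_path: str, open_ai_key: str):
    # Deferred import so the specification above can be inspected
    # without the SDK installed.
    from inference_sdk import InferenceHTTPClient

    client = InferenceHTTPClient(api_url="http://localhost:9001")
    return client.run_workflow(
        specification=WORKFLOW_SPECIFICATION,
        images={"image": image_path},
        parameters={"open_ai_key": open_ai_key},
    )
```

The `$steps.lmm_classifier.top` selector refers to the `top` output binding documented above; any of the other output bindings (`raw_output`, `image`, etc.) can be exposed the same way.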