LMM¶
Class: LMMBlockV1
Source: inference.core.workflows.core_steps.models.foundation.lmm.v1.LMMBlockV1
Ask a question to a Large Multimodal Model (LMM) with an image and text.
You can specify arbitrary text prompts to an LMMBlock.
The LMMBlock supports two LMMs:
- OpenAI's GPT-4 with Vision;
- CogVLM.
You need to provide your OpenAI API key to use the GPT-4 with Vision model.
If you want to classify an image into one or more categories, we recommend using the dedicated LMMForClassificationBlock.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/lmm@v1 to add the block
as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| prompt | str | Holds unconstrained text prompt to LMM model. | ✅ |
| lmm_type | str | Type of LMM to be used. | ✅ |
| lmm_config | LMMConfig | Configuration of LMM. | ❌ |
| remote_api_key | str | Holds API key required to call LMM model - in current state of development, we require OpenAI key when lmm_type=gpt_4v. | ✅ |
| json_output | Dict[str, str] | Holds dictionary that maps the name of each requested output field to its description. | ❌ |
The Refs column indicates whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
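For example, a ✅-marked property such as prompt or remote_api_key can be given either a literal value or a selector that is resolved at runtime. The sketch below (written as a Python dict; the input names prompt and open_ai_key are illustrative, not part of the block) shows both styles:

```python
# Hypothetical step definition: ✅-marked properties bound to workflow inputs
# via "$inputs.*" selectors, ❌-marked properties given static values.
lmm_step = {
    "name": "lmm",
    "type": "roboflow_core/lmm@v1",
    "images": "$inputs.image",
    "prompt": "$inputs.prompt",               # dynamic value supplied per run
    "lmm_type": "gpt_4v",                     # literal; could also be a selector
    "remote_api_key": "$inputs.open_ai_key",  # dynamic secret
    "json_output": {"count": "number of cats in the picture"},  # static only (❌)
}
```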
Available Connections¶
Compatible Blocks
Check what blocks you can connect to LMM in version v1.
- inputs:
Llama 3.2 Vision,Blur Visualization,Perspective Correction,Polygon Zone Visualization,Bounding Box Visualization,QR Code Generator,Pixelate Visualization,Trace Visualization,Roboflow Custom Metadata,Image Threshold,Polygon Visualization,Dynamic Crop,Icon Visualization,Image Slicer,Stability AI Outpainting,Model Comparison Visualization,LMM,OpenAI,Classification Label Visualization,Stitch Images,Florence-2 Model,Mask Visualization,Single-Label Classification Model,Relative Static Crop,Absolute Static Crop,SIFT Comparison,Google Gemini,Circle Visualization,Florence-2 Model,LMM For Classification,Ellipse Visualization,Image Convert Grayscale,Object Detection Model,OCR Model,Image Preprocessing,Color Visualization,Image Blur,Stability AI Image Generation,Anthropic Claude,Google Vision OCR,Keypoint Visualization,Camera Calibration,Local File Sink,EasyOCR,Image Slicer,Email Notification,VLM as Detector,Roboflow Dataset Upload,Background Color Visualization,Triangle Visualization,Slack Notification,Keypoint Detection Model,Halo Visualization,Corner Visualization,Google Gemini,Model Monitoring Inference Aggregator,Roboflow Dataset Upload,Dot Visualization,Image Contours,Multi-Label Classification Model,Twilio SMS Notification,Instance Segmentation Model,VLM as Classifier,CSV Formatter,Reference Path Visualization,Morphological Transformation,OpenAI,Webhook Sink,Contrast Equalization,Camera Focus,Stitch OCR Detections,Stability AI Inpainting,CogVLM,Clip Comparison,Line Counter Visualization,Email Notification,Crop Visualization,Grid Visualization,OpenAI,SIFT,Depth Estimation,Background Subtraction,Label Visualization,Anthropic Claude,OpenAI
- outputs:
Llama 3.2 Vision,SAM 3,Polygon Zone Visualization,Distance Measurement,Pixelate Visualization,Trace Visualization,Roboflow Custom Metadata,QR Code Detection,Detections Transformation,Image Threshold,Rate Limiter,Image Slicer,Icon Visualization,Stability AI Outpainting,Model Comparison Visualization,Single-Label Classification Model,Dynamic Zone,Clip Comparison,Cache Get,Stitch Images,Size Measurement,Florence-2 Model,Single-Label Classification Model,SAM 3,SIFT Comparison,Relative Static Crop,Absolute Static Crop,Moondream2,Florence-2 Model,LMM For Classification,Anthropic Claude,Image Blur,Stability AI Image Generation,Local File Sink,VLM as Detector,Camera Calibration,Keypoint Detection Model,First Non Empty Or Default,Detections Filter,Gaze Detection,Background Color Visualization,Keypoint Detection Model,Delta Filter,Multi-Label Classification Model,Google Gemini,Model Monitoring Inference Aggregator,Roboflow Dataset Upload,Byte Tracker,Instance Segmentation Model,VLM as Classifier,CSV Formatter,Morphological Transformation,Motion Detection,Data Aggregator,OpenAI,Barcode Detection,YOLO-World Model,JSON Parser,Clip Comparison,CogVLM,Expression,Identify Changes,Path Deviation,CLIP Embedding Model,Crop Visualization,Grid Visualization,Buffer,Bounding Rectangle,SIFT,SAM 3,Anthropic Claude,Time in Zone,Detection Offset,Qwen2.5-VL,SIFT Comparison,OpenAI,Line Counter,Detections Consensus,Path Deviation,Cosine Similarity,Perception Encoder Embedding Model,Blur Visualization,Dimension Collapse,Perspective Correction,Bounding Box Visualization,QR Code Generator,Velocity,Segment Anything 2 Model,Polygon Visualization,Dynamic Crop,Identify Outliers,LMM,OpenAI,Classification Label Visualization,Mask Visualization,Time in Zone,Google Gemini,Circle Visualization,Time in Zone,Ellipse Visualization,Image Convert Grayscale,Object Detection Model,OCR Model,SmolVLM2,Image Preprocessing,Color Visualization,Google Vision OCR,Keypoint Visualization,EasyOCR,Line Counter,Email Notification,Image Slicer,VLM as Detector,Property Definition,Detections Combine,Byte Tracker,Roboflow Dataset Upload,Overlap Filter,Triangle Visualization,Slack Notification,Halo Visualization,Detections Stabilizer,Corner Visualization,Object Detection Model,Dot Visualization,Image Contours,Twilio SMS Notification,Detections Merge,Multi-Label Classification Model,Seg Preview,Reference Path Visualization,Byte Tracker,Webhook Sink,PTZ Tracking (ONVIF),Detections Classes Replacement,Instance Segmentation Model,Detections Stitch,Contrast Equalization,Camera Focus,Stitch OCR Detections,Stability AI Inpainting,Line Counter Visualization,Cache Set,Template Matching,Email Notification,VLM as Classifier,OpenAI,Background Subtraction,Depth Estimation,Continue If,Label Visualization,Pixel Color Count,Dominant Color
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
LMM in version v1 has.
Bindings
- input
  - images (image): The image to infer on.
  - prompt (string): Holds unconstrained text prompt to LMM model.
  - lmm_type (string): Type of LMM to be used.
  - remote_api_key (Union[secret, string]): Holds API key required to call LMM model - in current state of development, we require OpenAI key when lmm_type=gpt_4v.
- output
  - parent_id (parent_id): Identifier of parent for step output.
  - root_parent_id (parent_id): Identifier of parent for step output.
  - image (image_metadata): Dictionary with image metadata required by supervision.
  - structured_output (dictionary): Dictionary.
  - raw_output (string): String value.
  - * (*): Equivalent of any element.
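Downstream steps and workflow outputs reference these bindings with $steps.<step_name>.<output_name> selectors. As a hedged sketch (the output field names are chosen for illustration), the step's raw_output and structured_output could be exposed as workflow outputs like this:

```python
# Illustrative "outputs" section of a workflow specification consuming an
# LMM step named "lmm"; JsonField is the standard Workflows output type.
workflow_outputs = [
    {"type": "JsonField", "name": "lmm_raw", "selector": "$steps.lmm.raw_output"},
    {"type": "JsonField", "name": "lmm_structured", "selector": "$steps.lmm.structured_output"},
]
```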
Example JSON definition of step LMM in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/lmm@v1",
    "images": "$inputs.image",
    "prompt": "my prompt",
    "lmm_type": "gpt_4v",
    "lmm_config": {
        "gpt_image_detail": "low",
        "gpt_model_version": "gpt-4o",
        "max_tokens": 200
    },
    "remote_api_key": "xxx-xxx",
    "json_output": {
        "count": "number of cats in the picture"
    }
}
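As a rough, non-authoritative sketch of how a step like the one above can be used end to end, it can be embedded in a full workflow specification and executed against an inference server with inference_sdk. The input names, output wiring, and server URL below are assumptions for illustration, not part of the block definition.

```python
# Minimal sketch, assuming a locally running inference server and the
# inference_sdk package; names such as "open_ai_key" are illustrative.
from inference_sdk import InferenceHTTPClient

specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "open_ai_key"},
    ],
    "steps": [
        {
            "name": "lmm",
            "type": "roboflow_core/lmm@v1",
            "images": "$inputs.image",
            "prompt": "How many cats are in the picture?",
            "lmm_type": "gpt_4v",
            "remote_api_key": "$inputs.open_ai_key",
            "json_output": {"count": "number of cats in the picture"},
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "result",
            "selector": "$steps.lmm.structured_output",
        }
    ],
}

client = InferenceHTTPClient(api_url="http://localhost:9001", api_key="<ROBOFLOW_API_KEY>")
result = client.run_workflow(
    specification=specification,
    images={"image": "path/to/image.jpg"},
    parameters={"open_ai_key": "<OPENAI_API_KEY>"},
)
print(result)
```

In this sketch, structured_output should contain the fields requested via json_output (here, count), while raw_output carries the unparsed model response.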