LMM
Ask a question to a Large Multimodal Model (LMM) with an image and text.
You can specify arbitrary text prompts to an LMMBlock.
The LMMBlock supports two LMMs:
- OpenAI's GPT-4 with Vision
- CogVLM
You need to provide your OpenAI API key to use the GPT-4 with Vision model. You do not need to provide an API key to use CogVLM.
If you want to classify an image into one or more categories, we recommend using the dedicated LMMForClassificationBlock.
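As a concrete illustration, here is a minimal sketch of a CogVLM-backed step. It assumes cog_vlm is the identifier the block accepts for lmm_type, and the step name and prompt are placeholders; since CogVLM needs no API key, remote_api_key is omitted:

{
    "name": "my_cogvlm_step",
    "type": "LMM",
    "images": "$inputs.image",
    "prompt": "Describe the scene in one sentence",
    "lmm_type": "cog_vlm"
}

For a GPT-4 with Vision step, see the full example at the bottom of this page.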
Properties
Name | Type | Description | Refs |
---|---|---|---|
name | str | Unique name of step in workflows. | ❌ |
prompt | str | Holds unconstrained text prompt to LMM model. | ✅ |
lmm_type | str | Type of LMM to be used. | ✅ |
lmm_config | LMMConfig | Configuration of LMM. | ❌ |
remote_api_key | str | Holds API key required to call LMM model - in the current state of development, an OpenAI key is required when lmm_type=gpt_4v and no additional API key is required for CogVLM calls. | ✅ |
json_output | Dict[str, str] | Holds dictionary that maps the name of each requested output field to its description. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
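For example, properties marked ✅ can reference workflow inputs instead of literal values. The sketch below feeds prompt and remote_api_key from hypothetical workflow inputs named prompt and openai_api_key; the $inputs selector follows the same convention as the $inputs.image reference used in the example at the bottom of this page:

{
    "name": "<your_step_name_here>",
    "type": "LMM",
    "images": "$inputs.image",
    "prompt": "$inputs.prompt",
    "lmm_type": "gpt_4v",
    "remote_api_key": "$inputs.openai_api_key"
}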
Available Connections
Check what blocks you can connect to LMM.
- inputs: AbsoluteStaticCrop, Crop, RelativeStaticCrop
- outputs: AbsoluteStaticCrop, ActiveLearningDataCollector, DetectionsConsensus, RoboflowKeypointDetectionModel, OCRModel, RoboflowObjectDetectionModel, ClipComparison, RoboflowMultiLabelClassificationModel, BarcodeDetector, DetectionFilter, RoboflowClassificationModel, RelativeStaticCrop, Crop, DetectionOffset, LMMForClassification, YoloWorldModel, RoboflowInstanceSegmentationModel, LMM, QRCodeDetector, Condition
The available connections depend on the block's binding kinds. Check what binding kinds LMM has.
Bindings
- input
    - images (Batch[image]): Reference to the image to be used as input for step processing.
    - prompt (string): Holds unconstrained text prompt to LMM model.
    - lmm_type (string): Type of LMM to be used.
    - remote_api_key (string): Holds API key required to call LMM model - in the current state of development, an OpenAI key is required when lmm_type=gpt_4v and no additional API key is required for CogVLM calls.
- output
    - parent_id (Batch[parent_id]): Identifier of parent for step output.
    - image (Batch[image_metadata]): Dictionary with image metadata required by supervision.
    - structured_output (Batch[dictionary]): Batch of dictionaries.
    - raw_output (Batch[string]): Batch of string values.
    - * (*): Equivalent of any element.
Example JSON definition of LMM step
{
    "name": "<your_step_name_here>",
    "type": "LMM",
    "images": "$inputs.image",
    "prompt": "my prompt",
    "lmm_type": "gpt_4v",
"lmm_config": "<block_do_not_provide_example>",
"remote_api_key": "xxx-xxx",
"json_output": {
"count": "number of cats in the picture"
}
}
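Given the json_output mapping above, each processed image would contribute one dictionary to structured_output, keyed by the requested field names, along the lines of (illustrative value, assuming the model answers as instructed):

{
    "count": "2"
}

raw_output, by contrast, carries the model's unparsed text response as a plain string.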