LMM

Class: LMMBlockV1

Source: inference.core.workflows.core_steps.models.foundation.lmm.v1.LMMBlockV1

Ask a question to a Large Multimodal Model (LMM) with an image and text.

You can specify arbitrary text prompts to an LMMBlock.

The LMMBlock supports two LMMs:

  • OpenAI's GPT-4 with Vision, and
  • CogVLM.

You need to provide your OpenAI API key to use the GPT-4 with Vision model. You do not need to provide an API key to use CogVLM.

If you want to classify an image into one or more categories, we recommend using the dedicated LMMForClassificationBlock.

Type identifier

Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/lmm@v1.

Properties

  • name (str): Unique identifier for this step.
  • prompt (str): Unconstrained text prompt passed to the LMM model.
  • lmm_type (str): Type of LMM to be used.
  • lmm_config (LMMConfig): Configuration of the LMM.
  • remote_api_key (str): API key required to call the LMM model. In the current state of development, an OpenAI key is required when lmm_type=gpt_4v, and no additional API key is required for CogVLM calls.
  • json_output (Dict[str, str]): Dictionary that maps the name of each requested output field to its description.

Some of these properties can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
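For instance, instead of hard-coding the prompt, a property can be bound to a workflow input with a selector. The sketch below assumes a workflow input named prompt (and one named openai_api_key) is declared in the workflow's inputs; the selector syntax follows the example step definition on this page:

```json
{
    "name": "my_lmm_step",
    "type": "roboflow_core/lmm@v1",
    "images": "$inputs.image",
    "prompt": "$inputs.prompt",
    "lmm_type": "gpt_4v",
    "remote_api_key": "$inputs.openai_api_key"
}
```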

Available Connections

Compatible Blocks

Check what blocks you can connect to LMM in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check which binding kinds LMM in version v1 has.

Bindings
  • input

    • images (image): The image to infer on.
    • prompt (string): Unconstrained text prompt passed to the LMM model.
    • lmm_type (string): Type of LMM to be used.
    • remote_api_key (Union[secret, string]): API key required to call the LMM model; an OpenAI key is required when lmm_type=gpt_4v, and no additional API key is required for CogVLM calls.
  • output

    • parent_id (parent_id): Identifier of parent for step output.
    • root_parent_id (parent_id): Identifier of the root parent for step output.
    • image (image_metadata): Dictionary with image metadata required by supervision.
    • structured_output (dictionary): Dictionary with the fields requested via json_output, parsed from the model response.
    • raw_output (string): String value.
    • * (*): Equivalent of any element.
Example JSON definition of step LMM in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/lmm@v1",
    "images": "$inputs.image",
    "prompt": "my prompt",
    "lmm_type": "gpt_4v",
    "lmm_config": {
        "gpt_image_detail": "low",
        "gpt_model_version": "gpt-4o",
        "max_tokens": 200
    },
    "remote_api_key": "xxx-xxx",
    "json_output": {
        "count": "number of cats in the picture"
    }
}
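In practice, the step above is embedded in a full workflow definition alongside inputs and outputs. The following is a minimal Python sketch of such a definition; the input and output names ("image", "openai_api_key", "cats") and the surrounding workflow fields are illustrative assumptions, while the step body mirrors the example JSON above:

```python
import json

# Hypothetical workflow definition wrapping the LMM step from this page.
# Selectors ("$inputs...", "$steps...") reference workflow inputs and
# step outputs by name.
workflow_definition = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "openai_api_key"},
    ],
    "steps": [
        {
            "name": "lmm_step",
            "type": "roboflow_core/lmm@v1",
            "images": "$inputs.image",
            "prompt": "my prompt",
            "lmm_type": "gpt_4v",
            "lmm_config": {
                "gpt_image_detail": "low",
                "gpt_model_version": "gpt-4o",
                "max_tokens": 200,
            },
            # The API key is bound to a workflow input rather than
            # hard-coded in the definition.
            "remote_api_key": "$inputs.openai_api_key",
            "json_output": {"count": "number of cats in the picture"},
        }
    ],
    "outputs": [
        {
            # Expose the parsed structured_output of the step as a
            # workflow output named "cats".
            "type": "JsonField",
            "name": "cats",
            "selector": "$steps.lmm_step.structured_output",
        }
    ],
}

# The definition is plain JSON-serialisable data, so it can be stored
# or sent to an inference server as-is.
serialised = json.dumps(workflow_definition)
print(workflow_definition["steps"][0]["type"])
```

Binding remote_api_key to an input keeps the secret out of the stored workflow definition, which is generally preferable to embedding the raw key.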