LMM For Classification

Class: LMMForClassificationBlockV1

Source: inference.core.workflows.core_steps.models.foundation.lmm_classifier.v1.LMMForClassificationBlockV1

Classify an image into one or more categories using a Large Multimodal Model (LMM).

You can specify arbitrary classes for the block to classify against.

The block supports two LMMs:

  • OpenAI's GPT-4 with Vision, and;
  • CogVLM.

You need to provide your OpenAI API key to use the GPT-4 with Vision model. You do not need to provide an API key to use CogVLM.
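As a hedged illustration (the helper below is not part of the inference library, and the exact `lmm_type` identifiers are assumptions based on this page), the API-key requirement could be checked before submitting a workflow:

```python
from typing import Optional


def validate_lmm_step(lmm_type: str, remote_api_key: Optional[str]) -> None:
    """Hypothetical validation helper, not part of the block's API.

    GPT-4 with Vision is a remote OpenAI model, so an OpenAI API key is
    mandatory; CogVLM does not require an additional key.
    """
    supported = {"gpt_4v", "cog_vlm"}  # assumed identifiers
    if lmm_type not in supported:
        raise ValueError(f"Unsupported lmm_type: {lmm_type!r}")
    if lmm_type == "gpt_4v" and not remote_api_key:
        raise ValueError("remote_api_key (OpenAI key) is required for gpt_4v")


validate_lmm_step("cog_vlm", None)     # OK: no key needed for CogVLM
validate_lmm_step("gpt_4v", "sk-xxx")  # OK: key provided
```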

Type identifier

Use the following identifier in the step "type" field: roboflow_core/lmm_for_classification@v1 to add the block as a step in your workflow.

Properties

| Name | Type | Description | Refs |
| --- | --- | --- | --- |
| name | str | Enter a unique identifier for this step. | |
| lmm_type | str | Type of LMM to be used. | |
| classes | List[str] | List of classes that the LMM shall classify against. | |
| lmm_config | LMMConfig | Configuration of the LMM. | |
| remote_api_key | str | API key required to call the LMM. Currently, an OpenAI key is required when lmm_type=gpt_4v; no additional key is needed for CogVLM calls. | |

The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.

Available Connections

Compatible Blocks

Check what blocks you can connect to LMM For Classification in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds LMM For Classification in version v1 has.

Bindings
  • input

    • images (image): The image to infer on.
    • lmm_type (string): Type of LMM to be used.
    • classes (list_of_values): List of classes that the LMM shall classify against.
    • remote_api_key (Union[secret, string]): API key required to call the LMM. Currently, an OpenAI key is required when lmm_type=gpt_4v; no additional key is needed for CogVLM calls.
  • output

    • raw_output (string): String value.
    • top (top_class): String value representing top class predicted by classification model.
    • parent_id (parent_id): Identifier of parent for step output.
    • root_parent_id (parent_id): Identifier of parent for step output.
    • image (image_metadata): Dictionary with image metadata required by supervision.
    • prediction_type (prediction_type): String value with type of prediction.
Example JSON definition of step LMM For Classification in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/lmm_for_classification@v1",
    "images": "$inputs.image",
    "lmm_type": "gpt_4v",
    "classes": [
        "a",
        "b"
    ],
    "lmm_config": {
        "gpt_image_detail": "low",
        "gpt_model_version": "gpt-4o",
        "max_tokens": 200
    },
    "remote_api_key": "xxx-xxx"
}