
CogVLM

Version v1

Ask a question to CogVLM, an open source vision-language model.

This model requires a GPU and can only be run on self-hosted devices; it is not available on the Roboflow Hosted API.

This model was previously part of the LMM block.

Type identifier

Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/cog_vlm@v1

Properties

Name                 Type             Description                                                                Refs
name                 str              The unique name of this step.
prompt               str              Text prompt to the CogVLM model.                                           yes
json_output_format   Dict[str, str]   Dictionary mapping each requested output field name to its description.

The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
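
For example, a parametrisable property like prompt can take its value from a workflow input instead of a hard-coded string. A minimal sketch (the input name question is an arbitrary choice for illustration):

{
    "name": "my_cog_vlm",
    "type": "roboflow_core/cog_vlm@v1",
    "images": "$inputs.image",
    "prompt": "$inputs.question"
}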

Available Connections

The blocks you can connect to CogVLM in version v1 depend on its binding kinds, listed below.

Bindings
  • input

    • images (image): The image to infer on.
    • prompt (string): Text prompt to the CogVLM model.
  • output

    • parent_id (parent_id): Identifier of the parent for this step output.
    • root_parent_id (parent_id): Identifier of the root parent for this step output.
    • image (image_metadata): Dictionary with image metadata required by supervision.
    • structured_output (dictionary): Dictionary with the parsed model response, keyed by the fields requested in json_output_format.
    • raw_output (string): Raw text response from the model.
    • * (*): Equivalent of any element.
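
Downstream steps and workflow outputs refer to these fields with $steps selectors. A minimal sketch of a workflow output entry, assuming the step is named my_cog_vlm:

{
    "type": "JsonField",
    "name": "answer",
    "selector": "$steps.my_cog_vlm.raw_output"
}
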
Example JSON definition of step CogVLM in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/cog_vlm@v1",
    "images": "$inputs.image",
    "prompt": "my prompt",
    "json_output_format": {
        "count": "number of cats in the picture"
    }
}
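
To run the block end to end, embed the step in a complete workflow definition. The sketch below assumes the standard Workflows scaffolding (version, inputs, steps, outputs) and the arbitrary step name my_cog_vlm; adapt the input and output names to your pipeline:

{
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "question"}
    ],
    "steps": [
        {
            "name": "my_cog_vlm",
            "type": "roboflow_core/cog_vlm@v1",
            "images": "$inputs.image",
            "prompt": "$inputs.question",
            "json_output_format": {
                "count": "number of cats in the picture"
            }
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "structured_answer",
            "selector": "$steps.my_cog_vlm.structured_output"
        }
    ]
}

With json_output_format provided, the block asks the model to answer in JSON and exposes the parsed result as structured_output, while the unparsed model response remains available as raw_output.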