CogVLM

Class: CogVLMBlockV1

Source: inference.core.workflows.core_steps.models.foundation.cog_vlm.v1.CogVLMBlockV1

CogVLM reached End Of Life

Due to dependency conflicts with newer models, and security vulnerabilities in the transformers library that were patched only in versions incompatible with this model, we announced End of Life for CogVLM support in inference, effective as of release 0.38.0.

We are leaving this block in the ecosystem until release 0.42.0 so that clients are informed about the change.

As of now, all Workflows using this block stop being functional (a runtime error is raised). After inference release 0.42.0 the block will be removed, and the Execution Engine will raise a compilation error when it encounters the block in a Workflow definition.

Ask a question to CogVLM, an open source vision-language model.

This model requires a GPU and can only be run on self-hosted devices; it is not available on the Roboflow Hosted API.

This model was previously part of the LMM block.

Type identifier

Use the following identifier in the step "type" field: roboflow_core/cog_vlm@v1 to add the block as a step in your workflow.

Properties

| Name | Type | Description | Refs |
| --- | --- | --- | --- |
| name | str | Enter a unique identifier for this step. | |
| prompt | str | Text prompt to the CogVLM model. | |
| json_output_format | Dict[str, str] | Holds dictionary that maps name of requested output field into its description. | |

The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
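To make the json_output_format property concrete: it maps each requested output field name to a natural-language description, the model is prompted to answer as JSON with those keys, and the parsed answer is exposed as structured_output. The helpers below are an illustrative sketch of that flow, not the actual block implementation; the function names and the "not_detected" fallback are assumptions.

```python
import json


def build_prompt(prompt: str, json_output_format: dict) -> str:
    # Sketch: append an instruction asking the model to answer as a JSON
    # object whose keys come from json_output_format.
    fields = "\n".join(f'- "{k}": {v}' for k, v in json_output_format.items())
    return (
        f"{prompt}\n\n"
        "Respond with a JSON object containing the following fields:\n"
        f"{fields}"
    )


def parse_structured_output(raw_output: str, json_output_format: dict) -> dict:
    # Best-effort parse of the model's raw string into structured_output;
    # keys the model omitted (or unparsable output) fall back to a
    # placeholder value (assumed here to be "not_detected").
    try:
        parsed = json.loads(raw_output)
    except json.JSONDecodeError:
        return {k: "not_detected" for k in json_output_format}
    return {k: parsed.get(k, "not_detected") for k in json_output_format}
```

For example, with json_output_format set to {"count": "number of cats in the picture"}, the built prompt asks for a JSON object with a "count" field, and a raw answer of '{"count": 3}' parses to {"count": 3}.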

Available Connections

Compatible Blocks

Check what blocks you can connect to CogVLM in version v1.

Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds CogVLM in version v1 has.

Bindings
  • input

    • images (image): The image to infer on.
    • prompt (string): Text prompt to the CogVLM model.
  • output

    • parent_id (parent_id): Identifier of parent for step output.
    • root_parent_id (parent_id): Identifier of parent for step output.
    • image (image_metadata): Dictionary with image metadata required by supervision.
    • structured_output (dictionary): Dictionary with the fields requested in json_output_format.
    • raw_output (string): String value.
    • * (*): Equivalent of any element.
Example JSON definition of step CogVLM in version v1
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/cog_vlm@v1",
    "images": "$inputs.image",
    "prompt": "my prompt",
    "json_output_format": {
        "count": "number of cats in the picture"
    }
}
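Since Workflows containing this step now fail at runtime, it can be useful to audit existing definitions for the step type before upgrading past release 0.42.0. The validator below is a minimal sketch based only on the example definition above; validate_step, REQUIRED_KEYS, and the error messages are illustrative and not part of the inference SDK.

```python
# Illustrative checker for a CogVLM step definition (hypothetical helper,
# not part of the inference SDK).
REQUIRED_KEYS = {"name", "type", "images", "prompt"}
EXPECTED_TYPE = "roboflow_core/cog_vlm@v1"


def validate_step(step: dict) -> list:
    """Return a list of human-readable problems; an empty list means OK."""
    errors = [f"missing key: {key}" for key in sorted(REQUIRED_KEYS - step.keys())]
    if "type" in step and step["type"] != EXPECTED_TYPE:
        errors.append(f'"type" must be {EXPECTED_TYPE!r}')
    fmt = step.get("json_output_format")
    if fmt is not None and not (
        isinstance(fmt, dict)
        and all(isinstance(k, str) and isinstance(v, str) for k, v in fmt.items())
    ):
        errors.append('"json_output_format" must map str -> str')
    return errors


step = {
    "name": "my_cogvlm_step",
    "type": "roboflow_core/cog_vlm@v1",
    "images": "$inputs.image",
    "prompt": "my prompt",
    "json_output_format": {"count": "number of cats in the picture"},
}
```

Running validate_step on the example definition returns an empty list; definitions missing required keys, using a wrong "type", or passing a non-string mapping as json_output_format each produce a descriptive error entry.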