# Llama 3.2 Vision

Class: `LlamaVisionBlockV1`

Source: `inference.core.workflows.core_steps.models.foundation.llama_vision.v1.LlamaVisionBlockV1`

Ask a question to the Llama 3.2 Vision model, which has image understanding (vision) capabilities.
You can specify arbitrary text prompts or use predefined ones; the block supports the following types of prompt:
- Open Prompt (`unconstrained`) - Use any prompt to generate a raw response
- Text Recognition (OCR) (`ocr`) - Model recognizes text in the image
- Visual Question Answering (`visual-question-answering`) - Model answers the question you submit in the prompt
- Captioning (short) (`caption`) - Model provides a short description of the image
- Captioning (`detailed-caption`) - Model provides a long description of the image
- Single-Label Classification (`classification`) - Model classifies the image content as one of the provided classes
- Multi-Label Classification (`multi-label-classification`) - Model classifies the image content as one or more of the provided classes
- Structured Output Generation (`structured-answering`) - Model returns a JSON response with the specified fields
Issues with structured prompting

The model tends to be quite unpredictable when structured output (in our case a JSON document) is expected. This problem may impact tasks like `structured-answering`, `classification` or `multi-label-classification`. The cause seems to be the model's quite sensitive built-in "filters" for inappropriate content.
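For illustration, a `structured-answering` step could be configured as in the sketch below. The field names follow the block's schema and the full example at the bottom of this page; the `output_structure` keys, their descriptions and the `open_router_api_key` input name are placeholders chosen for this example.

```json
{
    "name": "llama_structured",
    "type": "roboflow_core/llama_3_2_vision@v1",
    "images": "$inputs.image",
    "task_type": "structured-answering",
    "output_structure": {
        "vehicle_color": "dominant color of the vehicle in the image",
        "license_plate": "text on the license plate, or 'unknown'"
    },
    "model_version": "11B (Free) - OpenRouter",
    "api_key": "$inputs.open_router_api_key"
}
```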
## 🛠️ API providers and model variants

The Llama 3.2 Vision model is exposed via the OpenRouter API, and an OpenRouter API key is required to run the block. There are different versions of the model supported:

- the smaller version (`11B`) is faster and cheaper, yet you can expect better quality of results using the `90B` version
- the `Regular` version is a paid (and usually faster) API, whereas `Free` is free for OpenRouter clients (as of 01.01.2025)

As of now, OpenRouter is the only provider for the Llama 3.2 Vision model, but we will keep you posted if this changes.
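As an illustration, the variant is selected through the `model_version` property of the step; a minimal fragment of a step definition is sketched below. The value shown comes from the example at the bottom of this page, the `open_router_api_key` input name is a placeholder, and the exact identifiers of the other variants (90B, Regular) should be checked in the block's configuration UI.

```json
{
    "model_version": "11B (Free) - OpenRouter",
    "api_key": "$inputs.open_router_api_key"
}
```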
API Usage Charges

OpenRouter is an external third party providing access to the model and charging for its usage. Please check out the pricing before use.
## 💡 Further reading and Acceptable Use Policy

Model license

Check out the model license before use. Click here for the original model card. Usage of this model is subject to Meta's Acceptable Use Policy.
## Type identifier

Use the following identifier in the step `"type"` field to add the block as a step in your workflow: `roboflow_core/llama_3_2_vision@v1`.
## Properties

| Name | Type | Description | Refs |
|------|------|-------------|------|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `task_type` | `str` | Task type to be performed by the model. The value determines the required parameters and the output response. | ❌ |
| `prompt` | `str` | Text prompt to the Llama model. | ✅ |
| `output_structure` | `Dict[str, str]` | Dictionary with the structure of the expected JSON response. | ❌ |
| `classes` | `List[str]` | List of classes to be used. | ✅ |
| `api_key` | `str` | Your Llama Vision API key (dependent on provider, e.g. an OpenRouter API key). | ✅ |
| `model_version` | `str` | Model to be used. | ✅ |
| `max_tokens` | `int` | Maximum number of tokens the model can generate in its response. | ❌ |
| `temperature` | `float` | Temperature to sample from the model; value in range 0.0-2.0. The higher it is, the more random / "creative" the generations are. | ✅ |
| `max_concurrent_requests` | `int` | Number of concurrent requests the block can execute when a batch of input images is provided. If not given, the block defaults to the value configured globally in the Workflows Execution Engine. Please restrict this if you hit rate limits. | ❌ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
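For instance, every property marked ✅ can be wired to a workflow input instead of being hard-coded. A minimal sketch, assuming the workflow declares inputs named `prompt` and `open_router_api_key` (both names are placeholders chosen for this example):

```json
{
    "name": "llama_vision",
    "type": "roboflow_core/llama_3_2_vision@v1",
    "images": "$inputs.image",
    "task_type": "visual-question-answering",
    "prompt": "$inputs.prompt",
    "model_version": "11B (Free) - OpenRouter",
    "api_key": "$inputs.open_router_api_key",
    "temperature": 0.3
}
```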
## Available Connections

Compatible Blocks

Check what blocks you can connect to Llama 3.2 Vision in version v1.

- inputs: Blur Visualization, Triangle Visualization, Anthropic Claude, Trace Visualization, Label Visualization, LMM, Model Monitoring Inference Aggregator, Roboflow Dataset Upload, Absolute Static Crop, Image Preprocessing, Relative Static Crop, Image Threshold, Reference Path Visualization, Slack Notification, Stability AI Outpainting, SIFT, Roboflow Dataset Upload, Google Vision OCR, Dimension Collapse, Stability AI Inpainting, Background Color Visualization, CSV Formatter, Circle Visualization, Image Blur, Keypoint Visualization, VLM as Detector, Google Gemini, OpenAI, Image Convert Grayscale, Line Counter Visualization, Model Comparison Visualization, Dynamic Zone, Roboflow Custom Metadata, Image Slicer, Stitch OCR Detections, Crop Visualization, Corner Visualization, Multi-Label Classification Model, Pixelate Visualization, Local File Sink, Image Slicer, Mask Visualization, VLM as Classifier, Clip Comparison, Color Visualization, Polygon Visualization, Email Notification, Keypoint Detection Model, Size Measurement, Gaze Detection, Perspective Correction, Camera Calibration, OpenAI, Bounding Box Visualization, Buffer, Camera Focus, CogVLM, Instance Segmentation Model, Twilio SMS Notification, OpenAI, Dynamic Crop, Depth Estimation, Halo Visualization, Florence-2 Model, Dot Visualization, Classification Label Visualization, Webhook Sink, SIFT Comparison, Stability AI Image Generation, Florence-2 Model, LMM For Classification, Ellipse Visualization, Image Contours, Llama 3.2 Vision, Clip Comparison, Single-Label Classification Model, Identify Changes, Grid Visualization, Stitch Images, OCR Model, Cosine Similarity, Object Detection Model, Polygon Zone Visualization
- outputs: Anthropic Claude, Triangle Visualization, Trace Visualization, YOLO-World Model, Label Visualization, LMM, Distance Measurement, Model Monitoring Inference Aggregator, Roboflow Dataset Upload, Time in Zone, Pixel Color Count, Image Preprocessing, Keypoint Detection Model, Image Threshold, Reference Path Visualization, Segment Anything 2 Model, Slack Notification, Stability AI Outpainting, Instance Segmentation Model, Roboflow Dataset Upload, Google Vision OCR, Stability AI Inpainting, Background Color Visualization, Image Blur, Circle Visualization, Keypoint Visualization, VLM as Detector, Google Gemini, OpenAI, PTZ Tracking (ONVIF), Line Counter Visualization, Model Comparison Visualization, Perception Encoder Embedding Model, Path Deviation, JSON Parser, Roboflow Custom Metadata, Line Counter, Crop Visualization, Corner Visualization, VLM as Classifier, Local File Sink, Cache Set, Mask Visualization, VLM as Classifier, Clip Comparison, Color Visualization, Polygon Visualization, Email Notification, Keypoint Detection Model, Size Measurement, Perspective Correction, Path Deviation, Detections Consensus, OpenAI, Bounding Box Visualization, Buffer, Detections Stitch, CogVLM, Twilio SMS Notification, OpenAI, Instance Segmentation Model, Dynamic Crop, Detections Classes Replacement, Halo Visualization, Florence-2 Model, Dot Visualization, Classification Label Visualization, Webhook Sink, Time in Zone, SIFT Comparison, Stability AI Image Generation, Florence-2 Model, VLM as Detector, LMM For Classification, Object Detection Model, Ellipse Visualization, Llama 3.2 Vision, Line Counter, Clip Comparison, Grid Visualization, CLIP Embedding Model, Object Detection Model, Cache Get, Polygon Zone Visualization
## Input and Output Bindings

The available connections depend on the block's binding kinds. Check what binding kinds Llama 3.2 Vision in version v1 has.

Bindings

- input
    - `images` (`image`): The image to infer on.
    - `prompt` (`string`): Text prompt to the Llama model.
    - `classes` (`list_of_values`): List of classes to be used.
    - `api_key` (`string`): Your Llama Vision API key (dependent on provider, e.g. an OpenRouter API key).
    - `model_version` (`string`): Model to be used.
    - `temperature` (`float`): Temperature to sample from the model; value in range 0.0-2.0. The higher it is, the more random / "creative" the generations are.
- output
    - `output` (`Union[string, language_model_output]`): String value if `string`, or LLM / VLM output if `language_model_output`.
    - `classes` (`list_of_values`): List of values of any type.
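A typical follow-up is to reference the step's `output` binding elsewhere in the workflow via a `$steps` selector. A minimal sketch, assuming a step named `llama_vision` and the standard Workflows `outputs` / `JsonField` convention for exposing results; the `llama_answer` name is a placeholder chosen for this example:

```json
{
    "outputs": [
        {
            "type": "JsonField",
            "name": "llama_answer",
            "selector": "$steps.llama_vision.output"
        }
    ]
}
```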
Example JSON definition of step Llama 3.2 Vision in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/llama_3_2_vision@v1",
    "images": "$inputs.image",
    "task_type": "<block_does_not_provide_example>",
    "prompt": "my prompt",
    "output_structure": {
        "my_key": "description"
    },
    "classes": [
        "class-a",
        "class-b"
    ],
    "api_key": "xxx-xxx",
    "model_version": "11B (Free) - OpenRouter",
    "max_tokens": "<block_does_not_provide_example>",
    "temperature": "<block_does_not_provide_example>",
    "max_concurrent_requests": "<block_does_not_provide_example>"
}
```