# Anthropic Claude

Version v1
Ask a question to the Anthropic Claude model with vision capabilities.

You can specify arbitrary text prompts or use predefined ones. The block supports the following prompt types:

- `unconstrained` - any arbitrary prompt you like
- `ocr` - predefined prompt to recognise text in an image
- `visual-question-answering` - your prompt provides a question and is wrapped into a structure suited to the VQA task
- `caption` - predefined prompt to generate a short caption for the image
- `detailed-caption` - predefined prompt to generate an elaborate caption for the image
- `classification` - predefined prompt to generate multi-class classification output (which can be parsed with the VLM as Classifier block)
- `multi-label-classification` - predefined prompt to generate multi-label classification output (which can be parsed with the VLM as Classifier block)
- `object-detection` - predefined prompt to generate object detection output (which can be parsed with the VLM as Detector block)
- `structured-answering` - your input defines the expected JSON output fields, which can be parsed with the JSON Parser block
You need to provide your Anthropic API key to use the Claude model.
## Type identifier

Use the following identifier in the step "type" field: `roboflow_core/anthropic_claude@v1` to add the block as a step in your workflow.
## Properties
Name | Type | Description | Refs |
---|---|---|---|
`name` | `str` | The unique name of this step. | ❌ |
`task_type` | `str` | Task type to be performed by the model. The value of this parameter determines the set of required fields. For `unconstrained` and `visual-question-answering`, the `prompt` parameter must be provided. For `structured-answering`, `output_structure` must be provided. For `classification`, `multi-label-classification` and `object-detection`, `classes` must be filled. `ocr`, `caption` and `detailed-caption` do not require any additional parameters. | ❌ |
`prompt` | `str` | Text prompt to the Claude model. | ✅ |
`output_structure` | `Dict[str, str]` | Dictionary with the structure of the expected JSON response. | ❌ |
`classes` | `List[str]` | List of classes to be used. | ✅ |
`api_key` | `str` | Your Anthropic API key. | ✅ |
`model_version` | `str` | Model to be used. | ✅ |
`max_tokens` | `int` | Maximum number of tokens the model can generate in its response. | ❌ |
`temperature` | `float` | Temperature to sample from the model - a value in the range 0.0-2.0; the higher it is, the more random / "creative" the generations are. | ✅ |
`max_image_size` | `int` | Maximum size of the image - if the input has a larger side, it will be downscaled, keeping aspect ratio. | ✅ |
`max_concurrent_requests` | `int` | Number of concurrent requests the block can execute when a batch of input images is provided. If not given, the block defaults to the value configured globally in the Workflows Execution Engine. Please restrict this if you hit Anthropic API limits. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
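The per-task-type requirements from the `task_type` row above can be sketched as a small validation helper. This is a minimal illustration written for this page; the helper and its name are not part of the block's API:

```python
# Hypothetical helper mapping each task_type to the extra properties it
# requires, per the Properties table. Not part of the Roboflow API.
REQUIRED_EXTRAS = {
    "unconstrained": {"prompt"},
    "visual-question-answering": {"prompt"},
    "structured-answering": {"output_structure"},
    "classification": {"classes"},
    "multi-label-classification": {"classes"},
    "object-detection": {"classes"},
    "ocr": set(),
    "caption": set(),
    "detailed-caption": set(),
}


def missing_fields(step: dict) -> set:
    """Return the required properties absent from a step definition."""
    required = REQUIRED_EXTRAS.get(step.get("task_type"), set())
    return {field for field in required if field not in step}


# A classification step without "classes" is incomplete:
print(missing_fields({"task_type": "classification"}))  # -> {'classes'}
```

`ocr`, `caption` and `detailed-caption` map to empty sets, matching the note that they need no additional parameters.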
## Available Connections

Check what blocks you can connect to Anthropic Claude in version v1.
- inputs: Label Visualization, Crop Visualization, Mask Visualization, Blur Visualization, Image Contours, Bounding Box Visualization, Image Convert Grayscale, Camera Focus, Dot Visualization, Color Visualization, Corner Visualization, Circle Visualization, Perspective Correction, Image Slicer, Triangle Visualization, Relative Static Crop, Absolute Static Crop, Halo Visualization, Background Color Visualization, SIFT, Pixelate Visualization, Polygon Visualization, Dynamic Crop, Image Blur, Ellipse Visualization, Image Threshold
- outputs: JSON Parser, VLM as Detector, Roboflow Custom Metadata, VLM as Classifier, Perspective Correction
The available connections depend on the block's binding kinds. Check what binding kinds Anthropic Claude in version v1 has.
Bindings

- input
    - `images` (`image`): The image to infer on.
    - `prompt` (`string`): Text prompt to the Claude model.
    - `classes` (`list_of_values`): List of classes to be used.
    - `api_key` (`string`): Your Anthropic API key.
    - `model_version` (`string`): Model to be used.
    - `temperature` (`float`): Temperature to sample from the model - a value in the range 0.0-2.0; the higher it is, the more random / "creative" the generations are.
    - `max_image_size` (`integer`): Maximum size of the image - if the input has a larger side, it will be downscaled, keeping aspect ratio.
- output
    - `output` (`Union[string, language_model_output]`): String value if `string`, or LLM / VLM output if `language_model_output`.
    - `classes` (`list_of_values`): List of values of any type.
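Properties marked as parametrisable can reference workflow inputs instead of literal values, using the `$inputs.<name>` selector syntax shown in the example below. A sketch for the VQA task, assuming workflow inputs named `vqa_question` and `anthropic_api_key` (the input names are illustrative):

```json
{
    "name": "claude_vqa",
    "type": "roboflow_core/anthropic_claude@v1",
    "images": "$inputs.image",
    "task_type": "visual-question-answering",
    "prompt": "$inputs.vqa_question",
    "api_key": "$inputs.anthropic_api_key",
    "model_version": "claude-3-5-sonnet"
}
```

This lets the question and API key be supplied at runtime rather than hard-coded in the workflow definition.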
Example JSON definition of step Anthropic Claude in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/anthropic_claude@v1",
    "images": "$inputs.image",
    "task_type": "<block_does_not_provide_example>",
    "prompt": "my prompt",
    "output_structure": {
        "my_key": "description"
    },
    "classes": [
        "class-a",
        "class-b"
    ],
    "api_key": "xxx-xxx",
    "model_version": "claude-3-5-sonnet",
    "max_tokens": "<block_does_not_provide_example>",
    "temperature": "<block_does_not_provide_example>",
    "max_image_size": "<block_does_not_provide_example>",
    "max_concurrent_requests": "<block_does_not_provide_example>"
}
```
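To use the step, embed it in a full workflow definition. A minimal sketch for the `caption` task (which needs no extra parameters); the surrounding `inputs`/`outputs` key names follow the general Workflows definition schema and should be verified against the Workflows documentation:

```python
import json

# Sketch of a full Workflows definition embedding the Claude step.
# The "WorkflowImage" and "JsonField" entries are assumptions based on
# the general Workflows schema, not this block's documentation.
definition = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "claude",
            "type": "roboflow_core/anthropic_claude@v1",
            "images": "$inputs.image",
            "task_type": "caption",
            "api_key": "xxx-xxx",
            "model_version": "claude-3-5-sonnet",
        }
    ],
    "outputs": [
        {"type": "JsonField", "name": "caption", "selector": "$steps.claude.output"}
    ],
}

# The definition is plain JSON, so it can be stored in a file or sent
# to the Workflows Execution Engine as-is.
print(json.dumps(definition, indent=2))
```

Note how the output selector `$steps.claude.output` references the step by its `name`, mirroring how `$inputs.image` references the workflow input.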