Object Detection Model¶
v2¶
Class: RoboflowObjectDetectionModelBlockV2
(there are multiple versions of this block)
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Run inference on an object-detection model hosted on or uploaded to Roboflow.
You can query any model that is private to your account, or any public model available on Roboflow Universe.
You will need to set your Roboflow API key in your Inference environment to use this block. To learn more about setting your Roboflow API key, refer to the Inference documentation.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow:
roboflow_core/roboflow_object_detection_model@v2
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| model_id | str | Roboflow model identifier. | ✅ |
| confidence | float | Confidence threshold for predictions. | ✅ |
| class_filter | List[str] | List of accepted classes. Classes must exist in the model's training set. | ✅ |
| iou_threshold | float | Minimum overlap threshold between boxes to combine them into a single detection, used in NMS. | ✅ |
| max_detections | int | Maximum number of detections to return. | ✅ |
| class_agnostic_nms | bool | Boolean flag to specify if NMS is to be used in class-agnostic mode. | ✅ |
| max_candidates | int | Maximum number of candidates as NMS input to be taken into account. | ✅ |
| disable_active_learning | bool | Boolean flag to disable project-level active learning for this block. | ✅ |
| active_learning_target_dataset | str | Target dataset for active learning, if enabled. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
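Any property marked ✅ can be set either to a literal value or to a selector that is resolved at runtime. The minimal sketch below (a Python dict mirroring the JSON step definition) contrasts the two forms; the input names `model_id` and `detection_confidence` are illustrative and would have to be declared in the workflow's inputs:

```python
# A minimal sketch: the same step configured with literal values versus
# selectors that bind ✅-marked properties to workflow inputs at runtime.
# The input names below ("model_id", "detection_confidence") are illustrative.
step_with_literals = {
    "type": "roboflow_core/roboflow_object_detection_model@v2",
    "name": "detector",
    "images": "$inputs.image",
    "model_id": "my_project/3",   # fixed at definition time
    "confidence": 0.3,            # fixed at definition time
}

step_with_selectors = {
    "type": "roboflow_core/roboflow_object_detection_model@v2",
    "name": "detector",
    "images": "$inputs.image",
    "model_id": "$inputs.model_id",                # bound to a workflow input
    "confidence": "$inputs.detection_confidence",  # resolved at runtime
}
```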
Available Connections¶
Compatible Blocks
Check which blocks you can connect to Object Detection Model in version v2.
- inputs: Grid Visualization, Image Blur, Image Preprocessing, Image Slicer, OpenAI, Instance Segmentation Model, Dynamic Crop, Multi-Label Classification Model, Absolute Static Crop, Roboflow Dataset Upload, Color Visualization, Corner Visualization, Google Gemini, Depth Estimation, Stability AI Outpainting, Keypoint Visualization, Keypoint Detection Model, PTZ Tracking (ONVIF), Trace Visualization, Clip Comparison, Email Notification, Dimension Collapse, Model Comparison Visualization, Mask Visualization, Image Slicer, Model Monitoring Inference Aggregator, Clip Comparison, Size Measurement, Buffer, Detections Consensus, Image Threshold, Contrast Equalization, Line Counter, Morphological Transformation, Classification Label Visualization, Relative Static Crop, Camera Calibration, Dynamic Zone, Florence-2 Model, Blur Visualization, Stitch Images, Roboflow Dataset Upload, JSON Parser, Triangle Visualization, Perspective Correction, SIFT, Icon Visualization, Object Detection Model, Label Visualization, Stability AI Image Generation, Pixel Color Count, Llama 3.2 Vision, Ellipse Visualization, SIFT Comparison, VLM as Detector, Single-Label Classification Model, Line Counter Visualization, Florence-2 Model, Line Counter, SIFT Comparison, Distance Measurement, Slack Notification, Local File Sink, Image Convert Grayscale, Roboflow Custom Metadata, Twilio SMS Notification, Background Color Visualization, VLM as Classifier, QR Code Generator, Identify Changes, Polygon Zone Visualization, Anthropic Claude, VLM as Detector, Polygon Visualization, Camera Focus, Dot Visualization, Template Matching, Identify Outliers, Circle Visualization, Bounding Box Visualization, Image Contours, OpenAI, Halo Visualization, Reference Path Visualization, VLM as Classifier, Pixelate Visualization, Webhook Sink, Stability AI Inpainting, Crop Visualization
- outputs: Ellipse Visualization, Detections Stabilizer, Byte Tracker, Dynamic Crop, Instance Segmentation Model, Time in Zone, SmolVLM2, Roboflow Dataset Upload, Single-Label Classification Model, Color Visualization, Multi-Label Classification Model, Corner Visualization, Florence-2 Model, Line Counter, Overlap Filter, Distance Measurement, Byte Tracker, Moondream2, Keypoint Detection Model, PTZ Tracking (ONVIF), Detection Offset, Detections Combine, Trace Visualization, Roboflow Custom Metadata, Background Color Visualization, Keypoint Detection Model, Single-Label Classification Model, Time in Zone, Segment Anything 2 Model, Model Comparison Visualization, Model Monitoring Inference Aggregator, Size Measurement, Detections Stitch, Multi-Label Classification Model, Byte Tracker, Detections Consensus, Line Counter, Icon Visualization, Dot Visualization, Detections Filter, Path Deviation, Object Detection Model, Velocity, Time in Zone, Detections Classes Replacement, Path Deviation, Instance Segmentation Model, Circle Visualization, Bounding Box Visualization, Florence-2 Model, Blur Visualization, Object Detection Model, Roboflow Dataset Upload, Detections Merge, Qwen2.5-VL, Triangle Visualization, Pixelate Visualization, Perspective Correction, Webhook Sink, Detections Transformation, Label Visualization, Stitch OCR Detections, Crop Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Object Detection Model in version v2 has.
Bindings
- input
  - images (image): The image to infer on.
  - model_id (roboflow_model_id): Roboflow model identifier.
  - confidence (float_zero_to_one): Confidence threshold for predictions.
  - class_filter (list_of_values): List of accepted classes. Classes must exist in the model's training set.
  - iou_threshold (float_zero_to_one): Minimum overlap threshold between boxes to combine them into a single detection, used in NMS.
  - max_detections (integer): Maximum number of detections to return.
  - class_agnostic_nms (boolean): Boolean flag to specify if NMS is to be used in class-agnostic mode.
  - max_candidates (integer): Maximum number of candidates as NMS input to be taken into account.
  - disable_active_learning (boolean): Boolean flag to disable project-level active learning for this block.
  - active_learning_target_dataset (roboflow_project): Target dataset for active learning, if enabled.
- output
  - inference_id (inference_id): Inference identifier.
  - predictions (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object; a short consumption sketch follows this list.
  - model_id (roboflow_model_id): Roboflow model id.
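Because `predictions` is exposed as an `sv.Detections(...)` object, downstream Python code that receives this output (for example in a custom block) can read it with the `supervision` library. A minimal sketch follows; the `"class_name"` data key is an assumption and is not guaranteed by the bindings above:

```python
import numpy as np
import supervision as sv

def summarize(detections: sv.Detections) -> None:
    """Print one line per detected box from the block's `predictions` output."""
    for i in range(len(detections)):
        x1, y1, x2, y2 = detections.xyxy[i]
        confidence = float(detections.confidence[i])
        class_id = int(detections.class_id[i])
        # Class names are commonly carried in the .data payload; the exact key
        # ("class_name") is an assumption, not guaranteed by the bindings above.
        class_names = detections.data.get("class_name", [None] * len(detections))
        label = class_names[i] if class_names[i] is not None else class_id
        print(f"{label}: conf={confidence:.2f}, box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")

# Example with a synthetic detection, mirroring the shape of what the block emits:
detections = sv.Detections(
    xyxy=np.array([[10.0, 20.0, 110.0, 220.0]]),
    confidence=np.array([0.87]),
    class_id=np.array([0]),
)
summarize(detections)
```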
Example JSON definition of step Object Detection Model in version v2:
```json
{
"name": "<your_step_name_here>",
"type": "roboflow_core/roboflow_object_detection_model@v2",
"images": "$inputs.image",
"model_id": "my_project/3",
"confidence": 0.3,
"class_filter": [
"a",
"b",
"c"
],
"iou_threshold": 0.4,
"max_detections": 300,
"class_agnostic_nms": true,
"max_candidates": 3000,
"disable_active_learning": true,
"active_learning_target_dataset": "my_project"
}
```
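To run this step, it has to be embedded in a full workflow specification that declares the `$inputs.image` it references and exposes its outputs. The sketch below shows one way to do that with the `inference_sdk` HTTP client; the surrounding structure (`version`, `inputs`, `outputs`), the client call, and the placeholder model id and server URL are assumptions based on general Workflows usage rather than details stated on this page:

```python
from inference_sdk import InferenceHTTPClient

# Minimal workflow specification embedding the v2 step shown above.
# The "version"/"inputs"/"outputs" structure is assumed from general Workflows usage.
WORKFLOW_SPECIFICATION = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "name": "detector",
            "images": "$inputs.image",
            "model_id": "my_project/3",  # placeholder model id
            "confidence": 0.3,
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "predictions",
            "selector": "$steps.detector.predictions",
        }
    ],
}

# Assumes an Inference server (local or hosted) and a Roboflow API key
# configured as described in the Inference documentation.
client = InferenceHTTPClient.init(
    api_url="http://localhost:9001",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)
result = client.run_workflow(
    specification=WORKFLOW_SPECIFICATION,
    images={"image": "path/to/image.jpg"},
)
print(result)
```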
v1¶
Class: RoboflowObjectDetectionModelBlockV1
(there are multiple versions of this block)
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Run inference on an object-detection model hosted on or uploaded to Roboflow.
You can query any model that is private to your account, or any public model available on Roboflow Universe.
You will need to set your Roboflow API key in your Inference environment to use this block. To learn more about setting your Roboflow API key, refer to the Inference documentation.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow:
roboflow_core/roboflow_object_detection_model@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| model_id | str | Roboflow model identifier. | ✅ |
| confidence | float | Confidence threshold for predictions. | ✅ |
| class_filter | List[str] | List of accepted classes. Classes must exist in the model's training set. | ✅ |
| iou_threshold | float | Minimum overlap threshold between boxes to combine them into a single detection, used in NMS. | ✅ |
| max_detections | int | Maximum number of detections to return. | ✅ |
| class_agnostic_nms | bool | Boolean flag to specify if NMS is to be used in class-agnostic mode. | ✅ |
| max_candidates | int | Maximum number of candidates as NMS input to be taken into account. | ✅ |
| disable_active_learning | bool | Boolean flag to disable project-level active learning for this block. | ✅ |
| active_learning_target_dataset | str | Target dataset for active learning, if enabled. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check which blocks you can connect to Object Detection Model in version v1.
- inputs: Grid Visualization, Image Blur, Image Preprocessing, Image Slicer, OpenAI, Instance Segmentation Model, Dynamic Crop, Multi-Label Classification Model, Absolute Static Crop, Roboflow Dataset Upload, Color Visualization, Corner Visualization, Google Gemini, Depth Estimation, Stability AI Outpainting, Keypoint Visualization, Keypoint Detection Model, PTZ Tracking (ONVIF), Trace Visualization, Clip Comparison, Email Notification, Dimension Collapse, Model Comparison Visualization, Mask Visualization, Image Slicer, Model Monitoring Inference Aggregator, Clip Comparison, Size Measurement, Buffer, Detections Consensus, Image Threshold, Contrast Equalization, Line Counter, Morphological Transformation, Classification Label Visualization, Relative Static Crop, Camera Calibration, Dynamic Zone, Florence-2 Model, Blur Visualization, Stitch Images, Roboflow Dataset Upload, JSON Parser, Triangle Visualization, Perspective Correction, SIFT, Icon Visualization, Object Detection Model, Label Visualization, Stability AI Image Generation, Pixel Color Count, Llama 3.2 Vision, Ellipse Visualization, SIFT Comparison, VLM as Detector, Single-Label Classification Model, Line Counter Visualization, Florence-2 Model, Line Counter, SIFT Comparison, Distance Measurement, Slack Notification, Local File Sink, Image Convert Grayscale, Roboflow Custom Metadata, Twilio SMS Notification, Background Color Visualization, VLM as Classifier, QR Code Generator, Identify Changes, Polygon Zone Visualization, Anthropic Claude, VLM as Detector, Polygon Visualization, Camera Focus, Dot Visualization, Template Matching, Identify Outliers, Circle Visualization, Bounding Box Visualization, Image Contours, OpenAI, Halo Visualization, Reference Path Visualization, VLM as Classifier, Pixelate Visualization, Webhook Sink, Stability AI Inpainting, Crop Visualization
- outputs: Image Blur, OpenAI, Image Preprocessing, Instance Segmentation Model, Dynamic Crop, Time in Zone, Roboflow Dataset Upload, LMM, Moondream2, Color Visualization, Corner Visualization, Google Gemini, Stability AI Outpainting, Keypoint Visualization, PTZ Tracking (ONVIF), Trace Visualization, Clip Comparison, Google Vision OCR, Email Notification, YOLO-World Model, Time in Zone, Model Comparison Visualization, Mask Visualization, Model Monitoring Inference Aggregator, Size Measurement, Detections Consensus, Image Threshold, Contrast Equalization, Line Counter, OpenAI, Detections Filter, Path Deviation, Morphological Transformation, Classification Label Visualization, Velocity, Time in Zone, Path Deviation, Florence-2 Model, Blur Visualization, Cache Set, Roboflow Dataset Upload, Triangle Visualization, Perspective Correction, Icon Visualization, Label Visualization, Pixel Color Count, Stability AI Image Generation, Stitch OCR Detections, Detections Transformation, Llama 3.2 Vision, Ellipse Visualization, CogVLM, Detections Stabilizer, Byte Tracker, Line Counter Visualization, Florence-2 Model, Line Counter, Overlap Filter, Local File Sink, Distance Measurement, Slack Notification, SIFT Comparison, Byte Tracker, Detection Offset, Detections Combine, Roboflow Custom Metadata, Perception Encoder Embedding Model, Twilio SMS Notification, Background Color Visualization, QR Code Generator, Segment Anything 2 Model, Cache Get, Polygon Zone Visualization, Anthropic Claude, Detections Stitch, Byte Tracker, Polygon Visualization, Dot Visualization, LMM For Classification, CLIP Embedding Model, Detections Classes Replacement, Instance Segmentation Model, Circle Visualization, Bounding Box Visualization, OpenAI, Halo Visualization, Reference Path Visualization, Detections Merge, Pixelate Visualization, Webhook Sink, Stability AI Inpainting, Crop Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Object Detection Model in version v1 has.
Bindings
- input
  - images (image): The image to infer on.
  - model_id (roboflow_model_id): Roboflow model identifier.
  - confidence (float_zero_to_one): Confidence threshold for predictions.
  - class_filter (list_of_values): List of accepted classes. Classes must exist in the model's training set.
  - iou_threshold (float_zero_to_one): Minimum overlap threshold between boxes to combine them into a single detection, used in NMS.
  - max_detections (integer): Maximum number of detections to return.
  - class_agnostic_nms (boolean): Boolean flag to specify if NMS is to be used in class-agnostic mode.
  - max_candidates (integer): Maximum number of candidates as NMS input to be taken into account.
  - disable_active_learning (boolean): Boolean flag to disable project-level active learning for this block.
  - active_learning_target_dataset (roboflow_project): Target dataset for active learning, if enabled.
- output
  - inference_id (string): String value.
  - predictions (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Object Detection Model in version v1:
```json
{
"name": "<your_step_name_here>",
"type": "roboflow_core/roboflow_object_detection_model@v1",
"images": "$inputs.image",
"model_id": "my_project/3",
"confidence": 0.3,
"class_filter": [
"a",
"b",
"c"
],
"iou_threshold": 0.4,
"max_detections": 300,
"class_agnostic_nms": true,
"max_candidates": 3000,
"disable_active_learning": true,
"active_learning_target_dataset": "my_project"
}
```