Detections Classes Replacement¶
Class: DetectionsClassesReplacementBlockV1
Replace the class labels of detection bounding boxes with classes predicted by a classification model applied to cropped regions. Combining generic detection results with specialized classification predictions enables two-stage detection, fine-grained classification, and class-refinement workflows in which generic detections are refined with specific class labels from specialized classifiers.
How This Block Works¶
This block combines results from a detection model (with bounding boxes and generic classes) with classification predictions (from a specialized classifier applied to cropped regions) to replace generic class labels with specific ones. The block:
- Receives two inputs with different dimensionality levels:
  - `object_detection_predictions`: Detection results (dimensionality level 1) containing bounding boxes with generic classes (e.g., "dog", "person", "vehicle")
  - `classification_predictions`: Classification results (dimensionality level 2) from a classifier applied to cropped regions of each detection (e.g., "Golden Retriever", "Labrador" for dog detections). Can also be a list of strings (e.g., from OCR).
- Matches classifications to detections:
  - Uses `PARENT_ID_KEY` (`detection_id`) in classification predictions to link each classification result to its source detection, OR
  - Uses positional mapping (order-based) if predictions are raw strings/lists without parent IDs
- Extracts the leading class from each classification prediction:
  - For single-label classifications: uses the "top" (predicted) class from the classification result, extracting its class name, class ID, and confidence
  - For multi-label classifications: finds the class with the highest confidence score and uses that most confident label as the replacement class, extracting its class name, class ID, and confidence
  - For string predictions: uses the string as the class name, assigning a default confidence of 1.0 and class ID of 0
- Handles missing classifications:
  - Detections without corresponding classification predictions are discarded by default
  - If `fallback_class_name` is provided, detections without classifications use the fallback class instead of being discarded
  - The fallback class ID is set to the provided value, or to `sys.maxsize` if not specified or negative
- Filters detections:
- Keeps only detections that have classification results (or fallback if specified)
- Removes detections that cannot be matched to classification predictions
- Replaces class information:
- Replaces class names in detections with classification class names
- Replaces class IDs in detections with classification class IDs
- Replaces confidence scores in detections with classification confidence scores
- Updates all detection metadata to reflect the new class information
- Generates new detection IDs:
- Creates new unique detection IDs for updated detections (prevents ID conflicts)
- Ensures detection IDs are unique after class replacement
- Returns updated detections:
- Outputs detections with replaced classes, maintaining bounding box coordinates and other properties
- Output dimensionality matches input detection predictions (dimensionality level 1)
The block enables two-stage detection workflows where a generic detection model locates objects and a specialized classification model provides fine-grained labels. This is useful when you need generic localization (e.g., "dog") combined with specific classification (e.g., "Golden Retriever", "German Shepherd") without losing spatial information.
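The steps above can be sketched in plain Python. This is a simplified illustration, not the block's actual implementation: the dict shapes and helper names (`extract_leading_class`, `replace_classes`) are assumptions made for the sketch, while the real block operates on `sv.Detections` objects.

```python
import sys
import uuid


def extract_leading_class(prediction):
    """Return (class_name, class_id, confidence) for one classification result.

    Handles the three accepted shapes (simplified here): single-label dicts
    with a 'top' class, multi-label dicts with per-class confidences, and
    plain strings (e.g., OCR output).
    """
    if isinstance(prediction, str):
        # String predictions get a default confidence of 1.0 and class ID 0.
        return prediction, 0, 1.0
    if "top" in prediction:
        # Single-label: use the predicted ('top') class.
        top = prediction["top"]
        match = next(p for p in prediction["predictions"] if p["class"] == top)
        return match["class"], match["class_id"], match["confidence"]
    # Multi-label: pick the class with the highest confidence score.
    best_name, best = max(
        prediction["predictions"].items(), key=lambda kv: kv[1]["confidence"]
    )
    return best_name, best["class_id"], best["confidence"]


def replace_classes(detections, classifications, fallback_class_name=None,
                    fallback_class_id=None):
    """Replace each detection's class with its matched classification.

    `detections` is a list of dicts keyed by 'detection_id'; `classifications`
    maps parent detection IDs to classification results. Unmatched detections
    are dropped unless a fallback class is given.
    """
    out = []
    for det in detections:
        cls = classifications.get(det["detection_id"])
        if cls is None:
            if fallback_class_name is None:
                continue  # discard unmatched detections by default
            name = fallback_class_name
            class_id = (fallback_class_id
                        if fallback_class_id is not None and fallback_class_id >= 0
                        else sys.maxsize)
            confidence = det["confidence"]
        else:
            name, class_id, confidence = extract_leading_class(cls)
        out.append(dict(det, **{
            "class": name,
            "class_id": class_id,
            "confidence": confidence,
            "detection_id": str(uuid.uuid4()),  # fresh ID after replacement
        }))
    return out
```

Bounding boxes and other detection properties pass through untouched; only the class fields and the detection ID change.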
Common Use Cases¶
- Two-Stage Detection and Classification: Combine generic detection with specialized classification for fine-grained labeling (e.g., detect "dog" then classify breed, detect "vehicle" then classify type, detect "person" then classify age group), enabling two-stage detection workflows
- Class Refinement: Refine generic class labels with specific classifications from specialized models (e.g., refine "animal" to specific species, refine "vehicle" to specific models, refine "food" to specific dishes), enabling class refinement workflows
- Multi-Model Workflows: Combine detection and classification models to leverage the strengths of both (e.g., use generic detector for localization and specialist classifier for identification, combine coarse and fine-grained models, leverage specialized classifiers with general detectors), enabling multi-model workflows
- Hierarchical Classification: Apply hierarchical classification where detection provides high-level classes and classification provides detailed sub-classes (e.g., detect "mammal" then classify species, detect "plant" then classify variety, detect "structure" then classify type), enabling hierarchical classification workflows
- Crop-Based Classification: Use classification results from cropped regions to enhance detection results (e.g., classify crops to improve detection labels, apply specialized classifiers to detected regions, refine detections with crop classifications), enabling crop-based classification workflows
- Fine-Grained Object Recognition: Enable fine-grained recognition by combining localization and detailed classification (e.g., recognize specific product models, identify specific animal breeds, classify specific vehicle types), enabling fine-grained recognition workflows
Connecting to Other Blocks¶
This block receives detection and classification predictions and produces detections with replaced classes:
- After detection and classification model blocks to combine generic detection with specialized classification (e.g., object detection + classification to refined detections, detection model + classifier to labeled detections), enabling detection-classification fusion workflows
- After crop blocks that create crops from detections for classification (e.g., crop detections then classify crops, create crops for classification then replace classes), enabling crop-classification workflows
- Before visualization blocks to display detections with refined classes (e.g., visualize refined detections, display detections with specific labels, show classification-enhanced detections), enabling refined detection visualization workflows
- Before filtering blocks to filter detections with refined classes (e.g., filter by specific classes, filter refined detections, apply filters to classified detections), enabling refined detection filtering workflows
- Before analytics blocks to perform analytics on refined detections (e.g., analyze specific classes, perform analytics on classified detections, track refined detection metrics), enabling refined detection analytics workflows
- In workflow outputs to provide refined detections as final output (e.g., two-stage detection outputs, classification-enhanced detection outputs, refined detection results), enabling refined detection output workflows
Requirements¶
This block requires object detection predictions (with bounding boxes) and classification predictions produced from crops of those bounding boxes. Classification predictions must carry PARENT_ID_KEY (detection_id) to link each classification to its source detection; raw string inputs are instead matched to detections positionally. The block accepts different dimensionality levels: detection predictions at level 1 and classification predictions at level 2 (from crops). For single-label classifications, the "top" class is used; for multi-label classifications, the most confident class is selected. Detections without classification results are discarded unless fallback_class_name is provided. The block outputs detections with replaced class names, class IDs, and confidences, and generates new detection IDs. Output dimensionality matches the input detection predictions (level 1).
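When classification predictions arrive as raw strings (e.g., OCR text) rather than classification objects, matching falls back to index order. A minimal sketch of that order-based pairing, with an illustrative function name and simplified dict fields:

```python
def match_positionally(detections, labels):
    """Pair raw string labels with detections 1:1 by index.

    Strings get the default confidence of 1.0 and class ID 0; any detection
    beyond the end of the label list has no match and is dropped (zip stops
    at the shorter sequence).
    """
    return [
        dict(det, **{"class": label, "class_id": 0, "confidence": 1.0})
        for det, label in zip(detections, labels)
    ]
```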
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: `roboflow_core/detections_classes_replacement@v1`.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `fallback_class_name` | `str` | Optional class name to use for detections that don't have corresponding classification predictions. If not provided (default `None`), detections without classifications are discarded. If provided, detections without classifications use this fallback class name instead of being removed. Useful for preserving detections when classification fails or is unavailable. | ✅ |
| `fallback_class_id` | `int` | Optional class ID to use with `fallback_class_name` for detections without classification predictions. If not specified or negative, the class ID is set to `sys.maxsize`. Only used when `fallback_class_name` is provided. Should match the class ID mapping used in your model. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detections Classes Replacement in version v1.
- inputs:
Roboflow Dataset Upload,Mask Edge Snap,OCR Model,Gaze Detection,Instance Segmentation Model,Distance Measurement,Multi-Label Classification Model,Bounding Rectangle,ByteTrack Tracker,Single-Label Classification Model,Byte Tracker,Detections Consensus,Detections Classes Replacement,Webhook Sink,Stitch OCR Detections,Object Detection Model,Camera Focus,Qwen 3.5 API,OpenAI,Buffer,SAM 3,Size Measurement,SORT Tracker,Florence-2 Model,Detections Transformation,Path Deviation,GLM-OCR,S3 Sink,Path Deviation,Seg Preview,Twilio SMS Notification,Model Monitoring Inference Aggregator,Google Gemini,Roboflow Dataset Upload,Dynamic Zone,Clip Comparison,VLM As Classifier,Line Counter,Twilio SMS/MMS Notification,Motion Detection,CSV Formatter,Detections Merge,Perspective Correction,Overlap Filter,Anthropic Claude,Velocity,Line Counter,Roboflow Vision Events,VLM As Detector,Google Gemini,Qwen3.5-VL,Per-Class Confidence Filter,Segment Anything 2 Model,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,Slack Notification,Detections Stitch,Detections Stabilizer,Object Detection Model,Email Notification,Google Gemma API,Google Vision OCR,Google Gemini,EasyOCR,Detections Combine,Object Detection Model,SAM2 Video Tracker,Detection Event Log,Byte Tracker,OpenAI,Anthropic Claude,Time in Zone,Roboflow Custom Metadata,YOLO-World Model,Detection Offset,Instance Segmentation Model,Single-Label Classification Model,Detections List Roll-Up,VLM As Classifier,Template Matching,Mask Area Measurement,Qwen 3.6 API,SIFT Comparison,Instance Segmentation Model,CogVLM,Florence-2 Model,Multi-Label Classification Model,Time in Zone,OC-SORT Tracker,SAM 3,Local File Sink,Detections Filter,Image Contours,Keypoint Detection Model,Time in Zone,Dimension Collapse,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Pixel Color Count,Multi-Label Classification Model,Byte Tracker,SAM 3,Single-Label Classification Model,OpenAI,Dynamic Crop,Moondream2,Keypoint Detection Model,LMM For Classification,Keypoint Detection Model,PTZ Tracking (ONVIF),Stitch OCR Detections,SIFT Comparison
- outputs:
Detections Stabilizer,Detections Stitch,Roboflow Dataset Upload,Mask Edge Snap,Distance Measurement,Color Visualization,Detections Combine,SAM2 Video Tracker,Bounding Rectangle,Ellipse Visualization,ByteTrack Tracker,Polygon Visualization,Detection Event Log,Byte Tracker,Byte Tracker,Detections Consensus,Detections Classes Replacement,Time in Zone,Model Comparison Visualization,Stitch OCR Detections,Trace Visualization,Camera Focus,Roboflow Custom Metadata,Detection Offset,Detections List Roll-Up,Size Measurement,Mask Area Measurement,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,Detections Transformation,Crop Visualization,Florence-2 Model,Path Deviation,Time in Zone,Dot Visualization,OC-SORT Tracker,Path Deviation,Model Monitoring Inference Aggregator,Icon Visualization,Detections Filter,Roboflow Dataset Upload,Dynamic Zone,Pixelate Visualization,Line Counter,Time in Zone,Blur Visualization,Detections Merge,Perspective Correction,Overlap Filter,Line Counter,Velocity,Bounding Box Visualization,Byte Tracker,Stability AI Inpainting,Polygon Visualization,Roboflow Vision Events,Label Visualization,Corner Visualization,Dynamic Crop,Per-Class Confidence Filter,Keypoint Visualization,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,Background Color Visualization,PTZ Tracking (ONVIF),Stitch OCR Detections
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Detections Classes Replacement in version v1 has.
Bindings
- input
  - `object_detection_predictions` (`Union[object_detection_prediction, keypoint_detection_prediction, instance_segmentation_prediction]`): Detection predictions (object detection, instance segmentation, or keypoint detection) containing bounding boxes with generic class labels that will be replaced with classification results. These detections should correspond to the regions that were cropped and classified. Detections must have detection IDs that match the `PARENT_ID_KEY` in classification predictions. Detections at dimensionality level 1.
  - `classification_predictions` (`Union[classification_prediction, string, list_of_values]`): Labels to replace detection class names with. Accepts classification predictions (linked via `parent_id`), plain strings, or lists of strings (e.g., OCR/LMM output like Gemini). String inputs are matched to detections positionally (1:1 by index). Classification inputs support single-label ("top" class) and multi-label (most confident class).
  - `fallback_class_name` (`string`): Optional class name to use for detections that don't have corresponding classification predictions. If not provided (default `None`), detections without classifications are discarded. If provided, detections without classifications use this fallback class name instead of being removed. Useful for preserving detections when classification fails or is unavailable.
  - `fallback_class_id` (`integer`): Optional class ID to use with `fallback_class_name` for detections without classification predictions. If not specified or negative, the class ID is set to `sys.maxsize`. Only used when `fallback_class_name` is provided. Should match the class ID mapping used in your model.
- output
  - `predictions` (`Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]`): Prediction returned as an `sv.Detections(...)` object: detected bounding boxes for `object_detection_prediction`, bounding boxes with segmentation masks for `instance_segmentation_prediction`, or bounding boxes with detected keypoints for `keypoint_detection_prediction`.
Example JSON definition of step Detections Classes Replacement in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/detections_classes_replacement@v1",
"object_detection_predictions": "$steps.object_detection_model.predictions",
"classification_predictions": "$steps.classification_model.predictions",
"fallback_class_name": null,
"fallback_class_id": null
}
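A variant of the step definition that uses the fallback options, so detections without a matching classification are kept under a placeholder class instead of being dropped. Step names (`breed_classifier`) and the fallback values shown here are placeholders:

```json
{
    "name": "classes_replacement",
    "type": "roboflow_core/detections_classes_replacement@v1",
    "object_detection_predictions": "$steps.object_detection_model.predictions",
    "classification_predictions": "$steps.breed_classifier.predictions",
    "fallback_class_name": "unclassified",
    "fallback_class_id": 9999
}
```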