Detections Classes Replacement¶
Class: DetectionsClassesReplacementBlockV1
Replace the class labels of detection bounding boxes with classes predicted by a classification model applied to cropped regions. This combines generic detection results with specialized classification predictions, enabling two-stage detection workflows, fine-grained classification, and class-refinement workflows in which generic detections receive specific class labels from specialized classifiers.
How This Block Works¶
This block combines results from a detection model (with bounding boxes and generic classes) with classification predictions (from a specialized classifier applied to cropped regions) to replace generic class labels with specific ones. The block:
- Receives two inputs with different dimensionality levels:
  - `object_detection_predictions`: Detection results (dimensionality level 1) containing bounding boxes with generic classes (e.g., "dog", "person", "vehicle")
  - `classification_predictions`: Classification results (dimensionality level 2) from a classifier applied to cropped regions of each detection (e.g., "Golden Retriever", "Labrador" for dog detections). Can also be a list of strings (e.g., from OCR).
- Matches classifications to detections:
  - Uses `PARENT_ID_KEY` (`detection_id`) in classification predictions to link each classification result to its source detection, OR
  - Uses positional (order-based) mapping if predictions are raw strings/lists without parent IDs
- Extracts the leading class from each classification prediction:
  - For single-label classifications: uses the "top" (predicted) class, extracting its class name, class ID, and confidence from the classification result
  - For multi-label classifications: finds the class with the highest confidence score and uses that most confident label as the replacement class, extracting its class name, class ID, and confidence
  - For string predictions: uses the string as the class name, with a default confidence of 1.0 and class ID of 0
- Handles missing classifications:
  - Detections without corresponding classification predictions are discarded by default
  - If `fallback_class_name` is provided, detections without classifications use the fallback class instead of being discarded
  - The fallback class ID is set to the provided value, or `sys.maxsize` if not specified or negative
- Filters detections:
- Keeps only detections that have classification results (or fallback if specified)
- Removes detections that cannot be matched to classification predictions
- Replaces class information:
- Replaces class names in detections with classification class names
- Replaces class IDs in detections with classification class IDs
- Replaces confidence scores in detections with classification confidence scores
- Updates all detection metadata to reflect the new class information
- Generates new detection IDs:
- Creates new unique detection IDs for updated detections (prevents ID conflicts)
- Ensures detection IDs are unique after class replacement
- Returns updated detections:
- Outputs detections with replaced classes, maintaining bounding box coordinates and other properties
- Output dimensionality matches input detection predictions (dimensionality level 1)
The block enables two-stage detection workflows where a generic detection model locates objects and a specialized classification model provides fine-grained labels. This is useful when you need generic localization (e.g., "dog") combined with specific classification (e.g., "Golden Retriever", "German Shepherd") without losing spatial information.
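The steps above can be sketched in plain Python. This is a minimal, hypothetical illustration, not the block's actual implementation: detections are modeled as dicts rather than `sv.Detections`, the classification payload shapes (`top`/`predictions` keys, `parent_id` field) mirror common inference outputs but are assumptions, and positional matching of raw strings is omitted for brevity.

```python
import sys
import uuid


def leading_class(prediction):
    """Extract (class_name, class_id, confidence) from one classification result."""
    if isinstance(prediction, str):
        # String predictions (e.g. OCR text): default confidence 1.0, class ID 0.
        return prediction, 0, 1.0
    if "top" in prediction:
        # Single-label classification: use the "top" (predicted) class.
        top = prediction["top"]
        match = next(p for p in prediction["predictions"] if p["class"] == top)
        return top, match["class_id"], match["confidence"]
    # Multi-label classification: pick the most confident label.
    best = max(prediction["predictions"], key=lambda p: p["confidence"])
    return best["class"], best["class_id"], best["confidence"]


def replace_classes(detections, classifications,
                    fallback_class_name=None, fallback_class_id=None):
    """Replace generic detection classes with classifier results."""
    # Link each classification to its source detection via the parent ID.
    by_parent = {c.get("parent_id"): c for c in classifications
                 if isinstance(c, dict)}
    out = []
    for det in detections:
        cls = by_parent.get(det["detection_id"])
        if cls is None:
            if fallback_class_name is None:
                continue  # unmatched detections are discarded by default
            fid = (fallback_class_id
                   if fallback_class_id is not None and fallback_class_id >= 0
                   else sys.maxsize)
            name, cid, conf = fallback_class_name, fid, det["confidence"]
        else:
            name, cid, conf = leading_class(cls)
        # Fresh detection IDs prevent ID conflicts after replacement.
        out.append({**det, "class": name, "class_id": cid, "confidence": conf,
                    "detection_id": str(uuid.uuid4())})
    return out
```

Note how the fallback path preserves the original detection confidence, while matched detections take on the classifier's confidence, class name, and class ID.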
Common Use Cases¶
- Two-Stage Detection and Classification: Combine generic detection with specialized classification for fine-grained labeling (e.g., detect "dog" then classify breed, detect "vehicle" then classify type, detect "person" then classify age group)
- Class Refinement: Refine generic class labels with specific classifications from specialized models (e.g., refine "animal" to specific species, refine "vehicle" to specific models, refine "food" to specific dishes)
- Multi-Model Workflows: Combine detection and classification models to leverage the strengths of both (e.g., use a generic detector for localization and a specialist classifier for identification, combine coarse and fine-grained models)
- Hierarchical Classification: Apply hierarchical classification where detection provides high-level classes and classification provides detailed sub-classes (e.g., detect "mammal" then classify species, detect "plant" then classify variety, detect "structure" then classify type)
- Crop-Based Classification: Use classification results from cropped regions to enhance detection results (e.g., classify crops to improve detection labels, apply specialized classifiers to detected regions)
- Fine-Grained Object Recognition: Enable fine-grained recognition by combining localization and detailed classification (e.g., recognize specific product models, identify specific animal breeds, classify specific vehicle types)
Connecting to Other Blocks¶
This block receives detection and classification predictions and produces detections with replaced classes:
- After detection and classification model blocks, to combine generic detection with specialized classification (e.g., object detection + classification to produce refined detections)
- After crop blocks that create crops from detections for classification (e.g., crop detections, classify the crops, then replace classes)
- Before visualization blocks, to display detections with refined classes (e.g., show classification-enhanced detections with specific labels)
- Before filtering blocks, to filter detections by their refined classes (e.g., keep only specific fine-grained classes)
- Before analytics blocks, to perform analytics on refined detections (e.g., analyze or track specific classes)
- In workflow outputs, to provide refined detections as the final output (e.g., two-stage detection results)
Requirements¶
This block requires object detection predictions (with bounding boxes) and classification predictions from crops of those bounding boxes. Classification predictions are linked to their source detections via PARENT_ID_KEY (detection_id); raw string or list inputs are instead matched to detections positionally. The block accepts different dimensionality levels: detection predictions at level 1 and classification predictions at level 2 (from crops). For single-label classifications, the "top" class is used. For multi-label classifications, the most confident class is selected. Detections without classification results are discarded unless fallback_class_name is provided. The block outputs detections with replaced classes, class IDs, and confidences, with new detection IDs generated. Output dimensionality matches the input detection predictions (level 1).
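Positional matching for raw string inputs (e.g., OCR output) can be illustrated with a short sketch. This is a hypothetical simplification, not the block's actual code: detections are plain dicts, and the i-th string replaces the class of the i-th detection, with unmatched trailing detections dropped.

```python
def replace_positionally(detections, labels):
    """Match string labels to detections 1:1 by index (order-based).

    String labels get a default confidence of 1.0 and class ID of 0;
    detections beyond the label list are dropped (zip truncates).
    """
    return [{**det, "class": label, "class_id": 0, "confidence": 1.0}
            for det, label in zip(detections, labels)]
```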
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: `roboflow_core/detections_classes_replacement@v1`.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `fallback_class_name` | `str` | Optional class name to use for detections that don't have corresponding classification predictions. If not provided (default `None`), detections without classifications are discarded. If provided, detections without classifications use this fallback class name instead of being removed. Useful for preserving detections when classification fails or is unavailable. | ✅ |
| `fallback_class_id` | `int` | Optional class ID to use with `fallback_class_name` for detections without classification predictions. If not specified or negative, the class ID is set to `sys.maxsize`. Only used when `fallback_class_name` is provided. Should match the class ID mapping used in your model. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Detections Classes Replacement in version v1.
- inputs: Email Notification, OpenAI, Time in Zone, Object Detection Model, Roboflow Dataset Upload, Stitch OCR Detections, EasyOCR, Gaze Detection, Dimension Collapse, Multi-Label Classification Model, CogVLM, Google Gemini, Time in Zone, Dynamic Crop, Velocity, Instance Segmentation Model, Detections Combine, LMM For Classification, Detection Event Log, ByteTrack Tracker, GLM-OCR, Model Monitoring Inference Aggregator, Keypoint Detection Model, Roboflow Custom Metadata, Pixel Color Count, S3 Sink, Detections Classes Replacement, Keypoint Detection Model, Twilio SMS Notification, Detections Stabilizer, Camera Focus, SIFT Comparison, Anthropic Claude, SAM 3, OC-SORT Tracker, OCR Model, Detections Consensus, Instance Segmentation Model, Llama 3.2 Vision, CSV Formatter, Roboflow Dataset Upload, Detection Offset, SORT Tracker, Webhook Sink, Byte Tracker, Detections List Roll-Up, Google Vision OCR, Byte Tracker, Florence-2 Model, Segment Anything 2 Model, Florence-2 Model, VLM As Classifier, Overlap Filter, Google Gemini, SAM 3, Perspective Correction, Anthropic Claude, OpenAI, VLM As Detector, OpenAI, PTZ Tracking (ONVIF), Bounding Rectangle, Qwen3.5-VL, Template Matching, Anthropic Claude, Size Measurement, Email Notification, SIFT Comparison, Multi-Label Classification Model, Line Counter, Time in Zone, Path Deviation, Detections Filter, Stitch OCR Detections, LMM, Detections Merge, Single-Label Classification Model, Detections Transformation, Byte Tracker, Line Counter, SAM 3, Motion Detection, Dynamic Zone, Seg Preview, Single-Label Classification Model, Object Detection Model, OpenAI, Roboflow Vision Events, Local File Sink, Image Contours, VLM As Classifier, Mask Area Measurement, Detections Stitch, YOLO-World Model, Clip Comparison, Twilio SMS/MMS Notification, Buffer, Clip Comparison, Distance Measurement, Google Gemini, Path Deviation, Moondream2, VLM As Detector, Slack Notification
- outputs: Corner Visualization, Ellipse Visualization, Roboflow Dataset Upload, Time in Zone, Stitch OCR Detections, Time in Zone, Dynamic Crop, Velocity, Detections Combine, Trace Visualization, Detection Event Log, Halo Visualization, Dot Visualization, ByteTrack Tracker, Polygon Visualization, Model Monitoring Inference Aggregator, Roboflow Custom Metadata, Pixelate Visualization, Circle Visualization, Icon Visualization, Detections Classes Replacement, Detections Stabilizer, Halo Visualization, Camera Focus, OC-SORT Tracker, Detections Consensus, Polygon Visualization, Crop Visualization, Roboflow Dataset Upload, Mask Visualization, Detection Offset, SORT Tracker, Heatmap Visualization, Byte Tracker, Byte Tracker, Label Visualization, Detections List Roll-Up, Florence-2 Model, Segment Anything 2 Model, Florence-2 Model, Stability AI Inpainting, Overlap Filter, Perspective Correction, PTZ Tracking (ONVIF), Bounding Rectangle, Background Color Visualization, Size Measurement, Keypoint Visualization, Time in Zone, Line Counter, Path Deviation, Detections Filter, Stitch OCR Detections, Detections Merge, Detections Transformation, Byte Tracker, Line Counter, Color Visualization, Dynamic Zone, Roboflow Vision Events, Mask Area Measurement, Detections Stitch, Triangle Visualization, Blur Visualization, Bounding Box Visualization, Distance Measurement, Path Deviation, Model Comparison Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Detections Classes Replacement in version v1 has.
Bindings
- input:
  - `object_detection_predictions` (`Union[keypoint_detection_prediction, instance_segmentation_prediction, object_detection_prediction]`): Detection predictions (object detection, instance segmentation, or keypoint detection) containing bounding boxes with generic class labels that will be replaced with classification results. These detections should correspond to the regions that were cropped and classified. Detections must have detection IDs that match the PARENT_ID_KEY in classification predictions. Detections are at dimensionality level 1.
  - `classification_predictions` (`Union[string, list_of_values, classification_prediction]`): Labels to replace detection class names with. Accepts classification predictions (linked via parent_id), plain strings, or lists of strings (e.g., OCR/LMM output like Gemini). String inputs are matched to detections positionally (1:1 by index). Classification inputs support single-label ("top" class) and multi-label (most confident class).
  - `fallback_class_name` (`string`): Optional class name to use for detections that don't have corresponding classification predictions. If not provided (default `None`), detections without classifications are discarded. If provided, detections without classifications use this fallback class name instead of being removed. Useful for preserving detections when classification fails or is unavailable.
  - `fallback_class_id` (`integer`): Optional class ID to use with `fallback_class_name` for detections without classification predictions. If not specified or negative, the class ID is set to `sys.maxsize`. Only used when `fallback_class_name` is provided. Should match the class ID mapping used in your model.
- output:
  - `predictions` (`Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]`): Prediction in the form of an `sv.Detections(...)` object: with detected bounding boxes if `object_detection_prediction`, with bounding boxes and segmentation masks if `instance_segmentation_prediction`, or with bounding boxes and detected keypoints if `keypoint_detection_prediction`.
Example JSON definition of step Detections Classes Replacement in version v1
```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/detections_classes_replacement@v1",
  "object_detection_predictions": "$steps.object_detection_model.predictions",
  "classification_predictions": "$steps.classification_model.predictions",
  "fallback_class_name": null,
  "fallback_class_id": null
}
```