Per-Class Confidence Filter¶
Class: PerClassConfidenceFilterBlockV1
Filter detection predictions by applying a different confidence threshold to each class, keeping only detections whose confidence meets or exceeds the threshold configured for their class (with a configurable fallback threshold for classes that are not listed).
How This Block Works¶
This block applies class-aware confidence filtering to detection predictions, enabling precise control over which detections are retained based on per-class quality requirements. The block:
- Takes detection predictions (object detection, instance segmentation, or keypoint detection) and a dictionary mapping class names to confidence thresholds
- Iterates through each detection, looking up the threshold associated with the detection's class name
- If the class is not present in the dictionary, falls back to the configurable default_threshold value
- Keeps only the detections whose confidence is greater than or equal to the resolved threshold
- Returns the filtered detections while preserving all original metadata (class ids, masks, keypoints, tracker ids, etc.)
Unlike a single global confidence threshold, this block lets you demand high-confidence predictions for classes that are prone to false positives while keeping a more permissive threshold for classes that are harder to detect. Unlike the generic detections filter, it exposes a purpose-built dictionary input that maps cleanly to a simple {"class_name": threshold} JSON object.
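The per-class lookup described above can be sketched in plain Python. This is an illustrative stand-in, not the block's actual implementation: detections are modeled as simple dictionaries rather than the sv.Detections object the block really operates on, and the function name is hypothetical.

```python
def filter_by_class_confidence(detections, class_thresholds, default_threshold=0.3):
    """Keep detections whose confidence meets their per-class threshold.

    `detections` is a list of dicts with "class_name" and "confidence" keys,
    a simplified stand-in for the block's sv.Detections input.
    """
    kept = []
    for det in detections:
        # Classes missing from the mapping fall back to default_threshold
        threshold = class_thresholds.get(det["class_name"], default_threshold)
        if det["confidence"] >= threshold:
            kept.append(det)
    return kept

detections = [
    {"class_name": "person", "confidence": 0.95},
    {"class_name": "person", "confidence": 0.80},  # below per-class 0.9, dropped
    {"class_name": "car", "confidence": 0.60},
    {"class_name": "dog", "confidence": 0.35},     # unlisted class, default 0.3 applies
]
filtered = filter_by_class_confidence(
    detections, {"person": 0.9, "car": 0.5}, default_threshold=0.3
)
# Keeps the 0.95 person, the 0.60 car, and the 0.35 dog
```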
Common Use Cases¶
- Noise-prone classes: Demand very high confidence (e.g. 0.9) for classes that frequently produce false positives, while accepting lower confidence for well-behaved classes
- Hard-to-detect classes: Lower the threshold for classes that the model rarely detects with high confidence so that they are not filtered out entirely
- Production-grade filtering: Apply domain-specific thresholds tuned during evaluation so that downstream analytics, alerts, or counting blocks only see detections that meet the project's quality bar
- Multi-class pipelines: Combine with object detection models that predict many classes at once when a single global confidence threshold is too coarse
Connecting to Other Blocks¶
The filtered predictions from this block can be connected to:
- Visualization blocks (Bounding Box Visualization, Label Visualization, Polygon Visualization) to render only detections that cleared their per-class threshold
- Counting and analytics blocks (Line Counter, Time in Zone, Velocity) so that metrics reflect only high-quality detections
- Tracking blocks (Byte Tracker) so that tracker associations are not polluted by low-confidence noise
- Storage or sink blocks (Roboflow Dataset Upload, Webhook Sink, CSV Formatter) so that only detections meeting the quality bar are persisted or transmitted
- Downstream transformation blocks (Dynamic Crop, Detection Offset) for subsequent processing on the filtered subset
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/per_class_confidence_filter@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| class_thresholds | Dict[str, float] | Mapping of class name to minimum confidence threshold. Detections whose class name is present in this dictionary are kept only if their confidence is at least the corresponding threshold. Classes not present fall back to default_threshold. Thresholds should be in the [0.0, 1.0] range. | ✅ |
| default_threshold | float | Confidence threshold applied to detections whose class name is not listed in class_thresholds. Must be in the [0.0, 1.0] range. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
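For example, class_thresholds can be bound to a workflow input rather than a literal mapping, so thresholds can be supplied at request time. The input name thresholds_input below is illustrative:

```json
{
  "name": "per_class_filter",
  "type": "roboflow_core/per_class_confidence_filter@v1",
  "predictions": "$steps.object_detection_model.predictions",
  "class_thresholds": "$inputs.thresholds_input",
  "default_threshold": 0.3
}
```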
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Per-Class Confidence Filter in version v1.
- inputs:
Detections Stitch,Detections Stabilizer,Seg Preview,Detections Filter,Object Detection Model,Dynamic Zone,Mask Edge Snap,OCR Model,Gaze Detection,Keypoint Detection Model,Google Vision OCR,Line Counter,Identify Outliers,Time in Zone,Qwen2.5-VL,Instance Segmentation Model,EasyOCR,Detections Combine,Object Detection Model,Motion Detection,SAM2 Video Tracker,Detection Event Log,Byte Tracker,ByteTrack Tracker,Bounding Rectangle,Clip Comparison,VLM As Detector,Byte Tracker,LMM,Detections Consensus,Detections Classes Replacement,Time in Zone,Detections Merge,Qwen3-VL,Perspective Correction,Overlap Filter,Velocity,Object Detection Model,Identify Changes,Byte Tracker,YOLO-World Model,OpenAI,Detection Offset,SmolVLM2,SAM 3,Instance Segmentation Model,SAM 3,VLM As Detector,Detections List Roll-Up,Template Matching,Mask Area Measurement,Dynamic Crop,SORT Tracker,Per-Class Confidence Filter,Moondream2,Keypoint Detection Model,Qwen3.5-VL,Florence-2 Model,Segment Anything 2 Model,Detections Transformation,Instance Segmentation Model,CogVLM,Keypoint Detection Model,Florence-2 Model,Path Deviation,Time in Zone,OC-SORT Tracker,PTZ Tracking (ONVIF),Path Deviation,SAM 3
- outputs:
Detections Stabilizer,Detections Stitch,Roboflow Dataset Upload,Mask Edge Snap,Distance Measurement,Color Visualization,Detections Combine,SAM2 Video Tracker,Bounding Rectangle,Ellipse Visualization,ByteTrack Tracker,Polygon Visualization,Detection Event Log,Byte Tracker,Byte Tracker,Detections Consensus,Detections Classes Replacement,Time in Zone,Model Comparison Visualization,Stitch OCR Detections,Trace Visualization,Camera Focus,Roboflow Custom Metadata,Detection Offset,Detections List Roll-Up,Size Measurement,Mask Area Measurement,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,Detections Transformation,Crop Visualization,Florence-2 Model,Path Deviation,Time in Zone,Dot Visualization,OC-SORT Tracker,Path Deviation,Model Monitoring Inference Aggregator,Icon Visualization,Detections Filter,Roboflow Dataset Upload,Dynamic Zone,Pixelate Visualization,Line Counter,Time in Zone,Blur Visualization,Detections Merge,Perspective Correction,Overlap Filter,Line Counter,Velocity,Bounding Box Visualization,Byte Tracker,Stability AI Inpainting,Polygon Visualization,Roboflow Vision Events,Label Visualization,Corner Visualization,Dynamic Crop,Per-Class Confidence Filter,Keypoint Visualization,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,Background Color Visualization,PTZ Tracking (ONVIF),Stitch OCR Detections
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Per-Class Confidence Filter in version v1 has.
Bindings
-
input
- predictions (Union[object_detection_prediction, keypoint_detection_prediction, instance_segmentation_prediction]): Detection predictions to filter. Each detection is kept only if its confidence is greater than or equal to the threshold configured for its class (with a fallback to default_threshold for classes that are not listed in class_thresholds).
- class_thresholds (dictionary): Mapping of class name to minimum confidence threshold. Detections whose class name is present in this dictionary are kept only if their confidence is at least the corresponding threshold. Classes not present fall back to default_threshold. Thresholds should be in the [0.0, 1.0] range.
- default_threshold (float_zero_to_one): Confidence threshold applied to detections whose class name is not listed in class_thresholds. Must be in the [0.0, 1.0] range.
-
output
- predictions (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Prediction in the form of an sv.Detections(...) object, with detected bounding boxes if object_detection_prediction, with bounding boxes and segmentation masks if instance_segmentation_prediction, or with bounding boxes and detected keypoints if keypoint_detection_prediction.
Example JSON definition of step Per-Class Confidence Filter in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/per_class_confidence_filter@v1",
"predictions": "$steps.object_detection_model.predictions",
"class_thresholds": {
"car": 0.5,
"person": 0.98
},
"default_threshold": 0.3
}