SORT Tracker¶
Class: SORTBlockV1
Source: inference.core.workflows.core_steps.trackers.sort.v1.SORTBlockV1
Track objects across video frames using the SORT algorithm from the roboflow/trackers package.
SORT pairs a Kalman filter motion model with single-stage IoU-based Hungarian assignment. It has the fewest parameters and lowest overhead, processing hundreds of frames per second. However, it lacks re-identification and occlusion-recovery mechanisms, so tracks may fragment or switch IDs when objects are temporarily hidden.
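To make the association step concrete, here is a minimal sketch (not the roboflow/trackers implementation) of single-stage IoU-based Hungarian assignment with a gating threshold, using NumPy and SciPy; function names are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou_matrix(tracks, detections):
    """Pairwise IoU between two sets of xyxy boxes."""
    tracks = np.asarray(tracks, dtype=float)
    detections = np.asarray(detections, dtype=float)
    x1 = np.maximum(tracks[:, None, 0], detections[None, :, 0])
    y1 = np.maximum(tracks[:, None, 1], detections[None, :, 1])
    x2 = np.minimum(tracks[:, None, 2], detections[None, :, 2])
    y2 = np.minimum(tracks[:, None, 3], detections[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_t = (tracks[:, 2] - tracks[:, 0]) * (tracks[:, 3] - tracks[:, 1])
    area_d = (detections[:, 2] - detections[:, 0]) * (detections[:, 3] - detections[:, 1])
    union = area_t[:, None] + area_d[None, :] - inter
    return inter / np.maximum(union, 1e-9)


def associate(tracks, detections, min_iou=0.3):
    """Hungarian assignment on negated IoU (maximising total IoU);
    pairs below min_iou are rejected, mirroring the gating role of
    minimum_iou_threshold."""
    iou = iou_matrix(tracks, detections)
    rows, cols = linear_sum_assignment(-iou)
    return [(r, c) for r, c in zip(rows, cols) if iou[r, c] >= min_iou]
```

In the full tracker, the predicted Kalman-filter boxes play the role of `tracks` here; unmatched detections seed new tentative tracks and unmatched tracks are aged against the lost-track buffer.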
When to use SORT:
- Controlled environments with reliable, high-confidence detections.
- Real-time pipelines where maximum throughput is critical.
- Simple scenes with minimal occlusion and predictable linear motion.

When to consider alternatives:
- If you see fragmented tracks or missed weak detections, try ByteTrack.
- If objects undergo heavy occlusion or non-linear motion, try OC-SORT.

Outputs three detection sets:
- tracked_detections: All confirmed tracked detections with assigned track IDs.
- new_instances: Detections whose track ID appears for the first time.
- already_seen_instances: Detections whose track ID has been seen in a prior frame.
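The split into new versus already-seen instances can be sketched as a simple membership check against a cache of previously observed track IDs. This is an illustrative sketch, not the block's actual code, which operates on sv.Detections objects:

```python
def split_by_novelty(tracker_ids, seen_cache):
    """Partition detection indices into new vs already-seen by track ID.
    Unconfirmed tracks (tracker_id == -1) are excluded from both sets."""
    new, already_seen = [], []
    for idx, tid in enumerate(tracker_ids):
        if tid == -1:  # track not yet confirmed by the tracker
            continue
        if tid in seen_cache:
            already_seen.append(idx)
        else:
            new.append(idx)
            seen_cache.add(tid)  # remember the ID for later frames
    return new, already_seen
```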
The block maintains separate tracker state and instance cache per video_identifier,
enabling multi-stream tracking within a single workflow.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/trackers_sort@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| minimum_iou_threshold | float | Minimum IoU required to associate a detection with an existing track. Default: 0.3. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames a track must be matched before it is emitted as a confirmed track (tracker_id != -1). Default: 3. | ✅ |
| lost_track_buffer | int | Number of frames to keep a track alive after it loses its matched detection. Higher values improve occlusion recovery. Default: 30. | ✅ |
| track_activation_threshold | float | Minimum detection confidence required to spawn a new track. Detections below this threshold are not used to create new tracks. Default: 0.25. | ✅ |
| instances_cache_size | int | Maximum number of track IDs retained in the instance cache for new/already-seen categorisation. Uses FIFO eviction. Default: 16384. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
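The FIFO eviction behaviour described for instances_cache_size can be illustrated with a small cache class. This is an assumed sketch of the behaviour, not the block's actual implementation:

```python
from collections import OrderedDict


class FIFOTrackCache:
    """Illustrative bounded cache of track IDs with FIFO eviction,
    mirroring the instances_cache_size behaviour described above."""

    def __init__(self, max_size=16384):
        self.max_size = max_size
        self._ids = OrderedDict()

    def seen_before(self, track_id):
        """Return True if the ID was already cached; otherwise cache it,
        evicting the oldest entry when the cache is full."""
        if track_id in self._ids:
            return True
        if len(self._ids) >= self.max_size:
            self._ids.popitem(last=False)  # evict the oldest ID (FIFO)
        self._ids[track_id] = True
        return False
```

Note the practical consequence: with long videos and many tracks, an evicted ID that reappears would be reported as a new instance again, so the cache size should comfortably exceed the number of distinct tracks expected per stream.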
Available Connections¶
Compatible Blocks
Check what blocks you can connect to SORT Tracker in version v1.
- inputs:
Depth Estimation,Pixelate Visualization,Object Detection Model,Velocity,Label Visualization,Reference Path Visualization,Google Vision OCR,VLM As Detector,Keypoint Detection Model,Crop Visualization,Detections Classes Replacement,Halo Visualization,Detections Consensus,SAM 3,Corner Visualization,Clip Comparison,Detection Event Log,Segment Anything 2 Model,Stability AI Inpainting,Detections Stitch,Byte Tracker,OC-SORT Tracker,Template Matching,SIFT Comparison,ByteTrack Tracker,Absolute Static Crop,Object Detection Model,Halo Visualization,Instance Segmentation Model,Polygon Visualization,Keypoint Visualization,SIFT,Identify Outliers,Time in Zone,SAM 3,Trace Visualization,Image Slicer,Camera Focus,Circle Visualization,Icon Visualization,Background Subtraction,Byte Tracker,VLM As Detector,Stability AI Outpainting,Triangle Visualization,QR Code Generator,Ellipse Visualization,Instance Segmentation Model,Stitch Images,Moondream2,EasyOCR,Text Display,Color Visualization,Pixel Color Count,Time in Zone,OCR Model,Path Deviation,Image Threshold,Detections List Roll-Up,Line Counter,Time in Zone,SIFT Comparison,Mask Visualization,Seg Preview,Camera Calibration,YOLO-World Model,Grid Visualization,Morphological Transformation,Model Comparison Visualization,Path Deviation,Image Preprocessing,SORT Tracker,Detections Transformation,Dynamic Zone,Mask Area Measurement,Image Contours,Detections Combine,Image Blur,Overlap Filter,Perspective Correction,Identify Changes,Bounding Box Visualization,Heatmap Visualization,Detections Merge,Gaze Detection,Distance Measurement,Byte Tracker,Polygon Zone Visualization,Contrast Equalization,Detections Filter,Line Counter,PTZ Tracking (ONVIF),Dynamic Crop,Line Counter Visualization,Background Color Visualization,Camera Focus,Keypoint Detection Model,SAM 3,Motion Detection,Image Convert Grayscale,Polygon Visualization,Classification Label Visualization,Detections Stabilizer,Image Slicer,Bounding Rectangle,Stability AI Image Generation,Detection 
Offset,Relative Static Crop,Blur Visualization,Dot Visualization
- outputs:
Florence-2 Model,Pixelate Visualization,Velocity,Label Visualization,Crop Visualization,Detections Classes Replacement,Detections Consensus,Roboflow Dataset Upload,Corner Visualization,Segment Anything 2 Model,Detection Event Log,Detections Stitch,OC-SORT Tracker,Byte Tracker,ByteTrack Tracker,Time in Zone,Trace Visualization,Circle Visualization,Byte Tracker,Icon Visualization,Triangle Visualization,Ellipse Visualization,Color Visualization,Time in Zone,Model Monitoring Inference Aggregator,Path Deviation,Detections List Roll-Up,Line Counter,Time in Zone,Path Deviation,Model Comparison Visualization,SORT Tracker,Detections Transformation,Roboflow Custom Metadata,Mask Area Measurement,Detections Combine,Overlap Filter,Perspective Correction,Bounding Box Visualization,Detections Merge,Heatmap Visualization,Distance Measurement,Byte Tracker,Detections Filter,Stitch OCR Detections,Line Counter,PTZ Tracking (ONVIF),Dynamic Crop,Background Color Visualization,Camera Focus,Florence-2 Model,Detections Stabilizer,Detection Offset,Size Measurement,Blur Visualization,Roboflow Dataset Upload,Stitch OCR Detections,Dot Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds SORT Tracker in version v1 has.
Bindings
- input
    - image (image): Input image with embedded video metadata (fps and video_identifier). Used to initialise and retrieve per-video tracker state.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction, keypoint_detection_prediction]): Detection predictions for the current frame to track.
    - minimum_iou_threshold (float_zero_to_one): Minimum IoU required to associate a detection with an existing track. Default: 0.3.
    - minimum_consecutive_frames (integer): Number of consecutive frames a track must be matched before it is emitted as a confirmed track (tracker_id != -1). Default: 3.
    - lost_track_buffer (integer): Number of frames to keep a track alive after it loses its matched detection. Higher values improve occlusion recovery. Default: 30.
    - track_activation_threshold (float_zero_to_one): Minimum detection confidence required to spawn a new track. Detections below this threshold are not used to create new tracks. Default: 0.25.
- output
    - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in form of sv.Detections(...) object.
    - new_instances (object_detection_prediction): Prediction with detected bounding boxes in form of sv.Detections(...) object.
    - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in form of sv.Detections(...) object.
Example JSON definition of step SORT Tracker in version v1

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/trackers_sort@v1",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "minimum_iou_threshold": 0.3,
    "minimum_consecutive_frames": 3,
    "lost_track_buffer": 30,
    "track_activation_threshold": 0.25,
    "instances_cache_size": "<block_does_not_provide_example>"
}
```
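A step definition like the one above can also be assembled programmatically, which is convenient when a ✅ property should be bound to a runtime input instead of a literal. The `$inputs.iou` selector and the helper function below are hypothetical names used only to illustrate the Refs/Bindings mechanism:

```python
import json


def sort_tracker_step(name, detections_ref, iou=0.3):
    """Build a SORT Tracker step dict; 'iou' may be a float literal or a
    '$inputs.*' selector for a runtime-parametrised workflow input."""
    return {
        "name": name,
        "type": "roboflow_core/trackers_sort@v1",
        "detections": detections_ref,
        "minimum_iou_threshold": iou,
        "minimum_consecutive_frames": 3,
        "lost_track_buffer": 30,
        "track_activation_threshold": 0.25,
    }


# Bind minimum_iou_threshold to a hypothetical workflow input.
step = sort_tracker_step(
    "tracker",
    "$steps.object_detection_model.predictions",
    iou="$inputs.iou",
)
print(json.dumps(step, indent=2))
```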