OC-SORT Tracker¶
Class: OCSORTBlockV1
Source: inference.core.workflows.core_steps.trackers.ocsort.v1.OCSORTBlockV1
Track objects across video frames using the OC-SORT algorithm from the roboflow/trackers package.
OC-SORT extends SORT with two key mechanisms:
- Observation-Centric Re-Update (OCR): When a track reappears after occlusion, OC-SORT retroactively corrects the Kalman filter using the real observations before and after the gap, reducing accumulated drift.
- Observation-Centric Momentum (OCM): A direction-consistency cost is blended with IoU during association, penalising matches where the candidate detection lies in a direction inconsistent with the track's recent motion.
This makes OC-SORT significantly more robust than SORT in scenes with heavy occlusion, erratic motion, and uniform appearance.
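The OCM idea can be sketched in a few lines. This is an illustrative simplification, not the `roboflow/trackers` implementation: the `[x1, y1, x2, y2]` box format, the angle-based penalty, and the function names are assumptions for the sake of the example.

```python
import math

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def center(box):
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def association_cost(track_prev_box, track_curr_box, det_box, direction_weight=0.2):
    """Blend (1 - IoU) with an OCM-style direction-consistency penalty.

    The track's historical motion direction (prev -> curr centre) is compared
    with the direction from the track's current position to the candidate
    detection; misaligned directions raise the cost.
    """
    px, py = center(track_prev_box)
    cx, cy = center(track_curr_box)
    dx, dy = center(det_box)
    v_track = (cx - px, cy - py)          # historical motion of the track
    v_det = (dx - cx, dy - cy)            # direction towards the candidate
    norm = math.hypot(*v_track) * math.hypot(*v_det)
    if norm == 0:
        direction_penalty = 0.0           # no motion evidence, no penalty
    else:
        cos = max(-1.0, min(1.0, (v_track[0] * v_det[0] + v_track[1] * v_det[1]) / norm))
        direction_penalty = math.acos(cos) / math.pi  # 0 aligned, 1 opposite
    return (1.0 - iou(track_curr_box, det_box)) + direction_weight * direction_penalty
```

A detection ahead of a rightward-moving track gets a lower cost than an equally plausible detection behind it, which is exactly the effect `direction_consistency_weight` controls in the block.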
When to use OC-SORT:
- Crowded scenes with frequent and prolonged occlusions (e.g. pedestrians, warehouse workers).
- Non-linear or erratic motion patterns (e.g. dancing, sports with abrupt direction changes).
- When identity consistency over long sequences is more important than raw speed.

When to consider alternatives:
- For general-purpose tracking with mixed-confidence detections, try ByteTrack.
- For maximum simplicity and speed with a strong detector, try SORT.

Outputs three detection sets:
- tracked_detections: All confirmed tracked detections with assigned track IDs.
- new_instances: Detections whose track ID appears for the first time.
- already_seen_instances: Detections whose track ID has been seen in a prior frame.
The block maintains separate tracker state and instance cache per video_identifier,
enabling multi-stream tracking within a single workflow.
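The per-stream bookkeeping can be sketched as follows. This is a simplified model of the behaviour described above, not the block's actual code; `FIFO_CACHE_SIZE` mirrors the `instances_cache_size` property.

```python
from collections import OrderedDict, defaultdict

FIFO_CACHE_SIZE = 16384  # mirrors the instances_cache_size default

class PerVideoState:
    """One instance cache per video_identifier, enabling multi-stream use."""

    def __init__(self):
        # video_identifier -> OrderedDict used as a FIFO set of seen track IDs
        self._seen = defaultdict(OrderedDict)

    def split_new_and_seen(self, video_identifier, track_ids):
        """Partition this frame's track IDs into first-time and repeat IDs."""
        cache = self._seen[video_identifier]
        new_ids, seen_ids = [], []
        for track_id in track_ids:
            if track_id in cache:
                seen_ids.append(track_id)
            else:
                new_ids.append(track_id)
                cache[track_id] = True
                if len(cache) > FIFO_CACHE_SIZE:
                    cache.popitem(last=False)  # FIFO eviction of the oldest ID
        return new_ids, seen_ids
```

Because each `video_identifier` gets its own cache, track ID 2 on `cam_a` and track ID 2 on `cam_b` are categorised independently.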
Type identifier¶
Use the following identifier in the step "type" field to add the block as
a step in your workflow: roboflow_core/trackers_ocsort@v1.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| minimum_iou_threshold | float | Minimum IoU required to associate a detection with an existing track. Default: 0.3. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames a track must be matched before it is emitted as a confirmed track (tracker_id != -1). Default: 3. | ✅ |
| lost_track_buffer | int | Number of frames to keep a track alive after it loses its matched detection. Higher values improve occlusion recovery. Default: 30. | ✅ |
| high_conf_det_threshold | float | Confidence threshold for high-confidence detections used in association. Default: 0.6. | ✅ |
| direction_consistency_weight | float | Weight for the direction consistency term in the OC-SORT association cost. Higher values prioritise alignment between historical motion direction and the direction to the candidate detection. Default: 0.2. | ✅ |
| delta_t | int | Number of past frames used by OC-SORT to estimate per-track velocity for direction consistency momentum. Default: 3. | ✅ |
| instances_cache_size | int | Maximum number of track IDs retained in the instance cache for new/already-seen categorisation. Uses FIFO eviction. Default: 16384. | ❌ |
The Refs column marks whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
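For example, a ✅-marked property can take either a literal value or a runtime selector. The sketch below shows both forms as Python dicts equivalent to the step JSON; the workflow input name `iou` is hypothetical.

```python
# A ✅-marked property set statically...
static_step = {
    "name": "tracker",
    "type": "roboflow_core/trackers_ocsort@v1",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "minimum_iou_threshold": 0.3,
}

# ...or bound to a workflow input resolved at runtime
# (the input name "iou" is a hypothetical example).
dynamic_step = {
    **static_step,
    "minimum_iou_threshold": "$inputs.iou",
}
```

❌-marked properties such as `name` and `instances_cache_size` accept only literal values.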
Available Connections¶
Compatible Blocks
Check what blocks you can connect to OC-SORT Tracker in version v1.
- inputs:
Detections Stitch,Detections Stabilizer,Line Counter Visualization,Stability AI Outpainting,Object Detection Model,Mask Edge Snap,OCR Model,Image Slicer,Gaze Detection,Google Vision OCR,Identify Outliers,Image Preprocessing,Instance Segmentation Model,Distance Measurement,EasyOCR,Color Visualization,Detections Combine,Object Detection Model,SAM2 Video Tracker,Detection Event Log,Ellipse Visualization,Polygon Visualization,Byte Tracker,ByteTrack Tracker,Bounding Rectangle,Relative Static Crop,Byte Tracker,Detections Consensus,Detections Classes Replacement,Time in Zone,Model Comparison Visualization,Trace Visualization,Object Detection Model,Camera Focus,YOLO-World Model,Detection Offset,SAM 3,Instance Segmentation Model,Detections List Roll-Up,Image Threshold,Template Matching,Mask Area Measurement,Stitch Images,Heatmap Visualization,SORT Tracker,SIFT Comparison,Morphological Transformation,Halo Visualization,Detections Transformation,Instance Segmentation Model,Crop Visualization,Camera Calibration,Path Deviation,Time in Zone,Dot Visualization,OC-SORT Tracker,Path Deviation,SAM 3,Seg Preview,Icon Visualization,Detections Filter,Dynamic Zone,Image Contours,Pixelate Visualization,Keypoint Detection Model,Line Counter,Time in Zone,Polygon Zone Visualization,Pixel Color Count,Reference Path Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,Clip Comparison,VLM As Detector,Stability AI Image Generation,Detections Merge,Perspective Correction,Overlap Filter,Line Counter,Bounding Box Visualization,Velocity,Depth Estimation,Identify Changes,Classification Label Visualization,Image Slicer,Absolute Static Crop,Image Blur,Stability AI Inpainting,Byte Tracker,Polygon Visualization,Image Convert Grayscale,SAM 3,SIFT,VLM As Detector,Label Visualization,Corner Visualization,Grid Visualization,Dynamic Crop,Contrast Equalization,Keypoint Visualization,Triangle Visualization,Per-Class Confidence Filter,Moondream2,Keypoint Detection Model,QR Code Generator,Halo Visualization,Circle Visualization,Camera Focus,Segment Anything 2 Model,Mask Visualization,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization,PTZ Tracking (ONVIF),SIFT Comparison
- outputs:
Detections Stabilizer,Detections Stitch,Roboflow Dataset Upload,Mask Edge Snap,Distance Measurement,Color Visualization,Detections Combine,SAM2 Video Tracker,Bounding Rectangle,Ellipse Visualization,ByteTrack Tracker,Polygon Visualization,Detection Event Log,Byte Tracker,Byte Tracker,Detections Consensus,Detections Classes Replacement,Time in Zone,Model Comparison Visualization,Stitch OCR Detections,Trace Visualization,Camera Focus,Roboflow Custom Metadata,Detection Offset,Detections List Roll-Up,Size Measurement,Mask Area Measurement,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,Detections Transformation,Crop Visualization,Florence-2 Model,Path Deviation,Time in Zone,Dot Visualization,OC-SORT Tracker,Path Deviation,Model Monitoring Inference Aggregator,Icon Visualization,Detections Filter,Roboflow Dataset Upload,Dynamic Zone,Pixelate Visualization,Line Counter,Time in Zone,Blur Visualization,Detections Merge,Perspective Correction,Overlap Filter,Line Counter,Velocity,Bounding Box Visualization,Byte Tracker,Stability AI Inpainting,Polygon Visualization,Roboflow Vision Events,Label Visualization,Corner Visualization,Dynamic Crop,Per-Class Confidence Filter,Keypoint Visualization,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,Background Color Visualization,PTZ Tracking (ONVIF),Stitch OCR Detections
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
OC-SORT Tracker in version v1 has.
Bindings
- input
  - image (image): Input image with embedded video metadata (fps and video_identifier). Used to initialise and retrieve per-video tracker state.
  - detections (Union[object_detection_prediction, rle_instance_segmentation_prediction, keypoint_detection_prediction, instance_segmentation_prediction]): Detection predictions for the current frame to track.
  - minimum_iou_threshold (float_zero_to_one): Minimum IoU required to associate a detection with an existing track. Default: 0.3.
  - minimum_consecutive_frames (integer): Number of consecutive frames a track must be matched before it is emitted as a confirmed track (tracker_id != -1). Default: 3.
  - lost_track_buffer (integer): Number of frames to keep a track alive after it loses its matched detection. Higher values improve occlusion recovery. Default: 30.
  - high_conf_det_threshold (float_zero_to_one): Confidence threshold for high-confidence detections used in association. Default: 0.6.
  - direction_consistency_weight (float_zero_to_one): Weight for the direction consistency term in the OC-SORT association cost. Higher values prioritise alignment between historical motion direction and the direction to the candidate detection. Default: 0.2.
  - delta_t (integer): Number of past frames used by OC-SORT to estimate per-track velocity for direction consistency momentum. Default: 3.
- output
  - tracked_detections (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction, rle_instance_segmentation_prediction]): Prediction in the form of an sv.Detections(...) object: detected bounding boxes for object_detection_prediction, boxes with segmentation masks for instance_segmentation_prediction, boxes with detected keypoints for keypoint_detection_prediction, or boxes with RLE-encoded segmentation masks for rle_instance_segmentation_prediction.
  - new_instances (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction, rle_instance_segmentation_prediction]): Same forms as tracked_detections, depending on the input prediction kind.
  - already_seen_instances (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction, rle_instance_segmentation_prediction]): Same forms as tracked_detections, depending on the input prediction kind.
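Assuming unconfirmed detections surface with tracker_id == -1, as the minimum_consecutive_frames description suggests, downstream steps often keep only confirmed tracks. A plain-Python sketch (parallel lists stand in for sv.Detections fields):

```python
def keep_confirmed(tracker_ids, boxes):
    """Drop detections whose track is not yet confirmed (tracker_id == -1)."""
    kept_ids, kept_boxes = [], []
    for track_id, box in zip(tracker_ids, boxes):
        if track_id != -1:  # -1 marks a track still below minimum_consecutive_frames
            kept_ids.append(track_id)
            kept_boxes.append(box)
    return kept_ids, kept_boxes
```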
Example JSON definition of step OC-SORT Tracker in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/trackers_ocsort@v1",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"minimum_iou_threshold": 0.3,
"minimum_consecutive_frames": 3,
"lost_track_buffer": 30,
"high_conf_det_threshold": 0.6,
"direction_consistency_weight": 0.2,
"delta_t": 3,
"instances_cache_size": "<block_does_not_provide_example>"
}