Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
New outputs introduced in v3
The block is unchanged compared to v2, apart from two new outputs:

- new_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs seen for the first time. A specific tracked instance appears in this output only once, when its tracker ID is generated.
- already_seen_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs that were already seen. A specific tracked instance appears in this output every time the tracker associates a bounding box with an already seen tracker ID.
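The split between the two outputs can be sketched in plain Python. This is a minimal illustration only, with a hypothetical `SeenTrackerCache` helper; the real block keeps an internal cache of tracker IDs (sized by `instances_cache_size`) and emits sv.Detections objects rather than bare IDs:

```python
from collections import OrderedDict


class SeenTrackerCache:
    """Bounded cache of tracker IDs, mimicking the block's instances cache.

    A tracker ID lands in 'new' the first time it appears, and in
    'already seen' on every later appearance (while still cached).
    """

    def __init__(self, cache_size: int = 16384):
        self._cache: OrderedDict = OrderedDict()
        self._cache_size = cache_size

    def split(self, tracker_ids):
        new, already_seen = [], []
        for tracker_id in tracker_ids:
            if tracker_id in self._cache:
                already_seen.append(tracker_id)
                self._cache.move_to_end(tracker_id)  # keep recently seen IDs alive
            else:
                new.append(tracker_id)
                self._cache[tracker_id] = None
                if len(self._cache) > self._cache_size:
                    self._cache.popitem(last=False)  # evict the oldest ID
        return new, already_seen


cache = SeenTrackerCache(cache_size=4)
print(cache.split([1, 2]))     # → ([1, 2], []): both IDs are new on frame 1
print(cache.split([1, 2, 3]))  # → ([3], [1, 2]): 1 and 2 already seen, 3 is new
```

Note that once an ID is evicted from the bounded cache, a re-appearing instance would be reported as new again, which is why the cache size matters for long videos.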
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v3 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks. | ✅ |
| instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌ |
The Refs column marks whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
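The roles of minimum_matching_threshold and lost_track_buffer can be illustrated with a greatly simplified, dependency-free sketch. The `ToyTracker` below is a hypothetical stand-in that does greedy IoU association only; the real block runs the full ByteTrack algorithm:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


class ToyTracker:
    """Greedy IoU tracker: a toy stand-in for ByteTrack's association step."""

    def __init__(self, minimum_matching_threshold=0.8, lost_track_buffer=30):
        self.minimum_matching_threshold = minimum_matching_threshold
        self.lost_track_buffer = lost_track_buffer
        self.tracks = {}  # tracker_id -> (box, frames_since_last_seen)
        self.next_id = 1

    def update(self, boxes):
        assigned = {}
        unmatched = dict(self.tracks)
        for box in boxes:
            best_id, best_iou = None, self.minimum_matching_threshold
            for tid, (tbox, _) in unmatched.items():
                score = iou(box, tbox)
                if score >= best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:  # no track overlaps enough: start a new one
                best_id = self.next_id
                self.next_id += 1
            else:
                unmatched.pop(best_id)
            assigned[best_id] = box
            self.tracks[best_id] = (box, 0)
        # Age out tracks that stayed unmatched longer than lost_track_buffer.
        for tid, (tbox, age) in unmatched.items():
            if age + 1 <= self.lost_track_buffer:
                self.tracks[tid] = (tbox, age + 1)
            else:
                del self.tracks[tid]
        return assigned


tracker = ToyTracker(minimum_matching_threshold=0.5)
print(tracker.update([(0, 0, 10, 10)]))        # {1: (0, 0, 10, 10)}: first track
print(tracker.update([(1, 1, 11, 11)]))        # {1: (1, 1, 11, 11)}: drifted box keeps ID 1
print(tracker.update([(100, 100, 110, 110)]))  # {2: (100, 100, 110, 110)}: far away, new ID
```

Raising the matching threshold makes the drifted box in frame 2 more likely to spawn a fresh ID (fragmentation); a larger lost_track_buffer keeps the unmatched track 1 alive through longer detection gaps.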
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v3.
- inputs:
Time in Zone, Byte Tracker, Circle Visualization, Byte Tracker, QR Code Generator, Google Vision OCR, Overlap Filter, Distance Measurement, Velocity, Detections Stitch, Label Visualization, Image Slicer, Byte Tracker, Mask Visualization, Classification Label Visualization, Color Visualization, Stability AI Outpainting, Instance Segmentation Model, Moondream2, Identify Outliers, Grid Visualization, Dynamic Zone, PTZ Tracking (ONVIF), Halo Visualization, Time in Zone, Image Slicer, Pixel Color Count, Polygon Visualization, Bounding Rectangle, OCR Model, Model Comparison Visualization, Detections Filter, Template Matching, Detections Consensus, Stability AI Inpainting, Detections Merge, Image Preprocessing, SAM 3, Seg Preview, SIFT, Dynamic Crop, Ellipse Visualization, Identify Changes, Camera Focus, Stitch Images, VLM as Detector, Keypoint Detection Model, Contrast Equalization, Gaze Detection, Line Counter, Depth Estimation, Keypoint Detection Model, Image Threshold, Blur Visualization, Detections Combine, Pixelate Visualization, Corner Visualization, VLM as Detector, EasyOCR, Detections Classes Replacement, Relative Static Crop, Path Deviation, Image Blur, SIFT Comparison, Background Subtraction, Image Contours, SIFT Comparison, Line Counter Visualization, Reference Path Visualization, Image Convert Grayscale, YOLO-World Model, Bounding Box Visualization, Triangle Visualization, Segment Anything 2 Model, Instance Segmentation Model, Perspective Correction, Dot Visualization, Detections Transformation, Time in Zone, Detection Offset, Background Color Visualization, SAM 3, Path Deviation, SAM 3, Trace Visualization, Camera Calibration, Morphological Transformation, Motion Detection, Absolute Static Crop, Stability AI Image Generation, Clip Comparison, Camera Focus, Icon Visualization, Object Detection Model, Crop Visualization, Detections Stabilizer, Object Detection Model, Line Counter, Keypoint Visualization, Polygon Zone Visualization
- outputs:
Time in Zone, Byte Tracker, Circle Visualization, Byte Tracker, Overlap Filter, Distance Measurement, Velocity, Detections Stitch, Label Visualization, Byte Tracker, Florence-2 Model, Color Visualization, Roboflow Custom Metadata, PTZ Tracking (ONVIF), Time in Zone, Roboflow Dataset Upload, Model Comparison Visualization, Detections Filter, Detections Consensus, Detections Merge, Dynamic Crop, Ellipse Visualization, Line Counter, Detections Combine, Blur Visualization, Pixelate Visualization, Corner Visualization, Stitch OCR Detections, Detections Classes Replacement, Path Deviation, Size Measurement, Roboflow Dataset Upload, Segment Anything 2 Model, Bounding Box Visualization, Triangle Visualization, Perspective Correction, Dot Visualization, Model Monitoring Inference Aggregator, Detections Transformation, Time in Zone, Detection Offset, Background Color Visualization, Path Deviation, Trace Visualization, Florence-2 Model, Camera Focus, Icon Visualization, Crop Visualization, Detections Stabilizer, Line Counter
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Byte Tracker in version v3 has.
Bindings
- input
  - image (image): not available.
  - detections (Union[keypoint_detection_prediction, object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v3
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v3",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1,
"instances_cache_size": "<block_does_not_provide_example>"
}
v2¶
Class: ByteTrackerBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v2 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v2.
- inputs:
Line Counter, Time in Zone, Byte Tracker, Detections Combine, Byte Tracker, Google Vision OCR, VLM as Detector, EasyOCR, Detections Classes Replacement, Overlap Filter, Distance Measurement, Velocity, Detections Stitch, Path Deviation, SIFT Comparison, Byte Tracker, Image Contours, SIFT Comparison, Instance Segmentation Model, Moondream2, Identify Outliers, YOLO-World Model, Segment Anything 2 Model, Instance Segmentation Model, Dynamic Zone, PTZ Tracking (ONVIF), Perspective Correction, Time in Zone, Pixel Color Count, Detections Transformation, Time in Zone, Detection Offset, SAM 3, Path Deviation, Bounding Rectangle, SAM 3, OCR Model, Motion Detection, Detections Filter, Template Matching, Detections Consensus, Detections Merge, SAM 3, Seg Preview, Clip Comparison, Dynamic Crop, Object Detection Model, Identify Changes, Detections Stabilizer, Object Detection Model, VLM as Detector, Line Counter
- outputs:
Line Counter, Time in Zone, Byte Tracker, Detections Combine, Blur Visualization, Pixelate Visualization, Circle Visualization, Byte Tracker, Corner Visualization, Stitch OCR Detections, Detections Classes Replacement, Overlap Filter, Distance Measurement, Velocity, Path Deviation, Detections Stitch, Label Visualization, Size Measurement, Byte Tracker, Florence-2 Model, Color Visualization, Roboflow Dataset Upload, Segment Anything 2 Model, Bounding Box Visualization, Triangle Visualization, Roboflow Custom Metadata, PTZ Tracking (ONVIF), Perspective Correction, Dot Visualization, Time in Zone, Roboflow Dataset Upload, Model Monitoring Inference Aggregator, Detections Transformation, Time in Zone, Detection Offset, Background Color Visualization, Path Deviation, Trace Visualization, Florence-2 Model, Model Comparison Visualization, Detections Filter, Detections Consensus, Detections Merge, Camera Focus, Dynamic Crop, Ellipse Visualization, Icon Visualization, Crop Visualization, Detections Stabilizer, Line Counter
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Byte Tracker in version v2 has.
Bindings
- input
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}
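How track_activation_threshold and minimum_consecutive_frames gate which detections become 'valid' tracks can be sketched as follows. This is a simplified illustration with a hypothetical `filter_valid_tracks` helper; in the real block, this gating happens inside ByteTrack itself:

```python
from collections import defaultdict


def filter_valid_tracks(frames, track_activation_threshold=0.25,
                        minimum_consecutive_frames=3):
    """Return tracker IDs that cleared the confidence gate for enough
    consecutive frames. Each frame is a list of (tracker_id, confidence).
    """
    streak = defaultdict(int)
    valid = set()
    for detections in frames:
        seen = set()
        for tracker_id, confidence in detections:
            if confidence >= track_activation_threshold:
                streak[tracker_id] += 1
                seen.add(tracker_id)
                if streak[tracker_id] >= minimum_consecutive_frames:
                    valid.add(tracker_id)
        for tracker_id in list(streak):
            if tracker_id not in seen:  # a gap in the streak resets the counter
                streak[tracker_id] = 0
    return valid


frames = [
    [(1, 0.9), (2, 0.1)],  # ID 2 is below the activation threshold here
    [(1, 0.8), (2, 0.6)],
    [(1, 0.7)],            # ID 2 misses a frame: its streak resets
    [(2, 0.9)],
]
print(filter_valid_tracks(frames))  # → {1}: only ID 1 has 3 consecutive strong frames
```

Raising minimum_consecutive_frames suppresses one-off false detections at the cost of missing short-lived tracks, which matches the trade-off described in the parameter table above.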
v1¶
Class: ByteTrackerBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v1 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v1.
- inputs:
Line Counter, Time in Zone, Byte Tracker, Detections Combine, Byte Tracker, Google Vision OCR, VLM as Detector, EasyOCR, Detections Classes Replacement, Overlap Filter, Distance Measurement, Velocity, Detections Stitch, Path Deviation, SIFT Comparison, Byte Tracker, Image Contours, SIFT Comparison, Instance Segmentation Model, Moondream2, Identify Outliers, YOLO-World Model, Segment Anything 2 Model, Instance Segmentation Model, Dynamic Zone, PTZ Tracking (ONVIF), Perspective Correction, Time in Zone, Pixel Color Count, Detections Transformation, Time in Zone, Detection Offset, SAM 3, Path Deviation, Bounding Rectangle, SAM 3, OCR Model, Motion Detection, Detections Filter, Template Matching, Detections Consensus, Detections Merge, SAM 3, Seg Preview, Clip Comparison, Dynamic Crop, Object Detection Model, Identify Changes, Detections Stabilizer, Object Detection Model, VLM as Detector, Line Counter
- outputs:
Line Counter, Time in Zone, Byte Tracker, Detections Combine, Blur Visualization, Pixelate Visualization, Circle Visualization, Byte Tracker, Corner Visualization, Stitch OCR Detections, Detections Classes Replacement, Overlap Filter, Distance Measurement, Velocity, Path Deviation, Detections Stitch, Label Visualization, Size Measurement, Byte Tracker, Florence-2 Model, Color Visualization, Roboflow Dataset Upload, Segment Anything 2 Model, Bounding Box Visualization, Triangle Visualization, Roboflow Custom Metadata, PTZ Tracking (ONVIF), Perspective Correction, Dot Visualization, Time in Zone, Roboflow Dataset Upload, Model Monitoring Inference Aggregator, Detections Transformation, Time in Zone, Detection Offset, Background Color Visualization, Path Deviation, Trace Visualization, Florence-2 Model, Model Comparison Visualization, Detections Filter, Detections Consensus, Detections Merge, Camera Focus, Dynamic Crop, Ellipse Visualization, Icon Visualization, Crop Visualization, Detections Stabilizer, Line Counter
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Byte Tracker in version v1 has.
Bindings
- input
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}