Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
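To make the parameter trade-offs concrete, the sketch below shows, in plain Python, how a ByteTrack-style tracker typically uses these values: detections above track_activation_threshold may start new tracks, matches below minimum_matching_threshold are rejected, and unmatched tracks survive for lost_track_buffer frames before being dropped. The class, names, and simplified one-track matching are illustrative assumptions, not the block's actual internals.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    frames_since_seen: int = 0  # grows while the track goes unmatched

class MiniTracker:
    """Illustrative ByteTrack-style bookkeeping (not the real implementation)."""

    def __init__(self, track_activation_threshold=0.25, lost_track_buffer=30,
                 minimum_matching_threshold=0.8):
        self.track_activation_threshold = track_activation_threshold
        self.lost_track_buffer = lost_track_buffer
        self.minimum_matching_threshold = minimum_matching_threshold
        self.tracks = []
        self._next_id = 1

    def update(self, detections):
        """detections: list of (confidence, match_score_to_best_track) pairs."""
        matched_ids = []
        for confidence, match_score in detections:
            # Stage 1: try to continue an existing track (real ByteTrack runs
            # an IoU-based assignment here; we use a single placeholder track).
            if self.tracks and match_score >= self.minimum_matching_threshold:
                track = self.tracks[0]
                track.frames_since_seen = 0
                matched_ids.append(track.track_id)
            # Stage 2: only sufficiently confident detections activate new tracks.
            elif confidence >= self.track_activation_threshold:
                track = Track(self._next_id)
                self._next_id += 1
                self.tracks.append(track)
                matched_ids.append(track.track_id)
        # Age all tracks and drop those unmatched for longer than the buffer.
        for track in self.tracks:
            track.frames_since_seen += 1
        self.tracks = [t for t in self.tracks
                       if t.frames_since_seen <= self.lost_track_buffer]
        return matched_ids
```

Raising track_activation_threshold filters out the low-confidence detections in stage 2, while a larger lost_track_buffer keeps tracks alive through longer detection gaps.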
New outputs introduced in v3
The block is unchanged from v2, apart from two new outputs:
- new_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs are seen for the first time. A specific tracked instance appears in this output only once, when its tracker ID is generated.
- already_seen_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs were already seen. A specific tracked instance appears in this output every time the tracker associates a bounding box with an already-seen tracker ID.
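The split between the two outputs can be pictured as a bounded cache of tracker IDs: an ID goes to the "new" side the first time it appears and to the "already seen" side on every later appearance. The helper below is a plain-Python illustration of that routing (the function name and the OrderedDict-based LRU cache are assumptions, not the block's actual code); instances_cache_size corresponds to the bound on how many IDs are remembered.

```python
from collections import OrderedDict

def split_new_vs_seen(tracker_ids, seen_cache, cache_size=16384):
    """Route tracker IDs to 'new' on first appearance and to 'already seen'
    on every later appearance, using a bounded LRU cache of IDs."""
    new_ids, seen_ids = [], []
    for tracker_id in tracker_ids:
        if tracker_id in seen_cache:
            seen_cache.move_to_end(tracker_id)  # refresh recency
            seen_ids.append(tracker_id)
        else:
            seen_cache[tracker_id] = True
            new_ids.append(tracker_id)
            if len(seen_cache) > cache_size:
                seen_cache.popitem(last=False)  # evict the oldest ID
    return new_ids, seen_ids
```

Note that once an ID is evicted from a full cache, a later sighting of it would be reported as "new" again, which is why the cache size matters for long-running videos.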
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v3 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks. | ✅ |
| instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v3.
- inputs: Google Vision OCR, Label Visualization, Blur Visualization, Background Color Visualization, Contrast Equalization, Bounding Box Visualization, Keypoint Visualization, Stability AI Outpainting, Reference Path Visualization, Image Slicer, Detections Filter, Pixelate Visualization, SAM 3, Seg Preview, Byte Tracker, Overlap Filter, Image Preprocessing, SAM 3, Color Visualization, SIFT Comparison, Object Detection Model, Path Deviation, Detections Combine, Circle Visualization, Image Contours, Object Detection Model, Polygon Zone Visualization, Ellipse Visualization, Line Counter, Clip Comparison, Moondream2, OCR Model, Absolute Static Crop, Depth Estimation, Path Deviation, Time in Zone, Morphological Transformation, Gaze Detection, Detections Consensus, Crop Visualization, Image Convert Grayscale, VLM as Detector, SAM 3, Classification Label Visualization, Byte Tracker, Keypoint Detection Model, Bounding Rectangle, Segment Anything 2 Model, Keypoint Detection Model, SIFT Comparison, Time in Zone, Line Counter, Camera Calibration, Polygon Visualization, YOLO-World Model, PTZ Tracking (ONVIF), Detection Offset, Detections Classes Replacement, Icon Visualization, Detections Transformation, Identify Changes, Triangle Visualization, Template Matching, Model Comparison Visualization, Corner Visualization, Distance Measurement, EasyOCR, VLM as Detector, Line Counter Visualization, Grid Visualization, Halo Visualization, Stability AI Image Generation, Identify Outliers, QR Code Generator, Dynamic Zone, Time in Zone, Relative Static Crop, Dot Visualization, Detections Stitch, Image Blur, Velocity, Byte Tracker, Instance Segmentation Model, Image Slicer, Stability AI Inpainting, Dynamic Crop, Camera Focus, Pixel Color Count, Detections Stabilizer, Image Threshold, Instance Segmentation Model, Perspective Correction, Mask Visualization, Trace Visualization, Detections Merge, Stitch Images, SIFT
- outputs: Label Visualization, Blur Visualization, Background Color Visualization, Bounding Box Visualization, Detections Filter, Pixelate Visualization, Byte Tracker, Overlap Filter, Color Visualization, Path Deviation, Detections Combine, Circle Visualization, Line Counter, Ellipse Visualization, Model Monitoring Inference Aggregator, Path Deviation, Time in Zone, Roboflow Dataset Upload, Detections Consensus, Crop Visualization, Florence-2 Model, Roboflow Custom Metadata, Byte Tracker, Stitch OCR Detections, Segment Anything 2 Model, Time in Zone, Line Counter, PTZ Tracking (ONVIF), Detection Offset, Detections Classes Replacement, Icon Visualization, Detections Transformation, Triangle Visualization, Roboflow Dataset Upload, Model Comparison Visualization, Corner Visualization, Distance Measurement, Florence-2 Model, Size Measurement, Time in Zone, Dot Visualization, Detections Stitch, Velocity, Byte Tracker, Dynamic Crop, Detections Stabilizer, Detections Merge, Perspective Correction, Trace Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Byte Tracker in version v3 has.
Bindings
- input:
    - image (image): not available.
    - detections (Union[keypoint_detection_prediction, object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
    - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
    - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
    - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
    - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks.
- output:
    - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in form of sv.Detections(...) object.
    - new_instances (object_detection_prediction): Prediction with detected bounding boxes in form of sv.Detections(...) object.
    - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in form of sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v3
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v3",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1,
"instances_cache_size": "<block_does_not_provide_example>"
}
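When composing workflow definitions in code rather than by hand, the step above can be built and sanity-checked with a small helper like the one below. The helper and its validation rules are illustrative, not part of the inference SDK; only the "type" identifier and the field names come from the documented step.

```python
def make_byte_tracker_step(name, detections_ref,
                           track_activation_threshold=0.25,
                           lost_track_buffer=30,
                           minimum_matching_threshold=0.8,
                           minimum_consecutive_frames=1):
    """Build a roboflow_core/byte_tracker@v3 step dict with basic validation."""
    if not 0.0 <= track_activation_threshold <= 1.0:
        raise ValueError("track_activation_threshold must be in [0, 1]")
    if not 0.0 <= minimum_matching_threshold <= 1.0:
        raise ValueError("minimum_matching_threshold must be in [0, 1]")
    if lost_track_buffer < 1 or minimum_consecutive_frames < 1:
        raise ValueError("frame counts must be positive integers")
    return {
        "name": name,
        "type": "roboflow_core/byte_tracker@v3",
        "detections": detections_ref,
        "track_activation_threshold": track_activation_threshold,
        "lost_track_buffer": lost_track_buffer,
        "minimum_matching_threshold": minimum_matching_threshold,
        "minimum_consecutive_frames": minimum_consecutive_frames,
    }

# The detections reference follows the $steps.<step_name>.<output> selector form.
step = make_byte_tracker_step("tracker", "$steps.object_detection_model.predictions")
```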
v2¶
Class: ByteTrackerBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v2.
- inputs: Google Vision OCR, SIFT Comparison, Time in Zone, Line Counter, Detections Filter, YOLO-World Model, PTZ Tracking (ONVIF), Detection Offset, Detections Classes Replacement, Detections Transformation, Identify Changes, SAM 3, Template Matching, Seg Preview, Byte Tracker, Overlap Filter, Distance Measurement, SAM 3, SIFT Comparison, Object Detection Model, Path Deviation, VLM as Detector, EasyOCR, Detections Combine, Identify Outliers, Dynamic Zone, Image Contours, Time in Zone, Object Detection Model, Detections Stitch, Line Counter, Clip Comparison, Velocity, Moondream2, Byte Tracker, OCR Model, Instance Segmentation Model, Path Deviation, Time in Zone, Dynamic Crop, Pixel Color Count, Detections Consensus, Detections Stabilizer, Instance Segmentation Model, Perspective Correction, Detections Merge, VLM as Detector, SAM 3, Byte Tracker, Bounding Rectangle, Segment Anything 2 Model
- outputs: Label Visualization, Time in Zone, Line Counter, Blur Visualization, Background Color Visualization, Bounding Box Visualization, Detections Filter, PTZ Tracking (ONVIF), Detection Offset, Pixelate Visualization, Detections Classes Replacement, Icon Visualization, Detections Transformation, Triangle Visualization, Roboflow Dataset Upload, Model Comparison Visualization, Byte Tracker, Overlap Filter, Distance Measurement, Corner Visualization, Florence-2 Model, Color Visualization, Path Deviation, Detections Combine, Size Measurement, Circle Visualization, Time in Zone, Dot Visualization, Detections Stitch, Line Counter, Ellipse Visualization, Velocity, Model Monitoring Inference Aggregator, Byte Tracker, Path Deviation, Time in Zone, Roboflow Dataset Upload, Dynamic Crop, Detections Stabilizer, Detections Consensus, Crop Visualization, Detections Merge, Perspective Correction, Florence-2 Model, Roboflow Custom Metadata, Trace Visualization, Byte Tracker, Stitch OCR Detections, Segment Anything 2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Byte Tracker in version v2 has.
Bindings
- input:
    - image (image): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
    - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
    - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
    - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
    - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks.
- output:
    - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in form of sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}
v1¶
Class: ByteTrackerBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v1.
- inputs: Google Vision OCR, SIFT Comparison, Time in Zone, Line Counter, Detections Filter, YOLO-World Model, PTZ Tracking (ONVIF), Detection Offset, Detections Classes Replacement, Detections Transformation, Identify Changes, SAM 3, Template Matching, Seg Preview, Byte Tracker, Overlap Filter, Distance Measurement, SAM 3, SIFT Comparison, Object Detection Model, Path Deviation, VLM as Detector, EasyOCR, Detections Combine, Identify Outliers, Dynamic Zone, Image Contours, Time in Zone, Object Detection Model, Detections Stitch, Line Counter, Clip Comparison, Velocity, Moondream2, Byte Tracker, OCR Model, Instance Segmentation Model, Path Deviation, Time in Zone, Dynamic Crop, Pixel Color Count, Detections Consensus, Detections Stabilizer, Instance Segmentation Model, Perspective Correction, Detections Merge, VLM as Detector, SAM 3, Byte Tracker, Bounding Rectangle, Segment Anything 2 Model
- outputs: Label Visualization, Time in Zone, Line Counter, Blur Visualization, Background Color Visualization, Bounding Box Visualization, Detections Filter, PTZ Tracking (ONVIF), Detection Offset, Pixelate Visualization, Detections Classes Replacement, Icon Visualization, Detections Transformation, Triangle Visualization, Roboflow Dataset Upload, Model Comparison Visualization, Byte Tracker, Overlap Filter, Distance Measurement, Corner Visualization, Florence-2 Model, Color Visualization, Path Deviation, Detections Combine, Size Measurement, Circle Visualization, Time in Zone, Dot Visualization, Detections Stitch, Line Counter, Ellipse Visualization, Velocity, Model Monitoring Inference Aggregator, Byte Tracker, Path Deviation, Time in Zone, Roboflow Dataset Upload, Dynamic Crop, Detections Stabilizer, Detections Consensus, Crop Visualization, Detections Merge, Perspective Correction, Florence-2 Model, Roboflow Custom Metadata, Trace Visualization, Byte Tracker, Stitch OCR Detections, Segment Anything 2 Model
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Byte Tracker in version v1 has.
Bindings
- input:
    - metadata (video_metadata): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
    - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
    - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
    - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
    - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks.
- output:
    - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in form of sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}