Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to track objects across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
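For intuition about how these parameters behave, the sketch below shows an equivalent standalone tracker configuration using the supervision library (whose sv.Detections objects this block consumes and produces). It is an illustration only, assuming a recent supervision release exposing these keyword arguments; it is not the block's internal code.

```python
# Minimal sketch: standalone ByteTrack configuration with the supervision
# library, mirroring the block's tuning parameters. Illustrative only.
import supervision as sv

tracker = sv.ByteTrack(
    track_activation_threshold=0.25,  # confidence needed to activate a track
    lost_track_buffer=30,             # frames a lost track is kept alive
    minimum_matching_threshold=0.8,   # matching threshold between tracks and detections
    minimum_consecutive_frames=1,     # frames before a track counts as valid
)

def track_frame(detections: sv.Detections) -> sv.Detections:
    # Returns the detections with tracker_id assigned where a match was found.
    return tracker.update_with_detections(detections)
```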
New outputs introduced in v3
Compared to v2, the block behaves identically apart from two new outputs (illustrated by the sketch after this list):

- new_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs seen for the first time. A given tracked instance appears in this output only once, at the moment its new tracker ID is generated.
- already_seen_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs that have been seen before. A given tracked instance appears in this output every time the tracker associates a bounding box with an already known tracker ID.
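Conceptually, the split between new_instances and already_seen_instances can be reproduced with a bounded cache of tracker IDs, as in the hedged sketch below. This illustrates the documented behaviour and is not the block's source; the cache size plays the role of the instances_cache_size property, and the value used here is an arbitrary example.

```python
# Illustrative sketch of the new / already-seen split based on tracker IDs.
# Assumes tracked detections are sv.Detections with tracker_id populated.
from collections import OrderedDict

import numpy as np
import supervision as sv

SEEN_IDS: "OrderedDict[int, None]" = OrderedDict()
INSTANCES_CACHE_SIZE = 16384  # arbitrary example; maps to instances_cache_size

def split_instances(tracked: sv.Detections) -> tuple:
    if tracked.tracker_id is None or len(tracked) == 0:
        return sv.Detections.empty(), sv.Detections.empty()
    is_new = np.array([int(tid) not in SEEN_IDS for tid in tracked.tracker_id])
    for tid in tracked.tracker_id:
        SEEN_IDS[int(tid)] = None
        if len(SEEN_IDS) > INSTANCES_CACHE_SIZE:
            SEEN_IDS.popitem(last=False)  # evict the oldest remembered ID
    # (new_instances, already_seen_instances)
    return tracked[is_new], tracked[~is_new]
```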
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v3 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
| instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
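For example, any property marked ✅ can be bound to a workflow input instead of a literal value. The hedged sketch below writes the step as a Python dict (equivalent to the JSON examples on this page); the input name confidence_threshold and the image wiring are hypothetical examples.

```python
# Hypothetical step definition binding track_activation_threshold to a
# runtime workflow input via the $inputs.* selector convention.
byte_tracker_step = {
    "name": "byte_tracker",
    "type": "roboflow_core/byte_tracker@v3",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": "$inputs.confidence_threshold",  # bound, not literal
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1,
}
```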
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v3.
- inputs: Background Color Visualization,Stitch Images,Image Slicer,Dynamic Zone,Identify Outliers,Detections Transformation,Corner Visualization,Detections Classes Replacement,Camera Calibration,Mask Visualization,Object Detection Model,Line Counter,Model Comparison Visualization,Keypoint Detection Model,Pixelate Visualization,Time in Zone,Relative Static Crop,Ellipse Visualization,Triangle Visualization,Segment Anything 2 Model,Camera Focus,OCR Model,QR Code Generator,Byte Tracker,Label Visualization,Time in Zone,Pixel Color Count,Identify Changes,Blur Visualization,Dot Visualization,Overlap Filter,Stability AI Image Generation,Detection Offset,Perspective Correction,Google Vision OCR,EasyOCR,Line Counter,Detections Stabilizer,Absolute Static Crop,SAM 3,Morphological Transformation,Velocity,Image Blur,Image Threshold,Clip Comparison,Depth Estimation,Stability AI Outpainting,Halo Visualization,Byte Tracker,Stability AI Inpainting,Polygon Visualization,Grid Visualization,Path Deviation,Template Matching,Distance Measurement,Classification Label Visualization,VLM as Detector,Instance Segmentation Model,Bounding Box Visualization,Byte Tracker,Image Convert Grayscale,Polygon Zone Visualization,Detections Stitch,Keypoint Detection Model,Crop Visualization,Object Detection Model,Image Slicer,YOLO-World Model,Detections Merge,Gaze Detection,Moondream2,Icon Visualization,Seg Preview,Color Visualization,Path Deviation,Keypoint Visualization,Contrast Equalization,Image Contours,Instance Segmentation Model,Detections Filter,Circle Visualization,Bounding Rectangle,Time in Zone,Detections Combine,Reference Path Visualization,SIFT Comparison,VLM as Detector,Dynamic Crop,Detections Consensus,SIFT,Line Counter Visualization,PTZ Tracking (ONVIF),Image Preprocessing,Trace Visualization,SIFT Comparison
- outputs: Background Color Visualization,Size Measurement,Detections Transformation,Corner Visualization,Model Monitoring Inference Aggregator,Detections Classes Replacement,Line Counter,Model Comparison Visualization,Pixelate Visualization,Time in Zone,Florence-2 Model,Ellipse Visualization,Triangle Visualization,Segment Anything 2 Model,Byte Tracker,Label Visualization,Time in Zone,Roboflow Custom Metadata,Florence-2 Model,Blur Visualization,Dot Visualization,Overlap Filter,Detection Offset,Perspective Correction,Line Counter,Detections Stabilizer,Velocity,Stitch OCR Detections,Byte Tracker,Roboflow Dataset Upload,Path Deviation,Distance Measurement,Bounding Box Visualization,Byte Tracker,Detections Stitch,Crop Visualization,Detections Merge,Icon Visualization,Color Visualization,Roboflow Dataset Upload,Path Deviation,Detections Filter,Circle Visualization,Detections Combine,Time in Zone,Dynamic Crop,Detections Consensus,PTZ Tracking (ONVIF),Trace Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Byte Tracker in version v3 exposes.
Bindings
- input
  - image (image): not available.
  - detections (Union[instance_segmentation_prediction, keypoint_detection_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v3
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v3",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1,
"instances_cache_size": "<block_does_not_provide_example>"
}
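To show the step in context, here is a hedged sketch of a complete workflow specification that chains an object detection model into Byte Tracker v3 and exposes all three outputs. The overall format (version / inputs / steps / outputs), the object detection step's type string and fields, and the model ID yolov8n-640 are assumptions based on general workflows conventions, not requirements of this block.

```python
# Hypothetical end-to-end workflow specification embedding byte_tracker@v3.
# Step names, input names, and the model ID are illustrative assumptions.
WORKFLOW_DEFINITION = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "name": "object_detection_model",
            "images": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            "type": "roboflow_core/byte_tracker@v3",
            "name": "byte_tracker",
            "image": "$inputs.image",
            "detections": "$steps.object_detection_model.predictions",
            "track_activation_threshold": 0.25,
            "lost_track_buffer": 30,
            "minimum_matching_threshold": 0.8,
            "minimum_consecutive_frames": 1,
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "tracked", "selector": "$steps.byte_tracker.tracked_detections"},
        {"type": "JsonField", "name": "new", "selector": "$steps.byte_tracker.new_instances"},
        {"type": "JsonField", "name": "seen_before", "selector": "$steps.byte_tracker.already_seen_instances"},
    ],
}
```

A specification like this would then be executed frame by frame against a video source (for example via inference's InferencePipeline, where available in your deployment), since tracking is only meaningful on sequential frames.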
v2¶
Class: ByteTrackerBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to track objects across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v2.
- inputs: Path Deviation,Dynamic Zone,Detections Transformation,Template Matching,Identify Outliers,Detections Classes Replacement,Distance Measurement,VLM as Detector,Object Detection Model,Instance Segmentation Model,Line Counter,Byte Tracker,Time in Zone,Detections Stitch,Object Detection Model,Segment Anything 2 Model,YOLO-World Model,Detections Merge,Moondream2,OCR Model,Seg Preview,Byte Tracker,Path Deviation,Time in Zone,Pixel Color Count,Identify Changes,Image Contours,Instance Segmentation Model,Detections Filter,Overlap Filter,Detection Offset,Bounding Rectangle,Google Vision OCR,EasyOCR,Line Counter,Detections Stabilizer,Time in Zone,Detections Combine,SAM 3,SIFT Comparison,VLM as Detector,Dynamic Crop,Velocity,Detections Consensus,Clip Comparison,Byte Tracker,PTZ Tracking (ONVIF),Perspective Correction,SIFT Comparison
- outputs: Roboflow Dataset Upload,Background Color Visualization,Size Measurement,Path Deviation,Detections Transformation,Corner Visualization,Model Monitoring Inference Aggregator,Detections Classes Replacement,Distance Measurement,Trace Visualization,Line Counter,Bounding Box Visualization,Model Comparison Visualization,Byte Tracker,Pixelate Visualization,Time in Zone,Florence-2 Model,Detections Stitch,Crop Visualization,Triangle Visualization,Segment Anything 2 Model,Ellipse Visualization,Detections Merge,Icon Visualization,Byte Tracker,Color Visualization,Roboflow Dataset Upload,Path Deviation,Label Visualization,Time in Zone,Roboflow Custom Metadata,Florence-2 Model,Blur Visualization,Dot Visualization,Overlap Filter,Detections Filter,Circle Visualization,Detection Offset,Perspective Correction,Line Counter,Detections Combine,Detections Stabilizer,Time in Zone,Velocity,Detections Consensus,Stitch OCR Detections,PTZ Tracking (ONVIF),Byte Tracker,Dynamic Crop
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Byte Tracker in version v2 exposes.
Bindings
- input
  - image (image): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}
v1¶
Class: ByteTrackerBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to track objects across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v1.
- inputs: Path Deviation,Dynamic Zone,Detections Transformation,Template Matching,Identify Outliers,Detections Classes Replacement,Distance Measurement,VLM as Detector,Object Detection Model,Instance Segmentation Model,Line Counter,Byte Tracker,Time in Zone,Detections Stitch,Object Detection Model,Segment Anything 2 Model,YOLO-World Model,Detections Merge,Moondream2,OCR Model,Seg Preview,Byte Tracker,Path Deviation,Time in Zone,Pixel Color Count,Identify Changes,Image Contours,Instance Segmentation Model,Detections Filter,Overlap Filter,Detection Offset,Bounding Rectangle,Google Vision OCR,EasyOCR,Line Counter,Detections Stabilizer,Time in Zone,Detections Combine,SAM 3,SIFT Comparison,VLM as Detector,Dynamic Crop,Velocity,Detections Consensus,Clip Comparison,Byte Tracker,PTZ Tracking (ONVIF),Perspective Correction,SIFT Comparison
- outputs: Roboflow Dataset Upload,Background Color Visualization,Size Measurement,Path Deviation,Detections Transformation,Corner Visualization,Model Monitoring Inference Aggregator,Detections Classes Replacement,Distance Measurement,Trace Visualization,Line Counter,Bounding Box Visualization,Model Comparison Visualization,Byte Tracker,Pixelate Visualization,Time in Zone,Florence-2 Model,Detections Stitch,Crop Visualization,Triangle Visualization,Segment Anything 2 Model,Ellipse Visualization,Detections Merge,Icon Visualization,Byte Tracker,Color Visualization,Roboflow Dataset Upload,Path Deviation,Label Visualization,Time in Zone,Roboflow Custom Metadata,Florence-2 Model,Blur Visualization,Dot Visualization,Overlap Filter,Detections Filter,Circle Visualization,Detection Offset,Perspective Correction,Line Counter,Detections Combine,Detections Stabilizer,Time in Zone,Velocity,Detections Consensus,Stitch OCR Detections,PTZ Tracking (ONVIF),Byte Tracker,Dynamic Crop
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Byte Tracker in version v1 exposes.
Bindings
- input
  - metadata (video_metadata): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}
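Note that v1 binds video metadata rather than an image. The hedged fragment below sketches how that input might be declared and wired, written as a Python dict for consistency with the other sketches on this page; the input type name WorkflowVideoMetadata and all step and input names are assumptions based on the workflows input conventions.

```python
# Hypothetical wiring for byte_tracker@v1, which consumes video metadata
# instead of an image. Type and field names here are assumptions.
V1_WORKFLOW_FRAGMENT = {
    "inputs": [
        {"type": "WorkflowVideoMetadata", "name": "video_metadata"},
    ],
    "steps": [
        {
            "type": "roboflow_core/byte_tracker@v1",
            "name": "byte_tracker",
            "metadata": "$inputs.video_metadata",
            "detections": "$steps.object_detection_model.predictions",
            "track_activation_threshold": 0.25,
            "lost_track_buffer": 30,
            "minimum_matching_threshold": 0.8,
            "minimum_consecutive_frames": 1,
        },
    ],
}
```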