Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
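To build intuition for two of these parameters, the sketch below shows how lost_track_buffer and minimum_consecutive_frames could shape a track's lifecycle. This is plain illustrative Python, not the block's (or ByteTrack's) actual implementation; the class and attribute names are invented for the example.

```python
class TrackState:
    """Simplified track lifecycle: activation after N consecutive hits,
    removal after the lost-track buffer is exhausted."""

    def __init__(self, track_id, lost_track_buffer=30, minimum_consecutive_frames=1):
        self.track_id = track_id
        self.lost_track_buffer = lost_track_buffer
        self.minimum_consecutive_frames = minimum_consecutive_frames
        self.consecutive_hits = 0
        self.missed_frames = 0
        self.activated = False
        self.removed = False

    def observe(self, matched: bool) -> None:
        """Update the track for one frame; matched=True means a detection
        was associated with this track in the current frame."""
        if matched:
            self.consecutive_hits += 1
            self.missed_frames = 0
            if self.consecutive_hits >= self.minimum_consecutive_frames:
                self.activated = True  # track becomes 'valid'
        else:
            self.consecutive_hits = 0
            self.missed_frames += 1
            if self.missed_frames > self.lost_track_buffer:
                self.removed = True  # buffer exhausted, track is dropped

    @property
    def is_valid(self) -> bool:
        return self.activated and not self.removed
```

A larger lost_track_buffer lets a track survive longer detection gaps (e.g. occlusions) before being dropped; a larger minimum_consecutive_frames delays activation so one-off spurious detections never become valid tracks.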
New outputs introduced in v3
The block is unchanged compared to v2, apart from two new outputs:
- new_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs were seen for the first time. A specific tracked instance appears in this output only once, when its tracker ID is first generated.
- already_seen_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs were already seen. A specific tracked instance appears in this output each time the tracker associates a bounding box with an already-seen tracker ID.
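One way to picture the caching behind these two outputs is a bounded set of already-seen tracker IDs. The sketch below is a hypothetical illustration (the function name and FIFO eviction policy are assumptions, not the block's actual implementation):

```python
from collections import OrderedDict

def split_by_novelty(tracker_ids, seen_cache, instances_cache_size=16384):
    """Route tracker IDs into (new, already_seen) and update the cache in place.

    seen_cache is an OrderedDict acting as a bounded FIFO cache of IDs.
    """
    new_ids, seen_ids = [], []
    for tid in tracker_ids:
        if tid in seen_cache:
            seen_ids.append(tid)          # ID seen before -> already_seen_instances
        else:
            seen_cache[tid] = True        # first sighting -> new_instances
            new_ids.append(tid)
            if len(seen_cache) > instances_cache_size:
                seen_cache.popitem(last=False)  # evict the oldest cached ID
    return new_ids, seen_ids
```

With this picture, instances_cache_size bounds how many tracker IDs are remembered: once an ID is evicted, the same physical object would be reported as new again if its track reappears.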
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v3 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
| instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
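The role of minimum_matching_threshold can be illustrated with a bare-bones IoU check. This is not the full ByteTrack association step (which also uses confidence-based two-stage matching); it is only a sketch of the "match if overlap is high enough" idea, with illustrative function names:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def matches(track_box, det_box, minimum_matching_threshold=0.8):
    """A detection continues a track only if overlap clears the threshold."""
    return iou(track_box, det_box) > minimum_matching_threshold
```

Raising the threshold demands tighter overlap between a track's predicted box and a new detection (fewer identity switches, more fragmented tracks); lowering it tolerates looser overlap (fewer breaks, more risk of drift).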
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v3.
- inputs:
PTZ Tracking (ONVIF),Dot Visualization,Stability AI Inpainting,Reference Path Visualization,Bounding Rectangle,Object Detection Model,Stability AI Outpainting,Detections Merge,Distance Measurement,QR Code Generator,Path Deviation,Time in Zone,Line Counter Visualization,Detections Classes Replacement,Ellipse Visualization,Identify Outliers,Detections Combine,Background Color Visualization,Polygon Zone Visualization,Contrast Equalization,EasyOCR,Object Detection Model,Image Slicer,Dynamic Zone,Byte Tracker,Gaze Detection,Google Vision OCR,Image Threshold,SIFT Comparison,Detections Consensus,Identify Changes,Image Preprocessing,Icon Visualization,OCR Model,YOLO-World Model,Detections Filter,Detection Offset,Absolute Static Crop,Pixelate Visualization,Image Blur,Perspective Correction,Relative Static Crop,Line Counter,Pixel Color Count,VLM as Detector,Detections Stitch,Line Counter,Time in Zone,Clip Comparison,SIFT,Halo Visualization,Image Convert Grayscale,Triangle Visualization,Depth Estimation,Image Contours,Mask Visualization,Keypoint Detection Model,Image Slicer,Model Comparison Visualization,Template Matching,Byte Tracker,Time in Zone,Moondream2,Polygon Visualization,Corner Visualization,Crop Visualization,Stitch Images,Blur Visualization,Dynamic Crop,Byte Tracker,Detections Stabilizer,Detections Transformation,Camera Focus,Overlap Filter,Keypoint Detection Model,VLM as Detector,Segment Anything 2 Model,Instance Segmentation Model,Color Visualization,Velocity,Classification Label Visualization,Label Visualization,Circle Visualization,Keypoint Visualization,Trace Visualization,Camera Calibration,SIFT Comparison,Instance Segmentation Model,Morphological Transformation,Bounding Box Visualization,Path Deviation,Grid Visualization,Seg Preview,Stability AI Image Generation
- outputs:
PTZ Tracking (ONVIF),Dot Visualization,Detections Merge,Distance Measurement,Path Deviation,Time in Zone,Detections Classes Replacement,Size Measurement,Ellipse Visualization,Roboflow Custom Metadata,Detections Combine,Background Color Visualization,Roboflow Dataset Upload,Byte Tracker,Florence-2 Model,Detections Consensus,Icon Visualization,Roboflow Dataset Upload,Detections Filter,Detection Offset,Pixelate Visualization,Perspective Correction,Florence-2 Model,Line Counter,Detections Stitch,Line Counter,Time in Zone,Model Monitoring Inference Aggregator,Triangle Visualization,Model Comparison Visualization,Stitch OCR Detections,Time in Zone,Corner Visualization,Crop Visualization,Blur Visualization,Dynamic Crop,Byte Tracker,Overlap Filter,Detections Transformation,Detections Stabilizer,Segment Anything 2 Model,Color Visualization,Velocity,Label Visualization,Circle Visualization,Trace Visualization,Bounding Box Visualization,Path Deviation,Byte Tracker
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Byte Tracker in version v3 has.
Bindings
- input
    - image (image): not available.
    - detections (Union[instance_segmentation_prediction, keypoint_detection_prediction, object_detection_prediction]): Objects to be tracked.
    - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability.
    - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing track fragmentation or disappearance caused by brief detection gaps.
    - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift.
    - minimum_consecutive_frames (integer): Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents accidental tracks from false or double detections, but risks missing shorter tracks.
- output
    - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
    - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
    - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v3
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v3",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1,
"instances_cache_size": "<block_does_not_provide_example>"
}
v2¶
Class: ByteTrackerBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v2.
- inputs:
VLM as Detector,PTZ Tracking (ONVIF),Detections Stitch,Line Counter,Time in Zone,Clip Comparison,Bounding Rectangle,Object Detection Model,Detections Merge,Distance Measurement,Path Deviation,Time in Zone,Detections Classes Replacement,Image Contours,Template Matching,Time in Zone,Moondream2,Detections Combine,Identify Outliers,EasyOCR,Dynamic Crop,Byte Tracker,Object Detection Model,Detections Transformation,Detections Stabilizer,Overlap Filter,Dynamic Zone,VLM as Detector,Segment Anything 2 Model,Instance Segmentation Model,Byte Tracker,Velocity,Google Vision OCR,Identify Changes,Detections Consensus,SIFT Comparison,SIFT Comparison,Instance Segmentation Model,OCR Model,YOLO-World Model,Detections Filter,Detection Offset,Path Deviation,Seg Preview,Perspective Correction,Line Counter,Byte Tracker,Pixel Color Count
- outputs:
PTZ Tracking (ONVIF),Detections Stitch,Dot Visualization,Time in Zone,Line Counter,Model Monitoring Inference Aggregator,Detections Merge,Distance Measurement,Florence-2 Model,Path Deviation,Time in Zone,Triangle Visualization,Detections Classes Replacement,Size Measurement,Ellipse Visualization,Model Comparison Visualization,Stitch OCR Detections,Roboflow Custom Metadata,Time in Zone,Detections Combine,Background Color Visualization,Corner Visualization,Crop Visualization,Roboflow Dataset Upload,Blur Visualization,Dynamic Crop,Byte Tracker,Overlap Filter,Detections Transformation,Detections Stabilizer,Segment Anything 2 Model,Byte Tracker,Color Visualization,Florence-2 Model,Velocity,Label Visualization,Circle Visualization,Detections Consensus,Trace Visualization,Icon Visualization,Roboflow Dataset Upload,Bounding Box Visualization,Detections Filter,Detection Offset,Path Deviation,Pixelate Visualization,Perspective Correction,Line Counter,Byte Tracker
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Byte Tracker in version v2 has.
Bindings
- input
    - image (image): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
    - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability.
    - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing track fragmentation or disappearance caused by brief detection gaps.
    - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift.
    - minimum_consecutive_frames (integer): Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents accidental tracks from false or double detections, but risks missing shorter tracks.
- output
    - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}
v1¶
Class: ByteTrackerBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v1.
- inputs:
VLM as Detector,PTZ Tracking (ONVIF),Detections Stitch,Line Counter,Time in Zone,Clip Comparison,Bounding Rectangle,Object Detection Model,Detections Merge,Distance Measurement,Path Deviation,Time in Zone,Detections Classes Replacement,Image Contours,Template Matching,Time in Zone,Moondream2,Detections Combine,Identify Outliers,EasyOCR,Dynamic Crop,Byte Tracker,Object Detection Model,Detections Transformation,Detections Stabilizer,Overlap Filter,Dynamic Zone,VLM as Detector,Segment Anything 2 Model,Instance Segmentation Model,Byte Tracker,Velocity,Google Vision OCR,Identify Changes,Detections Consensus,SIFT Comparison,SIFT Comparison,Instance Segmentation Model,OCR Model,YOLO-World Model,Detections Filter,Detection Offset,Path Deviation,Seg Preview,Perspective Correction,Line Counter,Byte Tracker,Pixel Color Count
- outputs:
PTZ Tracking (ONVIF),Detections Stitch,Dot Visualization,Time in Zone,Line Counter,Model Monitoring Inference Aggregator,Detections Merge,Distance Measurement,Florence-2 Model,Path Deviation,Time in Zone,Triangle Visualization,Detections Classes Replacement,Size Measurement,Ellipse Visualization,Model Comparison Visualization,Stitch OCR Detections,Roboflow Custom Metadata,Time in Zone,Detections Combine,Background Color Visualization,Corner Visualization,Crop Visualization,Roboflow Dataset Upload,Blur Visualization,Dynamic Crop,Byte Tracker,Overlap Filter,Detections Transformation,Detections Stabilizer,Segment Anything 2 Model,Byte Tracker,Color Visualization,Florence-2 Model,Velocity,Label Visualization,Circle Visualization,Detections Consensus,Trace Visualization,Icon Visualization,Roboflow Dataset Upload,Bounding Box Visualization,Detections Filter,Detection Offset,Path Deviation,Pixelate Visualization,Perspective Correction,Line Counter,Byte Tracker
Input and Output Bindings¶
The available connections depend on its binding kinds. Check what binding kinds
Byte Tracker in version v1 has.
Bindings
- input
    - metadata (video_metadata): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
    - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability.
    - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing track fragmentation or disappearance caused by brief detection gaps.
    - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift.
    - minimum_consecutive_frames (integer): Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents accidental tracks from false or double detections, but risks missing shorter tracks.
- output
    - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}