Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
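These parameters map onto ByteTrack's core idea of two-stage association: high-confidence detections are matched to existing tracks first, and low-confidence leftovers are then used to recover occluded tracks. A minimal, illustrative sketch in Python (this is not the block's actual implementation: it uses greedy IoU matching instead of the Hungarian assignment used in real trackers, and the `1 - minimum_matching_threshold` IoU floor is an assumed reading of the threshold):

```python
# Illustrative sketch of ByteTrack-style two-stage association.
# Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def associate(tracks, detections,
              track_activation_threshold=0.25,
              minimum_matching_threshold=0.8):
    """tracks: {tracker_id: box}; detections: [(box, confidence), ...].
    Returns (matched, lost): matched maps tracker_id -> new box; lost
    holds tracks with no match this frame (a real tracker keeps these
    alive for lost_track_buffer frames before discarding them)."""
    high = [d for d in detections if d[1] >= track_activation_threshold]
    low = [d for d in detections if d[1] < track_activation_threshold]
    # Assumption: a match requires IoU above 1 - minimum_matching_threshold.
    iou_floor = 1.0 - minimum_matching_threshold
    matched, free = {}, dict(tracks)
    # Stage 1 matches confident detections; stage 2 tries to recover
    # occluded tracks from low-confidence leftovers (the ByteTrack idea).
    for pool in (high, low):
        for box, _conf in pool:
            best_id, best_iou = None, iou_floor
            for tid, tbox in free.items():
                score = iou(box, tbox)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is not None:
                matched[best_id] = box
                del free[best_id]
    return matched, free
```

Raising `track_activation_threshold` shrinks the stage-1 pool (fewer, cleaner track starts); raising `minimum_matching_threshold` here lowers the IoU floor, making matches more permissive, which is why the docs warn about drift.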
New outputs introduced in v3
The block is unchanged compared to v2, apart from two new outputs:

- new_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs seen for the first time. A tracked instance appears in this output only once, when its tracker ID is first generated.
- already_seen_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs that were already seen. A tracked instance appears in this output every time the tracker associates a bounding box with an already-known tracker ID.
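The split between the two outputs can be pictured as a bounded cache of previously seen tracker IDs. A hypothetical sketch (the real block's cache and eviction policy are not documented here; `cache_size` mirrors the `instances_cache_size` property):

```python
from collections import OrderedDict

def split_new_vs_seen(tracker_ids, cache, cache_size=100):
    """Partition a frame's tracker IDs into first-seen and already-seen,
    using an LRU-style cache bounded at cache_size entries."""
    new, seen = [], []
    for tid in tracker_ids:
        if tid in cache:
            seen.append(tid)
            cache.move_to_end(tid)   # refresh recency
        else:
            new.append(tid)
            cache[tid] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the oldest ID
    return new, seen

cache = OrderedDict()
split_new_vs_seen([1, 2], cache)     # frame 1: both IDs are new
split_new_vs_seen([1, 2, 3], cache)  # frame 2: only 3 is new
```

Note the consequence of a bounded cache: once an ID is evicted, a long-lived track could reappear in new_instances, which is why sizing the cache to the expected number of concurrent instances matters.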
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/byte_tracker@v3
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌ |
A ✅ in the Refs column means the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check which blocks you can connect to Byte Tracker in version v3.
- inputs: Circle Visualization, Background Color Visualization, Corner Visualization, VLM as Detector, Polygon Zone Visualization, Camera Focus, Image Slicer, Line Counter, Image Blur, Dot Visualization, Path Deviation, Detections Merge, Detection Offset, Stability AI Inpainting, Pixelate Visualization, Line Counter, Detections Consensus, Distance Measurement, Image Convert Grayscale, Absolute Static Crop, Stability AI Image Generation, Color Visualization, Image Threshold, Halo Visualization, Polygon Visualization, Detections Classes Replacement, Dynamic Zone, Instance Segmentation Model, Camera Calibration, Object Detection Model, Classification Label Visualization, Google Vision OCR, Byte Tracker, Ellipse Visualization, Pixel Color Count, Bounding Box Visualization, Object Detection Model, Line Counter Visualization, Image Preprocessing, Trace Visualization, Label Visualization, Image Slicer, Detections Transformation, Crop Visualization, Detections Stitch, Identify Outliers, YOLO-World Model, Relative Static Crop, Model Comparison Visualization, Perspective Correction, Byte Tracker, Path Deviation, Mask Visualization, Time in Zone, Detections Filter, Clip Comparison, Time in Zone, Dynamic Crop, Template Matching, Byte Tracker, Instance Segmentation Model, Image Contours, SIFT, SIFT Comparison, Reference Path Visualization, Triangle Visualization, Bounding Rectangle, Velocity, SIFT Comparison, VLM as Detector, Keypoint Visualization, Identify Changes, Grid Visualization, Segment Anything 2 Model, Detections Stabilizer, Stitch Images, Blur Visualization
- outputs: Circle Visualization, Background Color Visualization, Corner Visualization, Path Deviation, Detections Merge, Dot Visualization, Detection Offset, Roboflow Dataset Upload, Pixelate Visualization, Line Counter, Detections Consensus, Distance Measurement, Color Visualization, Detections Classes Replacement, Roboflow Dataset Upload, Byte Tracker, Ellipse Visualization, Size Measurement, Bounding Box Visualization, Trace Visualization, Label Visualization, Detections Transformation, Crop Visualization, Detections Stitch, Model Comparison Visualization, Stitch OCR Detections, Perspective Correction, Byte Tracker, Path Deviation, Time in Zone, Detections Filter, Time in Zone, Dynamic Crop, Byte Tracker, Florence-2 Model, Florence-2 Model, Triangle Visualization, Model Monitoring Inference Aggregator, Velocity, Roboflow Custom Metadata, Line Counter, Segment Anything 2 Model, Detections Stabilizer, Blur Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds for Byte Tracker in version v3 are listed below.
Bindings

- input
  - image (image): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v3:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v3",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1,
    "instances_cache_size": "<block_does_not_provide_example>"
}
```
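Properties marked ✅ in the Refs column can reference a workflow input instead of a literal value. A hedged sketch, building the step definition as a Python dict (the input name activation_threshold is hypothetical; the $inputs. selector form and WorkflowParameter input type follow the general workflow definition convention, so verify against your workflow schema):

```python
# Sketch: parametrising track_activation_threshold with a workflow input.
# "activation_threshold" is a hypothetical input name chosen for this example.
workflow = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "activation_threshold",
         "default_value": 0.25},
    ],
    "steps": [
        {
            "name": "byte_tracker",
            "type": "roboflow_core/byte_tracker@v3",
            "image": "$inputs.image",
            "detections": "$steps.object_detection_model.predictions",
            # Bound to the workflow input rather than hard-coded:
            "track_activation_threshold": "$inputs.activation_threshold",
            "lost_track_buffer": 30,
            "minimum_matching_threshold": 0.8,
            "minimum_consecutive_frames": 1,
        }
    ],
}
```

This lets callers tune the activation threshold per run without editing the workflow definition itself.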
v2¶
Class: ByteTrackerBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/byte_tracker@v2
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
A ✅ in the Refs column means the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check which blocks you can connect to Byte Tracker in version v2.
- inputs: Object Detection Model, VLM as Detector, Detections Transformation, Identify Outliers, Detections Stitch, Path Deviation, Detections Merge, YOLO-World Model, Detection Offset, Line Counter, Perspective Correction, Byte Tracker, Detections Consensus, Path Deviation, Distance Measurement, Time in Zone, Detections Filter, Clip Comparison, Time in Zone, Dynamic Crop, Template Matching, Byte Tracker, Detections Classes Replacement, Instance Segmentation Model, Image Contours, Dynamic Zone, Instance Segmentation Model, SIFT Comparison, Object Detection Model, Bounding Rectangle, Velocity, SIFT Comparison, Google Vision OCR, VLM as Detector, Identify Changes, Byte Tracker, Line Counter, Segment Anything 2 Model, Detections Stabilizer, Pixel Color Count
- outputs: Circle Visualization, Background Color Visualization, Corner Visualization, Bounding Box Visualization, Trace Visualization, Label Visualization, Detections Transformation, Crop Visualization, Detections Stitch, Path Deviation, Detections Merge, Dot Visualization, Detection Offset, Model Comparison Visualization, Roboflow Dataset Upload, Stitch OCR Detections, Line Counter, Perspective Correction, Pixelate Visualization, Detections Stabilizer, Path Deviation, Detections Consensus, Byte Tracker, Distance Measurement, Time in Zone, Detections Filter, Color Visualization, Time in Zone, Dynamic Crop, Byte Tracker, Detections Classes Replacement, Florence-2 Model, Florence-2 Model, Triangle Visualization, Model Monitoring Inference Aggregator, Velocity, Roboflow Dataset Upload, Roboflow Custom Metadata, Byte Tracker, Line Counter, Segment Anything 2 Model, Ellipse Visualization, Size Measurement, Blur Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds for Byte Tracker in version v2 are listed below.
Bindings

- input
  - image (image): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v2",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1
}
```
v1¶
Class: ByteTrackerBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/byte_tracker@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
A ✅ in the Refs column means the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check which blocks you can connect to Byte Tracker in version v1.
- inputs: Object Detection Model, VLM as Detector, Detections Transformation, Identify Outliers, Detections Stitch, Path Deviation, Detections Merge, YOLO-World Model, Detection Offset, Line Counter, Perspective Correction, Byte Tracker, Detections Consensus, Path Deviation, Distance Measurement, Time in Zone, Detections Filter, Clip Comparison, Time in Zone, Dynamic Crop, Template Matching, Byte Tracker, Detections Classes Replacement, Instance Segmentation Model, Image Contours, Dynamic Zone, Instance Segmentation Model, SIFT Comparison, Object Detection Model, Bounding Rectangle, Velocity, SIFT Comparison, Google Vision OCR, VLM as Detector, Identify Changes, Byte Tracker, Line Counter, Segment Anything 2 Model, Detections Stabilizer, Pixel Color Count
- outputs: Circle Visualization, Background Color Visualization, Corner Visualization, Bounding Box Visualization, Trace Visualization, Label Visualization, Detections Transformation, Crop Visualization, Detections Stitch, Path Deviation, Detections Merge, Dot Visualization, Detection Offset, Model Comparison Visualization, Roboflow Dataset Upload, Stitch OCR Detections, Line Counter, Perspective Correction, Pixelate Visualization, Detections Stabilizer, Path Deviation, Detections Consensus, Byte Tracker, Distance Measurement, Time in Zone, Detections Filter, Color Visualization, Time in Zone, Dynamic Crop, Byte Tracker, Detections Classes Replacement, Florence-2 Model, Florence-2 Model, Triangle Visualization, Model Monitoring Inference Aggregator, Velocity, Roboflow Dataset Upload, Roboflow Custom Metadata, Byte Tracker, Line Counter, Segment Anything 2 Model, Ellipse Visualization, Size Measurement, Blur Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds for Byte Tracker in version v1 are listed below.
Bindings

- input
  - metadata (video_metadata): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v1",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1
}
```