Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object-tracking algorithm, to track objects across sequential video frames within workflows. The block accepts detections and their corresponding video frames as input, and initializes a tracker for each detection based on configurable parameters such as the track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters let you fine-tune the tracking process to suit specific accuracy and performance needs.
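The association step that `minimum_matching_threshold` governs can be illustrated with a toy sketch: detections in the current frame are paired with existing tracks by IoU, and pairs below the threshold are rejected. Note this is a hypothetical simplification for illustration only; real ByteTrack uses Kalman-filter motion prediction and optimal assignment, not the greedy loop shown here.

```python
# Toy illustration of ByteTrack-style association. Tracks are matched to
# current-frame detections by IoU; matches below minimum_matching_threshold
# are rejected, so those detections may start new tracks instead.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def greedy_match(tracks, detections, minimum_matching_threshold=0.8):
    """Greedily pair each track with its best unmatched detection by IoU."""
    matches, used = {}, set()
    for track_id, track_box in tracks.items():
        best_iou, best_det = 0.0, None
        for i, det_box in enumerate(detections):
            if i in used:
                continue
            score = iou(track_box, det_box)
            if score > best_iou:
                best_iou, best_det = score, i
        if best_det is not None and best_iou >= minimum_matching_threshold:
            matches[track_id] = best_det
            used.add(best_det)
    return matches

tracks = {1: (10, 10, 50, 50)}
detections = [(12, 11, 52, 51), (200, 200, 240, 240)]
print(greedy_match(tracks, detections))  # {1: 0}
```

Raising the threshold towards 1.0 makes the matcher stricter (fewer wrong associations, more fragmented tracks); lowering it accepts looser overlaps (more continuity, more risk of identity switches).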
New outputs introduced in v3
Compared to v2, the block itself is unchanged; v3 adds two new outputs:
- new_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs seen for the first time. A tracked instance appears in this output exactly once, when its tracker ID is first generated.
- already_seen_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs that have been seen before. A tracked instance appears in this output every time the tracker associates a bounding box with an already-seen tracker ID.
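The split between these two outputs can be sketched with a small bounded cache of tracker IDs. This is a hypothetical illustration of the role played by `instances_cache_size`, not the block's actual implementation:

```python
from collections import OrderedDict

def split_new_and_seen(tracker_ids, cache, cache_size=100):
    """Partition tracker IDs into first-seen and already-seen, using a
    bounded LRU-style cache (illustrates the role of instances_cache_size)."""
    new_ids, seen_ids = [], []
    for tid in tracker_ids:
        if tid in cache:
            seen_ids.append(tid)
            cache.move_to_end(tid)  # refresh recency
        else:
            new_ids.append(tid)
            cache[tid] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the oldest tracker ID
    return new_ids, seen_ids

cache = OrderedDict()
print(split_new_and_seen([1, 2], cache))  # ([1, 2], []) -- both first seen
print(split_new_and_seen([1, 3], cache))  # ([3], [1]) -- 1 was seen before
```

A too-small cache would evict long-lived tracker IDs and report them as "new" again, which is why the cache size is configurable.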
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/byte_tracker@v3
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
instances_cache_size | int | Size of the instances cache used to decide whether a tracked instance is new or already seen. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
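Since lost_track_buffer is expressed in frames, it can help to derive it from the occlusion duration you want to tolerate at your stream's frame rate. A back-of-the-envelope sketch (the helper name is illustrative, not part of the block's API):

```python
def lost_track_buffer_for(occlusion_seconds: float, frame_rate: float) -> int:
    """Frames to keep a lost track alive so occlusions up to
    `occlusion_seconds` do not fragment the track."""
    return round(occlusion_seconds * frame_rate)

# Tolerate ~1 second of occlusion on a 30 FPS stream
# (matches the example default of 30 frames):
print(lost_track_buffer_for(1.0, 30))  # 30

# Tolerate 2 seconds of occlusion at 25 FPS:
print(lost_track_buffer_for(2.0, 25))  # 50
```

The same reasoning applies to minimum_consecutive_frames: at 30 FPS, a value of 3 means an object must persist for roughly 0.1 s before its track is considered valid.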
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v3.
- inputs: Identify Changes, Relative Static Crop, Background Color Visualization, Line Counter Visualization, Grid Visualization, Gaze Detection, Detections Stabilizer, Detections Transformation, Object Detection Model, Camera Focus, Color Visualization, Instance Segmentation Model, Pixel Color Count, Label Visualization, YOLO-World Model, Detections Filter, Moondream2, Dot Visualization, Overlap Filter, Byte Tracker, Object Detection Model, Bounding Box Visualization, Detections Consensus, Template Matching, Google Vision OCR, Path Deviation, Dynamic Crop, Clip Comparison, Identify Outliers, Model Comparison Visualization, Depth Estimation, Corner Visualization, SIFT Comparison, Byte Tracker, Line Counter, SIFT Comparison, Camera Calibration, Detection Offset, Keypoint Visualization, Mask Visualization, Image Threshold, VLM as Detector, Halo Visualization, Polygon Zone Visualization, Polygon Visualization, Image Preprocessing, Bounding Rectangle, Instance Segmentation Model, SIFT, Dynamic Zone, Reference Path Visualization, Blur Visualization, Image Contours, Detections Classes Replacement, VLM as Detector, Pixelate Visualization, Keypoint Detection Model, Detections Stitch, Ellipse Visualization, Byte Tracker, Crop Visualization, Stitch Images, Line Counter, Image Slicer, Perspective Correction, Velocity, Image Blur, Stability AI Image Generation, Time in Zone, Image Convert Grayscale, Stability AI Inpainting, Triangle Visualization, Classification Label Visualization, Trace Visualization, Keypoint Detection Model, Distance Measurement, Segment Anything 2 Model, Path Deviation, Time in Zone, Absolute Static Crop, Circle Visualization, Detections Merge, Image Slicer
- outputs: Background Color Visualization, Roboflow Dataset Upload, Detections Stabilizer, Detections Transformation, Roboflow Dataset Upload, Color Visualization, Label Visualization, Detections Filter, Overlap Filter, Dot Visualization, Byte Tracker, Bounding Box Visualization, Detections Consensus, Path Deviation, Dynamic Crop, Model Comparison Visualization, Byte Tracker, Line Counter, Corner Visualization, Detection Offset, Florence-2 Model, Roboflow Custom Metadata, Blur Visualization, Detections Classes Replacement, Pixelate Visualization, Detections Stitch, Ellipse Visualization, Byte Tracker, Crop Visualization, Line Counter, Florence-2 Model, Stitch OCR Detections, Perspective Correction, Size Measurement, Velocity, Time in Zone, Model Monitoring Inference Aggregator, Triangle Visualization, Trace Visualization, Distance Measurement, Segment Anything 2 Model, Path Deviation, Time in Zone, Circle Visualization, Detections Merge
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v3 has.
Bindings
- input:
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents accidental tracks from false or double detections, but risks missing shorter tracks.
- output:
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v3:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v3",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1,
    "instances_cache_size": "<block_does_not_provide_example>"
}
```
v2¶
Class: ByteTrackerBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object-tracking algorithm, to track objects across sequential video frames within workflows. The block accepts detections and their corresponding video frames as input, and initializes a tracker for each detection based on configurable parameters such as the track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters let you fine-tune the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/byte_tracker@v2
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v2.
- inputs: Instance Segmentation Model, Identify Changes, Dynamic Zone, Detections Classes Replacement, VLM as Detector, Image Contours, Detections Stabilizer, Detections Transformation, Object Detection Model, Detections Stitch, Instance Segmentation Model, Byte Tracker, Pixel Color Count, YOLO-World Model, Detections Filter, Moondream2, Overlap Filter, Byte Tracker, Line Counter, Object Detection Model, Detections Consensus, Template Matching, Perspective Correction, Google Vision OCR, Path Deviation, Velocity, Time in Zone, Dynamic Crop, Clip Comparison, Identify Outliers, Distance Measurement, Byte Tracker, Line Counter, Segment Anything 2 Model, SIFT Comparison, SIFT Comparison, Detection Offset, Path Deviation, Time in Zone, VLM as Detector, Detections Merge, Bounding Rectangle
- outputs: Background Color Visualization, Roboflow Dataset Upload, Roboflow Custom Metadata, Blur Visualization, Detections Classes Replacement, Detections Stabilizer, Detections Transformation, Pixelate Visualization, Detections Stitch, Roboflow Dataset Upload, Color Visualization, Ellipse Visualization, Byte Tracker, Label Visualization, Crop Visualization, Detections Filter, Overlap Filter, Line Counter, Dot Visualization, Byte Tracker, Bounding Box Visualization, Florence-2 Model, Stitch OCR Detections, Detections Consensus, Perspective Correction, Size Measurement, Path Deviation, Velocity, Time in Zone, Dynamic Crop, Model Monitoring Inference Aggregator, Model Comparison Visualization, Triangle Visualization, Trace Visualization, Distance Measurement, Byte Tracker, Line Counter, Segment Anything 2 Model, Corner Visualization, Detection Offset, Path Deviation, Time in Zone, Florence-2 Model, Circle Visualization, Detections Merge
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v2 has.
Bindings
- input:
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents accidental tracks from false or double detections, but risks missing shorter tracks.
- output:
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v2",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1
}
```
v1¶
Class: ByteTrackerBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object-tracking algorithm, to track objects across sequential video frames within workflows. The block accepts detections and their corresponding video frames as input, and initializes a tracker for each detection based on configurable parameters such as the track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters let you fine-tune the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/byte_tracker@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v1.
- inputs: Instance Segmentation Model, Identify Changes, Dynamic Zone, Detections Classes Replacement, VLM as Detector, Image Contours, Detections Stabilizer, Detections Transformation, Object Detection Model, Detections Stitch, Instance Segmentation Model, Byte Tracker, Pixel Color Count, YOLO-World Model, Detections Filter, Moondream2, Overlap Filter, Byte Tracker, Line Counter, Object Detection Model, Detections Consensus, Template Matching, Perspective Correction, Google Vision OCR, Path Deviation, Velocity, Time in Zone, Dynamic Crop, Clip Comparison, Identify Outliers, Distance Measurement, Byte Tracker, Line Counter, Segment Anything 2 Model, SIFT Comparison, SIFT Comparison, Detection Offset, Path Deviation, Time in Zone, VLM as Detector, Detections Merge, Bounding Rectangle
- outputs: Background Color Visualization, Roboflow Dataset Upload, Roboflow Custom Metadata, Blur Visualization, Detections Classes Replacement, Detections Stabilizer, Detections Transformation, Pixelate Visualization, Detections Stitch, Roboflow Dataset Upload, Color Visualization, Ellipse Visualization, Byte Tracker, Label Visualization, Crop Visualization, Detections Filter, Overlap Filter, Line Counter, Dot Visualization, Byte Tracker, Bounding Box Visualization, Florence-2 Model, Stitch OCR Detections, Detections Consensus, Perspective Correction, Size Measurement, Path Deviation, Velocity, Time in Zone, Dynamic Crop, Model Monitoring Inference Aggregator, Model Comparison Visualization, Triangle Visualization, Trace Visualization, Distance Measurement, Byte Tracker, Line Counter, Segment Anything 2 Model, Corner Visualization, Detection Offset, Path Deviation, Time in Zone, Florence-2 Model, Circle Visualization, Detections Merge
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v1 has.
Bindings
- input:
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents accidental tracks from false or double detections, but risks missing shorter tracks.
- output:
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v1",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1
}
```