Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
New outputs introduced in v3
The block is unchanged compared to v2, apart from two new outputs (see the sketch after this list):
- new_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs seen for the first time. A given tracked instance is listed in this output only once, at the moment its tracker ID is generated.
- already_seen_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs that have already been seen. A given tracked instance is listed in this output every time the tracker associates a bounding box with an already-seen tracker ID.
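Below is a minimal, hypothetical sketch of how these outputs might be consumed downstream. Only the Byte Tracker type identifier and the output names come from this page; the step names, the $inputs.image reference and the roboflow_core/bounding_box_visualization@v1 identifier of the downstream step are illustrative assumptions.

```python
# Hypothetical workflow fragment (Python dict form) showing the v3 outputs in use.
# The upstream detector name, the $inputs.image reference and the visualization
# block identifier are assumptions made for illustration.
steps = [
    {
        "name": "byte_tracker",
        "type": "roboflow_core/byte_tracker@v3",
        "image": "$inputs.image",  # assumed workflow image input
        "detections": "$steps.object_detection_model.predictions",
    },
    {
        # Assumed downstream step: draw boxes only for instances seen for the first time.
        "name": "new_instances_visualization",
        "type": "roboflow_core/bounding_box_visualization@v1",  # assumed identifier
        "image": "$inputs.image",
        "predictions": "$steps.byte_tracker.new_instances",
    },
]
```

The already_seen_instances output can be wired to another step in exactly the same way, using the selector $steps.byte_tracker.already_seen_instances.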
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v3 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
| instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
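As an example of such parametrisation, here is a minimal sketch that binds track_activation_threshold to a workflow input instead of a literal value. The input name confidence_threshold and the WorkflowImage/WorkflowParameter input types are assumptions about typical workflow specifications, not details taken from this page.

```python
# Hypothetical sketch: binding a parametrisable (✅) property to a workflow input.
# The input name "confidence_threshold" and the input types are assumptions
# made for illustration; verify them against your inference version.
workflow_fragment = {
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
        {"type": "WorkflowParameter", "name": "confidence_threshold", "default_value": 0.25},
    ],
    "steps": [
        {
            "name": "byte_tracker",
            "type": "roboflow_core/byte_tracker@v3",
            "image": "$inputs.image",
            "detections": "$steps.object_detection_model.predictions",
            # Dynamic selector in place of a literal float:
            "track_activation_threshold": "$inputs.confidence_threshold",
        },
    ],
}
```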
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v3.
- inputs: Segment Anything 2 Model, Image Slicer, Stability AI Inpainting, Clip Comparison, Perspective Correction, Object Detection Model, Object Detection Model, SIFT Comparison, Detection Offset, Grid Visualization, Ellipse Visualization, SIFT, VLM as Detector, Image Contours, Absolute Static Crop, Camera Focus, Trace Visualization, VLM as Detector, Dot Visualization, Google Vision OCR, Identify Changes, Polygon Zone Visualization, Identify Outliers, Classification Label Visualization, Corner Visualization, Byte Tracker, Dynamic Crop, Reference Path Visualization, Line Counter, Label Visualization, Detections Stabilizer, Mask Visualization, Triangle Visualization, Line Counter Visualization, Template Matching, Dynamic Zone, Detections Transformation, Time in Zone, Blur Visualization, Line Counter, Instance Segmentation Model, SIFT Comparison, Time in Zone, Instance Segmentation Model, Detections Filter, Pixelate Visualization, Path Deviation, Relative Static Crop, Detections Consensus, Model Comparison Visualization, Halo Visualization, Crop Visualization, Byte Tracker, Image Blur, Distance Measurement, Circle Visualization, Velocity, Image Preprocessing, Background Color Visualization, Bounding Rectangle, Pixel Color Count, Bounding Box Visualization, Byte Tracker, Image Slicer, Stitch Images, Stability AI Image Generation, Image Threshold, Detections Stitch, Keypoint Visualization, Color Visualization, Path Deviation, YOLO-World Model, Image Convert Grayscale, Detections Classes Replacement, Polygon Visualization
- outputs: Segment Anything 2 Model, Perspective Correction, Roboflow Custom Metadata, Detection Offset, Ellipse Visualization, Trace Visualization, Dot Visualization, Roboflow Dataset Upload, Byte Tracker, Corner Visualization, Dynamic Crop, Line Counter, Detections Stabilizer, Label Visualization, Triangle Visualization, Detections Transformation, Model Monitoring Inference Aggregator, Time in Zone, Blur Visualization, Line Counter, Time in Zone, Detections Filter, Stitch OCR Detections, Pixelate Visualization, Path Deviation, Detections Consensus, Roboflow Dataset Upload, Model Comparison Visualization, Crop Visualization, Byte Tracker, Distance Measurement, Circle Visualization, Velocity, Background Color Visualization, Size Measurement, Florence-2 Model, Bounding Box Visualization, Florence-2 Model, Byte Tracker, Detections Stitch, Color Visualization, Path Deviation, Detections Classes Replacement
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v3 has.
Bindings
- input
  - image (image): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v3
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v3",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1,
"instances_cache_size": "<block_does_not_provide_example>"
}
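For running this step over a video, here is a hedged sketch. It assumes the inference package's InferencePipeline.init_with_workflow helper, the roboflow_core/roboflow_object_detection_model@v1 block as the upstream detector, a yolov8n-640 model alias, and a JsonField-based outputs section; all of these reflect typical usage rather than details stated on this page, so verify them against your installed inference version.

```python
# Sketch only: running a Byte Tracker v3 workflow over a video and reading its outputs.
# The detector block identifier, model alias, output mapping and pipeline parameters
# are assumptions about typical usage; check them against your inference version.
from inference import InferencePipeline

workflow_specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "name": "object_detection_model",
            "type": "roboflow_core/roboflow_object_detection_model@v1",  # assumed detector block
            "image": "$inputs.image",
            "model_id": "yolov8n-640",  # assumed public model alias
        },
        {
            "name": "byte_tracker",
            "type": "roboflow_core/byte_tracker@v3",
            "image": "$inputs.image",
            "detections": "$steps.object_detection_model.predictions",
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "tracked_detections", "selector": "$steps.byte_tracker.tracked_detections"},
        {"type": "JsonField", "name": "new_instances", "selector": "$steps.byte_tracker.new_instances"},
    ],
}


def on_prediction(result, video_frame) -> None:
    # Each exposed output is expected to arrive as an sv.Detections object;
    # tracker IDs live in its tracker_id attribute.
    new_instances = result.get("new_instances")
    if new_instances is not None and len(new_instances) > 0:
        print("New tracker IDs:", list(new_instances.tracker_id))


pipeline = InferencePipeline.init_with_workflow(
    video_reference="path/to/video.mp4",  # file path, RTSP URL or camera index
    workflow_specification=workflow_specification,
    on_prediction=on_prediction,
    api_key="<your_roboflow_api_key>",
)
pipeline.start()
pipeline.join()
```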
v2¶
Class: ByteTrackerBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v2.
- inputs: Segment Anything 2 Model, Detections Filter, Clip Comparison, Perspective Correction, Object Detection Model, Path Deviation, Detections Consensus, Object Detection Model, SIFT Comparison, Detection Offset, VLM as Detector, Image Contours, Byte Tracker, Distance Measurement, Velocity, VLM as Detector, Google Vision OCR, Bounding Rectangle, Identify Changes, Identify Outliers, Pixel Color Count, Byte Tracker, Byte Tracker, Line Counter, Detections Stabilizer, Template Matching, Dynamic Zone, Detections Transformation, Detections Stitch, Time in Zone, Path Deviation, YOLO-World Model, Line Counter, Instance Segmentation Model, SIFT Comparison, Time in Zone, Instance Segmentation Model, Detections Classes Replacement
- outputs: Segment Anything 2 Model, Detections Filter, Stitch OCR Detections, Pixelate Visualization, Perspective Correction, Path Deviation, Roboflow Custom Metadata, Detections Consensus, Detection Offset, Roboflow Dataset Upload, Ellipse Visualization, Model Comparison Visualization, Crop Visualization, Byte Tracker, Trace Visualization, Distance Measurement, Circle Visualization, Velocity, Background Color Visualization, Dot Visualization, Roboflow Dataset Upload, Size Measurement, Florence-2 Model, Byte Tracker, Corner Visualization, Florence-2 Model, Byte Tracker, Bounding Box Visualization, Dynamic Crop, Line Counter, Detections Stabilizer, Label Visualization, Triangle Visualization, Detections Stitch, Detections Transformation, Model Monitoring Inference Aggregator, Color Visualization, Path Deviation, Time in Zone, Blur Visualization, Line Counter, Time in Zone, Detections Classes Replacement
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v2 has.
Bindings
- input
  - image (image): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}
v1¶
Class: ByteTrackerBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v1.
- inputs: Segment Anything 2 Model, Detections Filter, Clip Comparison, Perspective Correction, Object Detection Model, Path Deviation, Detections Consensus, Object Detection Model, SIFT Comparison, Detection Offset, VLM as Detector, Image Contours, Byte Tracker, Distance Measurement, Velocity, VLM as Detector, Google Vision OCR, Bounding Rectangle, Identify Changes, Identify Outliers, Pixel Color Count, Byte Tracker, Byte Tracker, Line Counter, Detections Stabilizer, Template Matching, Dynamic Zone, Detections Transformation, Detections Stitch, Time in Zone, Path Deviation, YOLO-World Model, Line Counter, Instance Segmentation Model, SIFT Comparison, Time in Zone, Instance Segmentation Model, Detections Classes Replacement
- outputs: Segment Anything 2 Model, Detections Filter, Stitch OCR Detections, Pixelate Visualization, Perspective Correction, Path Deviation, Roboflow Custom Metadata, Detections Consensus, Detection Offset, Roboflow Dataset Upload, Ellipse Visualization, Model Comparison Visualization, Crop Visualization, Byte Tracker, Trace Visualization, Distance Measurement, Circle Visualization, Velocity, Background Color Visualization, Dot Visualization, Roboflow Dataset Upload, Size Measurement, Florence-2 Model, Byte Tracker, Corner Visualization, Florence-2 Model, Byte Tracker, Bounding Box Visualization, Dynamic Crop, Line Counter, Detections Stabilizer, Label Visualization, Triangle Visualization, Detections Stitch, Detections Transformation, Model Monitoring Inference Aggregator, Color Visualization, Path Deviation, Time in Zone, Blur Visualization, Line Counter, Time in Zone, Detections Classes Replacement
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v1 has.
Bindings
- input
  - metadata (video_metadata): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}
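Unlike v2 and v3, v1 consumes video metadata rather than an image. A minimal, hypothetical fragment is shown below; the WorkflowVideoMetadata input type and the input name are assumptions about how the video_metadata kind is typically supplied, so verify them against your inference version.

```python
# Hypothetical v1 fragment: the metadata field is wired to a video-metadata input
# instead of an image. The input type and name are assumptions for illustration.
workflow_fragment_v1 = {
    "inputs": [
        {"type": "WorkflowVideoMetadata", "name": "video_metadata"},
    ],
    "steps": [
        {
            "name": "byte_tracker",
            "type": "roboflow_core/byte_tracker@v1",
            "metadata": "$inputs.video_metadata",
            "detections": "$steps.object_detection_model.predictions",
        },
    ],
}
```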