Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
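To build intuition for these parameters, here is a simplified, stdlib-only sketch of the core matching step: detections below `track_activation_threshold` are not allowed to start tracks, and a detection only extends an existing track when its IoU with the track's last box clears `minimum_matching_threshold`. This is an illustration only, not the block's actual implementation (the block wraps the `supervision` library's ByteTrack, which also uses a Kalman filter and two-stage association).

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def match_detections(tracks, detections, track_activation_threshold=0.25,
                     minimum_matching_threshold=0.8):
    """Greedy single-frame matching sketch.

    tracks: {track_id: last_box}; detections: [(box, confidence), ...]
    Returns (matched, new_track_candidates).
    """
    matched, new_track_candidates = {}, []
    for box, confidence in detections:
        best_id, best_iou = None, minimum_matching_threshold
        for track_id, track_box in tracks.items():
            if track_id in matched:
                continue  # each track absorbs at most one detection per frame
            overlap = iou(track_box, box)
            if overlap >= best_iou:
                best_id, best_iou = track_id, overlap
        if best_id is not None:
            matched[best_id] = box
        elif confidence >= track_activation_threshold:
            new_track_candidates.append(box)  # may start a new track
        # low-confidence, unmatched detections are simply dropped
    return matched, new_track_candidates
```

Raising `minimum_matching_threshold` in this sketch makes a drifting box fail to attach to its old track (fragmentation); lowering `track_activation_threshold` lets noisier detections spawn tracks.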
New outputs introduced in v3

The block is unchanged compared to v2, apart from two new outputs:

- new_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs are seen for the first time. A specific tracked instance is listed in this output only once, when its new tracker ID is generated.
- already_seen_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs were already seen. A specific tracked instance is listed in this output each time the tracker associates a bounding box with an already-seen tracker ID.
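The split between the two outputs can be sketched with a bounded cache of tracker IDs (compare the `instances_cache_size` property below). This is a hypothetical helper for illustration, not the block's actual code:

```python
from collections import OrderedDict


class InstanceCache:
    """Bounded LRU cache of tracker IDs, mimicking instances_cache_size."""

    def __init__(self, size=100):
        self._seen = OrderedDict()
        self._size = size

    def partition(self, tracker_ids):
        """Split one frame's tracker IDs into (new, already_seen)."""
        new, already_seen = [], []
        for tid in tracker_ids:
            if tid in self._seen:
                self._seen.move_to_end(tid)   # refresh recency
                already_seen.append(tid)
            else:
                self._seen[tid] = True
                new.append(tid)
                if len(self._seen) > self._size:
                    self._seen.popitem(last=False)  # evict oldest ID
        return new, already_seen
```

For example, if frame 1 yields tracker IDs [1, 2] and frame 2 yields [1, 3], the first call reports both IDs as new, while the second reports only 3 as new and 1 as already seen. Note that with a small cache, an evicted ID would be reported as new again if it reappears.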
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/byte_tracker@v3
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
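For example, a ✅-marked property can be bound to a workflow input instead of a literal value via a selector. The input name `activation_threshold` below is illustrative:

```json
{
    "name": "tracker",
    "type": "roboflow_core/byte_tracker@v3",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": "$inputs.activation_threshold"
}
```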
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker
in version v3
.
- inputs: Detection Offset, Image Convert Grayscale, VLM as Detector, Absolute Static Crop, Distance Measurement, Relative Static Crop, Line Counter Visualization, Detections Classes Replacement, Gaze Detection, Background Color Visualization, Camera Focus, Image Contours, Image Slicer, Reference Path Visualization, Keypoint Detection Model, Instance Segmentation Model, SIFT Comparison, Object Detection Model, Triangle Visualization, Path Deviation, Line Counter, Detections Stabilizer, Detections Transformation, Byte Tracker, Depth Estimation, Google Vision OCR, Dynamic Zone, Clip Comparison, Perspective Correction, Object Detection Model, Crop Visualization, Identify Changes, Dot Visualization, Detections Filter, Model Comparison Visualization, Classification Label Visualization, Camera Calibration, Instance Segmentation Model, Stability AI Image Generation, Detections Merge, Time in Zone, Trace Visualization, Time in Zone, Corner Visualization, Line Counter, Image Threshold, Blur Visualization, Stability AI Inpainting, Keypoint Detection Model, SIFT, Circle Visualization, Overlap Filter, Moondream2, Path Deviation, Label Visualization, Stitch Images, Image Preprocessing, Bounding Rectangle, Detections Stitch, Template Matching, Byte Tracker, Grid Visualization, Polygon Zone Visualization, Keypoint Visualization, Bounding Box Visualization, Image Blur, Halo Visualization, Ellipse Visualization, Color Visualization, Pixelate Visualization, SIFT Comparison, Pixel Color Count, YOLO-World Model, VLM as Detector, Velocity, Polygon Visualization, Segment Anything 2 Model, Image Slicer, Mask Visualization, Identify Outliers, Byte Tracker, Dynamic Crop, Detections Consensus
- outputs: Size Measurement, Detection Offset, Roboflow Custom Metadata, Distance Measurement, Detections Classes Replacement, Background Color Visualization, Triangle Visualization, Path Deviation, Line Counter, Detections Stabilizer, Detections Transformation, Byte Tracker, Roboflow Dataset Upload, Perspective Correction, Crop Visualization, Dot Visualization, Detections Filter, Model Comparison Visualization, Detections Merge, Time in Zone, Line Counter, Time in Zone, Trace Visualization, Corner Visualization, Blur Visualization, Overlap Filter, Circle Visualization, Path Deviation, Florence-2 Model, Label Visualization, Detections Stitch, Byte Tracker, Stitch OCR Detections, Bounding Box Visualization, Ellipse Visualization, Color Visualization, Pixelate Visualization, Velocity, Roboflow Dataset Upload, Segment Anything 2 Model, Model Monitoring Inference Aggregator, Florence-2 Model, Byte Tracker, Dynamic Crop, Detections Consensus
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v3 has.
Bindings

- input
    - image (image): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction, keypoint_detection_prediction]): Objects to be tracked.
    - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
    - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
    - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
    - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
    - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
    - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
    - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v3:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v3",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1,
    "instances_cache_size": "<block_does_not_provide_example>"
}
```
v2¶
Class: ByteTrackerBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/byte_tracker@v2
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker
in version v2
.
- inputs: Detection Offset, Moondream2, Path Deviation, VLM as Detector, Distance Measurement, Detections Classes Replacement, Bounding Rectangle, Image Contours, Detections Stitch, Template Matching, Instance Segmentation Model, Byte Tracker, SIFT Comparison, Object Detection Model, Path Deviation, Line Counter, Detections Stabilizer, Detections Transformation, Byte Tracker, Google Vision OCR, Dynamic Zone, Clip Comparison, Perspective Correction, Object Detection Model, Identify Changes, Detections Filter, SIFT Comparison, Instance Segmentation Model, Pixel Color Count, YOLO-World Model, VLM as Detector, Detections Merge, Velocity, Segment Anything 2 Model, Time in Zone, Time in Zone, Line Counter, Identify Outliers, Overlap Filter, Byte Tracker, Dynamic Crop, Detections Consensus
- outputs: Size Measurement, Detection Offset, Roboflow Custom Metadata, Path Deviation, Florence-2 Model, Distance Measurement, Label Visualization, Detections Classes Replacement, Background Color Visualization, Detections Stitch, Byte Tracker, Triangle Visualization, Path Deviation, Line Counter, Detections Stabilizer, Detections Transformation, Byte Tracker, Stitch OCR Detections, Roboflow Dataset Upload, Bounding Box Visualization, Perspective Correction, Ellipse Visualization, Color Visualization, Crop Visualization, Dot Visualization, Pixelate Visualization, Detections Filter, Model Comparison Visualization, Detections Merge, Velocity, Overlap Filter, Roboflow Dataset Upload, Segment Anything 2 Model, Time in Zone, Line Counter, Time in Zone, Trace Visualization, Corner Visualization, Blur Visualization, Model Monitoring Inference Aggregator, Florence-2 Model, Circle Visualization, Byte Tracker, Dynamic Crop, Detections Consensus
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v2 has.
Bindings

- input
    - image (image): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
    - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
    - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
    - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
    - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
    - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v2",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1
}
```
v1¶
Class: ByteTrackerBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/byte_tracker@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker
in version v1
.
- inputs: Detection Offset, Moondream2, Path Deviation, VLM as Detector, Distance Measurement, Detections Classes Replacement, Bounding Rectangle, Image Contours, Detections Stitch, Template Matching, Instance Segmentation Model, Byte Tracker, SIFT Comparison, Object Detection Model, Path Deviation, Line Counter, Detections Stabilizer, Detections Transformation, Byte Tracker, Google Vision OCR, Dynamic Zone, Clip Comparison, Perspective Correction, Object Detection Model, Identify Changes, Detections Filter, SIFT Comparison, Instance Segmentation Model, Pixel Color Count, YOLO-World Model, VLM as Detector, Detections Merge, Velocity, Segment Anything 2 Model, Time in Zone, Time in Zone, Line Counter, Identify Outliers, Overlap Filter, Byte Tracker, Dynamic Crop, Detections Consensus
- outputs: Size Measurement, Detection Offset, Roboflow Custom Metadata, Path Deviation, Florence-2 Model, Distance Measurement, Label Visualization, Detections Classes Replacement, Background Color Visualization, Detections Stitch, Byte Tracker, Triangle Visualization, Path Deviation, Line Counter, Detections Stabilizer, Detections Transformation, Byte Tracker, Stitch OCR Detections, Roboflow Dataset Upload, Bounding Box Visualization, Perspective Correction, Ellipse Visualization, Color Visualization, Crop Visualization, Dot Visualization, Pixelate Visualization, Detections Filter, Model Comparison Visualization, Detections Merge, Velocity, Overlap Filter, Roboflow Dataset Upload, Segment Anything 2 Model, Time in Zone, Line Counter, Time in Zone, Trace Visualization, Corner Visualization, Blur Visualization, Model Monitoring Inference Aggregator, Florence-2 Model, Circle Visualization, Byte Tracker, Dynamic Crop, Detections Consensus
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v1 has.
Bindings

- input
    - metadata (video_metadata): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
    - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
    - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
    - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
    - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
    - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v1",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1
}
```