Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
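The interplay of these parameters follows ByteTrack's core idea: high-confidence detections may start new tracks, while low-confidence detections are only used to extend existing tracks, and lost tracks are kept alive for a buffer of frames. The sketch below illustrates this in a toy form (a simplified illustration, not the block's actual implementation; `ToyByteTrack` and its helpers are made up, greedy IoU matching replaces the real Hungarian assignment, and `minimum_consecutive_frames` is omitted for brevity):

```python
def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


class ToyByteTrack:
    """Toy sketch of ByteTrack-style association; not the inference implementation."""

    def __init__(self, track_activation_threshold=0.25,
                 lost_track_buffer=30, minimum_matching_threshold=0.8):
        self.activation = track_activation_threshold
        self.buffer = lost_track_buffer
        self.matching = minimum_matching_threshold
        self.tracks = {}   # tracker_id -> {"box": ..., "missed": frames unseen}
        self.next_id = 0

    def update(self, detections):
        # detections: list of (box, confidence); returns list of (tracker_id, box)
        high = [d for d in detections if d[1] >= self.activation]
        low = [d for d in detections if d[1] < self.activation]
        assigned, unmatched = {}, set(self.tracks)
        # Stage 1: associate high-confidence detections; Stage 2: low-confidence
        for box, conf in high + low:
            best = max(unmatched,
                       key=lambda t: iou(self.tracks[t]["box"], box),
                       default=None)
            if best is not None and iou(self.tracks[best]["box"], box) >= self.matching:
                self.tracks[best] = {"box": box, "missed": 0}
                assigned[best] = box
                unmatched.discard(best)
            elif conf >= self.activation:
                # Only high-confidence detections may activate new tracks
                self.tracks[self.next_id] = {"box": box, "missed": 0}
                assigned[self.next_id] = box
                self.next_id += 1
        # Age out tracks unseen for more than lost_track_buffer frames
        for t in list(unmatched):
            self.tracks[t]["missed"] += 1
            if self.tracks[t]["missed"] > self.buffer:
                del self.tracks[t]
        return sorted(assigned.items())
```

Note how a detection with confidence below `track_activation_threshold` can still keep an existing track alive (stage 2), but can never create a fresh tracker ID.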
New outputs introduced in v3
The block has not changed compared to v2, apart from two new outputs:
- new_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs are seen for the first time; a specific tracked instance will only be listed in this output once, when its new tracker ID is generated.
- already_seen_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs have been seen before; a specific tracked instance will be listed in this output each time the tracker associates a bounding box with an already seen tracker ID.
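The split between the two outputs can be sketched as a bounded cache of known tracker IDs (a minimal sketch, assuming the cache behaves as a FIFO bounded by instances_cache_size; the function name and data shapes are illustrative, since the real block operates on sv.Detections objects):

```python
from collections import OrderedDict


def split_instances(tracker_ids, cache, cache_size=16384):
    """Split tracker IDs into first-seen and already-seen groups.

    cache is an OrderedDict used as a bounded FIFO of known IDs;
    illustrative only -- not the block's actual implementation.
    """
    new, seen = [], []
    for tid in tracker_ids:
        if tid in cache:
            seen.append(tid)
        else:
            new.append(tid)
            cache[tid] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the oldest ID
    return new, seen
```

On each frame, IDs that miss the cache land in `new_instances` exactly once; every later association of the same ID lands in `already_seen_instances`.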
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v3 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
| instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
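For example, a parametrisable property can reference a workflow input instead of a literal value (a hypothetical fragment; the input name activation_threshold is made up):

```json
{
  "name": "tracker",
  "type": "roboflow_core/byte_tracker@v3",
  "detections": "$steps.object_detection_model.predictions",
  "track_activation_threshold": "$inputs.activation_threshold"
}
```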
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v3.
- inputs: Identify Changes, Detection Offset, Camera Focus, Line Counter, Polygon Zone Visualization, Detections Filter, Time in Zone, Grid Visualization, YOLO-World Model, Image Convert Grayscale, Trace Visualization, Instance Segmentation Model, Absolute Static Crop, Perspective Correction, Distance Measurement, Circle Visualization, Clip Comparison, Image Slicer, Triangle Visualization, Halo Visualization, Gaze Detection, Line Counter, Byte Tracker, Byte Tracker, Corner Visualization, Detections Classes Replacement, Object Detection Model, Template Matching, Detections Consensus, Overlap Filter, Dynamic Crop, Depth Estimation, Dynamic Zone, Velocity, Stitch Images, Segment Anything 2 Model, Object Detection Model, Model Comparison Visualization, Keypoint Detection Model, Crop Visualization, Blur Visualization, SIFT Comparison, Image Threshold, Stability AI Inpainting, VLM as Detector, Relative Static Crop, Image Preprocessing, Keypoint Visualization, Background Color Visualization, Path Deviation, Pixel Color Count, Color Visualization, Moondream2, Classification Label Visualization, Google Vision OCR, Camera Calibration, Pixelate Visualization, Label Visualization, Image Slicer, Time in Zone, Reference Path Visualization, Identify Outliers, Line Counter Visualization, Byte Tracker, Image Blur, Detections Transformation, SIFT Comparison, Detections Stabilizer, Image Contours, Polygon Visualization, Instance Segmentation Model, SIFT, Detections Merge, Ellipse Visualization, Mask Visualization, Keypoint Detection Model, Detections Stitch, Bounding Box Visualization, Dot Visualization, Bounding Rectangle, Stability AI Image Generation, VLM as Detector, Path Deviation
- outputs: Detection Offset, Line Counter, Detections Filter, Time in Zone, Trace Visualization, Roboflow Custom Metadata, Perspective Correction, Distance Measurement, Circle Visualization, Triangle Visualization, Line Counter, Byte Tracker, Size Measurement, Byte Tracker, Corner Visualization, Detections Classes Replacement, Detections Consensus, Roboflow Dataset Upload, Overlap Filter, Dynamic Crop, Velocity, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Model Comparison Visualization, Crop Visualization, Blur Visualization, Background Color Visualization, Path Deviation, Color Visualization, Pixelate Visualization, Stitch OCR Detections, Label Visualization, Time in Zone, Roboflow Dataset Upload, Byte Tracker, Detections Transformation, Florence-2 Model, Detections Stabilizer, Florence-2 Model, Detections Merge, Ellipse Visualization, Detections Stitch, Bounding Box Visualization, Dot Visualization, Path Deviation
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v3 has.
Bindings
- input:
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output:
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v3:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/byte_tracker@v3",
  "image": "<block_does_not_provide_example>",
  "detections": "$steps.object_detection_model.predictions",
  "track_activation_threshold": 0.25,
  "lost_track_buffer": 30,
  "minimum_matching_threshold": 0.8,
  "minimum_consecutive_frames": 1,
  "instances_cache_size": "<block_does_not_provide_example>"
}
```
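Downstream steps can consume the v3 outputs via step selectors (a hypothetical fragment; the step names and the placeholder block type are illustrative, assuming the tracker step is named byte_tracker):

```json
{
  "name": "downstream_step",
  "type": "<compatible_block_type_here>",
  "predictions": "$steps.byte_tracker.new_instances"
}
```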
v2¶
Class: ByteTrackerBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v2.
- inputs: Identify Changes, Detection Offset, SIFT Comparison, Line Counter, Detections Filter, VLM as Detector, Time in Zone, Pixel Color Count, Path Deviation, YOLO-World Model, Instance Segmentation Model, Perspective Correction, Moondream2, Distance Measurement, Google Vision OCR, Clip Comparison, Time in Zone, Identify Outliers, Byte Tracker, Line Counter, Detections Transformation, Byte Tracker, SIFT Comparison, Detections Stabilizer, Detections Classes Replacement, Object Detection Model, Template Matching, Detections Consensus, Image Contours, Overlap Filter, Dynamic Crop, Instance Segmentation Model, Detections Merge, Dynamic Zone, Velocity, Detections Stitch, Bounding Rectangle, Byte Tracker, Segment Anything 2 Model, Object Detection Model, VLM as Detector, Path Deviation
- outputs: Detection Offset, Blur Visualization, Line Counter, Detections Filter, Time in Zone, Dot Visualization, Background Color Visualization, Path Deviation, Trace Visualization, Roboflow Custom Metadata, Color Visualization, Perspective Correction, Distance Measurement, Circle Visualization, Pixelate Visualization, Stitch OCR Detections, Label Visualization, Triangle Visualization, Time in Zone, Roboflow Dataset Upload, Byte Tracker, Line Counter, Size Measurement, Detections Transformation, Corner Visualization, Byte Tracker, Florence-2 Model, Detections Stabilizer, Detections Classes Replacement, Detections Consensus, Roboflow Dataset Upload, Overlap Filter, Dynamic Crop, Florence-2 Model, Detections Merge, Ellipse Visualization, Velocity, Model Monitoring Inference Aggregator, Detections Stitch, Segment Anything 2 Model, Byte Tracker, Bounding Box Visualization, Model Comparison Visualization, Path Deviation, Crop Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v2 has.
Bindings
- input:
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output:
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/byte_tracker@v2",
  "image": "<block_does_not_provide_example>",
  "detections": "$steps.object_detection_model.predictions",
  "track_activation_threshold": 0.25,
  "lost_track_buffer": 30,
  "minimum_matching_threshold": 0.8,
  "minimum_consecutive_frames": 1
}
```
v1¶
Class: ByteTrackerBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v1.
- inputs: Identify Changes, Detection Offset, SIFT Comparison, Line Counter, Detections Filter, VLM as Detector, Time in Zone, Pixel Color Count, Path Deviation, YOLO-World Model, Instance Segmentation Model, Perspective Correction, Moondream2, Distance Measurement, Google Vision OCR, Clip Comparison, Time in Zone, Identify Outliers, Byte Tracker, Line Counter, Detections Transformation, Byte Tracker, SIFT Comparison, Detections Stabilizer, Detections Classes Replacement, Object Detection Model, Template Matching, Detections Consensus, Image Contours, Overlap Filter, Dynamic Crop, Instance Segmentation Model, Detections Merge, Dynamic Zone, Velocity, Detections Stitch, Bounding Rectangle, Byte Tracker, Segment Anything 2 Model, Object Detection Model, VLM as Detector, Path Deviation
- outputs: Detection Offset, Blur Visualization, Line Counter, Detections Filter, Time in Zone, Dot Visualization, Background Color Visualization, Path Deviation, Trace Visualization, Roboflow Custom Metadata, Color Visualization, Perspective Correction, Distance Measurement, Circle Visualization, Pixelate Visualization, Stitch OCR Detections, Label Visualization, Triangle Visualization, Time in Zone, Roboflow Dataset Upload, Byte Tracker, Line Counter, Size Measurement, Detections Transformation, Corner Visualization, Byte Tracker, Florence-2 Model, Detections Stabilizer, Detections Classes Replacement, Detections Consensus, Roboflow Dataset Upload, Overlap Filter, Dynamic Crop, Florence-2 Model, Detections Merge, Ellipse Visualization, Velocity, Model Monitoring Inference Aggregator, Detections Stitch, Segment Anything 2 Model, Byte Tracker, Bounding Box Visualization, Model Comparison Visualization, Path Deviation, Crop Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v1 has.
Bindings
- input:
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing it improves accuracy and stability but might miss true detections; decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing it enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing it improves accuracy but risks fragmentation; decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames an object must be tracked before it is considered a valid track. Increasing it prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output:
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1:

```json
{
  "name": "<your_step_name_here>",
  "type": "roboflow_core/byte_tracker@v1",
  "metadata": "<block_does_not_provide_example>",
  "detections": "$steps.object_detection_model.predictions",
  "track_activation_threshold": 0.25,
  "lost_track_buffer": 30,
  "minimum_matching_threshold": 0.8,
  "minimum_consecutive_frames": 1
}
```