Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to track objects across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters such as the track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
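The supervision library exposes a compatible tracker as sv.ByteTrack, which can be used to get a feel for these parameters outside a workflow. Below is a minimal sketch, assuming a recent supervision release in which ByteTrack accepts these keyword arguments; the model id and the inference-package calls are illustrative, not prescribed by this block.

```python
# Minimal sketch (assumptions: recent `supervision` with these ByteTrack kwargs;
# `inference.get_model` may require ROBOFLOW_API_KEY; model id is illustrative).
import supervision as sv
from inference import get_model

model = get_model(model_id="yolov8n-640")  # any detector producing sv-compatible results
tracker = sv.ByteTrack(
    track_activation_threshold=0.25,  # confidence needed to activate a track
    lost_track_buffer=30,             # frames to keep a lost track alive
    minimum_matching_threshold=0.8,   # matching threshold between tracks and detections
    minimum_consecutive_frames=1,     # frames before a track is considered valid
    frame_rate=30,                    # should match the video frame rate
)

for frame in sv.get_video_frames_generator("video.mp4"):
    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    tracked = tracker.update_with_detections(detections)
    print(tracked.tracker_id)  # per-frame tracker IDs assigned by ByteTrack
```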
New outputs introduced in v3
The block is unchanged compared to v2, apart from two new outputs:
- new_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs are seen for the first time; a specific tracked instance appears in this output only once, when its new tracker ID is generated.
- already_seen_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs have already been seen; a specific tracked instance appears in this output every time the tracker associates a bounding box with an already seen tracker ID.
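Conceptually, this split amounts to keeping a bounded cache of tracker IDs that have already been emitted. The sketch below is not the block's actual implementation, only a minimal illustration of that idea; the cache bound plays the role of instances_cache_size and its value here is an arbitrary assumption.

```python
# Illustrative sketch only (not the block's implementation): split tracked
# detections into "new" and "already seen" by tracker ID, keeping at most
# CACHE_SIZE recently seen IDs.
from collections import OrderedDict

import numpy as np
import supervision as sv

seen_ids: "OrderedDict[int, None]" = OrderedDict()
CACHE_SIZE = 16384  # stands in for instances_cache_size (value assumed)


def split_instances(tracked: sv.Detections) -> tuple[sv.Detections, sv.Detections]:
    # boolean mask: True where the tracker ID has not been seen before
    is_new = np.array([int(tid) not in seen_ids for tid in tracked.tracker_id], dtype=bool)
    for tid in tracked.tracker_id:
        seen_ids[int(tid)] = None
        seen_ids.move_to_end(int(tid))     # mark as most recently seen
        if len(seen_ids) > CACHE_SIZE:     # evict the oldest entry when the cache is full
            seen_ids.popitem(last=False)
    return tracked[is_new], tracked[~is_new]  # (new_instances, already_seen_instances)
```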
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v3 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
| instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
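For example, a property marked ✅ can reference a workflow input instead of a hard-coded value. A minimal sketch, assuming the standard workflow specification format with a WorkflowParameter input (the input and step names here are illustrative):

```python
# Sketch: parametrising track_activation_threshold with a workflow input
# (input/step names are illustrative).
workflow_inputs = [
    {"type": "WorkflowImage", "name": "image"},
    {"type": "WorkflowParameter", "name": "activation_threshold", "default_value": 0.25},
]

byte_tracker_step = {
    "type": "roboflow_core/byte_tracker@v3",
    "name": "byte_tracker",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    # bound to the workflow parameter rather than a literal value:
    "track_activation_threshold": "$inputs.activation_threshold",
}
```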
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v3.
- inputs: Keypoint Detection Model, Image Convert Grayscale, Google Vision OCR, Gaze Detection, Detections Classes Replacement, Detection Offset, Distance Measurement, Pixelate Visualization, SIFT, VLM as Detector, Moondream2, Stability AI Image Generation, YOLO-World Model, SIFT Comparison, Mask Visualization, Object Detection Model, Image Slicer, Overlap Filter, Triangle Visualization, VLM as Detector, Path Deviation, Polygon Zone Visualization, Model Comparison Visualization, Crop Visualization, Classification Label Visualization, Segment Anything 2 Model, Keypoint Detection Model, Reference Path Visualization, Line Counter, Bounding Box Visualization, Image Contours, Byte Tracker, Circle Visualization, Perspective Correction, Pixel Color Count, Polygon Visualization, Detections Transformation, Instance Segmentation Model, Trace Visualization, Detections Merge, Color Visualization, Identify Changes, Detections Consensus, Image Threshold, Detections Stitch, SIFT Comparison, Absolute Static Crop, Line Counter Visualization, Stitch Images, Line Counter, Dot Visualization, Detections Filter, Identify Outliers, Dynamic Zone, Instance Segmentation Model, Background Color Visualization, Detections Stabilizer, Image Slicer, Template Matching, Bounding Rectangle, Image Blur, Time in Zone, Camera Focus, Grid Visualization, Path Deviation, Object Detection Model, Blur Visualization, Label Visualization, Stability AI Inpainting, Depth Estimation, Image Preprocessing, Byte Tracker, Ellipse Visualization, ONVIF Control, Byte Tracker, Halo Visualization, Corner Visualization, Camera Calibration, Clip Comparison, Time in Zone, Dynamic Crop, Relative Static Crop, Keypoint Visualization, Velocity
- outputs: Detections Classes Replacement, Florence-2 Model, Detection Offset, Distance Measurement, Pixelate Visualization, Overlap Filter, Triangle Visualization, Path Deviation, Model Comparison Visualization, Crop Visualization, Segment Anything 2 Model, Line Counter, Bounding Box Visualization, Roboflow Custom Metadata, Byte Tracker, Size Measurement, Circle Visualization, Perspective Correction, Detections Transformation, Detections Merge, Trace Visualization, Color Visualization, Detections Consensus, Detections Stitch, Line Counter, Detections Filter, Dot Visualization, Background Color Visualization, Florence-2 Model, Model Monitoring Inference Aggregator, Detections Stabilizer, Stitch OCR Detections, Roboflow Dataset Upload, Roboflow Dataset Upload, Time in Zone, Path Deviation, Blur Visualization, Label Visualization, Byte Tracker, Ellipse Visualization, ONVIF Control, Byte Tracker, Corner Visualization, Time in Zone, Dynamic Crop, Velocity
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Byte Tracker in version v3 has.
Bindings
- input:
    - image (image): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction, keypoint_detection_prediction]): Objects to be tracked.
    - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
    - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
    - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
    - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output:
    - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
    - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
    - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v3:
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v3",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1,
    "instances_cache_size": "<block_does_not_provide_example>"
}
```
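To make the step definition above concrete, here is a hedged end-to-end sketch that embeds the v3 step in a small workflow specification and runs it on a video with the inference package's InferencePipeline. The object detection block identifier, the model id, and the init_with_workflow signature are assumptions drawn from typical inference usage, not guarantees of this block's documentation.

```python
# Sketch only: run Byte Tracker v3 inside a small workflow on a video file.
# Assumptions: the `inference` package is installed, InferencePipeline exposes
# init_with_workflow(workflow_specification=..., ...), and the detector block
# identifier / model id below are valid in your environment. An api_key (or the
# ROBOFLOW_API_KEY environment variable) may be required to load the model.
from inference import InferencePipeline

workflow_specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "name": "object_detection_model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",
        },
        {
            "type": "roboflow_core/byte_tracker@v3",
            "name": "byte_tracker",
            "image": "$inputs.image",
            "detections": "$steps.object_detection_model.predictions",
            "track_activation_threshold": 0.25,
            "lost_track_buffer": 30,
            "minimum_matching_threshold": 0.8,
            "minimum_consecutive_frames": 1,
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "tracked_detections", "selector": "$steps.byte_tracker.tracked_detections"},
        {"type": "JsonField", "name": "new_instances", "selector": "$steps.byte_tracker.new_instances"},
    ],
}


def on_prediction(result, video_frame):
    # result is keyed by the workflow outputs defined above
    print(result["tracked_detections"])


pipeline = InferencePipeline.init_with_workflow(
    video_reference="video.mp4",
    workflow_specification=workflow_specification,
    on_prediction=on_prediction,
)
pipeline.start()
pipeline.join()
```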
v2¶
Class: ByteTrackerBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to track objects across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters such as the track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v2 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v2.
- inputs: Detections Stitch, Google Vision OCR, SIFT Comparison, Detections Classes Replacement, Detection Offset, Distance Measurement, VLM as Detector, Line Counter, YOLO-World Model, Moondream2, SIFT Comparison, Detections Filter, Object Detection Model, Overlap Filter, Identify Outliers, Dynamic Zone, Instance Segmentation Model, Detections Stabilizer, Template Matching, Bounding Rectangle, VLM as Detector, Path Deviation, Time in Zone, Path Deviation, Object Detection Model, Segment Anything 2 Model, Byte Tracker, ONVIF Control, Byte Tracker, Line Counter, Byte Tracker, Image Contours, Clip Comparison, Time in Zone, Perspective Correction, Dynamic Crop, Pixel Color Count, Detections Transformation, Instance Segmentation Model, Detections Merge, Velocity, Identify Changes, Detections Consensus
- outputs: Detections Stitch, Detections Classes Replacement, Florence-2 Model, Detection Offset, Distance Measurement, Pixelate Visualization, Line Counter, Detections Filter, Dot Visualization, Overlap Filter, Background Color Visualization, Florence-2 Model, Model Monitoring Inference Aggregator, Detections Stabilizer, Triangle Visualization, Stitch OCR Detections, Path Deviation, Roboflow Dataset Upload, Roboflow Dataset Upload, Time in Zone, Path Deviation, Model Comparison Visualization, Crop Visualization, Blur Visualization, Label Visualization, Segment Anything 2 Model, Byte Tracker, Ellipse Visualization, ONVIF Control, Byte Tracker, Line Counter, Bounding Box Visualization, Roboflow Custom Metadata, Byte Tracker, Size Measurement, Corner Visualization, Time in Zone, Circle Visualization, Perspective Correction, Dynamic Crop, Detections Transformation, Detections Merge, Trace Visualization, Velocity, Color Visualization, Detections Consensus
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Byte Tracker in version v2 has.
Bindings
- input:
    - image (image): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
    - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
    - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
    - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
    - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output:
    - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2:
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v2",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1
}
```
v1¶
Class: ByteTrackerBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to track objects across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters such as the track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
| lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
| minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
| minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v1.
- inputs: Detections Stitch, Google Vision OCR, SIFT Comparison, Detections Classes Replacement, Detection Offset, Distance Measurement, VLM as Detector, Line Counter, YOLO-World Model, Moondream2, SIFT Comparison, Detections Filter, Object Detection Model, Overlap Filter, Identify Outliers, Dynamic Zone, Instance Segmentation Model, Detections Stabilizer, Template Matching, Bounding Rectangle, VLM as Detector, Path Deviation, Time in Zone, Path Deviation, Object Detection Model, Segment Anything 2 Model, Byte Tracker, ONVIF Control, Byte Tracker, Line Counter, Byte Tracker, Image Contours, Clip Comparison, Time in Zone, Perspective Correction, Dynamic Crop, Pixel Color Count, Detections Transformation, Instance Segmentation Model, Detections Merge, Velocity, Identify Changes, Detections Consensus
- outputs: Detections Stitch, Detections Classes Replacement, Florence-2 Model, Detection Offset, Distance Measurement, Pixelate Visualization, Line Counter, Detections Filter, Dot Visualization, Overlap Filter, Background Color Visualization, Florence-2 Model, Model Monitoring Inference Aggregator, Detections Stabilizer, Triangle Visualization, Stitch OCR Detections, Path Deviation, Roboflow Dataset Upload, Roboflow Dataset Upload, Time in Zone, Path Deviation, Model Comparison Visualization, Crop Visualization, Blur Visualization, Label Visualization, Segment Anything 2 Model, Byte Tracker, Ellipse Visualization, ONVIF Control, Byte Tracker, Line Counter, Bounding Box Visualization, Roboflow Custom Metadata, Byte Tracker, Size Measurement, Corner Visualization, Time in Zone, Circle Visualization, Perspective Correction, Dynamic Crop, Detections Transformation, Detections Merge, Trace Visualization, Velocity, Color Visualization, Detections Consensus
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Byte Tracker in version v1 has.
Bindings
- input:
    - metadata (video_metadata): not available.
    - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
    - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
    - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
    - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
    - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output:
    - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1:
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v1",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1
}
```