Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
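The interplay of two of these parameters can be sketched with a minimal, pure-Python model (the class and names below are illustrative stand-ins, not the block's actual implementation): a detection only seeds or extends a track once its confidence clears `track_activation_threshold`, and a track is only reported after `minimum_consecutive_frames` matched frames. Real ByteTrack additionally runs a second, low-confidence association stage that this sketch omits.

```python
from dataclasses import dataclass


@dataclass
class Track:
    # Illustrative stand-in for a ByteTrack track, not the real implementation.
    track_id: int
    consecutive_frames: int = 1


class ToyTracker:
    """Toy model of two ByteTrack parameters: track_activation_threshold
    gates which detections may start/extend tracks, and
    minimum_consecutive_frames gates when a track ID is reported."""

    def __init__(self, track_activation_threshold=0.25, minimum_consecutive_frames=1):
        self.track_activation_threshold = track_activation_threshold
        self.minimum_consecutive_frames = minimum_consecutive_frames
        self._next_id = 1
        self._tracks = {}  # keyed by a toy "object key" instead of IoU matching

    def update(self, detections):
        """detections: list of (object_key, confidence) tuples.
        Returns the track IDs considered 'valid' on this frame."""
        valid = []
        for key, confidence in detections:
            if confidence < self.track_activation_threshold:
                continue  # too weak to activate or extend a track
            track = self._tracks.get(key)
            if track is None:
                track = Track(track_id=self._next_id)
                self._next_id += 1
                self._tracks[key] = track
            else:
                track.consecutive_frames += 1
            if track.consecutive_frames >= self.minimum_consecutive_frames:
                valid.append(track.track_id)
        return valid
```

With `minimum_consecutive_frames=2`, an object must be matched in two successive frames before its ID is emitted, which suppresses one-frame false detections at the cost of delaying short tracks.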
New outputs introduced in v3
The block is unchanged compared to v2, apart from two new outputs:
- new_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs seen for the first time. A specific tracked instance appears in this output only once, when its tracker ID is generated.
- already_seen_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs that were already seen. A specific tracked instance appears in this output every time the tracker associates a bounding box with an already-known tracker ID.
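The split between the two outputs can be modelled with a small fixed-size cache of tracker IDs; this is a hypothetical sketch (not the block's actual code) in which `cache_size` plays the role of `instances_cache_size`, bounding how many IDs are remembered:

```python
from collections import OrderedDict


def split_new_vs_seen(tracker_ids, cache, cache_size=16384):
    """Partition tracker IDs into first-seen and already-seen, using an
    LRU-style cache bounded by cache_size (mimics instances_cache_size)."""
    new_ids, seen_ids = [], []
    for tid in tracker_ids:
        if tid in cache:
            seen_ids.append(tid)
            cache.move_to_end(tid)  # refresh recency
        else:
            new_ids.append(tid)
            cache[tid] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the oldest remembered ID
    return new_ids, seen_ids
```

Across frames, an ID lands in `new_ids` exactly once (for as long as it stays in the cache) and in `seen_ids` on every later sighting; once an ID is evicted from a full cache, it would be reported as new again.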
Type identifier¶
Use the identifier roboflow_core/byte_tracker@v3 in the step "type" field to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs
---|---|---|---
name | str | Enter a unique identifier for this step. | ❌
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks. | ✅
instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
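For example, a ✅-marked property can be bound to a workflow input via a "$inputs.<name>" selector instead of a literal value. The sketch below builds such a step definition in Python; the input names confidence_gate and buffer_frames are made up for illustration:

```python
import json

# Hypothetical sketch: binding two ✅-marked properties to workflow inputs
# ("$inputs.<name>" selectors) instead of hard-coded literals. The input
# names (confidence_gate, buffer_frames) are invented for this example.
step = {
    "name": "tracker",
    "type": "roboflow_core/byte_tracker@v3",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": "$inputs.confidence_gate",
    "lost_track_buffer": "$inputs.buffer_frames",
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1,
}
print(json.dumps(step, indent=2))
```

Properties marked ❌ (such as name and instances_cache_size) must stay literal.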
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v3.
- inputs: Bounding Box Visualization, Keypoint Detection Model, Stability AI Outpainting, Detections Merge, Moondream2, Overlap Filter, Image Threshold, Detections Stabilizer, Model Comparison Visualization, SIFT Comparison, Gaze Detection, Image Slicer, Corner Visualization, Background Color Visualization, Distance Measurement, Image Contours, Time in Zone, Mask Visualization, QR Code Generator, Classification Label Visualization, Detections Transformation, Trace Visualization, Polygon Visualization, Perspective Correction, Instance Segmentation Model, Path Deviation, Grid Visualization, SIFT Comparison, Clip Comparison, Instance Segmentation Model, Byte Tracker, Image Convert Grayscale, PTZ Tracking (ONVIF), Line Counter, Dot Visualization, Relative Static Crop, Ellipse Visualization, Object Detection Model, Keypoint Detection Model, Pixel Color Count, Dynamic Zone, Halo Visualization, Polygon Zone Visualization, Time in Zone, VLM as Detector, Icon Visualization, Triangle Visualization, Crop Visualization, Pixelate Visualization, Path Deviation, Stitch Images, SIFT, Color Visualization, Blur Visualization, Camera Focus, Absolute Static Crop, Label Visualization, Line Counter Visualization, Detection Offset, Detections Consensus, Reference Path Visualization, Velocity, Camera Calibration, Image Blur, Dynamic Crop, Byte Tracker, Segment Anything 2 Model, Circle Visualization, Template Matching, YOLO-World Model, Identify Outliers, Byte Tracker, Image Slicer, Depth Estimation, Bounding Rectangle, Image Preprocessing, Stability AI Inpainting, Line Counter, Keypoint Visualization, Identify Changes, Detections Classes Replacement, Object Detection Model, Stability AI Image Generation, VLM as Detector, Detections Filter, Detections Stitch, Google Vision OCR
- outputs: Model Monitoring Inference Aggregator, Bounding Box Visualization, Detections Merge, Overlap Filter, Detections Stabilizer, Model Comparison Visualization, Corner Visualization, Distance Measurement, Background Color Visualization, Time in Zone, Detections Transformation, Trace Visualization, Perspective Correction, Florence-2 Model, Path Deviation, Byte Tracker, PTZ Tracking (ONVIF), Line Counter, Dot Visualization, Ellipse Visualization, Time in Zone, Icon Visualization, Triangle Visualization, Crop Visualization, Size Measurement, Pixelate Visualization, Path Deviation, Stitch OCR Detections, Color Visualization, Blur Visualization, Florence-2 Model, Label Visualization, Detection Offset, Detections Consensus, Velocity, Dynamic Crop, Byte Tracker, Roboflow Dataset Upload, Circle Visualization, Segment Anything 2 Model, Byte Tracker, Line Counter, Detections Classes Replacement, Roboflow Dataset Upload, Detections Filter, Detections Stitch, Roboflow Custom Metadata
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v3 has.
Bindings
- input
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker
in version v3
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v3",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1,
"instances_cache_size": "<block_does_not_provide_example>"
}
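Downstream steps consume this block's outputs through "$steps.<step_name>.<output>" selectors. A small hypothetical helper (not part of the Roboflow API) makes the pattern explicit:

```python
def output_selector(step_name: str, output: str) -> str:
    """Build a '$steps.<step>.<output>' selector string for wiring a
    Byte Tracker output into a downstream step (hypothetical helper)."""
    return f"$steps.{step_name}.{output}"


# Selectors for the three v3 outputs of a step named "tracker":
tracked = output_selector("tracker", "tracked_detections")      # every tracked box
first_seen = output_selector("tracker", "new_instances")        # each tracker ID once
repeats = output_selector("tracker", "already_seen_instances")  # every later sighting
```

For example, feeding first_seen (rather than tracked) into a counting step would count each physical object once instead of once per frame.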
v2¶
Class: ByteTrackerBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the identifier roboflow_core/byte_tracker@v2 in the step "type" field to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs
---|---|---|---
name | str | Enter a unique identifier for this step. | ❌
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks. | ✅
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v2.
- inputs: Detections Merge, Moondream2, Overlap Filter, Detections Stabilizer, SIFT Comparison, Distance Measurement, Time in Zone, Image Contours, Detection Offset, Detections Consensus, Detections Transformation, Velocity, Perspective Correction, Instance Segmentation Model, Path Deviation, Dynamic Crop, Byte Tracker, Segment Anything 2 Model, Template Matching, SIFT Comparison, YOLO-World Model, Clip Comparison, Instance Segmentation Model, Identify Outliers, Byte Tracker, Byte Tracker, PTZ Tracking (ONVIF), Line Counter, Bounding Rectangle, Object Detection Model, Line Counter, Pixel Color Count, Identify Changes, Dynamic Zone, Time in Zone, Detections Classes Replacement, VLM as Detector, Object Detection Model, VLM as Detector, Detections Filter, Detections Stitch, Google Vision OCR, Path Deviation
- outputs: Model Monitoring Inference Aggregator, Bounding Box Visualization, Stitch OCR Detections, Color Visualization, Detections Merge, Blur Visualization, Overlap Filter, Detections Stabilizer, Model Comparison Visualization, Corner Visualization, Distance Measurement, Background Color Visualization, Time in Zone, Florence-2 Model, Label Visualization, Detection Offset, Detections Consensus, Detections Transformation, Trace Visualization, Velocity, Perspective Correction, Florence-2 Model, Path Deviation, Dynamic Crop, Byte Tracker, Roboflow Dataset Upload, Circle Visualization, Segment Anything 2 Model, Byte Tracker, Byte Tracker, PTZ Tracking (ONVIF), Line Counter, Dot Visualization, Ellipse Visualization, Line Counter, Time in Zone, Roboflow Custom Metadata, Detections Classes Replacement, Icon Visualization, Roboflow Dataset Upload, Triangle Visualization, Crop Visualization, Size Measurement, Detections Filter, Detections Stitch, Pixelate Visualization, Path Deviation
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v2 has.
Bindings
- input
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker
in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}
v1¶
Class: ByteTrackerBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the identifier roboflow_core/byte_tracker@v1 in the step "type" field to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs
---|---|---|---
name | str | Enter a unique identifier for this step. | ❌
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks. | ✅
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v1.
- inputs: Detections Merge, Moondream2, Overlap Filter, Detections Stabilizer, SIFT Comparison, Distance Measurement, Time in Zone, Image Contours, Detection Offset, Detections Consensus, Detections Transformation, Velocity, Perspective Correction, Instance Segmentation Model, Path Deviation, Dynamic Crop, Byte Tracker, Segment Anything 2 Model, Template Matching, SIFT Comparison, YOLO-World Model, Clip Comparison, Instance Segmentation Model, Identify Outliers, Byte Tracker, Byte Tracker, PTZ Tracking (ONVIF), Line Counter, Bounding Rectangle, Object Detection Model, Line Counter, Pixel Color Count, Identify Changes, Dynamic Zone, Time in Zone, Detections Classes Replacement, VLM as Detector, Object Detection Model, VLM as Detector, Detections Filter, Detections Stitch, Google Vision OCR, Path Deviation
- outputs: Model Monitoring Inference Aggregator, Bounding Box Visualization, Stitch OCR Detections, Color Visualization, Detections Merge, Blur Visualization, Overlap Filter, Detections Stabilizer, Model Comparison Visualization, Corner Visualization, Distance Measurement, Background Color Visualization, Time in Zone, Florence-2 Model, Label Visualization, Detection Offset, Detections Consensus, Detections Transformation, Trace Visualization, Velocity, Perspective Correction, Florence-2 Model, Path Deviation, Dynamic Crop, Byte Tracker, Roboflow Dataset Upload, Circle Visualization, Segment Anything 2 Model, Byte Tracker, Byte Tracker, PTZ Tracking (ONVIF), Line Counter, Dot Visualization, Ellipse Visualization, Line Counter, Time in Zone, Roboflow Custom Metadata, Detections Classes Replacement, Icon Visualization, Roboflow Dataset Upload, Triangle Visualization, Crop Visualization, Size Measurement, Detections Filter, Detections Stitch, Pixelate Visualization, Path Deviation
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v1 has.
Bindings
- input
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker
in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}