Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced multi-object tracking algorithm,
to track objects across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters such as the track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow you to fine-tune the tracking process to suit specific accuracy and performance needs.
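To build intuition for two of these parameters, the toy, single-object sketch below illustrates the semantics of lost_track_buffer and track_activation_threshold: detections below the activation threshold never start a track, and a lost track survives up to lost_track_buffer frames without a matching detection before it is dropped. This is an illustration of the parameters' meaning, not the actual ByteTrack algorithm.

```python
def run_toy_tracker(frames, track_activation_threshold=0.25, lost_track_buffer=3):
    """frames: per-frame detection confidence, or None when nothing is detected.

    Returns the tracker ID assigned on each frame (None = no active track).
    """
    next_id = 1
    active_id = None
    frames_lost = 0
    assigned = []
    for conf in frames:
        detected = conf is not None and conf >= track_activation_threshold
        if detected:
            if active_id is None:
                active_id = next_id  # new track activated
                next_id += 1
            frames_lost = 0
            assigned.append(active_id)
        else:
            if active_id is not None:
                frames_lost += 1
                if frames_lost > lost_track_buffer:
                    active_id = None  # track dropped once the buffer is exhausted
            assigned.append(None)
    return assigned

# A 2-frame detection gap is bridged (the ID stays 1), but a gap longer than
# the buffer drops the track, so the object re-appears with a fresh ID.
ids = run_toy_tracker(
    [0.9, None, None, 0.8, None, None, None, None, 0.9],
    lost_track_buffer=3,
)
# ids == [1, None, None, 1, None, None, None, None, 2]
```

The same trade-off described above is visible here: a larger buffer bridges longer occlusions at the risk of keeping stale tracks alive.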
New outputs introduced in v3
The block is unchanged compared to v2, apart from two new outputs:
- new_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs are seen for the first time. A specific tracked instance appears in this output only once, when its tracker ID is generated.
- already_seen_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs were already seen. A specific tracked instance appears in this output every time the tracker associates a bounding box with an already-seen tracker ID.
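The split between the two outputs can be pictured as a bounded cache of already-seen tracker IDs (its size mirroring the block's instances_cache_size property). The sketch below is a minimal illustration of that logic on plain tracker-ID lists; the real block operates on sv.Detections objects.

```python
from collections import OrderedDict

def split_instances(tracker_ids, seen, cache_size=16384):
    """tracker_ids: IDs present in the current frame.

    seen: OrderedDict used as a bounded, LRU-style set of known tracker IDs.
    Returns (new, already_seen) ID lists for this frame.
    """
    new, already_seen = [], []
    for tid in tracker_ids:
        if tid in seen:
            already_seen.append(tid)   # emitted every time the ID re-appears
        else:
            new.append(tid)            # emitted only once per tracker ID
        seen[tid] = True
        seen.move_to_end(tid)
        while len(seen) > cache_size:
            seen.popitem(last=False)   # evict the oldest ID once the cache is full
    return new, already_seen

seen = OrderedDict()
frame1 = split_instances([1, 2], seen)     # both IDs are new: ([1, 2], [])
frame2 = split_instances([1, 2, 3], seen)  # 3 is new; 1 and 2 were seen: ([3], [1, 2])
```

Note the consequence of a bounded cache: once an ID is evicted, a long-absent instance would be reported as new again if its tracker ID re-appeared.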
Type identifier¶
Use the following identifier in the step "type"
field to add the block as a step in your workflow: roboflow_core/byte_tracker@v3
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks. | ✅ |
instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
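For example, a property marked ✅ in the Refs column can be bound to a workflow input selector instead of a literal value (the input name `confidence` below is hypothetical, chosen for illustration):

```json
{
    "name": "tracker",
    "type": "roboflow_core/byte_tracker@v3",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": "$inputs.confidence"
}
```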
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker
in version v3
.
- inputs:
Path Deviation
,Background Color Visualization
,Byte Tracker
,Detections Filter
,Trace Visualization
,Time in Zone
,Google Vision OCR
,Dot Visualization
,Classification Label Visualization
,Stitch Images
,Line Counter
,Line Counter Visualization
,SIFT Comparison
,Pixel Color Count
,Time in Zone
,Model Comparison Visualization
,Line Counter
,Absolute Static Crop
,Stability AI Image Generation
,Image Contours
,Dynamic Crop
,Object Detection Model
,Image Slicer
,Gaze Detection
,Object Detection Model
,Image Preprocessing
,Circle Visualization
,Distance Measurement
,Detection Offset
,VLM as Detector
,Keypoint Detection Model
,Keypoint Detection Model
,PTZ Tracking (ONVIF)
,Relative Static Crop
,Image Slicer
,YOLO-World Model
,Moondream2
,Label Visualization
,Byte Tracker
,Image Threshold
,Reference Path Visualization
,Depth Estimation
,Bounding Box Visualization
,Dynamic Zone
,Icon Visualization
,Detections Transformation
,Polygon Visualization
,Byte Tracker
,Path Deviation
,Identify Outliers
,Ellipse Visualization
,Pixelate Visualization
,SIFT
,Grid Visualization
,Camera Focus
,Instance Segmentation Model
,Detections Consensus
,Bounding Rectangle
,Image Blur
,Detections Classes Replacement
,Keypoint Visualization
,Polygon Zone Visualization
,Template Matching
,Crop Visualization
,Corner Visualization
,Triangle Visualization
,Detections Stabilizer
,Clip Comparison
,Stability AI Inpainting
,Overlap Filter
,QR Code Generator
,Stability AI Outpainting
,Camera Calibration
,Detections Stitch
,Image Convert Grayscale
,Time in Zone
,SIFT Comparison
,Segment Anything 2 Model
,Blur Visualization
,Velocity
,VLM as Detector
,Identify Changes
,Color Visualization
,Instance Segmentation Model
,Halo Visualization
,Perspective Correction
,Mask Visualization
,Detections Merge
- outputs:
Path Deviation
,Background Color Visualization
,Byte Tracker
,Detections Filter
,Time in Zone
,Trace Visualization
,Dot Visualization
,Model Monitoring Inference Aggregator
,Line Counter
,Model Comparison Visualization
,Line Counter
,Dynamic Crop
,Circle Visualization
,Distance Measurement
,Detection Offset
,Florence-2 Model
,PTZ Tracking (ONVIF)
,Label Visualization
,Byte Tracker
,Roboflow Dataset Upload
,Bounding Box Visualization
,Roboflow Custom Metadata
,Icon Visualization
,Detections Transformation
,Florence-2 Model
,Byte Tracker
,Path Deviation
,Pixelate Visualization
,Ellipse Visualization
,Detections Consensus
,Detections Classes Replacement
,Size Measurement
,Stitch OCR Detections
,Corner Visualization
,Detections Stabilizer
,Crop Visualization
,Triangle Visualization
,Overlap Filter
,Detections Stitch
,Time in Zone
,Detections Merge
,Roboflow Dataset Upload
,Segment Anything 2 Model
,Blur Visualization
,Velocity
,Color Visualization
,Perspective Correction
,Time in Zone
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Byte Tracker
in version v3
has.
Bindings
- input
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction, keypoint_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v3:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v3",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1,
    "instances_cache_size": "<block_does_not_provide_example>"
}
```
v2¶
Class: ByteTrackerBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced multi-object tracking algorithm,
to track objects across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters such as the track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow you to fine-tune the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type"
field to add the block as a step in your workflow: roboflow_core/byte_tracker@v2
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker
in version v2
.
- inputs:
Path Deviation
,Dynamic Zone
,Detections Transformation
,Byte Tracker
,Detections Filter
,Path Deviation
,Time in Zone
,Byte Tracker
,Identify Outliers
,Google Vision OCR
,Instance Segmentation Model
,Detections Consensus
,Line Counter
,Bounding Rectangle
,SIFT Comparison
,Detections Classes Replacement
,Pixel Color Count
,Template Matching
,Line Counter
,Detections Stabilizer
,Object Detection Model
,Image Contours
,Dynamic Crop
,Clip Comparison
,Overlap Filter
,Object Detection Model
,Detections Stitch
,Time in Zone
,Detections Merge
,SIFT Comparison
,Segment Anything 2 Model
,Distance Measurement
,Velocity
,Detection Offset
,VLM as Detector
,Identify Changes
,VLM as Detector
,Instance Segmentation Model
,PTZ Tracking (ONVIF)
,Perspective Correction
,YOLO-World Model
,Moondream2
,Time in Zone
,Byte Tracker
- outputs:
Path Deviation
,Bounding Box Visualization
,Background Color Visualization
,Roboflow Custom Metadata
,Icon Visualization
,Detections Transformation
,Florence-2 Model
,Byte Tracker
,Detections Filter
,Time in Zone
,Path Deviation
,Trace Visualization
,Byte Tracker
,Pixelate Visualization
,Ellipse Visualization
,Dot Visualization
,Model Monitoring Inference Aggregator
,Detections Consensus
,Line Counter
,Byte Tracker
,Detections Classes Replacement
,Time in Zone
,Model Comparison Visualization
,Size Measurement
,Stitch OCR Detections
,Corner Visualization
,Detections Stabilizer
,Crop Visualization
,Line Counter
,Triangle Visualization
,Dynamic Crop
,Overlap Filter
,Detections Stitch
,Time in Zone
,Circle Visualization
,Roboflow Dataset Upload
,Segment Anything 2 Model
,Distance Measurement
,Blur Visualization
,Velocity
,Detection Offset
,Florence-2 Model
,Color Visualization
,PTZ Tracking (ONVIF)
,Perspective Correction
,Detections Merge
,Label Visualization
,Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Byte Tracker
in version v2
has.
Bindings
- input
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v2",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1
}
```
v1¶
Class: ByteTrackerBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced multi-object tracking algorithm,
to track objects across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters such as the track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow you to fine-tune the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type"
field to add the block as a step in your workflow: roboflow_core/byte_tracker@v1
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker
in version v1
.
- inputs:
Path Deviation
,Dynamic Zone
,Detections Transformation
,Byte Tracker
,Detections Filter
,Path Deviation
,Time in Zone
,Byte Tracker
,Identify Outliers
,Google Vision OCR
,Instance Segmentation Model
,Detections Consensus
,Line Counter
,Bounding Rectangle
,SIFT Comparison
,Detections Classes Replacement
,Pixel Color Count
,Template Matching
,Line Counter
,Detections Stabilizer
,Object Detection Model
,Image Contours
,Dynamic Crop
,Clip Comparison
,Overlap Filter
,Object Detection Model
,Detections Stitch
,Time in Zone
,Detections Merge
,SIFT Comparison
,Segment Anything 2 Model
,Distance Measurement
,Velocity
,Detection Offset
,VLM as Detector
,Identify Changes
,VLM as Detector
,Instance Segmentation Model
,PTZ Tracking (ONVIF)
,Perspective Correction
,YOLO-World Model
,Moondream2
,Time in Zone
,Byte Tracker
- outputs:
Path Deviation
,Bounding Box Visualization
,Background Color Visualization
,Roboflow Custom Metadata
,Icon Visualization
,Detections Transformation
,Florence-2 Model
,Byte Tracker
,Detections Filter
,Time in Zone
,Path Deviation
,Trace Visualization
,Byte Tracker
,Pixelate Visualization
,Ellipse Visualization
,Dot Visualization
,Model Monitoring Inference Aggregator
,Detections Consensus
,Line Counter
,Byte Tracker
,Detections Classes Replacement
,Time in Zone
,Model Comparison Visualization
,Size Measurement
,Stitch OCR Detections
,Corner Visualization
,Detections Stabilizer
,Crop Visualization
,Line Counter
,Triangle Visualization
,Dynamic Crop
,Overlap Filter
,Detections Stitch
,Time in Zone
,Circle Visualization
,Roboflow Dataset Upload
,Segment Anything 2 Model
,Distance Measurement
,Blur Visualization
,Velocity
,Detection Offset
,Florence-2 Model
,Color Visualization
,PTZ Tracking (ONVIF)
,Perspective Correction
,Detections Merge
,Label Visualization
,Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Byte Tracker
in version v1
has.
Bindings
- input
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v1",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1
}
```