Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to track objects across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
New outputs introduced in v3
The block is unchanged compared to v2, except for two new outputs (see the sketch after this list):
- new_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs seen for the first time. A specific tracked instance appears in this output only once, when its tracker ID is first generated.
- already_seen_instances: delivers sv.Detections objects whose bounding boxes carry tracker IDs that were already seen. A specific tracked instance appears in this output every time the tracker associates a bounding box with an already known tracker ID.
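As a minimal sketch of how these two outputs might be consumed, the snippet below assumes a per-frame result dictionary keyed by the block's output names (the handler name and result keys are chosen for this example); it relies only on the tracker_id attribute of sv.Detections from the supervision library:

```python
import supervision as sv

# Sketch only: a hypothetical per-frame handler. `result` is assumed to hold the
# workflow outputs for one frame, keyed by the output names of this block.
def on_frame_result(result: dict) -> None:
    new: sv.Detections = result["new_instances"]
    seen: sv.Detections = result["already_seen_instances"]

    # Each tracker ID appears in new_instances exactly once, when it is first assigned.
    if new.tracker_id is not None:
        for tracker_id in new.tracker_id:
            print(f"started tracking object {tracker_id}")

    # already_seen_instances re-lists an instance every time its known ID is matched again.
    print(f"{len(seen)} previously seen objects in this frame")
```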
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v3 to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks. | ✅ |
instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
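As an illustration of such parametrisation, the sketch below binds track_activation_threshold to a workflow input instead of a literal value. The "$inputs.<name>" selector and the "WorkflowParameter" input type follow the usual workflows convention, and all names are chosen for this example; treat it as a sketch, not a canonical definition:

```python
# Sketch only: declaring a runtime parameter and binding a ✅-marked property to it.
workflow_inputs = [
    {"type": "WorkflowImage", "name": "image"},
    {"type": "WorkflowParameter", "name": "activation_threshold"},  # assumed input type
]

byte_tracker_step = {
    "name": "byte_tracker",
    "type": "roboflow_core/byte_tracker@v3",
    "detections": "$steps.object_detection_model.predictions",
    # Bound to the runtime parameter instead of a hard-coded float:
    "track_activation_threshold": "$inputs.activation_threshold",
}
```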
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v3.
- inputs:
Label Visualization
,Depth Estimation
,Triangle Visualization
,Image Blur
,Model Comparison Visualization
,Detections Transformation
,Line Counter Visualization
,Circle Visualization
,Relative Static Crop
,Detections Stitch
,Trace Visualization
,Byte Tracker
,Object Detection Model
,Path Deviation
,Detection Offset
,Detections Consensus
,Velocity
,Stitch Images
,Gaze Detection
,Reference Path Visualization
,Bounding Rectangle
,Detections Filter
,Polygon Visualization
,Time in Zone
,Detections Merge
,Segment Anything 2 Model
,Identify Outliers
,SIFT
,Image Threshold
,Path Deviation
,Keypoint Visualization
,Ellipse Visualization
,Crop Visualization
,Color Visualization
,Image Slicer
,Dynamic Crop
,Instance Segmentation Model
,Dot Visualization
,Instance Segmentation Model
,Keypoint Detection Model
,Keypoint Detection Model
,Stability AI Inpainting
,Line Counter
,Google Vision OCR
,Identify Changes
,Template Matching
,Corner Visualization
,Overlap Filter
,Background Color Visualization
,Polygon Zone Visualization
,Byte Tracker
,Camera Focus
,Grid Visualization
,Perspective Correction
,Stability AI Image Generation
,VLM as Detector
,Line Counter
,Image Slicer
,Clip Comparison
,Blur Visualization
,Dynamic Zone
,Classification Label Visualization
,Image Convert Grayscale
,Time in Zone
,Image Preprocessing
,SIFT Comparison
,Byte Tracker
,Detections Stabilizer
,Pixel Color Count
,YOLO-World Model
,Stability AI Outpainting
,Moondream2
,Camera Calibration
,Mask Visualization
,Bounding Box Visualization
,Distance Measurement
,Pixelate Visualization
,PTZ Tracking (ONVIF)
,Image Contours
,Detections Classes Replacement
,Object Detection Model
,Absolute Static Crop
,Halo Visualization
,SIFT Comparison
,VLM as Detector
- outputs:
Florence-2 Model
,Model Monitoring Inference Aggregator
,Label Visualization
,Florence-2 Model
,Triangle Visualization
,Model Comparison Visualization
,Detections Transformation
,Circle Visualization
,Detections Stitch
,Trace Visualization
,Byte Tracker
,Path Deviation
,Detections Consensus
,Velocity
,Detection Offset
,Detections Filter
,Time in Zone
,Roboflow Dataset Upload
,Segment Anything 2 Model
,Detections Merge
,Roboflow Custom Metadata
,Path Deviation
,Ellipse Visualization
,Crop Visualization
,Color Visualization
,Dynamic Crop
,Dot Visualization
,Roboflow Dataset Upload
,Line Counter
,Corner Visualization
,Overlap Filter
,Background Color Visualization
,Byte Tracker
,Perspective Correction
,Line Counter
,Blur Visualization
,Time in Zone
,Byte Tracker
,Detections Stabilizer
,Size Measurement
,Bounding Box Visualization
,Distance Measurement
,Pixelate Visualization
,Stitch OCR Detections
,PTZ Tracking (ONVIF)
,Detections Classes Replacement
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v3 has.
Bindings
- input:
  - image (image): not available.
  - detections (Union[instance_segmentation_prediction, keypoint_detection_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks.
- output:
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v3:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v3",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1,
"instances_cache_size": "<block_does_not_provide_example>"
}
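For context, here is a minimal sketch embedding the step above in a full workflow specification and running it over a video with the inference SDK. The surrounding scaffold (inputs, the object detection step and its identifier and model_id, the outputs section) and the InferencePipeline usage are illustrative assumptions based on common workflows conventions, not part of this block's definition; verify the details against the inference documentation:

```python
from inference import InferencePipeline  # assumes the `inference` package is installed

# Sketch of a full workflow specification; only the byte_tracker step mirrors the
# JSON definition above, everything else is an assumed, illustrative scaffold.
WORKFLOW = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v2",  # assumed identifier
            "name": "object_detection_model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",  # placeholder model
        },
        {
            "type": "roboflow_core/byte_tracker@v3",
            "name": "byte_tracker",
            "image": "$inputs.image",
            "detections": "$steps.object_detection_model.predictions",
            "track_activation_threshold": 0.25,
            "lost_track_buffer": 30,
            "minimum_matching_threshold": 0.8,
            "minimum_consecutive_frames": 1,
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "tracked_detections", "selector": "$steps.byte_tracker.tracked_detections"},
        {"type": "JsonField", "name": "new_instances", "selector": "$steps.byte_tracker.new_instances"},
    ],
}

# Assumed SDK entry point for running a workflow over a video file or stream.
pipeline = InferencePipeline.init_with_workflow(
    video_reference="video.mp4",
    workflow_specification=WORKFLOW,
    on_prediction=lambda result, frame: print(result["tracked_detections"]),
)
pipeline.start()
pipeline.join()
```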
v2¶
Class: ByteTrackerBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to track objects across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v2 to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v2.
- inputs:
Identify Changes
,Line Counter
,Google Vision OCR
,Template Matching
,Overlap Filter
,Byte Tracker
,Detections Transformation
,VLM as Detector
,Perspective Correction
,Line Counter
,Detections Stitch
,Byte Tracker
,Clip Comparison
,Dynamic Zone
,Object Detection Model
,Time in Zone
,Path Deviation
,SIFT Comparison
,Byte Tracker
,Detection Offset
,Detections Consensus
,Detections Stabilizer
,Velocity
,YOLO-World Model
,Pixel Color Count
,Bounding Rectangle
,Detections Filter
,Time in Zone
,Detections Merge
,Moondream2
,Segment Anything 2 Model
,Identify Outliers
,Path Deviation
,Distance Measurement
,PTZ Tracking (ONVIF)
,Dynamic Crop
,Detections Classes Replacement
,Object Detection Model
,Image Contours
,Instance Segmentation Model
,SIFT Comparison
,VLM as Detector
,Instance Segmentation Model
- outputs:
Line Counter
,Florence-2 Model
,Model Monitoring Inference Aggregator
,Label Visualization
,Florence-2 Model
,Overlap Filter
,Corner Visualization
,Triangle Visualization
,Background Color Visualization
,Byte Tracker
,Model Comparison Visualization
,Detections Transformation
,Circle Visualization
,Perspective Correction
,Line Counter
,Detections Stitch
,Trace Visualization
,Byte Tracker
,Blur Visualization
,Time in Zone
,Path Deviation
,Byte Tracker
,Detections Consensus
,Velocity
,Detections Stabilizer
,Detection Offset
,Detections Filter
,Size Measurement
,Time in Zone
,Roboflow Dataset Upload
,Segment Anything 2 Model
,Detections Merge
,Roboflow Custom Metadata
,Bounding Box Visualization
,Path Deviation
,Distance Measurement
,Ellipse Visualization
,Crop Visualization
,Color Visualization
,Pixelate Visualization
,Stitch OCR Detections
,PTZ Tracking (ONVIF)
,Dynamic Crop
,Detections Classes Replacement
,Dot Visualization
,Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v2 has.
Bindings
- input:
  - image (image): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks.
- output:
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}
v1¶
Class: ByteTrackerBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to track objects across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v1 to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v1.
- inputs:
Identify Changes
,Line Counter
,Google Vision OCR
,Template Matching
,Overlap Filter
,Byte Tracker
,Detections Transformation
,VLM as Detector
,Perspective Correction
,Line Counter
,Detections Stitch
,Byte Tracker
,Clip Comparison
,Dynamic Zone
,Object Detection Model
,Time in Zone
,Path Deviation
,SIFT Comparison
,Byte Tracker
,Detection Offset
,Detections Consensus
,Detections Stabilizer
,Velocity
,YOLO-World Model
,Pixel Color Count
,Bounding Rectangle
,Detections Filter
,Time in Zone
,Detections Merge
,Moondream2
,Segment Anything 2 Model
,Identify Outliers
,Path Deviation
,Distance Measurement
,PTZ Tracking (ONVIF)
,Dynamic Crop
,Detections Classes Replacement
,Object Detection Model
,Image Contours
,Instance Segmentation Model
,SIFT Comparison
,VLM as Detector
,Instance Segmentation Model
- outputs:
Line Counter
,Florence-2 Model
,Model Monitoring Inference Aggregator
,Label Visualization
,Florence-2 Model
,Overlap Filter
,Corner Visualization
,Triangle Visualization
,Background Color Visualization
,Byte Tracker
,Model Comparison Visualization
,Detections Transformation
,Circle Visualization
,Perspective Correction
,Line Counter
,Detections Stitch
,Trace Visualization
,Byte Tracker
,Blur Visualization
,Time in Zone
,Path Deviation
,Byte Tracker
,Detections Consensus
,Velocity
,Detections Stabilizer
,Detection Offset
,Detections Filter
,Size Measurement
,Time in Zone
,Roboflow Dataset Upload
,Segment Anything 2 Model
,Detections Merge
,Roboflow Custom Metadata
,Bounding Box Visualization
,Path Deviation
,Distance Measurement
,Ellipse Visualization
,Crop Visualization
,Color Visualization
,Pixelate Visualization
,Stitch OCR Detections
,PTZ Tracking (ONVIF)
,Dynamic Crop
,Detections Classes Replacement
,Dot Visualization
,Roboflow Dataset Upload
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v1 has.
Bindings
- input:
  - metadata (video_metadata): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks.
- output:
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}