Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
New outputs introduced in v3
The block is unchanged compared to v2, apart from two new outputs (see the sketch after this list):
- new_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs are seen for the first time. A given tracked instance is listed in this output only once, when its new tracker ID is generated.
- already_seen_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs have already been seen. A given tracked instance is listed in this output every time the tracker associates a bounding box with an already seen tracker ID.
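For example, assuming the tracker step is named byte_tracker, the two new outputs could be surfaced in a workflow's outputs section as sketched below. This is a fragment of a workflow definition, and the step name and output field names are placeholders chosen for this sketch, not part of the block definition:
{
  "outputs": [
    {"type": "JsonField", "name": "first_seen", "selector": "$steps.byte_tracker.new_instances"},
    {"type": "JsonField", "name": "seen_before", "selector": "$steps.byte_tracker.already_seen_instances"}
  ]
}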
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v3 to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks. | ✅ |
instances_cache_size | int | Size of the instances cache used to decide whether a tracked instance is new or already seen. | ❌ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime, as illustrated in the sketch below. See Bindings for more info.
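As an illustration of that parametrisation, the fragment below binds two of the ✅-marked properties to workflow inputs rather than hard-coded values. The input names, the WorkflowParameter input type, the default values, and the upstream step name object_detection_model are assumptions made for this sketch; adjust them to your workflow.
{
  "inputs": [
    {"type": "WorkflowImage", "name": "image"},
    {"type": "WorkflowParameter", "name": "track_activation_threshold", "default_value": 0.25},
    {"type": "WorkflowParameter", "name": "lost_track_buffer", "default_value": 30}
  ],
  "steps": [
    {
      "name": "byte_tracker",
      "type": "roboflow_core/byte_tracker@v3",
      "image": "$inputs.image",
      "detections": "$steps.object_detection_model.predictions",
      "track_activation_threshold": "$inputs.track_activation_threshold",
      "lost_track_buffer": "$inputs.lost_track_buffer",
      "minimum_matching_threshold": 0.8,
      "minimum_consecutive_frames": 1
    }
  ]
}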
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v3.
- inputs: Line Counter, Keypoint Detection Model, Gaze Detection, Halo Visualization, Stability AI Image Generation, Model Comparison Visualization, Keypoint Visualization, Crop Visualization, Image Blur, Bounding Box Visualization, Distance Measurement, Line Counter, Template Matching, Mask Visualization, Image Slicer, Detections Classes Replacement, Instance Segmentation Model, Instance Segmentation Model, Ellipse Visualization, Bounding Rectangle, Dynamic Zone, VLM as Detector, Byte Tracker, Label Visualization, Google Vision OCR, Stability AI Outpainting, Polygon Visualization, Velocity, Identify Changes, Reference Path Visualization, VLM as Detector, Detections Transformation, Camera Calibration, Image Preprocessing, Image Contours, Line Counter Visualization, Corner Visualization, Detections Merge, Clip Comparison, Pixel Color Count, Stitch Images, Depth Estimation, SIFT, Time in Zone, Blur Visualization, Image Convert Grayscale, Background Color Visualization, Image Slicer, Dynamic Crop, Perspective Correction, Circle Visualization, Triangle Visualization, Dot Visualization, Byte Tracker, Detections Filter, SIFT Comparison, Path Deviation, Object Detection Model, Segment Anything 2 Model, Color Visualization, Stability AI Inpainting, Classification Label Visualization, Byte Tracker, Moondream2, Detection Offset, Absolute Static Crop, Image Threshold, Detections Consensus, Time in Zone, Keypoint Detection Model, Pixelate Visualization, Trace Visualization, Detections Stabilizer, Camera Focus, Grid Visualization, YOLO-World Model, Overlap Filter, PTZ Tracking (ONVIF), SIFT Comparison, Relative Static Crop, Detections Stitch, Object Detection Model, Path Deviation, Polygon Zone Visualization, Identify Outliers
- outputs: Line Counter, Model Comparison Visualization, Crop Visualization, Bounding Box Visualization, Distance Measurement, Line Counter, Roboflow Dataset Upload, Size Measurement, Detections Classes Replacement, Ellipse Visualization, Byte Tracker, Label Visualization, Velocity, Model Monitoring Inference Aggregator, Roboflow Custom Metadata, Florence-2 Model, Detections Transformation, Florence-2 Model, Corner Visualization, Detections Merge, Time in Zone, Background Color Visualization, Blur Visualization, Dynamic Crop, Perspective Correction, Stitch OCR Detections, Circle Visualization, Triangle Visualization, Byte Tracker, Dot Visualization, Detections Filter, Path Deviation, Segment Anything 2 Model, Color Visualization, Byte Tracker, Detection Offset, Detections Consensus, Time in Zone, Pixelate Visualization, Trace Visualization, Detections Stabilizer, Roboflow Dataset Upload, Overlap Filter, PTZ Tracking (ONVIF), Detections Stitch, Path Deviation
Input and Output Bindings¶
The available connections depend on this block's binding kinds. Check what binding kinds Byte Tracker in version v3 has.
Bindings
- input
  - image (image): not available.
  - detections (Union[object_detection_prediction, keypoint_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object; contains only instances whose tracker IDs are seen for the first time.
  - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object; contains instances whose tracker IDs have already been seen.
Example JSON definition of step Byte Tracker in version v3
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v3",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1,
"instances_cache_size": "<block_does_not_provide_example>"
}
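To place the example step in context, here is a sketch of a complete workflow definition that feeds an object detection model into the tracker and exposes all three outputs. The object detection block's type identifier and version (roboflow_core/roboflow_object_detection_model@v2), the model_id, and all step and field names are assumptions for this sketch; substitute the blocks and models you actually use.
{
  "version": "1.0",
  "inputs": [
    {"type": "WorkflowImage", "name": "image"}
  ],
  "steps": [
    {
      "name": "object_detection_model",
      "type": "roboflow_core/roboflow_object_detection_model@v2",
      "image": "$inputs.image",
      "model_id": "yolov8n-640"
    },
    {
      "name": "byte_tracker",
      "type": "roboflow_core/byte_tracker@v3",
      "image": "$inputs.image",
      "detections": "$steps.object_detection_model.predictions",
      "track_activation_threshold": 0.25,
      "lost_track_buffer": 30,
      "minimum_matching_threshold": 0.8,
      "minimum_consecutive_frames": 1
    }
  ],
  "outputs": [
    {"type": "JsonField", "name": "tracked_detections", "selector": "$steps.byte_tracker.tracked_detections"},
    {"type": "JsonField", "name": "new_instances", "selector": "$steps.byte_tracker.new_instances"},
    {"type": "JsonField", "name": "already_seen_instances", "selector": "$steps.byte_tracker.already_seen_instances"}
  ]
}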
v2¶
Class: ByteTrackerBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v2 to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v2.
- inputs: Line Counter, Image Contours, Clip Comparison, Detections Merge, Pixel Color Count, Time in Zone, Dynamic Crop, Distance Measurement, Line Counter, Perspective Correction, Byte Tracker, Detections Filter, Template Matching, SIFT Comparison, Path Deviation, Detections Classes Replacement, Instance Segmentation Model, Instance Segmentation Model, Object Detection Model, Segment Anything 2 Model, Bounding Rectangle, Byte Tracker, Moondream2, Dynamic Zone, Detection Offset, VLM as Detector, Byte Tracker, Detections Consensus, Time in Zone, Detections Stabilizer, Google Vision OCR, YOLO-World Model, Overlap Filter, Velocity, PTZ Tracking (ONVIF), Identify Changes, SIFT Comparison, Detections Stitch, Object Detection Model, Path Deviation, VLM as Detector, Detections Transformation, Identify Outliers
- outputs: Line Counter, Florence-2 Model, Corner Visualization, Detections Merge, Model Comparison Visualization, Crop Visualization, Time in Zone, Background Color Visualization, Blur Visualization, Bounding Box Visualization, Dynamic Crop, Line Counter, Distance Measurement, Perspective Correction, Stitch OCR Detections, Circle Visualization, Triangle Visualization, Byte Tracker, Dot Visualization, Detections Filter, Size Measurement, Roboflow Dataset Upload, Path Deviation, Detections Classes Replacement, Segment Anything 2 Model, Ellipse Visualization, Color Visualization, Byte Tracker, Detection Offset, Byte Tracker, Detections Consensus, Time in Zone, Pixelate Visualization, Trace Visualization, Detections Stabilizer, Label Visualization, Roboflow Dataset Upload, Overlap Filter, Velocity, PTZ Tracking (ONVIF), Model Monitoring Inference Aggregator, Detections Stitch, Roboflow Custom Metadata, Path Deviation, Florence-2 Model, Detections Transformation
Input and Output Bindings¶
The available connections depend on this block's binding kinds. Check what binding kinds Byte Tracker in version v2 has.
Bindings
- input
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}
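Since v2 exposes only tracked_detections (there are no new_instances or already_seen_instances outputs), a workflow built around it references that single output. A minimal, hypothetical outputs fragment, assuming the step is named byte_tracker:
{
  "outputs": [
    {"type": "JsonField", "name": "tracked_detections", "selector": "$steps.byte_tracker.tracked_detections"}
  ]
}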
v1¶
Class: ByteTrackerBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/byte_tracker@v1 to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v1.
- inputs: Line Counter, Image Contours, Clip Comparison, Detections Merge, Pixel Color Count, Time in Zone, Dynamic Crop, Distance Measurement, Line Counter, Perspective Correction, Byte Tracker, Detections Filter, Template Matching, SIFT Comparison, Path Deviation, Detections Classes Replacement, Instance Segmentation Model, Instance Segmentation Model, Object Detection Model, Segment Anything 2 Model, Bounding Rectangle, Byte Tracker, Moondream2, Dynamic Zone, Detection Offset, VLM as Detector, Byte Tracker, Detections Consensus, Time in Zone, Detections Stabilizer, Google Vision OCR, YOLO-World Model, Overlap Filter, Velocity, PTZ Tracking (ONVIF), Identify Changes, SIFT Comparison, Detections Stitch, Object Detection Model, Path Deviation, VLM as Detector, Detections Transformation, Identify Outliers
- outputs: Line Counter, Florence-2 Model, Corner Visualization, Detections Merge, Model Comparison Visualization, Crop Visualization, Time in Zone, Background Color Visualization, Blur Visualization, Bounding Box Visualization, Dynamic Crop, Line Counter, Distance Measurement, Perspective Correction, Stitch OCR Detections, Circle Visualization, Triangle Visualization, Byte Tracker, Dot Visualization, Detections Filter, Size Measurement, Roboflow Dataset Upload, Path Deviation, Detections Classes Replacement, Segment Anything 2 Model, Ellipse Visualization, Color Visualization, Byte Tracker, Detection Offset, Byte Tracker, Detections Consensus, Time in Zone, Pixelate Visualization, Trace Visualization, Detections Stabilizer, Label Visualization, Roboflow Dataset Upload, Overlap Filter, Velocity, PTZ Tracking (ONVIF), Model Monitoring Inference Aggregator, Detections Stitch, Roboflow Custom Metadata, Path Deviation, Florence-2 Model, Detections Transformation
Input and Output Bindings¶
The available connections depend on this block's binding kinds. Check what binding kinds Byte Tracker in version v1 has.
Bindings
- input
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detection or double detection, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}
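Because v1 binds video_metadata rather than an image, a workflow using it would typically declare a video metadata input and pass it to the step's metadata field, as sketched below. The WorkflowVideoMetadata input type, the input name video_metadata, and the upstream step name object_detection_model are assumptions for this sketch; verify the exact input type against the Bindings documentation for your inference version.
{
  "inputs": [
    {"type": "WorkflowVideoMetadata", "name": "video_metadata"}
  ],
  "steps": [
    {
      "name": "byte_tracker",
      "type": "roboflow_core/byte_tracker@v1",
      "metadata": "$inputs.video_metadata",
      "detections": "$steps.object_detection_model.predictions",
      "track_activation_threshold": 0.25,
      "lost_track_buffer": 30,
      "minimum_matching_threshold": 0.8,
      "minimum_consecutive_frames": 1
    }
  ]
}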