Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
New outputs introduced in v3

The block is unchanged compared to v2, apart from two new outputs:

- new_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs are seen for the first time. A specific tracked instance is listed in this output only once, when its tracker ID is generated.
- already_seen_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs have been seen before. A specific tracked instance is listed in this output each time the tracker associates a bounding box with an already-seen tracker ID.
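The split between the two outputs can be illustrated with a minimal sketch. This is plain Python, not the block's actual implementation; the bounded FIFO of seen tracker IDs stands in for the cache controlled by instances_cache_size:

```python
from collections import OrderedDict

def split_instances(tracker_ids, seen_cache, cache_size=16384):
    """Partition tracker IDs from one frame into 'new' and 'already seen'.

    seen_cache is an OrderedDict used as a bounded FIFO set of tracker IDs
    that have appeared before (akin to instances_cache_size in the block).
    """
    new_ids, seen_ids = [], []
    for tid in tracker_ids:
        if tid in seen_cache:
            seen_ids.append(tid)
        else:
            new_ids.append(tid)
            seen_cache[tid] = True
            if len(seen_cache) > cache_size:
                seen_cache.popitem(last=False)  # evict the oldest entry
    return new_ids, seen_ids

seen = OrderedDict()
# frame 1: both tracker IDs appear for the first time
print(split_instances([1, 2], seen))     # → ([1, 2], [])
# frame 2: IDs 1 and 2 were seen before, 3 is new
print(split_instances([1, 2, 3], seen))  # → ([3], [1, 2])
```

Note that an already-seen ID is reported on every frame it reappears, while a new ID is reported exactly once.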
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/byte_tracker@v3
Properties¶
Name | Type | Description | Refs
---|---|---|---
name | str | Enter a unique identifier for this step. | ❌
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks. | ✅
instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
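To build intuition for minimum_matching_threshold, here is a toy IoU computation. This is plain Python for illustration only, not ByteTrack's actual matching code (which also uses Kalman-predicted track positions and a cost-matrix assignment):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

track_box = (10, 10, 50, 50)      # track position carried over from the last frame
detection_box = (14, 12, 54, 52)  # detection in the current frame
# The pair can be associated only if the overlap clears the threshold:
print(iou(track_box, detection_box) >= 0.8)  # → False (IoU ≈ 0.75, below 0.8)
```

Raising the threshold demands tighter overlap before a detection extends a track (fewer wrong associations, more fragmentation); lowering it accepts looser overlap (fewer broken tracks, more drift).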
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v3.
- inputs: Bounding Rectangle, Identify Outliers, Classification Label Visualization, Background Color Visualization, Dynamic Crop, Mask Visualization, Clip Comparison, Google Vision OCR, Segment Anything 2 Model, Absolute Static Crop, Detection Offset, Stability AI Image Generation, Image Blur, Identify Changes, Circle Visualization, Crop Visualization, Template Matching, VLM as Detector, SIFT Comparison, Path Deviation, Velocity, Detections Stitch, Image Preprocessing, Pixel Color Count, Label Visualization, Detections Stabilizer, Path Deviation, Line Counter, Time in Zone, Model Comparison Visualization, Stitch Images, Bounding Box Visualization, Perspective Correction, Moondream2, SIFT Comparison, Relative Static Crop, Color Visualization, Ellipse Visualization, Reference Path Visualization, Blur Visualization, Pixelate Visualization, Instance Segmentation Model, VLM as Detector, Keypoint Visualization, Camera Focus, Time in Zone, Byte Tracker, Detections Filter, Detections Transformation, YOLO-World Model, Grid Visualization, Image Convert Grayscale, Image Threshold, Trace Visualization, Polygon Visualization, Triangle Visualization, Stability AI Inpainting, Detections Consensus, Halo Visualization, Dot Visualization, Polygon Zone Visualization, Detections Merge, Dynamic Zone, Instance Segmentation Model, Detections Classes Replacement, Camera Calibration, Object Detection Model, SIFT, Corner Visualization, Image Contours, Line Counter Visualization, Image Slicer, Byte Tracker, Image Slicer, Byte Tracker, Line Counter, Distance Measurement, Object Detection Model
- outputs: Background Color Visualization, Dynamic Crop, Segment Anything 2 Model, Detection Offset, Model Monitoring Inference Aggregator, Florence-2 Model, Roboflow Dataset Upload, Line Counter, Circle Visualization, Crop Visualization, Path Deviation, Stitch OCR Detections, Velocity, Detections Stitch, Detections Stabilizer, Path Deviation, Line Counter, Time in Zone, Model Comparison Visualization, Bounding Box Visualization, Perspective Correction, Color Visualization, Ellipse Visualization, Blur Visualization, Pixelate Visualization, Time in Zone, Byte Tracker, Florence-2 Model, Detections Filter, Detections Transformation, Trace Visualization, Triangle Visualization, Detections Consensus, Dot Visualization, Detections Merge, Size Measurement, Roboflow Custom Metadata, Detections Classes Replacement, Corner Visualization, Roboflow Dataset Upload, Byte Tracker, Byte Tracker, Label Visualization, Distance Measurement
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v3 has.
Bindings

- input:
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks.
- output:
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v3:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v3",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1,
"instances_cache_size": "<block_does_not_provide_example>"
}
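A step like the one above only makes sense inside a full workflow specification that feeds it detections. The sketch below assembles such a specification as a plain Python dict; the model ID, step names, and the object detection block type are illustrative placeholders, and actually running the workflow requires the inference package and a video source:

```python
# Minimal workflow specification chaining an object detection model into the
# Byte Tracker step. "yolov8n-640" and the step names are placeholders.
workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v2",
            "name": "object_detection_model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",  # placeholder model
        },
        {
            "type": "roboflow_core/byte_tracker@v3",
            "name": "byte_tracker",
            "image": "$inputs.image",
            # selector wiring: consume the detection step's predictions
            "detections": "$steps.object_detection_model.predictions",
            "track_activation_threshold": 0.25,
            "lost_track_buffer": 30,
            "minimum_matching_threshold": 0.8,
            "minimum_consecutive_frames": 1,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "tracked",
            "selector": "$steps.byte_tracker.tracked_detections",
        },
    ],
}
```

The `$inputs.*` and `$steps.<step_name>.<output>` selectors are how workflow bindings are expressed; the tracker's other outputs (new_instances, already_seen_instances) can be exposed with additional JsonField entries in the same way.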
v2¶
Class: ByteTrackerBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/byte_tracker@v2
Properties¶
Name | Type | Description | Refs
---|---|---|---
name | str | Enter a unique identifier for this step. | ❌
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks. | ✅
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v2.
- inputs: Bounding Rectangle, Identify Outliers, Instance Segmentation Model, VLM as Detector, Dynamic Crop, Time in Zone, Clip Comparison, Google Vision OCR, Byte Tracker, Segment Anything 2 Model, Detection Offset, Detections Filter, Detections Transformation, YOLO-World Model, Identify Changes, Template Matching, VLM as Detector, SIFT Comparison, Path Deviation, Detections Consensus, Detections Merge, Velocity, Detections Stitch, Dynamic Zone, Instance Segmentation Model, Pixel Color Count, Detections Classes Replacement, Detections Stabilizer, Path Deviation, Line Counter, Object Detection Model, Image Contours, Time in Zone, Perspective Correction, Moondream2, Byte Tracker, SIFT Comparison, Byte Tracker, Line Counter, Distance Measurement, Object Detection Model
- outputs: Blur Visualization, Pixelate Visualization, Background Color Visualization, Dynamic Crop, Time in Zone, Byte Tracker, Segment Anything 2 Model, Detection Offset, Model Monitoring Inference Aggregator, Florence-2 Model, Florence-2 Model, Detections Filter, Detections Transformation, Roboflow Dataset Upload, Circle Visualization, Crop Visualization, Trace Visualization, Path Deviation, Triangle Visualization, Detections Consensus, Stitch OCR Detections, Dot Visualization, Detections Merge, Velocity, Detections Stitch, Size Measurement, Roboflow Custom Metadata, Detections Classes Replacement, Detections Stabilizer, Label Visualization, Path Deviation, Line Counter, Corner Visualization, Time in Zone, Model Comparison Visualization, Roboflow Dataset Upload, Bounding Box Visualization, Byte Tracker, Perspective Correction, Byte Tracker, Line Counter, Distance Measurement, Color Visualization, Ellipse Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v2 has.
Bindings

- input:
  - image (image): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks.
- output:
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v2:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}
v1¶
Class: ByteTrackerBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/byte_tracker@v1
Properties¶
Name | Type | Description | Refs
---|---|---|---
name | str | Enter a unique identifier for this step. | ❌
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks. | ✅
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker in version v1.
- inputs: Bounding Rectangle, Identify Outliers, Instance Segmentation Model, VLM as Detector, Dynamic Crop, Time in Zone, Clip Comparison, Google Vision OCR, Byte Tracker, Segment Anything 2 Model, Detection Offset, Detections Filter, Detections Transformation, YOLO-World Model, Identify Changes, Template Matching, VLM as Detector, SIFT Comparison, Path Deviation, Detections Consensus, Detections Merge, Velocity, Detections Stitch, Dynamic Zone, Instance Segmentation Model, Pixel Color Count, Detections Classes Replacement, Detections Stabilizer, Path Deviation, Line Counter, Object Detection Model, Image Contours, Time in Zone, Perspective Correction, Moondream2, Byte Tracker, SIFT Comparison, Byte Tracker, Line Counter, Distance Measurement, Object Detection Model
- outputs: Blur Visualization, Pixelate Visualization, Background Color Visualization, Dynamic Crop, Time in Zone, Byte Tracker, Segment Anything 2 Model, Detection Offset, Model Monitoring Inference Aggregator, Florence-2 Model, Florence-2 Model, Detections Filter, Detections Transformation, Roboflow Dataset Upload, Circle Visualization, Crop Visualization, Trace Visualization, Path Deviation, Triangle Visualization, Detections Consensus, Stitch OCR Detections, Dot Visualization, Detections Merge, Velocity, Detections Stitch, Size Measurement, Roboflow Custom Metadata, Detections Classes Replacement, Detections Stabilizer, Label Visualization, Path Deviation, Line Counter, Corner Visualization, Time in Zone, Model Comparison Visualization, Roboflow Dataset Upload, Bounding Box Visualization, Byte Tracker, Perspective Correction, Byte Tracker, Line Counter, Distance Measurement, Color Visualization, Ellipse Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v1 has.
Bindings

- input:
  - metadata (video_metadata): not available.
  - detections (Union[object_detection_prediction, instance_segmentation_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false detections or double detections, but risks missing shorter tracks.
- output:
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker in version v1:
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}