Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
New outputs introduced in v3

Compared to v2, the block itself is unchanged; it simply adds two new outputs:

- `new_instances`: delivers `sv.Detections` objects whose bounding boxes carry tracker IDs seen for the first time. A given tracked instance is listed in this output only once, when its tracker ID is generated.
- `already_seen_instances`: delivers `sv.Detections` objects whose bounding boxes carry tracker IDs that were already seen. A given tracked instance is listed in this output every time the tracker associates a bounding box with an already-seen tracker ID.
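The split between these two outputs can be illustrated with a small, stdlib-only sketch: a bounded cache of tracker IDs decides whether each instance is new or already seen. The `InstanceCache` class and its oldest-first eviction policy are assumptions made for this sketch, not the block's actual implementation; `cache_size` plays the role of the `instances_cache_size` property.

```python
from collections import OrderedDict


class InstanceCache:
    """Bounded cache of tracker IDs used to split detections into
    'new' and 'already seen' groups, frame by frame."""

    def __init__(self, cache_size: int = 100):
        self._seen: "OrderedDict[int, None]" = OrderedDict()
        self._cache_size = cache_size

    def split(self, tracker_ids):
        """Return (new_ids, already_seen_ids) for one frame."""
        new_ids, seen_ids = [], []
        for tracker_id in tracker_ids:
            if tracker_id in self._seen:
                seen_ids.append(tracker_id)
            else:
                new_ids.append(tracker_id)
                self._seen[tracker_id] = None
                if len(self._seen) > self._cache_size:
                    self._seen.popitem(last=False)  # evict the oldest ID
        return new_ids, seen_ids


cache = InstanceCache(cache_size=100)
print(cache.split([1, 2]))     # frame 1: both IDs new -> ([1, 2], [])
print(cache.split([1, 2, 3]))  # frame 2: only 3 is new -> ([3], [1, 2])
```

Because the cache is bounded, a tracker ID that has not been seen for a long time is eventually evicted, so a long-absent instance can show up in the new-instances output again.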
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: `roboflow_core/byte_tracker@v3`
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
`name` | `str` | Enter a unique identifier for this step. | ❌ |
`track_activation_threshold` | `float` | Detection confidence threshold for track activation. Increasing `track_activation_threshold` improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
`lost_track_buffer` | `int` | Number of frames to buffer when a track is lost. Increasing `lost_track_buffer` enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
`minimum_matching_threshold` | `float` | Threshold for matching tracks with detections. Increasing `minimum_matching_threshold` improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
`minimum_consecutive_frames` | `int` | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing `minimum_consecutive_frames` prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
`instances_cache_size` | `int` | Size of the instances cache used to decide whether a tracked instance is new or already seen. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
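As a sketch of what a ✅ in the Refs column allows: a property such as `track_activation_threshold` can be bound to a workflow input instead of a literal value. The input name `tracking_confidence` below is hypothetical.

```json
{
    "name": "tracker",
    "type": "roboflow_core/byte_tracker@v3",
    "image": "$inputs.image",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": "$inputs.tracking_confidence"
}
```

At runtime the value supplied for `tracking_confidence` is substituted into the step, so the same workflow definition can run with different thresholds.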
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker
in version v3
.
- inputs:
Grid Visualization
,Image Blur
,Image Preprocessing
,Image Slicer
,Instance Segmentation Model
,Dynamic Crop
,Time in Zone
,Absolute Static Crop
,Color Visualization
,Moondream2
,Corner Visualization
,Depth Estimation
,Stability AI Outpainting
,Keypoint Visualization
,Keypoint Detection Model
,PTZ Tracking (ONVIF)
,Trace Visualization
,Clip Comparison
,Google Vision OCR
,YOLO-World Model
,Keypoint Detection Model
,Time in Zone
,Model Comparison Visualization
,Mask Visualization
,Image Slicer
,Detections Consensus
,Image Threshold
,Contrast Equalization
,Line Counter
,Detections Filter
,Path Deviation
,Morphological Transformation
,Classification Label Visualization
,Velocity
,Relative Static Crop
,Time in Zone
,Path Deviation
,Camera Calibration
,Dynamic Zone
,Blur Visualization
,Stitch Images
,Triangle Visualization
,Perspective Correction
,SIFT
,Icon Visualization
,Label Visualization
,Stability AI Image Generation
,Detections Transformation
,Object Detection Model
,Pixel Color Count
,Ellipse Visualization
,SIFT Comparison
,Detections Stabilizer
,VLM as Detector
,Byte Tracker
,Line Counter Visualization
,Line Counter
,Overlap Filter
,SIFT Comparison
,Byte Tracker
,Distance Measurement
,Image Convert Grayscale
,Detection Offset
,Detections Combine
,Gaze Detection
,Background Color Visualization
,QR Code Generator
,Segment Anything 2 Model
,Identify Changes
,Polygon Zone Visualization
,VLM as Detector
,Detections Stitch
,Byte Tracker
,Polygon Visualization
,Camera Focus
,Bounding Rectangle
,Dot Visualization
,Template Matching
,Detections Classes Replacement
,Instance Segmentation Model
,Identify Outliers
,Circle Visualization
,Bounding Box Visualization
,Image Contours
,Object Detection Model
,OCR Model
,Halo Visualization
,Reference Path Visualization
,Detections Merge
,Pixelate Visualization
,EasyOCR
,Stability AI Inpainting
,Crop Visualization
- outputs:
Dynamic Crop
,Time in Zone
,Roboflow Dataset Upload
,Color Visualization
,Corner Visualization
,PTZ Tracking (ONVIF)
,Trace Visualization
,Time in Zone
,Model Comparison Visualization
,Model Monitoring Inference Aggregator
,Size Measurement
,Detections Consensus
,Line Counter
,Detections Filter
,Path Deviation
,Velocity
,Time in Zone
,Path Deviation
,Florence-2 Model
,Blur Visualization
,Roboflow Dataset Upload
,Triangle Visualization
,Perspective Correction
,Icon Visualization
,Detections Transformation
,Label Visualization
,Stitch OCR Detections
,Ellipse Visualization
,Detections Stabilizer
,Byte Tracker
,Florence-2 Model
,Line Counter
,Overlap Filter
,Distance Measurement
,Byte Tracker
,Detection Offset
,Detections Combine
,Roboflow Custom Metadata
,Background Color Visualization
,Segment Anything 2 Model
,Detections Stitch
,Byte Tracker
,Dot Visualization
,Detections Classes Replacement
,Circle Visualization
,Bounding Box Visualization
,Detections Merge
,Pixelate Visualization
,Crop Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v3 has.
Bindings
- input:
    - `image` (`image`): not available.
    - `detections` (`Union[instance_segmentation_prediction, keypoint_detection_prediction, object_detection_prediction]`): Objects to be tracked.
    - `track_activation_threshold` (`float_zero_to_one`): Detection confidence threshold for track activation. Increasing `track_activation_threshold` improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
    - `lost_track_buffer` (`integer`): Number of frames to buffer when a track is lost. Increasing `lost_track_buffer` enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
    - `minimum_matching_threshold` (`float_zero_to_one`): Threshold for matching tracks with detections. Increasing `minimum_matching_threshold` improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
    - `minimum_consecutive_frames` (`integer`): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing `minimum_consecutive_frames` prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output:
    - `tracked_detections` (`object_detection_prediction`): Prediction with detected bounding boxes in the form of an `sv.Detections(...)` object.
    - `new_instances` (`object_detection_prediction`): Prediction with detected bounding boxes in the form of an `sv.Detections(...)` object.
    - `already_seen_instances` (`object_detection_prediction`): Prediction with detected bounding boxes in the form of an `sv.Detections(...)` object.
Example JSON definition of step Byte Tracker in version v3:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v3",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1,
    "instances_cache_size": "<block_does_not_provide_example>"
}
```
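The two extra outputs can be referenced downstream like any other detections selector. Below is a minimal sketch that draws labels only for instances appearing for the first time; the downstream step type `roboflow_core/label_visualization@v1` and its field names are assumptions for illustration, not taken from this page.

```json
{
    "name": "new_instance_labels",
    "type": "roboflow_core/label_visualization@v1",
    "image": "$inputs.image",
    "predictions": "$steps.byte_tracker.new_instances"
}
```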
v2¶
Class: ByteTrackerBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: `roboflow_core/byte_tracker@v2`
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
`name` | `str` | Enter a unique identifier for this step. | ❌ |
`track_activation_threshold` | `float` | Detection confidence threshold for track activation. Increasing `track_activation_threshold` improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
`lost_track_buffer` | `int` | Number of frames to buffer when a track is lost. Increasing `lost_track_buffer` enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
`minimum_matching_threshold` | `float` | Threshold for matching tracks with detections. Increasing `minimum_matching_threshold` improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
`minimum_consecutive_frames` | `int` | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing `minimum_consecutive_frames` prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker
in version v2
.
- inputs:
SIFT Comparison
,Detections Stabilizer
,VLM as Detector
,Byte Tracker
,Dynamic Crop
,Instance Segmentation Model
,Time in Zone
,Moondream2
,Line Counter
,Overlap Filter
,SIFT Comparison
,Byte Tracker
,Distance Measurement
,PTZ Tracking (ONVIF)
,Detection Offset
,Detections Combine
,Clip Comparison
,Google Vision OCR
,YOLO-World Model
,Time in Zone
,Segment Anything 2 Model
,Identify Changes
,VLM as Detector
,Detections Stitch
,Byte Tracker
,Detections Consensus
,Line Counter
,Bounding Rectangle
,Detections Filter
,Path Deviation
,Template Matching
,Velocity
,Time in Zone
,Detections Classes Replacement
,Path Deviation
,Instance Segmentation Model
,Identify Outliers
,Dynamic Zone
,Image Contours
,Object Detection Model
,OCR Model
,Detections Merge
,Perspective Correction
,EasyOCR
,Detections Transformation
,Object Detection Model
,Pixel Color Count
- outputs:
Ellipse Visualization
,Detections Stabilizer
,Byte Tracker
,Dynamic Crop
,Time in Zone
,Roboflow Dataset Upload
,Color Visualization
,Corner Visualization
,Florence-2 Model
,Line Counter
,Overlap Filter
,Distance Measurement
,Byte Tracker
,PTZ Tracking (ONVIF)
,Detection Offset
,Detections Combine
,Trace Visualization
,Roboflow Custom Metadata
,Background Color Visualization
,Time in Zone
,Segment Anything 2 Model
,Model Comparison Visualization
,Model Monitoring Inference Aggregator
,Size Measurement
,Detections Stitch
,Byte Tracker
,Detections Consensus
,Line Counter
,Icon Visualization
,Dot Visualization
,Detections Filter
,Path Deviation
,Velocity
,Time in Zone
,Detections Classes Replacement
,Path Deviation
,Circle Visualization
,Bounding Box Visualization
,Florence-2 Model
,Blur Visualization
,Roboflow Dataset Upload
,Detections Merge
,Triangle Visualization
,Pixelate Visualization
,Perspective Correction
,Detections Transformation
,Label Visualization
,Stitch OCR Detections
,Crop Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v2 has.
Bindings
- input:
    - `image` (`image`): not available.
    - `detections` (`Union[instance_segmentation_prediction, object_detection_prediction]`): Objects to be tracked.
    - `track_activation_threshold` (`float_zero_to_one`): Detection confidence threshold for track activation. Increasing `track_activation_threshold` improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
    - `lost_track_buffer` (`integer`): Number of frames to buffer when a track is lost. Increasing `lost_track_buffer` enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
    - `minimum_matching_threshold` (`float_zero_to_one`): Threshold for matching tracks with detections. Increasing `minimum_matching_threshold` improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
    - `minimum_consecutive_frames` (`integer`): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing `minimum_consecutive_frames` prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output:
    - `tracked_detections` (`object_detection_prediction`): Prediction with detected bounding boxes in the form of an `sv.Detections(...)` object.
Example JSON definition of step Byte Tracker in version v2:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v2",
    "image": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1
}
```
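The `minimum_matching_threshold` parameter gates track-to-detection association. A simplified, stdlib-only sketch of greedy IoU matching can show its effect; note that ByteTrack itself solves a global assignment problem over a cost matrix, so this greedy version is only an illustration, and the 0.5 threshold used below is a hypothetical value.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def greedy_match(tracks, detections, minimum_matching_threshold=0.8):
    """Greedily pair each track with its best-IoU unused detection.
    Pairs below the threshold stay unmatched: the track may be lost,
    and the detection may start a new track."""
    matches, used = [], set()
    for t_idx, t_box in enumerate(tracks):
        best_iou, best_d = 0.0, None
        for d_idx, d_box in enumerate(detections):
            if d_idx in used:
                continue
            score = iou(t_box, d_box)
            if score > best_iou:
                best_iou, best_d = score, d_idx
        if best_d is not None and best_iou >= minimum_matching_threshold:
            matches.append((t_idx, best_d))
            used.add(best_d)
    return matches


tracks = [(0, 0, 10, 10), (50, 50, 60, 60)]
detections = [(1, 1, 11, 11), (100, 100, 110, 110)]
print(greedy_match(tracks, detections, minimum_matching_threshold=0.5))
# -> [(0, 0)]: track 0 matches detection 0 (IoU ~ 0.68); track 1 has no overlap
```

Raising the threshold toward 1.0 demands near-perfect box overlap between consecutive frames, which is why high values risk track fragmentation.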
v1¶
Class: ByteTrackerBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock integrates ByteTrack, an advanced object tracking algorithm, to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: `roboflow_core/byte_tracker@v1`
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
`name` | `str` | Enter a unique identifier for this step. | ❌ |
`track_activation_threshold` | `float` | Detection confidence threshold for track activation. Increasing `track_activation_threshold` improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
`lost_track_buffer` | `int` | Number of frames to buffer when a track is lost. Increasing `lost_track_buffer` enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
`minimum_matching_threshold` | `float` | Threshold for matching tracks with detections. Increasing `minimum_matching_threshold` improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
`minimum_consecutive_frames` | `int` | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing `minimum_consecutive_frames` prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker
in version v1
.
- inputs:
SIFT Comparison
,Detections Stabilizer
,VLM as Detector
,Byte Tracker
,Dynamic Crop
,Instance Segmentation Model
,Time in Zone
,Moondream2
,Line Counter
,Overlap Filter
,SIFT Comparison
,Byte Tracker
,Distance Measurement
,PTZ Tracking (ONVIF)
,Detection Offset
,Detections Combine
,Clip Comparison
,Google Vision OCR
,YOLO-World Model
,Time in Zone
,Segment Anything 2 Model
,Identify Changes
,VLM as Detector
,Detections Stitch
,Byte Tracker
,Detections Consensus
,Line Counter
,Bounding Rectangle
,Detections Filter
,Path Deviation
,Template Matching
,Velocity
,Time in Zone
,Detections Classes Replacement
,Path Deviation
,Instance Segmentation Model
,Identify Outliers
,Dynamic Zone
,Image Contours
,Object Detection Model
,OCR Model
,Detections Merge
,Perspective Correction
,EasyOCR
,Detections Transformation
,Object Detection Model
,Pixel Color Count
- outputs:
Ellipse Visualization
,Detections Stabilizer
,Byte Tracker
,Dynamic Crop
,Time in Zone
,Roboflow Dataset Upload
,Color Visualization
,Corner Visualization
,Florence-2 Model
,Line Counter
,Overlap Filter
,Distance Measurement
,Byte Tracker
,PTZ Tracking (ONVIF)
,Detection Offset
,Detections Combine
,Trace Visualization
,Roboflow Custom Metadata
,Background Color Visualization
,Time in Zone
,Segment Anything 2 Model
,Model Comparison Visualization
,Model Monitoring Inference Aggregator
,Size Measurement
,Detections Stitch
,Byte Tracker
,Detections Consensus
,Line Counter
,Icon Visualization
,Dot Visualization
,Detections Filter
,Path Deviation
,Velocity
,Time in Zone
,Detections Classes Replacement
,Path Deviation
,Circle Visualization
,Bounding Box Visualization
,Florence-2 Model
,Blur Visualization
,Roboflow Dataset Upload
,Detections Merge
,Triangle Visualization
,Pixelate Visualization
,Perspective Correction
,Detections Transformation
,Label Visualization
,Stitch OCR Detections
,Crop Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Byte Tracker in version v1 has.
Bindings
- input:
    - `metadata` (`video_metadata`): not available.
    - `detections` (`Union[instance_segmentation_prediction, object_detection_prediction]`): Objects to be tracked.
    - `track_activation_threshold` (`float_zero_to_one`): Detection confidence threshold for track activation. Increasing `track_activation_threshold` improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
    - `lost_track_buffer` (`integer`): Number of frames to buffer when a track is lost. Increasing `lost_track_buffer` enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
    - `minimum_matching_threshold` (`float_zero_to_one`): Threshold for matching tracks with detections. Increasing `minimum_matching_threshold` improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
    - `minimum_consecutive_frames` (`integer`): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing `minimum_consecutive_frames` prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output:
    - `tracked_detections` (`object_detection_prediction`): Prediction with detected bounding boxes in the form of an `sv.Detections(...)` object.
Example JSON definition of step Byte Tracker in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/byte_tracker@v1",
    "metadata": "<block_does_not_provide_example>",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": 0.25,
    "lost_track_buffer": 30,
    "minimum_matching_threshold": 0.8,
    "minimum_consecutive_frames": 1
}
```