Byte Tracker¶
v3¶
Class: ByteTrackerBlockV3
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v3.ByteTrackerBlockV3
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
New outputs introduced in v3
The block is unchanged compared to v2, apart from two new outputs:
- new_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs are seen for the first time. A specific tracked instance appears in this output only once, when its tracker ID is generated.
- already_seen_instances: delivers sv.Detections objects with bounding boxes whose tracker IDs were already seen. A specific tracked instance appears in this output every time the tracker associates a bounding box with an already-seen tracker ID.
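The semantics of the new/already-seen split can be illustrated with a small sketch (this is not the block's actual implementation; the bounded cache stands in for the `instances_cache_size` property):

```python
# Illustrative sketch: partition tracker IDs into first-seen and
# already-seen groups using a bounded cache, analogous to the role
# of the block's instances_cache_size property.
from collections import OrderedDict

def split_instances(tracker_ids, cache, cache_size=16384):
    """Return (new_ids, already_seen_ids) and update the cache."""
    new, seen = [], []
    for tid in tracker_ids:
        if tid in cache:
            seen.append(tid)
        else:
            new.append(tid)
        cache[tid] = True
        cache.move_to_end(tid)       # mark as most recently seen
        while len(cache) > cache_size:
            cache.popitem(last=False)  # evict the oldest entry
    return new, seen

cache = OrderedDict()
print(split_instances([1, 2], cache))     # → ([1, 2], [])
print(split_instances([1, 2, 3], cache))  # → ([3], [1, 2])
```

On the second frame, IDs 1 and 2 are routed to the already-seen group, while the newly generated ID 3 appears in the new group exactly once.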
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/byte_tracker@v3
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
instances_cache_size | int | Size of the instances cache used to decide whether a specific tracked instance is new or already seen. | ❌ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
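As an illustration, a ✅-marked property can reference a workflow input instead of a literal value. The sketch below is hypothetical: the input name `confidence` is an assumption, not taken from this page.

```python
# Hypothetical step definition binding a Refs-enabled property to a
# workflow input ($inputs.confidence) instead of a literal value.
step = {
    "name": "byte_tracker",
    "type": "roboflow_core/byte_tracker@v3",
    "detections": "$steps.object_detection_model.predictions",
    "track_activation_threshold": "$inputs.confidence",  # dynamic binding
    "lost_track_buffer": 30,                             # literal value
}
print(step["track_activation_threshold"])  # → $inputs.confidence
```

Properties marked ❌ (such as `name` and `instances_cache_size`) must be given literal values.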
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker
in version v3
.
- inputs:
Detections Consensus
,Circle Visualization
,Polygon Zone Visualization
,Image Slicer
,Reference Path Visualization
,Image Contours
,Dynamic Crop
,Detections Filter
,VLM as Detector
,Distance Measurement
,Blur Visualization
,Detection Offset
,Gaze Detection
,Stability AI Inpainting
,SIFT Comparison
,Instance Segmentation Model
,Absolute Static Crop
,Trace Visualization
,Classification Label Visualization
,Crop Visualization
,Camera Calibration
,Stitch Images
,Segment Anything 2 Model
,Line Counter Visualization
,Corner Visualization
,Time in Zone
,Google Vision OCR
,Path Deviation
,Line Counter
,Moondream2
,Perspective Correction
,Object Detection Model
,Triangle Visualization
,Path Deviation
,Image Blur
,Pixel Color Count
,Clip Comparison
,Detections Classes Replacement
,VLM as Detector
,Image Preprocessing
,Pixelate Visualization
,Keypoint Detection Model
,Background Color Visualization
,Polygon Visualization
,Stability AI Outpainting
,Grid Visualization
,Byte Tracker
,Identify Changes
,Color Visualization
,Bounding Rectangle
,Relative Static Crop
,Bounding Box Visualization
,Identify Outliers
,YOLO-World Model
,Halo Visualization
,Image Convert Grayscale
,Camera Focus
,Template Matching
,Ellipse Visualization
,Mask Visualization
,Dynamic Zone
,Detections Transformation
,Object Detection Model
,Model Comparison Visualization
,Line Counter
,Detections Stitch
,Dot Visualization
,Stability AI Image Generation
,Byte Tracker
,Depth Estimation
,Velocity
,Instance Segmentation Model
,Keypoint Visualization
,Byte Tracker
,Keypoint Detection Model
,Time in Zone
,Image Threshold
,SIFT Comparison
,Detections Merge
,SIFT
,PTZ Tracking (ONVIF)
,Label Visualization
,Image Slicer
,Detections Stabilizer
,Overlap Filter
- outputs:
Detections Consensus
,Circle Visualization
,Size Measurement
,Dynamic Crop
,Detections Filter
,Distance Measurement
,Blur Visualization
,Detection Offset
,Trace Visualization
,Crop Visualization
,Segment Anything 2 Model
,Corner Visualization
,Time in Zone
,Path Deviation
,Line Counter
,Perspective Correction
,Triangle Visualization
,Path Deviation
,Detections Classes Replacement
,Pixelate Visualization
,Background Color Visualization
,Roboflow Custom Metadata
,Byte Tracker
,Color Visualization
,Model Monitoring Inference Aggregator
,Bounding Box Visualization
,Florence-2 Model
,Ellipse Visualization
,Roboflow Dataset Upload
,Detections Transformation
,Model Comparison Visualization
,Line Counter
,Detections Stitch
,Dot Visualization
,Byte Tracker
,Velocity
,Byte Tracker
,Time in Zone
,Roboflow Dataset Upload
,Florence-2 Model
,Stitch OCR Detections
,Detections Merge
,PTZ Tracking (ONVIF)
,Label Visualization
,Detections Stabilizer
,Overlap Filter
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Byte Tracker
in version v3
has.
Bindings
- input
  - image (image): not available.
  - detections (Union[keypoint_detection_prediction, instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - new_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
  - already_seen_instances (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker
in version v3
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v3",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1,
"instances_cache_size": "<block_does_not_provide_example>"
}
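For context, a minimal end-to-end specification might chain a detection model into this step. This is a hedged sketch: the detection block's type identifier and the model ID are assumptions, not taken from this page, so verify them against the Object Detection Model block docs before use.

```python
# Hypothetical workflow specification chaining an object detection
# model into Byte Tracker v3. The detection block's type identifier
# and model_id are placeholders/assumptions; adapt to your project.
workflow_specification = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "roboflow_core/roboflow_object_detection_model@v2",  # assumed identifier
            "name": "object_detection_model",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",  # placeholder model ID
        },
        {
            "type": "roboflow_core/byte_tracker@v3",
            "name": "byte_tracker",
            "image": "$inputs.image",
            "detections": "$steps.object_detection_model.predictions",
            "track_activation_threshold": 0.25,
            "lost_track_buffer": 30,
            "minimum_matching_threshold": 0.8,
            "minimum_consecutive_frames": 1,
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "tracked_detections",
            "selector": "$steps.byte_tracker.tracked_detections",
        },
        {
            "type": "JsonField",
            "name": "new_instances",
            "selector": "$steps.byte_tracker.new_instances",
        },
    ],
}
```

Such a specification would typically be fed to a video-processing runner (for example, Inference's `InferencePipeline`) so the tracker receives detections frame by frame.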
v2¶
Class: ByteTrackerBlockV2
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v2.ByteTrackerBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/byte_tracker@v2
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker
in version v2
.
- inputs:
Detections Consensus
,Byte Tracker
,Identify Changes
,Image Contours
,Dynamic Crop
,Detections Filter
,VLM as Detector
,Distance Measurement
,Bounding Rectangle
,Detection Offset
,SIFT Comparison
,Instance Segmentation Model
,Identify Outliers
,YOLO-World Model
,Segment Anything 2 Model
,Template Matching
,Time in Zone
,Google Vision OCR
,Path Deviation
,Line Counter
,Moondream2
,Perspective Correction
,Object Detection Model
,Detections Transformation
,Object Detection Model
,Overlap Filter
,Dynamic Zone
,Path Deviation
,Line Counter
,Pixel Color Count
,Byte Tracker
,Velocity
,Instance Segmentation Model
,Byte Tracker
,Clip Comparison
,Detections Classes Replacement
,VLM as Detector
,Time in Zone
,SIFT Comparison
,Detections Merge
,PTZ Tracking (ONVIF)
,Detections Stabilizer
,Detections Stitch
- outputs:
Background Color Visualization
,Detections Consensus
,Circle Visualization
,Roboflow Custom Metadata
,Byte Tracker
,Color Visualization
,Size Measurement
,Dynamic Crop
,Detections Filter
,Distance Measurement
,Blur Visualization
,Detection Offset
,Model Monitoring Inference Aggregator
,Bounding Box Visualization
,Trace Visualization
,Crop Visualization
,Segment Anything 2 Model
,Florence-2 Model
,Corner Visualization
,Time in Zone
,Ellipse Visualization
,Path Deviation
,Line Counter
,Roboflow Dataset Upload
,Perspective Correction
,Detections Transformation
,Overlap Filter
,Triangle Visualization
,Model Comparison Visualization
,Path Deviation
,Line Counter
,Dot Visualization
,Byte Tracker
,Velocity
,Byte Tracker
,Detections Classes Replacement
,Time in Zone
,Roboflow Dataset Upload
,Florence-2 Model
,Stitch OCR Detections
,Detections Merge
,PTZ Tracking (ONVIF)
,Label Visualization
,Detections Stabilizer
,Pixelate Visualization
,Detections Stitch
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Byte Tracker
in version v2
has.
Bindings
- input
  - image (image): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker
in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v2",
"image": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}
v1¶
Class: ByteTrackerBlockV1
(there are multiple versions of this block)
Source: inference.core.workflows.core_steps.transformations.byte_tracker.v1.ByteTrackerBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
The ByteTrackerBlock
integrates ByteTrack, an advanced object tracking algorithm,
to manage object tracking across sequential video frames within workflows.
This block accepts detections and their corresponding video frames as input, initializing trackers for each detection based on configurable parameters like track activation threshold, lost track buffer, minimum matching threshold, and frame rate. These parameters allow fine-tuning of the tracking process to suit specific accuracy and performance needs.
Type identifier¶
Use the following identifier in the step "type"
field: roboflow_core/byte_tracker@v1
to add the block as a step in your workflow.
Properties¶
Name | Type | Description | Refs |
---|---|---|---|
name | str | Enter a unique identifier for this step. | ❌ |
track_activation_threshold | float | Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability. | ✅ |
lost_track_buffer | int | Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps. | ✅ |
minimum_matching_threshold | float | Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift. | ✅ |
minimum_consecutive_frames | int | Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Byte Tracker
in version v1
.
- inputs:
Detections Consensus
,Byte Tracker
,Identify Changes
,Image Contours
,Dynamic Crop
,Detections Filter
,VLM as Detector
,Distance Measurement
,Bounding Rectangle
,Detection Offset
,SIFT Comparison
,Instance Segmentation Model
,Identify Outliers
,YOLO-World Model
,Segment Anything 2 Model
,Template Matching
,Time in Zone
,Google Vision OCR
,Path Deviation
,Line Counter
,Moondream2
,Perspective Correction
,Object Detection Model
,Detections Transformation
,Object Detection Model
,Overlap Filter
,Dynamic Zone
,Path Deviation
,Line Counter
,Pixel Color Count
,Byte Tracker
,Velocity
,Instance Segmentation Model
,Byte Tracker
,Clip Comparison
,Detections Classes Replacement
,VLM as Detector
,Time in Zone
,SIFT Comparison
,Detections Merge
,PTZ Tracking (ONVIF)
,Detections Stabilizer
,Detections Stitch
- outputs:
Background Color Visualization
,Detections Consensus
,Circle Visualization
,Roboflow Custom Metadata
,Byte Tracker
,Color Visualization
,Size Measurement
,Dynamic Crop
,Detections Filter
,Distance Measurement
,Blur Visualization
,Detection Offset
,Model Monitoring Inference Aggregator
,Bounding Box Visualization
,Trace Visualization
,Crop Visualization
,Segment Anything 2 Model
,Florence-2 Model
,Corner Visualization
,Time in Zone
,Ellipse Visualization
,Path Deviation
,Line Counter
,Roboflow Dataset Upload
,Perspective Correction
,Detections Transformation
,Overlap Filter
,Triangle Visualization
,Model Comparison Visualization
,Path Deviation
,Line Counter
,Dot Visualization
,Byte Tracker
,Velocity
,Byte Tracker
,Detections Classes Replacement
,Time in Zone
,Roboflow Dataset Upload
,Florence-2 Model
,Stitch OCR Detections
,Detections Merge
,PTZ Tracking (ONVIF)
,Label Visualization
,Detections Stabilizer
,Pixelate Visualization
,Detections Stitch
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Byte Tracker
in version v1
has.
Bindings
- input
  - metadata (video_metadata): not available.
  - detections (Union[instance_segmentation_prediction, object_detection_prediction]): Objects to be tracked.
  - track_activation_threshold (float_zero_to_one): Detection confidence threshold for track activation. Increasing track_activation_threshold improves accuracy and stability but might miss true detections. Decreasing it increases completeness but risks introducing noise and instability.
  - lost_track_buffer (integer): Number of frames to buffer when a track is lost. Increasing lost_track_buffer enhances occlusion handling, significantly reducing the likelihood of track fragmentation or disappearance caused by brief detection gaps.
  - minimum_matching_threshold (float_zero_to_one): Threshold for matching tracks with detections. Increasing minimum_matching_threshold improves accuracy but risks fragmentation. Decreasing it improves completeness but risks false positives and drift.
  - minimum_consecutive_frames (integer): Number of consecutive frames that an object must be tracked before it is considered a 'valid' track. Increasing minimum_consecutive_frames prevents the creation of accidental tracks from false or double detections, but risks missing shorter tracks.
- output
  - tracked_detections (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
Example JSON definition of step Byte Tracker
in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/byte_tracker@v1",
"metadata": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions",
"track_activation_threshold": 0.25,
"lost_track_buffer": 30,
"minimum_matching_threshold": 0.8,
"minimum_consecutive_frames": 1
}