YOLO26 keypoints detection
YOLO26KeypointsDetection
Bases: YOLOv11KeypointsDetection
YOLO26 Keypoints Detection model with end-to-end ONNX output.
YOLO26 exports with NMS already applied, outputting:

- predictions: (batch, num_detections, 57) for COCO pose (17 keypoints * 3 + 6)

Format: [x1, y1, x2, y2, confidence, class_index, kp0_x, kp0_y, kp0_conf, ...]
Source code in inference/models/yolo26/yolo26_keypoints_detection.py
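The documented row layout can be sliced apart with plain NumPy indexing. A minimal sketch (the zero-filled array stands in for a real model output; shapes follow the format above):

```python
import numpy as np

# Stand-in for a real YOLO26 pose output: (batch, num_detections, 57)
predictions = np.zeros((1, 300, 57), dtype=np.float32)

row = predictions[0, 0]              # one detection
box = row[0:4]                       # x1, y1, x2, y2
confidence = row[4]
class_index = int(row[5])
keypoints = row[6:].reshape(17, 3)   # 17 COCO keypoints, each (x, y, conf)
```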
make_response(predictions, img_dims, class_filter=None, *args, **kwargs)
Constructs keypoints detection response objects.
YOLO26 prediction format: [x1, y1, x2, y2, conf, class_idx, keypoints...]
Source code in inference/models/yolo26/yolo26_keypoints_detection.py
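To illustrate the conversion from a raw prediction row into a structured response, here is a hedged sketch. The dict shape, the `row_to_response` helper, and the truncated keypoint-name list are all hypothetical, not the actual response classes used by `inference`:

```python
import numpy as np

# Illustrative, truncated name list -- the real model uses all 17 COCO names.
COCO_KEYPOINT_NAMES = ["nose", "left_eye", "right_eye"]

def row_to_response(row: np.ndarray, keypoint_names) -> dict:
    """Convert one YOLO26 prediction row ([x1, y1, x2, y2, conf, class_idx,
    keypoints...]) into a plain dict. Hypothetical output shape, shown only
    to make the row layout concrete."""
    n = len(keypoint_names)
    kps = row[6:6 + 3 * n].reshape(n, 3)
    return {
        "x1": float(row[0]), "y1": float(row[1]),
        "x2": float(row[2]), "y2": float(row[3]),
        "confidence": float(row[4]),
        "class_id": int(row[5]),
        "keypoints": [
            {"name": name, "x": float(x), "y": float(y), "confidence": float(c)}
            for name, (x, y, c) in zip(keypoint_names, kps)
        ],
    }
```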
postprocess(predictions, preproc_return_metadata, confidence=DEFAULT_CONFIDENCE, **kwargs)
Postprocesses the keypoints detection predictions.
YOLO26 predictions come with NMS already applied, so we just need to:

1. Filter by confidence
2. Scale coordinates to original image size
3. Format response
Source code in inference/models/yolo26/yolo26_keypoints_detection.py
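The three postprocessing steps can be sketched in a few lines of NumPy. This is a simplified illustration, not the library's implementation: it assumes a plain resize (no letterbox padding) and a hypothetical `(640, 640)` model input size:

```python
import numpy as np

def postprocess_sketch(preds, orig_size, model_size=(640, 640), confidence=0.4):
    """Sketch of YOLO26 pose postprocessing on rows of shape (N, 57):
    filter by confidence, then rescale box and keypoint coordinates to the
    original (width, height). Assumes a letterbox-free resize."""
    keep = preds[:, 4] >= confidence      # 1. filter by confidence
    preds = preds[keep].copy()
    sx = orig_size[0] / model_size[0]     # 2. scale back to original size
    sy = orig_size[1] / model_size[1]
    preds[:, [0, 2]] *= sx                # box x coordinates (x1, x2)
    preds[:, [1, 3]] *= sy                # box y coordinates (y1, y2)
    preds[:, 6::3] *= sx                  # keypoint x coordinates
    preds[:, 7::3] *= sy                  # keypoint y coordinates
    return preds                          # 3. ready for response formatting
```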
predict(img_in, **kwargs)
Performs inference on the given image using the ONNX session.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `img_in` | `ndarray` | Input image as a NumPy array. | required |
Returns:

| Type | Description |
|---|---|
| `Tuple[ndarray, ...]` | Predictions with boxes, confidence, class, and keypoints. |
Source code in inference/models/yolo26/yolo26_keypoints_detection.py
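The predict contract (one end-to-end output, NMS already applied) can be illustrated without model weights by stubbing the session. The `_FakeSession` class and the `"images"` input name are assumptions for illustration, not the actual onnxruntime session or export's input name:

```python
import numpy as np

class _FakeSession:
    """Stand-in for an ONNX inference session, returning a correctly shaped
    end-to-end YOLO26 pose output so the predict contract can be shown
    without a model file."""
    def run(self, output_names, feed):
        batch = next(iter(feed.values())).shape[0]
        return [np.zeros((batch, 300, 57), dtype=np.float32)]

def predict_sketch(session, img_in: np.ndarray):
    # YOLO26 exports are end to end: the single output already has NMS
    # applied, so no decoding or suppression step is needed here.
    (predictions,) = session.run(None, {"images": img_in})
    return (predictions,)

out = predict_sketch(_FakeSession(), np.zeros((1, 3, 640, 640), dtype=np.float32))
```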