Keypoint Visualization¶
Class: KeypointVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.keypoint.v1.KeypointVisualizationBlockV1
Visualize keypoints (landmark points) detected on objects by drawing point markers, connecting edges, or labeled vertices, providing pose estimation visualization for anatomical points, structural landmarks, or object key features.
How This Block Works¶
This block takes an image and keypoint detection predictions and visualizes the detected keypoints using one of three visualization modes. The block:
- Takes an image and keypoint detection predictions as input (predictions must include keypoint coordinates, confidence scores, and class names)
- Extracts keypoint data (coordinates, confidence values, and class names) from the predictions
- Converts the detection data into a KeyPoints format suitable for visualization
- Applies one of three visualization modes based on the annotator_type setting:
  - Edge mode: Draws connecting lines (edges) between keypoints using specified edge pairs to show keypoint relationships (e.g., skeleton connections in pose estimation)
  - Vertex mode: Draws circular markers at each keypoint location without connections, showing individual keypoint positions
  - Vertex label mode: Draws circular markers with text labels identifying each keypoint class name, providing labeled keypoint visualization
- Applies color styling, sizing, and optional text labeling based on the selected parameters
- Returns an annotated image with keypoints visualized according to the selected mode
The block supports three visualization styles to suit different use cases. Edge mode connects related keypoints with lines (useful for pose estimation skeletons or structural relationships), vertex mode shows individual keypoint locations as circular markers, and vertex label mode adds text labels to identify each keypoint type. This visualization is essential for pose estimation workflows, anatomical point detection, or any application where specific landmark points on objects need to be identified and visualized.
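The mode is selected entirely through annotator_type together with the mode-specific properties. As a rough sketch (these are partial step definitions showing only the mode-specific fields; the edge index pairs are hypothetical and depend on your model's keypoint ordering), the three modes differ as follows:

```json
{ "annotator_type": "edge", "edges": [[0, 1], [1, 2], [2, 3]], "thickness": 2 }

{ "annotator_type": "vertex", "radius": 10 }

{ "annotator_type": "vertex_label", "radius": 10, "text_color": "black", "text_scale": 0.5 }
```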
Common Use Cases¶
- Human Pose Estimation: Visualize human body keypoints (joints, body parts) for pose estimation, activity recognition, or motion analysis applications where anatomical points need to be displayed with skeleton connections or labeled markers
- Animal Pose Estimation: Display animal keypoints for behavior analysis, veterinary applications, or wildlife monitoring where anatomical landmarks need to be visualized for pose analysis or movement tracking
- Structural Landmark Detection: Visualize keypoints on objects, structures, or machinery for structural analysis, quality control, or measurement workflows where specific landmark points need to be identified and displayed
- Facial Landmark Detection: Display facial keypoints (eye corners, nose tip, mouth corners, etc.) for facial recognition, expression analysis, or face alignment applications where facial features need to be visualized
- Sports and Movement Analysis: Visualize keypoints for sports analysis, biomechanics, or movement studies where body positions, joint angles, or movement patterns need to be analyzed and displayed
- Quality Control and Inspection: Display keypoints for manufacturing, quality assurance, or inspection workflows where specific points on products or components need to be identified, measured, or validated
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Keypoint Detection Model blocks to receive keypoint predictions that are visualized with point markers, edges, or labeled vertices
- Other visualization blocks (e.g., Bounding Box Visualization, Label Visualization, Polygon Visualization) to combine keypoint visualization with additional annotations for comprehensive pose or structure visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with keypoint visualizations for documentation, reporting, or analysis
- Webhook blocks to send visualized results with keypoints to external systems, APIs, or web applications for display in dashboards, pose analysis tools, or monitoring interfaces
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with keypoints as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with keypoint visualizations for live pose estimation, movement analysis, or post-processing workflows
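As a minimal sketch of such a chain, the workflow specification below routes predictions from a keypoint detection model step into this block and exposes the annotated image as a workflow output. The detection step's type identifier (roboflow_core/roboflow_keypoint_detection_model@v1) and the model_id value are assumptions for illustration; check the Keypoint Detection Model block's own documentation and substitute your model.

```json
{
  "version": "1.0",
  "inputs": [
    { "type": "WorkflowImage", "name": "image" }
  ],
  "steps": [
    {
      "type": "roboflow_core/roboflow_keypoint_detection_model@v1",
      "name": "keypoint_detection_model",
      "images": "$inputs.image",
      "model_id": "your-keypoint-model/1"
    },
    {
      "type": "roboflow_core/keypoint_visualization@v1",
      "name": "keypoint_visualization",
      "image": "$inputs.image",
      "predictions": "$steps.keypoint_detection_model.predictions",
      "annotator_type": "vertex_label"
    }
  ],
  "outputs": [
    {
      "type": "JsonField",
      "name": "annotated_image",
      "selector": "$steps.keypoint_visualization.image"
    }
  ]
}
```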
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/keypoint_visualization@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| annotator_type | str | Type of keypoint visualization mode. Options: 'edge' (draws connecting lines between keypoints using edge pairs, useful for skeleton/pose visualization), 'vertex' (draws circular markers at keypoint locations without connections), 'vertex_label' (draws circular markers with text labels identifying each keypoint class name). | ❌ |
| color | str | Color of the keypoint markers, edges, or labels. Can be specified as a color name (e.g., 'green', 'red', 'blue'), hex color code (e.g., '#A351FB', '#FF0000'), or RGB format. Used for keypoint circles (vertex/vertex_label modes) or edge lines (edge mode). | ✅ |
| text_color | str | Color of the text labels displayed on keypoints (vertex_label mode only). Can be specified as a color name (e.g., 'black', 'white'), hex color code, or RGB format. Only applies when annotator_type is 'vertex_label'. | ✅ |
| text_scale | float | Scale factor for keypoint label text size (vertex_label mode only). Values greater than 1.0 make text larger, values less than 1.0 make text smaller. Only applies when annotator_type is 'vertex_label'. Typical values range from 0.3 to 1.0. | ✅ |
| text_thickness | int | Thickness of the keypoint label text characters in pixels (vertex_label mode only). Higher values create thicker, bolder text. Only applies when annotator_type is 'vertex_label'. Typical values range from 1 to 3. | ✅ |
| text_padding | int | Padding around keypoint label text in pixels (vertex_label mode only). Controls the spacing between the text label and its background border. Only applies when annotator_type is 'vertex_label'. Typical values range from 5 to 20 pixels. | ✅ |
| thickness | int | Thickness of the edge lines connecting keypoints in pixels (edge mode only). Higher values create thicker, more visible edges. Only applies when annotator_type is 'edge'. Typical values range from 1 to 5 pixels. | ✅ |
| radius | int | Radius of the circular keypoint markers in pixels (vertex and vertex_label modes only). Higher values create larger, more visible markers. Only applies when annotator_type is 'vertex' or 'vertex_label'. Typical values range from 5 to 20 pixels. | ✅ |
| edges | List[Any] | Edge connections between keypoints (edge mode only). List of pairs of keypoint indices (e.g., [(0, 1), (1, 2), ...]) defining which keypoints should be connected with lines. For pose estimation, this typically represents skeleton connections (e.g., connecting joints). Only applies when annotator_type is 'edge'. Required for edge visualization. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
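For example, a ✅-marked property such as color can be bound to a workflow input instead of a literal value. A minimal sketch (the keypoint_color input name is hypothetical, and the step fragment omits unrelated fields):

```json
{
  "inputs": [
    { "type": "WorkflowImage", "name": "image" },
    { "type": "WorkflowParameter", "name": "keypoint_color", "default_value": "#A351FB" }
  ],
  "steps": [
    {
      "type": "roboflow_core/keypoint_visualization@v1",
      "name": "keypoint_visualization",
      "image": "$inputs.image",
      "predictions": "$steps.keypoint_detection_model.predictions",
      "color": "$inputs.keypoint_color"
    }
  ]
}
```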
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Keypoint Visualization in version v1.
- inputs:
Triangle Visualization,Morphological Transformation,Roboflow Dataset Upload,Ellipse Visualization,Detections Classes Replacement,LMM,Florence-2 Model,Blur Visualization,Halo Visualization,Anthropic Claude,Google Gemini,Camera Focus,Llama 3.2 Vision,Motion Detection,Model Comparison Visualization,VLM As Detector,Keypoint Visualization,Pixelate Visualization,Size Measurement,Image Slicer,SIFT Comparison,Roboflow Dataset Upload,Line Counter Visualization,Keypoint Detection Model,Label Visualization,SIFT Comparison,QR Code Generator,Stitch OCR Detections,Line Counter,Dynamic Zone,Distance Measurement,Email Notification,Clip Comparison,Buffer,Slack Notification,Corner Visualization,Image Slicer,Florence-2 Model,CSV Formatter,EasyOCR,Object Detection Model,Anthropic Claude,OpenAI,Google Gemini,Bounding Box Visualization,Keypoint Detection Model,Anthropic Claude,Background Subtraction,Pixel Color Count,Background Color Visualization,Image Convert Grayscale,Camera Calibration,Polygon Visualization,Image Blur,VLM As Classifier,Relative Static Crop,Clip Comparison,Heatmap Visualization,CogVLM,Mask Visualization,Image Preprocessing,Twilio SMS Notification,VLM As Detector,PTZ Tracking (ONVIF),OpenAI,Detections Transformation,OCR Model,SIFT,Stitch Images,Stability AI Outpainting,Stitch OCR Detections,Dynamic Crop,Model Monitoring Inference Aggregator,Circle Visualization,Detection Offset,Color Visualization,Trace Visualization,OpenAI,Dimension Collapse,Icon Visualization,Dot Visualization,Cosine Similarity,Email Notification,Instance Segmentation Model,Camera Focus,Twilio SMS/MMS Notification,Depth Estimation,Contrast Equalization,Roboflow Custom Metadata,LMM For Classification,Grid Visualization,Text Display,Detections Filter,Reference Path Visualization,Image Threshold,Perspective Correction,Image Contours,Polygon Zone Visualization,Multi-Label Classification Model,Detection Event Log,Polygon Visualization,Local File Sink,Identify Changes,Halo Visualization,JSON Parser,Google Vision OCR,Stability AI Inpainting,Identify Outliers,Crop Visualization,Detections Consensus,Template Matching,Detections List Roll-Up,Google Gemini,Webhook Sink,Absolute Static Crop,Classification Label Visualization,VLM As Classifier,OpenAI,Single-Label Classification Model,Line Counter,Gaze Detection,Stability AI Image Generation
- outputs:
Triangle Visualization,Detections Stitch,Roboflow Dataset Upload,Ellipse Visualization,Morphological Transformation,LMM,Florence-2 Model,Blur Visualization,CLIP Embedding Model,Anthropic Claude,Halo Visualization,Google Gemini,Camera Focus,Llama 3.2 Vision,Motion Detection,Model Comparison Visualization,VLM As Detector,Keypoint Visualization,Qwen3-VL,Pixelate Visualization,Image Slicer,Roboflow Dataset Upload,Line Counter Visualization,Keypoint Detection Model,SmolVLM2,Label Visualization,SIFT Comparison,Clip Comparison,Buffer,SAM 3,Detections Stabilizer,Object Detection Model,EasyOCR,Image Slicer,Corner Visualization,Florence-2 Model,Object Detection Model,SAM 3,QR Code Detection,Anthropic Claude,OpenAI,Google Gemini,Perception Encoder Embedding Model,Bounding Box Visualization,Keypoint Detection Model,Anthropic Claude,Pixel Color Count,Background Subtraction,Background Color Visualization,Image Convert Grayscale,Camera Calibration,Polygon Visualization,Image Blur,VLM As Classifier,Relative Static Crop,Clip Comparison,Heatmap Visualization,CogVLM,Mask Visualization,Image Preprocessing,VLM As Detector,Instance Segmentation Model,OpenAI,OCR Model,SIFT,Stitch Images,Single-Label Classification Model,Moondream2,Stability AI Outpainting,Dynamic Crop,Circle Visualization,Byte Tracker,OpenAI,Trace Visualization,Color Visualization,Qwen2.5-VL,Icon Visualization,YOLO-World Model,Dot Visualization,Time in Zone,Email Notification,Instance Segmentation Model,Camera Focus,Twilio SMS/MMS Notification,Contrast Equalization,Depth Estimation,Segment Anything 2 Model,LMM For Classification,Text Display,Dominant Color,Reference Path Visualization,Image Threshold,Perspective Correction,Multi-Label Classification Model,Image Contours,Polygon Zone Visualization,Polygon Visualization,Halo Visualization,Google Vision OCR,SAM 3,Stability AI Inpainting,Crop Visualization,Template Matching,Google Gemini,Barcode Detection,VLM As Classifier,Classification Label Visualization,Seg Preview,OpenAI,Absolute Static Crop,Multi-Label Classification Model,Single-Label Classification Model,Gaze Detection,Stability AI Image Generation
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds for Keypoint Visualization in version v1 are listed below.
Bindings
- input
  - image (image): The image to visualize on.
  - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
  - predictions (keypoint_detection_prediction): Keypoint detection predictions containing keypoint coordinates, confidence scores, and class names. Predictions must include keypoints_xy (keypoint coordinates), keypoints_confidence (confidence values), and keypoints_class_name (keypoint class/type names). Requires outputs from a keypoint detection model block.
  - color (string): Color of the keypoint markers, edges, or labels. Can be specified as a color name (e.g., 'green', 'red', 'blue'), hex color code (e.g., '#A351FB', '#FF0000'), or RGB format. Used for keypoint circles (vertex/vertex_label modes) or edge lines (edge mode).
  - text_color (string): Color of the text labels displayed on keypoints (vertex_label mode only). Can be specified as a color name (e.g., 'black', 'white'), hex color code, or RGB format. Only applies when annotator_type is 'vertex_label'.
  - text_scale (float): Scale factor for keypoint label text size (vertex_label mode only). Values greater than 1.0 make text larger, values less than 1.0 make text smaller. Only applies when annotator_type is 'vertex_label'. Typical values range from 0.3 to 1.0.
  - text_thickness (integer): Thickness of the keypoint label text characters in pixels (vertex_label mode only). Higher values create thicker, bolder text. Only applies when annotator_type is 'vertex_label'. Typical values range from 1 to 3.
  - text_padding (integer): Padding around keypoint label text in pixels (vertex_label mode only). Controls the spacing between the text label and its background border. Only applies when annotator_type is 'vertex_label'. Typical values range from 5 to 20 pixels.
  - thickness (integer): Thickness of the edge lines connecting keypoints in pixels (edge mode only). Higher values create thicker, more visible edges. Only applies when annotator_type is 'edge'. Typical values range from 1 to 5 pixels.
  - radius (integer): Radius of the circular keypoint markers in pixels (vertex and vertex_label modes only). Higher values create larger, more visible markers. Only applies when annotator_type is 'vertex' or 'vertex_label'. Typical values range from 5 to 20 pixels.
  - edges (list_of_values): Edge connections between keypoints (edge mode only). List of pairs of keypoint indices (e.g., [(0, 1), (1, 2), ...]) defining which keypoints should be connected with lines. For pose estimation, this typically represents skeleton connections (e.g., connecting joints). Only applies when annotator_type is 'edge'. Required for edge visualization.
- output
  - image (image): Image in workflows.
Example JSON definition of step Keypoint Visualization in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/keypoint_visualization@v1",
"image": "$inputs.image",
"copy_image": true,
"predictions": "$steps.keypoint_detection_model.predictions",
"annotator_type": "<block_does_not_provide_example>",
"color": "#A351FB",
"text_color": "black",
"text_scale": 0.5,
"text_thickness": 1,
"text_padding": 10,
"thickness": 2,
"radius": 10,
"edges": "$inputs.edges"
}
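For a concrete edge-mode variant of the same step (a sketch; the edge index pairs are hypothetical and must match the keypoint ordering of the model you use), the definition could look like:

```json
{
  "name": "keypoint_visualization",
  "type": "roboflow_core/keypoint_visualization@v1",
  "image": "$inputs.image",
  "predictions": "$steps.keypoint_detection_model.predictions",
  "annotator_type": "edge",
  "color": "#A351FB",
  "thickness": 2,
  "edges": [[0, 1], [1, 2], [2, 3], [3, 4]]
}
```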