Keypoint Visualization¶
Class: KeypointVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.keypoint.v1.KeypointVisualizationBlockV1
Visualize keypoints (landmark points) detected on objects by drawing point markers, connecting edges, or labeled vertices, providing pose estimation visualization for anatomical points, structural landmarks, or object key features.
How This Block Works¶
This block takes an image and keypoint detection predictions and visualizes the detected keypoints using one of three visualization modes. The block:
- Takes an image and keypoint detection predictions as input (predictions must include keypoint coordinates, confidence scores, and class names)
- Extracts keypoint data (coordinates, confidence values, and class names) from the predictions
- Converts the detection data into a KeyPoints format suitable for visualization
- Applies one of three visualization modes based on the annotator_type setting:
  - Edge mode: Draws connecting lines (edges) between keypoints using specified edge pairs to show keypoint relationships (e.g., skeleton connections in pose estimation)
  - Vertex mode: Draws circular markers at each keypoint location without connections, showing individual keypoint positions
  - Vertex label mode: Draws circular markers with text labels identifying each keypoint class name, providing labeled keypoint visualization
- Applies color styling, sizing, and optional text labeling based on the selected parameters
- Returns an annotated image with keypoints visualized according to the selected mode
The block supports three visualization styles to suit different use cases. Edge mode connects related keypoints with lines (useful for pose estimation skeletons or structural relationships), vertex mode shows individual keypoint locations as circular markers, and vertex label mode adds text labels to identify each keypoint type. This visualization is essential for pose estimation workflows, anatomical point detection, or any application where specific landmark points on objects need to be identified and visualized.
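The three modes can be sketched as a simple dispatch over drawing primitives. This is a hypothetical illustration of the logic described above, not the block's actual implementation (which uses annotators from the supervision library internally); the function and tuple formats here are invented for clarity.

```python
# Hypothetical sketch of the three annotator modes, NOT the real implementation.
# keypoints: list of (x, y, class_name) tuples; edges: list of index pairs.
def keypoint_draw_commands(keypoints, mode, edges=None, radius=10):
    """Return the drawing primitives each annotator mode would emit."""
    commands = []
    if mode == "edge":
        # Connect keypoint pairs with lines (e.g., skeleton bones).
        for i, j in edges or []:
            (x1, y1, _), (x2, y2, _) = keypoints[i], keypoints[j]
            commands.append(("line", (x1, y1), (x2, y2)))
    elif mode in ("vertex", "vertex_label"):
        # Draw a circular marker per keypoint; optionally label it.
        for x, y, name in keypoints:
            commands.append(("circle", (x, y), radius))
            if mode == "vertex_label":
                commands.append(("text", (x, y), name))
    return commands

points = [(100, 50, "nose"), (90, 80, "left_shoulder"), (110, 80, "right_shoulder")]
print(keypoint_draw_commands(points, "edge", edges=[(1, 2)]))
# [('line', (90, 80), (110, 80))]
```

In edge mode only the listed pairs produce output; in vertex_label mode each keypoint yields both a marker and a text label.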
Common Use Cases¶
- Human Pose Estimation: Visualize human body keypoints (joints, body parts) for pose estimation, activity recognition, or motion analysis applications where anatomical points need to be displayed with skeleton connections or labeled markers
- Animal Pose Estimation: Display animal keypoints for behavior analysis, veterinary applications, or wildlife monitoring where anatomical landmarks need to be visualized for pose analysis or movement tracking
- Structural Landmark Detection: Visualize keypoints on objects, structures, or machinery for structural analysis, quality control, or measurement workflows where specific landmark points need to be identified and displayed
- Facial Landmark Detection: Display facial keypoints (eye corners, nose tip, mouth corners, etc.) for facial recognition, expression analysis, or face alignment applications where facial features need to be visualized
- Sports and Movement Analysis: Visualize keypoints for sports analysis, biomechanics, or movement studies where body positions, joint angles, or movement patterns need to be analyzed and displayed
- Quality Control and Inspection: Display keypoints for manufacturing, quality assurance, or inspection workflows where specific points on products or components need to be identified, measured, or validated
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Keypoint Detection Model blocks to receive keypoint predictions that are visualized with point markers, edges, or labeled vertices
- Other visualization blocks (e.g., Bounding Box Visualization, Label Visualization, Polygon Visualization) to combine keypoint visualization with additional annotations for comprehensive pose or structure visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with keypoint visualizations for documentation, reporting, or analysis
- Webhook blocks to send visualized results with keypoints to external systems, APIs, or web applications for display in dashboards, pose analysis tools, or monitoring interfaces
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with keypoints as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with keypoint visualizations for live pose estimation, movement analysis, or post-processing workflows
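A minimal chaining example, sketched as a workflow `steps` fragment: a keypoint detection model feeding this visualization block. The model block's exact type identifier/version and the `model_id` value are placeholders to verify against your own workflow and the model block's documentation.

```json
{
  "steps": [
    {
      "type": "roboflow_core/keypoint_detection_model@v2",
      "name": "keypoint_detection_model",
      "image": "$inputs.image",
      "model_id": "your-pose-model/1"
    },
    {
      "type": "roboflow_core/keypoint_visualization@v1",
      "name": "keypoint_visualization",
      "image": "$inputs.image",
      "predictions": "$steps.keypoint_detection_model.predictions",
      "annotator_type": "vertex"
    }
  ]
}
```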
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/keypoint_visualization@v1 to add the block as
a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| annotator_type | str | Type of keypoint visualization mode. Options: 'edge' (draws connecting lines between keypoints using edge pairs, useful for skeleton/pose visualization), 'vertex' (draws circular markers at keypoint locations without connections), 'vertex_label' (draws circular markers with text labels identifying each keypoint class name). | ❌ |
| color | str | Color of the keypoint markers, edges, or labels. Can be specified as a color name (e.g., 'green', 'red', 'blue'), hex color code (e.g., '#A351FB', '#FF0000'), or RGB format. Used for keypoint circles (vertex/vertex_label modes) or edge lines (edge mode). | ✅ |
| text_color | str | Color of the text labels displayed on keypoints (vertex_label mode only). Can be specified as a color name (e.g., 'black', 'white'), hex color code, or RGB format. Only applies when annotator_type is 'vertex_label'. | ✅ |
| text_scale | float | Scale factor for keypoint label text size (vertex_label mode only). Values greater than 1.0 make text larger, values less than 1.0 make text smaller. Only applies when annotator_type is 'vertex_label'. Typical values range from 0.3 to 1.0. | ✅ |
| text_thickness | int | Thickness of the keypoint label text characters in pixels (vertex_label mode only). Higher values create thicker, bolder text. Only applies when annotator_type is 'vertex_label'. Typical values range from 1 to 3. | ✅ |
| text_padding | int | Padding around keypoint label text in pixels (vertex_label mode only). Controls the spacing between the text label and its background border. Only applies when annotator_type is 'vertex_label'. Typical values range from 5 to 20 pixels. | ✅ |
| thickness | int | Thickness of the edge lines connecting keypoints in pixels (edge mode only). Higher values create thicker, more visible edges. Only applies when annotator_type is 'edge'. Typical values range from 1 to 5 pixels. | ✅ |
| radius | int | Radius of the circular keypoint markers in pixels (vertex and vertex_label modes only). Higher values create larger, more visible markers. Only applies when annotator_type is 'vertex' or 'vertex_label'. Typical values range from 5 to 20 pixels. | ✅ |
| edges | List[Any] | Edge connections between keypoints (edge mode only). List of pairs of keypoint indices (e.g., [(0, 1), (1, 2), ...]) defining which keypoints should be connected with lines. For pose estimation, this typically represents skeleton connections (e.g., connecting joints). Only applies when annotator_type is 'edge'. Required for edge visualization. | ✅ |
The Refs column indicates whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
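For example, any property marked ✅ can be bound to a workflow input instead of a hard-coded value. In this sketch, `marker_radius` is a hypothetical workflow input name chosen for illustration:

```json
{
  "name": "keypoint_visualization",
  "type": "roboflow_core/keypoint_visualization@v1",
  "image": "$inputs.image",
  "predictions": "$steps.keypoint_detection_model.predictions",
  "annotator_type": "vertex",
  "radius": "$inputs.marker_radius"
}
```

This lets callers change the marker size per request without editing the workflow definition.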
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Keypoint Visualization in version v1.
- inputs:
Roboflow Dataset Upload,Line Counter Visualization,Gaze Detection,Image Slicer,OCR Model,Instance Segmentation Model,Distance Measurement,Color Visualization,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Relative Static Crop,Detections Consensus,Detections Classes Replacement,Webhook Sink,Trace Visualization,Stitch OCR Detections,Camera Focus,Qwen 3.5 API,OpenAI,Buffer,Size Measurement,Image Threshold,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,Detections Transformation,GLM-OCR,Dot Visualization,S3 Sink,Twilio SMS Notification,Model Monitoring Inference Aggregator,Google Gemini,Roboflow Dataset Upload,Dynamic Zone,Clip Comparison,VLM As Classifier,Pixelate Visualization,Line Counter,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,CSV Formatter,Stability AI Image Generation,Perspective Correction,Anthropic Claude,Line Counter,Bounding Box Visualization,Velocity,Depth Estimation,Stability AI Inpainting,Polygon Visualization,SIFT,Roboflow Vision Events,VLM As Detector,Google Gemini,Label Visualization,Grid Visualization,Qwen3.5-VL,Contrast Equalization,Per-Class Confidence Filter,Triangle Visualization,Halo Visualization,Circle Visualization,Mask Visualization,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,Slack Notification,Object Detection Model,Stability AI Outpainting,Email Notification,Google Gemma API,Google Vision OCR,Identify Outliers,Image Preprocessing,Google Gemini,EasyOCR,Cosine Similarity,OpenAI,Detection Event Log,Anthropic Claude,Model Comparison Visualization,Roboflow Custom Metadata,Detection Offset,Single-Label Classification Model,VLM As Classifier,Detections List Roll-Up,Template Matching,Stitch Images,Qwen 3.6 API,SIFT Comparison,Morphological Transformation,CogVLM,Crop Visualization,Camera Calibration,Florence-2 Model,OC-SORT Tracker,Icon Visualization,Local File Sink,Detections Filter,Image Contours,JSON 
Parser,Keypoint Detection Model,Reference Path Visualization,Dimension Collapse,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Pixel Color Count,Identify Changes,Classification Label Visualization,Image Slicer,Absolute Static Crop,Image Blur,Multi-Label Classification Model,Image Convert Grayscale,OpenAI,Corner Visualization,Dynamic Crop,Keypoint Detection Model,Keypoint Visualization,QR Code Generator,Camera Focus,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization,PTZ Tracking (ONVIF),Stitch OCR Detections,SIFT Comparison
- outputs:
Roboflow Dataset Upload,Line Counter Visualization,Mask Edge Snap,OCR Model,Image Slicer,Gaze Detection,Qwen2.5-VL,Instance Segmentation Model,Color Visualization,Multi-Label Classification Model,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Single-Label Classification Model,Relative Static Crop,Barcode Detection,Trace Visualization,Object Detection Model,Qwen 3.5 API,Camera Focus,OpenAI,Buffer,SAM 3,Image Threshold,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,GLM-OCR,Dot Visualization,Semantic Segmentation Model,Seg Preview,Google Gemini,Roboflow Dataset Upload,Clip Comparison,VLM As Classifier,Pixelate Visualization,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,Stability AI Image Generation,Perspective Correction,Anthropic Claude,Bounding Box Visualization,Depth Estimation,Stability AI Inpainting,Polygon Visualization,SmolVLM2,SIFT,Roboflow Vision Events,VLM As Detector,Google Gemini,Label Visualization,Qwen3.5-VL,Contrast Equalization,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,Dominant Color,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,CLIP Embedding Model,Detections Stabilizer,Detections Stitch,Object Detection Model,Stability AI Outpainting,Google Gemma API,Google Vision OCR,Google Gemini,Image Preprocessing,EasyOCR,Object Detection Model,OpenAI,SAM2 Video Tracker,Byte Tracker,Anthropic Claude,Qwen3-VL,Model Comparison Visualization,YOLO-World Model,Instance Segmentation Model,Perception Encoder Embedding Model,Semantic Segmentation Model,Single-Label Classification Model,VLM As Classifier,Template Matching,Stitch Images,Qwen 3.6 API,SIFT Comparison,Morphological Transformation,Instance Segmentation Model,CogVLM,Crop Visualization,Florence-2 Model,Camera Calibration,Multi-Label Classification Model,Time in Zone,OC-SORT Tracker,SAM 3,QR Code Detection,Icon 
Visualization,Image Contours,Keypoint Detection Model,Reference Path Visualization,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Pixel Color Count,Multi-Label Classification Model,Image Slicer,Absolute Static Crop,Classification Label Visualization,Image Blur,Image Convert Grayscale,SAM 3,Single-Label Classification Model,OpenAI,Corner Visualization,Dynamic Crop,Keypoint Detection Model,Moondream2,Keypoint Visualization,Camera Focus,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Keypoint Visualization in version v1 has.
Bindings
-
input
image(image): The image to visualize on.
copy_image(boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
predictions(keypoint_detection_prediction): Keypoint detection predictions containing keypoint coordinates, confidence scores, and class names. Predictions must include keypoints_xy (keypoint coordinates), keypoints_confidence (confidence values), and keypoints_class_name (keypoint class/type names). Requires outputs from a keypoint detection model block.
color(string): Color of the keypoint markers, edges, or labels. Can be specified as a color name (e.g., 'green', 'red', 'blue'), hex color code (e.g., '#A351FB', '#FF0000'), or RGB format. Used for keypoint circles (vertex/vertex_label modes) or edge lines (edge mode).
text_color(string): Color of the text labels displayed on keypoints (vertex_label mode only). Can be specified as a color name (e.g., 'black', 'white'), hex color code, or RGB format. Only applies when annotator_type is 'vertex_label'.
text_scale(float): Scale factor for keypoint label text size (vertex_label mode only). Values greater than 1.0 make text larger, values less than 1.0 make text smaller. Only applies when annotator_type is 'vertex_label'. Typical values range from 0.3 to 1.0.
text_thickness(integer): Thickness of the keypoint label text characters in pixels (vertex_label mode only). Higher values create thicker, bolder text. Only applies when annotator_type is 'vertex_label'. Typical values range from 1 to 3.
text_padding(integer): Padding around keypoint label text in pixels (vertex_label mode only). Controls the spacing between the text label and its background border. Only applies when annotator_type is 'vertex_label'. Typical values range from 5 to 20 pixels.
thickness(integer): Thickness of the edge lines connecting keypoints in pixels (edge mode only). Higher values create thicker, more visible edges. Only applies when annotator_type is 'edge'. Typical values range from 1 to 5 pixels.
radius(integer): Radius of the circular keypoint markers in pixels (vertex and vertex_label modes only). Higher values create larger, more visible markers. Only applies when annotator_type is 'vertex' or 'vertex_label'. Typical values range from 5 to 20 pixels.
edges(list_of_values): Edge connections between keypoints (edge mode only). List of pairs of keypoint indices (e.g., [(0, 1), (1, 2), ...]) defining which keypoints should be connected with lines. For pose estimation, this typically represents skeleton connections (e.g., connecting joints). Only applies when annotator_type is 'edge'. Required for edge visualization.
-
output
image(image): The annotated image with keypoints visualized.
Example JSON definition of step Keypoint Visualization in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/keypoint_visualization@v1",
"image": "$inputs.image",
"copy_image": true,
"predictions": "$steps.keypoint_detection_model.predictions",
"annotator_type": "<block_does_not_provide_example>",
"color": "#A351FB",
"text_color": "black",
"text_scale": 0.5,
"text_thickness": 1,
"text_padding": 10,
"thickness": 2,
"radius": 10,
"edges": "$inputs.edges"
}
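The `edges` value bound above must contain index pairs into the model's keypoint ordering. As an illustration, assuming a model that emits the 17 COCO-style body keypoints in the conventional order (verify this against your own model's keypoint class list before using it), a commonly used skeleton looks like:

```python
# Assumption: the model outputs the 17 COCO keypoints in the standard order
# (0=nose, 1/2=eyes, 3/4=ears, 5/6=shoulders, 7/8=elbows, 9/10=wrists,
#  11/12=hips, 13/14=knees, 15/16=ankles). Check your model's actual ordering.
COCO_SKELETON = [
    (5, 7), (7, 9),       # left arm: shoulder-elbow, elbow-wrist
    (6, 8), (8, 10),      # right arm
    (5, 6),               # shoulders
    (5, 11), (6, 12),     # torso sides
    (11, 12),             # hips
    (11, 13), (13, 15),   # left leg
    (12, 14), (14, 16),   # right leg
    (0, 1), (0, 2),       # nose to eyes
    (1, 3), (2, 4),       # eyes to ears
]

# Sanity check: every index addresses one of the 17 keypoints.
assert all(0 <= i < 17 and 0 <= j < 17 for i, j in COCO_SKELETON)
```

A list like this would be passed as the `$inputs.edges` value when running the workflow with `annotator_type` set to `'edge'`.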