Keypoint Visualization¶
Class: KeypointVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.keypoint.v1.KeypointVisualizationBlockV1
Visualize keypoints (landmark points) detected on objects by drawing point markers, connecting edges, or labeled vertices, providing pose estimation visualization for anatomical points, structural landmarks, or object key features.
How This Block Works¶
This block takes an image and keypoint detection predictions and visualizes the detected keypoints using one of three visualization modes. The block:
- Takes an image and keypoint detection predictions as input (predictions must include keypoint coordinates, confidence scores, and class names)
- Extracts keypoint data (coordinates, confidence values, and class names) from the predictions
- Converts the detection data into a KeyPoints format suitable for visualization
- Applies one of three visualization modes based on the annotator_type setting:
  - Edge mode: Draws connecting lines (edges) between keypoints using specified edge pairs to show keypoint relationships (e.g., skeleton connections in pose estimation)
  - Vertex mode: Draws circular markers at each keypoint location without connections, showing individual keypoint positions
  - Vertex label mode: Draws circular markers with text labels identifying each keypoint class name, providing labeled keypoint visualization
- Applies color styling, sizing, and optional text labeling based on the selected parameters
- Returns an annotated image with keypoints visualized according to the selected mode
The block supports three visualization styles to suit different use cases. Edge mode connects related keypoints with lines (useful for pose estimation skeletons or structural relationships), vertex mode shows individual keypoint locations as circular markers, and vertex label mode adds text labels to identify each keypoint type. This visualization is essential for pose estimation workflows, anatomical point detection, or any application where specific landmark points on objects need to be identified and visualized.
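For example, a step definition that draws a pose skeleton in edge mode might look like the sketch below. The step and upstream names (pose_skeleton, pose_model) are illustrative, and the edge pairs assume a 17-keypoint COCO-style pose model, so adjust them to your model's keypoint ordering.
{
  "name": "pose_skeleton",
  "type": "roboflow_core/keypoint_visualization@v1",
  "image": "$inputs.image",
  "predictions": "$steps.pose_model.predictions",
  "annotator_type": "edge",
  "color": "#A351FB",
  "thickness": 2,
  "edges": [
    [5, 7], [7, 9], [6, 8], [8, 10],
    [5, 6], [5, 11], [6, 12], [11, 12],
    [11, 13], [13, 15], [12, 14], [14, 16]
  ]
}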
Common Use Cases¶
- Human Pose Estimation: Visualize human body keypoints (joints, body parts) for pose estimation, activity recognition, or motion analysis applications where anatomical points need to be displayed with skeleton connections or labeled markers
- Animal Pose Estimation: Display animal keypoints for behavior analysis, veterinary applications, or wildlife monitoring where anatomical landmarks need to be visualized for pose analysis or movement tracking
- Structural Landmark Detection: Visualize keypoints on objects, structures, or machinery for structural analysis, quality control, or measurement workflows where specific landmark points need to be identified and displayed
- Facial Landmark Detection: Display facial keypoints (eye corners, nose tip, mouth corners, etc.) for facial recognition, expression analysis, or face alignment applications where facial features need to be visualized
- Sports and Movement Analysis: Visualize keypoints for sports analysis, biomechanics, or movement studies where body positions, joint angles, or movement patterns need to be analyzed and displayed
- Quality Control and Inspection: Display keypoints for manufacturing, quality assurance, or inspection workflows where specific points on products or components need to be identified, measured, or validated
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Keypoint Detection Model blocks to receive keypoint predictions that are visualized with point markers, edges, or labeled vertices
- Other visualization blocks (e.g., Bounding Box Visualization, Label Visualization, Polygon Visualization) to combine keypoint visualization with additional annotations for comprehensive pose or structure visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with keypoint visualizations for documentation, reporting, or analysis
- Webhook blocks to send visualized results with keypoints to external systems, APIs, or web applications for display in dashboards, pose analysis tools, or monitoring interfaces
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with keypoints as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with keypoint visualizations for live pose estimation, movement analysis, or post-processing workflows
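For example, to stack keypoint markers on top of bounding boxes, pass this block's annotated image into a Bounding Box Visualization step. The sketch below is illustrative: step names are placeholders, and you should verify the downstream block's exact type identifier against its own reference page.
{
  "name": "keypoint_vis",
  "type": "roboflow_core/keypoint_visualization@v1",
  "image": "$inputs.image",
  "predictions": "$steps.pose_model.predictions",
  "annotator_type": "vertex",
  "radius": 8
},
{
  "name": "bbox_vis",
  "type": "roboflow_core/bounding_box_visualization@v1",
  "image": "$steps.keypoint_vis.image",
  "predictions": "$steps.pose_model.predictions"
}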
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/keypoint_visualization@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| annotator_type | str | Type of keypoint visualization mode. Options: 'edge' (draws connecting lines between keypoints using edge pairs, useful for skeleton/pose visualization), 'vertex' (draws circular markers at keypoint locations without connections), 'vertex_label' (draws circular markers with text labels identifying each keypoint class name). | ❌ |
| color | str | Color of the keypoint markers, edges, or labels. Can be specified as a color name (e.g., 'green', 'red', 'blue'), hex color code (e.g., '#A351FB', '#FF0000'), or RGB format. Used for keypoint circles (vertex/vertex_label modes) or edge lines (edge mode). | ✅ |
| text_color | str | Color of the text labels displayed on keypoints (vertex_label mode only). Can be specified as a color name (e.g., 'black', 'white'), hex color code, or RGB format. | ✅ |
| text_scale | float | Scale factor for keypoint label text size (vertex_label mode only). Values greater than 1.0 make text larger, values less than 1.0 make text smaller. Typical values range from 0.3 to 1.0. | ✅ |
| text_thickness | int | Thickness of the keypoint label text characters in pixels (vertex_label mode only). Higher values create thicker, bolder text. Typical values range from 1 to 3. | ✅ |
| text_padding | int | Padding around keypoint label text in pixels (vertex_label mode only). Higher values create more space between the text and its background border. Typical values range from 5 to 20 pixels. | ✅ |
| thickness | int | Thickness of the edge lines connecting keypoints in pixels (edge mode only). Higher values create thicker, more visible edges. Typical values range from 1 to 5 pixels. | ✅ |
| radius | int | Radius of the circular keypoint markers in pixels (vertex and vertex_label modes only). Higher values create larger, more visible markers. Typical values range from 5 to 20 pixels. | ✅ |
| edges | List[Any] | Edge connections between keypoints (edge mode only). List of pairs of keypoint indices (e.g., [(0, 1), (1, 2), ...]) defining which keypoints should be connected with lines. For pose estimation, this typically represents skeleton connections (e.g., connecting joints). Required for edge visualization. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
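For instance, a vertex_label configuration that labels each keypoint and takes its marker color from a workflow parameter at runtime might look like the sketch below; the $inputs.keypoint_color input and the pose_model step name are illustrative.
{
  "name": "labeled_keypoints",
  "type": "roboflow_core/keypoint_visualization@v1",
  "image": "$inputs.image",
  "predictions": "$steps.pose_model.predictions",
  "annotator_type": "vertex_label",
  "color": "$inputs.keypoint_color",
  "text_color": "white",
  "text_scale": 0.5,
  "text_thickness": 1,
  "text_padding": 10,
  "radius": 8
}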
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Keypoint Visualization in version v1.
- inputs:
Contrast Equalization,Llama 3.2 Vision,Clip Comparison,SIFT Comparison,Anthropic Claude,VLM as Detector,Local File Sink,Polygon Visualization,QR Code Generator,Detections Transformation,Image Blur,SIFT Comparison,Email Notification,Roboflow Dataset Upload,Text Display,Motion Detection,Model Comparison Visualization,Camera Focus,SIFT,PTZ Tracking (ONVIF),LMM,Google Vision OCR,Mask Visualization,Anthropic Claude,Relative Static Crop,Cosine Similarity,Keypoint Detection Model,Circle Visualization,EasyOCR,Pixelate Visualization,Stability AI Inpainting,Reference Path Visualization,VLM as Classifier,Detection Offset,Detections Filter,Instance Segmentation Model,Perspective Correction,Ellipse Visualization,Crop Visualization,Halo Visualization,Image Threshold,Keypoint Detection Model,CSV Formatter,Florence-2 Model,Twilio SMS Notification,Image Convert Grayscale,Corner Visualization,Image Preprocessing,Line Counter,Dynamic Zone,Detections List Roll-Up,Identify Changes,Icon Visualization,Background Subtraction,Image Contours,Image Slicer,Detections Consensus,Depth Estimation,Multi-Label Classification Model,Pixel Color Count,Stitch Images,Dynamic Crop,Bounding Box Visualization,VLM as Classifier,Model Monitoring Inference Aggregator,Anthropic Claude,Detection Event Log,Detections Classes Replacement,Line Counter Visualization,Blur Visualization,Morphological Transformation,Camera Calibration,Polygon Zone Visualization,Single-Label Classification Model,Email Notification,Line Counter,Stability AI Image Generation,Keypoint Visualization,OCR Model,Roboflow Custom Metadata,Google Gemini,OpenAI,Distance Measurement,Camera Focus,Trace Visualization,OpenAI,CogVLM,Color Visualization,Absolute Static Crop,Image Slicer,Size Measurement,Dot Visualization,Identify Outliers,Label Visualization,Slack Notification,Buffer,Florence-2 Model,Google Gemini,JSON Parser,Google Gemini,Grid Visualization,Object Detection Model,LMM For Classification,OpenAI,Stitch OCR Detections,Template Matching,Dimension Collapse,OpenAI,Classification Label Visualization,Background Color Visualization,Stability AI Outpainting,Roboflow Dataset Upload,Stitch OCR Detections,Twilio SMS/MMS Notification,Gaze Detection,Clip Comparison,Triangle Visualization,VLM as Detector,Webhook Sink
- outputs:
Contrast Equalization,Llama 3.2 Vision,Clip Comparison,Anthropic Claude,VLM as Detector,Polygon Visualization,Image Blur,SIFT Comparison,SmolVLM2,CLIP Embedding Model,Roboflow Dataset Upload,Text Display,Motion Detection,SIFT,Model Comparison Visualization,Camera Focus,Moondream2,LMM,Qwen3-VL,Single-Label Classification Model,Google Vision OCR,SAM 3,Anthropic Claude,Relative Static Crop,Mask Visualization,Object Detection Model,Keypoint Detection Model,Circle Visualization,Seg Preview,EasyOCR,Pixelate Visualization,Stability AI Inpainting,Multi-Label Classification Model,Time in Zone,VLM as Classifier,Reference Path Visualization,Instance Segmentation Model,Perspective Correction,Halo Visualization,Image Threshold,Ellipse Visualization,Crop Visualization,Keypoint Detection Model,Florence-2 Model,Detections Stabilizer,Image Convert Grayscale,Perception Encoder Embedding Model,Corner Visualization,Image Preprocessing,Barcode Detection,Icon Visualization,SAM 3,Background Subtraction,Segment Anything 2 Model,Qwen2.5-VL,Image Slicer,Image Contours,Depth Estimation,Multi-Label Classification Model,Pixel Color Count,Detections Stitch,Stitch Images,QR Code Detection,Dynamic Crop,Bounding Box Visualization,Anthropic Claude,VLM as Classifier,YOLO-World Model,Instance Segmentation Model,Line Counter Visualization,Blur Visualization,Morphological Transformation,Camera Calibration,Polygon Zone Visualization,Single-Label Classification Model,Email Notification,Stability AI Image Generation,Dominant Color,OCR Model,Keypoint Visualization,Google Gemini,OpenAI,Camera Focus,Trace Visualization,CogVLM,OpenAI,Image Slicer,Absolute Static Crop,Color Visualization,Dot Visualization,Label Visualization,Buffer,Florence-2 Model,Google Gemini,Google Gemini,Object Detection Model,LMM For Classification,Template Matching,OpenAI,OpenAI,Classification Label Visualization,Background Color Visualization,Stability AI Outpainting,Byte Tracker,SAM 3,Twilio SMS/MMS Notification,Roboflow Dataset Upload,Gaze Detection,Clip Comparison,Triangle Visualization,VLM as Detector
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Keypoint Visualization in version v1 has.
Bindings
- input
  - image (image): The image to visualize on.
  - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
  - predictions (keypoint_detection_prediction): Keypoint detection predictions containing keypoint coordinates, confidence scores, and class names. Predictions must include keypoints_xy (keypoint coordinates), keypoints_confidence (confidence values), and keypoints_class_name (keypoint class/type names). Requires outputs from a keypoint detection model block.
  - color (string): Color of the keypoint markers, edges, or labels. Can be specified as a color name (e.g., 'green', 'red', 'blue'), hex color code (e.g., '#A351FB', '#FF0000'), or RGB format. Used for keypoint circles (vertex/vertex_label modes) or edge lines (edge mode).
  - text_color (string): Color of the text labels displayed on keypoints (vertex_label mode only). Can be specified as a color name (e.g., 'black', 'white'), hex color code, or RGB format.
  - text_scale (float): Scale factor for keypoint label text size (vertex_label mode only). Values greater than 1.0 make text larger, values less than 1.0 make text smaller. Typical values range from 0.3 to 1.0.
  - text_thickness (integer): Thickness of the keypoint label text characters in pixels (vertex_label mode only). Higher values create thicker, bolder text. Typical values range from 1 to 3.
  - text_padding (integer): Padding around keypoint label text in pixels (vertex_label mode only). Higher values create more space between the text and its background border. Typical values range from 5 to 20 pixels.
  - thickness (integer): Thickness of the edge lines connecting keypoints in pixels (edge mode only). Higher values create thicker, more visible edges. Typical values range from 1 to 5 pixels.
  - radius (integer): Radius of the circular keypoint markers in pixels (vertex and vertex_label modes only). Higher values create larger, more visible markers. Typical values range from 5 to 20 pixels.
  - edges (list_of_values): Edge connections between keypoints (edge mode only). List of pairs of keypoint indices (e.g., [(0, 1), (1, 2), ...]) defining which keypoints should be connected with lines. For pose estimation, this typically represents skeleton connections (e.g., connecting joints). Required for edge visualization.
- output
  - image (image): The annotated image with keypoints visualized.
Example JSON definition of step Keypoint Visualization in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/keypoint_visualization@v1",
"image": "$inputs.image",
"copy_image": true,
"predictions": "$steps.keypoint_detection_model.predictions",
"annotator_type": "<block_does_not_provide_example>",
"color": "#A351FB",
"text_color": "black",
"text_scale": 0.5,
"text_thickness": 1,
"text_padding": 10,
"thickness": 2,
"radius": 10,
"edges": "$inputs.edges"
}
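Putting it together, a minimal workflow sketch that runs a keypoint detection model and visualizes its predictions could look like the following. The keypoint detection step's type identifier and model_id are placeholders (check that block's own reference page for the exact values), and the inputs/outputs entries follow the common WorkflowImage/JsonField conventions.
{
  "version": "1.0",
  "inputs": [
    { "type": "WorkflowImage", "name": "image" }
  ],
  "steps": [
    {
      "type": "<keypoint_detection_model_type_identifier>",
      "name": "pose_model",
      "image": "$inputs.image",
      "model_id": "<your_keypoint_model_id>"
    },
    {
      "type": "roboflow_core/keypoint_visualization@v1",
      "name": "keypoint_vis",
      "image": "$inputs.image",
      "predictions": "$steps.pose_model.predictions",
      "annotator_type": "vertex"
    }
  ],
  "outputs": [
    { "type": "JsonField", "name": "annotated_image", "selector": "$steps.keypoint_vis.image" }
  ]
}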