Image Contours¶
Class: ImageContoursDetectionBlockV1
Source: inference.core.workflows.core_steps.classical_cv.contours.v1.ImageContoursDetectionBlockV1
Detects and extracts contours (the boundaries of shapes) from a thresholded binary or grayscale image using OpenCV's contour detection, draws the detected contours on the image, and returns contour data, including point coordinates, hierarchy information, and a contour count, for shape analysis, object boundary detection, and contour-based image processing workflows.
How This Block Works¶
This block detects contours (connected boundaries of shapes) in an image and draws them for visualization. The block:
- Receives an input image, which should be thresholded (binary or grayscale) for best results
- Converts color (BGR) images to grayscale automatically
- Detects contours using OpenCV's findContours function:
    - Uses the RETR_EXTERNAL retrieval mode to find only external contours (the outer boundaries of shapes)
    - Uses the CHAIN_APPROX_SIMPLE approximation method to compress contours by removing redundant points
    - Detects all connected boundary points that form closed or open contours
    - Returns contours as arrays of points, plus hierarchy information describing contour relationships
- Draws the detected contours on the image for visual inspection:
    - Converts the grayscale image back to BGR color format for visualization
    - Draws all contours with a configurable line thickness
    - Uses purple (255, 0, 255 in BGR) for the contour lines
- Counts the total number of contours detected in the image
- Returns the image with contours drawn, the contour data (point arrays), hierarchy information, and the contour count
The block expects a thresholded (binary) image where objects are white and background is black (or vice versa) for optimal contour detection. Contours are detected as the boundaries between different pixel intensity regions. The RETR_EXTERNAL mode focuses on outer boundaries, ignoring internal holes, which is useful for detecting separate objects. The CHAIN_APPROX_SIMPLE method simplifies contours by removing redundant points along straight lines, making the contour data more compact while preserving essential shape information.
Common Use Cases¶
- Shape Detection and Analysis: Detect and analyze shapes in images by finding their boundaries (e.g., detect object boundaries for shape analysis, identify geometric shapes, extract shape outlines for measurement), enabling shape-based image analysis workflows
- Object Boundary Extraction: Extract object boundaries and outlines from thresholded images (e.g., extract object boundaries for further processing, identify object edges, detect object outlines in binary images), enabling boundary extraction workflows
- Image Segmentation Analysis: Analyze segmentation results by detecting contour boundaries (e.g., find contours from segmentation masks, analyze segmented regions, extract boundaries from segmented objects), enabling segmentation analysis workflows
- Quality Control and Inspection: Use contour detection for quality control and inspection tasks (e.g., detect defects by finding unexpected contours, verify object shapes, inspect object boundaries), enabling contour-based quality control workflows
- Object Counting: Count objects in images by detecting their contours (e.g., count objects by detecting contours, enumerate objects based on boundaries, quantify items using contour detection), enabling contour-based object counting workflows
- Measurement and Analysis: Use contours for measurements and geometric analysis (e.g., measure object perimeters using contours, analyze object shapes, calculate geometric properties from contours), enabling contour-based measurement workflows
Connecting to Other Blocks¶
This block receives a thresholded image and produces contour data and visualizations:
- After image thresholding blocks to detect contours in thresholded binary images (e.g., find contours after thresholding, detect shapes in binary images, extract boundaries from thresholded images), enabling thresholding-to-contour workflows
- After image preprocessing blocks that prepare images for contour detection (e.g., detect contours after preprocessing, find shapes after filtering, extract boundaries after enhancement), enabling preprocessed contour detection workflows
- After segmentation blocks to extract contours from segmentation results (e.g., find contours from segmentation masks, detect boundaries of segmented regions, extract shape outlines from segments), enabling segmentation-to-contour workflows
- Before visualization blocks to display contour visualizations (e.g., visualize detected contours, display shape boundaries, show contour analysis results), enabling contour visualization workflows
- Before analysis blocks that process contour data (e.g., analyze contour shapes, process contour coordinates, measure contour properties), enabling contour analysis workflows
- Before filtering or logic blocks that use contour count or properties for decision-making (e.g., filter based on contour count, make decisions based on detected shapes, apply logic based on contour properties), enabling contour-based conditional workflows
Requirements¶
The input image should be thresholded (converted to binary/grayscale) before using this block. Thresholded images have distinct foreground (white) and background (black) regions, which makes contour detection more reliable. Use thresholding blocks (e.g., Image Threshold) or segmentation blocks to prepare images before contour detection.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/contours_detection@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `line_thickness` | `int` | Thickness of the lines used to draw contours on the output image. Must be a positive integer. Thicker lines (e.g., 5-10) make contours more visible but may obscure fine details; thinner lines (e.g., 1-2) show more detail but may be harder to see. The default of 3 provides good visibility. Use thicker lines for large images or when contours need to stand out, and thinner lines for detailed analysis or small images. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
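For example, since `line_thickness` is parametrisable (✅ in the Refs column), it can be bound to a workflow input rather than hard-coded. A hypothetical sketch, where the input name `thickness` is illustrative:

```json
{
    "name": "contours",
    "type": "roboflow_core/contours_detection@v1",
    "image": "$inputs.image",
    "line_thickness": "$inputs.thickness"
}
```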
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Contours in version v1.
- inputs:
Icon Visualization,Line Counter Visualization,Stability AI Outpainting,Image Contours,Image Slicer,Pixelate Visualization,Line Counter,Image Preprocessing,Polygon Zone Visualization,Distance Measurement,Color Visualization,Reference Path Visualization,Blur Visualization,Background Subtraction,Text Display,Ellipse Visualization,Polygon Visualization,Detection Event Log,Relative Static Crop,Line Counter,Stability AI Image Generation,Perspective Correction,Model Comparison Visualization,Bounding Box Visualization,Trace Visualization,Depth Estimation,Camera Focus,Classification Label Visualization,Image Slicer,Absolute Static Crop,Image Blur,Stability AI Inpainting,Pixel Color Count,Polygon Visualization,Image Convert Grayscale,SIFT,Label Visualization,Image Threshold,Corner Visualization,Template Matching,Grid Visualization,Dynamic Crop,Contrast Equalization,Heatmap Visualization,SIFT Comparison,Stitch Images,Triangle Visualization,Morphological Transformation,Keypoint Visualization,QR Code Generator,Halo Visualization,Circle Visualization,Camera Focus,Halo Visualization,Mask Visualization,Crop Visualization,Morphological Transformation,Camera Calibration,Contrast Enhancement,Background Color Visualization,Dot Visualization,SIFT Comparison - outputs:
Roboflow Dataset Upload,Line Counter Visualization,Mask Edge Snap,OCR Model,Image Slicer,Gaze Detection,Qwen2.5-VL,Instance Segmentation Model,Color Visualization,Multi-Label Classification Model,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Single-Label Classification Model,Relative Static Crop,Byte Tracker,Detections Consensus,Barcode Detection,Detections Classes Replacement,Webhook Sink,Trace Visualization,Object Detection Model,Qwen 3.5 API,Camera Focus,Stitch OCR Detections,OpenAI,Buffer,SAM 3,Image Threshold,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,GLM-OCR,Dot Visualization,Semantic Segmentation Model,Seg Preview,Twilio SMS Notification,Google Gemini,Roboflow Dataset Upload,Clip Comparison,Dynamic Zone,VLM As Classifier,Pixelate Visualization,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,Stability AI Image Generation,Perspective Correction,Anthropic Claude,Bounding Box Visualization,Depth Estimation,Stability AI Inpainting,Polygon Visualization,SmolVLM2,SIFT,Roboflow Vision Events,VLM As Detector,Google Gemini,Label Visualization,Grid Visualization,Qwen3.5-VL,Contrast Equalization,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,Dominant Color,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,Slack Notification,CLIP Embedding Model,Detections Stabilizer,Detections Stitch,Object Detection Model,Stability AI Outpainting,Google Gemma API,Email Notification,Google Vision OCR,Identify Outliers,Google Gemini,Image Preprocessing,EasyOCR,Object Detection Model,OpenAI,SAM2 Video Tracker,Byte Tracker,Anthropic Claude,Qwen3-VL,Model Comparison Visualization,YOLO-World Model,Detection Offset,Instance Segmentation Model,Perception Encoder Embedding Model,Semantic Segmentation Model,Single-Label Classification Model,VLM As Classifier,Template Matching,Stitch Images,Qwen 3.6 API,SIFT 
Comparison,Morphological Transformation,Instance Segmentation Model,CogVLM,Crop Visualization,Florence-2 Model,Camera Calibration,Multi-Label Classification Model,Time in Zone,OC-SORT Tracker,SAM 3,QR Code Detection,Icon Visualization,Image Contours,Keypoint Detection Model,Reference Path Visualization,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Pixel Color Count,Identify Changes,Multi-Label Classification Model,Image Slicer,Absolute Static Crop,Classification Label Visualization,Image Blur,Byte Tracker,Image Convert Grayscale,SAM 3,Single-Label Classification Model,OpenAI,Corner Visualization,Dynamic Crop,Keypoint Detection Model,Moondream2,Keypoint Visualization,QR Code Generator,Camera Focus,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization,PTZ Tracking (ONVIF),Stitch OCR Detections,SIFT Comparison
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds
Image Contours in version v1 has.
Bindings
- input
    - `image` (image): Input image to detect contours from. Should be thresholded (binary or grayscale) for best results: thresholded images have distinct foreground and background regions that make contour detection more reliable. The image will be converted to grayscale automatically if it's in color format. Contours are detected as boundaries between different pixel intensity regions. Use thresholding blocks (e.g., Image Threshold) or segmentation blocks to prepare images before contour detection. The block detects external contours (outer boundaries) and draws them on the image.
    - `line_thickness` (integer): Thickness of the lines used to draw contours on the output image. Must be a positive integer. Thicker lines (e.g., 5-10) make contours more visible but may obscure fine details; thinner lines (e.g., 1-2) show more detail but may be harder to see. Default is 3, which provides good visibility. Adjust based on image size and desired visibility.
- output
    - `image` (image): Image with the detected contours drawn on it.
    - `contours` (contours): List of numpy arrays, where each array holds the points of one detected contour.
    - `hierarchy` (numpy_array): Numpy array describing the relationships between the detected contours.
    - `number_contours` (integer): Total number of contours detected.
Example JSON definition of step Image Contours in version v1
```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/contours_detection@v1",
    "image": "$inputs.image",
    "line_thickness": 3
}
```