Image Contours¶
Class: ImageContoursDetectionBlockV1
Source: inference.core.workflows.core_steps.classical_cv.contours.v1.ImageContoursDetectionBlockV1
Detects and extracts contours (boundaries of shapes) from a thresholded binary or grayscale image using OpenCV's contour detection, draws the detected contours on the image, and returns contour data including point coordinates, hierarchy information, and contour count for shape analysis, object boundary detection, and contour-based image processing workflows.
How This Block Works¶
This block detects contours (connected boundaries of shapes) in an image and draws them for visualization. The block:
- Receives an input image that should be thresholded (binary or grayscale) for best results
- Converts the image to grayscale if it's in color (handles BGR color images by converting to grayscale)
- Detects contours using OpenCV's findContours function:
    - Uses RETR_EXTERNAL retrieval mode to find only external contours (outer boundaries of shapes)
    - Uses CHAIN_APPROX_SIMPLE approximation method to compress contour points (reduces redundant points)
    - Detects all connected boundary points that form closed or open contours
    - Returns contours as arrays of points and hierarchy information describing contour relationships
- Draws detected contours on the image:
    - Converts the grayscale image back to BGR color format for visualization
    - Draws all contours on the image using a configurable line thickness
    - Uses purple color (255, 0, 255 in BGR) by default for contour lines
    - Draws contours directly on the image for visual inspection
- Counts the total number of contours detected in the image
- Returns the image with contours drawn, the contours data (point arrays), hierarchy information, and the contour count
The block expects a thresholded (binary) image where objects are white and background is black (or vice versa) for optimal contour detection. Contours are detected as the boundaries between different pixel intensity regions. The RETR_EXTERNAL mode focuses on outer boundaries, ignoring internal holes, which is useful for detecting separate objects. The CHAIN_APPROX_SIMPLE method simplifies contours by removing redundant points along straight lines, making the contour data more compact while preserving essential shape information.
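For reference, the block's behavior can be approximated with a few lines of plain OpenCV. The sketch below is illustrative only (the file name and defaults are placeholders), not the block's actual source code:

```python
import cv2

# Read an (ideally already thresholded) image; cv2.imread returns BGR by default.
image = cv2.imread("thresholded_input.png")

# Convert color input to grayscale, as the block does automatically.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image

# Find only external contours and compress points along straight segments.
contours, hierarchy = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Convert back to BGR and draw every contour in purple (255, 0, 255),
# using the configurable line thickness (default 3).
annotated = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
cv2.drawContours(annotated, contours, -1, (255, 0, 255), thickness=3)

number_contours = len(contours)
print(f"Detected {number_contours} contours")
```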
Common Use Cases¶
- Shape Detection and Analysis: Detect and analyze shapes by finding their boundaries (e.g., detect object boundaries for shape analysis, identify geometric shapes, extract shape outlines for measurement)
- Object Boundary Extraction: Extract object boundaries and outlines from thresholded images (e.g., extract boundaries for further processing, identify object edges, detect outlines in binary images)
- Image Segmentation Analysis: Analyze segmentation results by detecting contour boundaries (e.g., find contours from segmentation masks, analyze segmented regions, extract boundaries from segmented objects)
- Quality Control and Inspection: Use contour detection for inspection tasks (e.g., detect defects by finding unexpected contours, verify object shapes, inspect object boundaries)
- Object Counting: Count objects in images by detecting their contours (e.g., enumerate items based on the number of detected boundaries)
- Measurement and Analysis: Use contours for measurements and geometric analysis (e.g., measure object perimeters, analyze object shapes, calculate geometric properties from contours); see the sketch after this list
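To make the measurement use case concrete, the contour point arrays this block outputs can be passed to standard OpenCV geometry helpers. This is a minimal sketch that assumes `contours` already holds such point arrays:

```python
import cv2

# `contours` is assumed to be a list of point arrays (this block's `contours`
# output or the result of cv2.findContours).
for i, contour in enumerate(contours):
    area = cv2.contourArea(contour)           # enclosed area in pixels
    perimeter = cv2.arcLength(contour, True)  # boundary length; True = treat contour as closed
    x, y, w, h = cv2.boundingRect(contour)    # axis-aligned bounding box
    print(f"contour {i}: area={area:.1f}, perimeter={perimeter:.1f}, bbox=({x}, {y}, {w}, {h})")
```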
Connecting to Other Blocks¶
This block receives a thresholded image and produces contour data and visualizations:
- After image thresholding blocks, to detect contours in the resulting binary images (e.g., find contours after thresholding, detect shapes in binary images, extract boundaries from thresholded images)
- After image preprocessing blocks that prepare images for contour detection (e.g., detect contours after filtering, blurring, or contrast enhancement)
- After segmentation blocks, to extract contours from segmentation results (e.g., find contours from segmentation masks, detect boundaries of segmented regions)
- Before visualization blocks, to display the image with contours drawn (e.g., visualize detected contours, show shape boundaries and analysis results)
- Before analysis blocks that process contour data (e.g., analyze contour shapes, process contour coordinates, measure contour properties)
- Before filtering or logic blocks that use the contour count or contour properties for decision-making (e.g., branch a workflow based on the number of detected shapes)
Requirements¶
The input image should be thresholded (converted to binary/grayscale) before using this block. Thresholded images have distinct foreground (white) and background (black) regions, which makes contour detection more reliable. Use thresholding blocks (e.g., Image Threshold) or segmentation blocks to prepare images before contour detection.
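Outside of a workflow, the same preparation can be sketched with OpenCV's Otsu thresholding (illustrative only; the file name is a placeholder):

```python
import cv2

gray = cv2.imread("part_photo.png", cv2.IMREAD_GRAYSCALE)
gray = cv2.GaussianBlur(gray, (5, 5), 0)  # optional smoothing to reduce noise
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# `binary` now has distinct white/black regions, which is the kind of input
# this block expects.
```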
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/contours_detection@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| line_thickness | int | Thickness of the lines used to draw contours on the output image. Must be a positive integer. Thicker lines (e.g., 5-10) make contours more visible but may obscure fine details. Thinner lines (e.g., 1-2) show more detail but may be harder to see. Default is 3, which provides good visibility. Adjust based on image size and desired visibility. Use thicker lines for large images or when contours need to be highly visible, thinner lines for detailed analysis or small images. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Image Contours in version v1.
- inputs: Mask Visualization, Circle Visualization, Classification Label Visualization, SIFT Comparison, Halo Visualization, Blur Visualization, QR Code Generator, Dynamic Crop, Label Visualization, Image Blur, Corner Visualization, Image Convert Grayscale, Ellipse Visualization, SIFT, Image Preprocessing, Line Counter, Stability AI Outpainting, Halo Visualization, Stability AI Inpainting, Template Matching, Image Threshold, Background Color Visualization, Image Contours, Depth Estimation, Model Comparison Visualization, Trace Visualization, Morphological Transformation, Line Counter, Triangle Visualization, Absolute Static Crop, Relative Static Crop, Text Display, Stitch Images, Camera Calibration, Grid Visualization, Camera Focus, Perspective Correction, Color Visualization, Dot Visualization, Image Slicer, Pixelate Visualization, Polygon Visualization, Stability AI Image Generation, Reference Path Visualization, Keypoint Visualization, Polygon Visualization, Line Counter Visualization, Bounding Box Visualization, Detection Event Log, Contrast Equalization, Distance Measurement, Polygon Zone Visualization, SIFT Comparison, Camera Focus, Icon Visualization, Crop Visualization, Pixel Color Count, Background Subtraction, Image Slicer
- outputs: Mask Visualization, Classification Label Visualization, Instance Segmentation Model, Detections Consensus, Webhook Sink, Multi-Label Classification Model, Email Notification, QR Code Generator, VLM As Detector, Multi-Label Classification Model, LMM, SAM 3, Detection Offset, Corner Visualization, Image Convert Grayscale, Stability AI Outpainting, Segment Anything 2 Model, Halo Visualization, Object Detection Model, Single-Label Classification Model, Trace Visualization, Google Vision OCR, Instance Segmentation Model, Clip Comparison, Text Display, Stitch Images, Google Gemini, Slack Notification, VLM As Classifier, Roboflow Dataset Upload, PTZ Tracking (ONVIF), Color Visualization, Dot Visualization, Polygon Visualization, Object Detection Model, Anthropic Claude, Buffer, Byte Tracker, Contrast Equalization, Identify Changes, Detections Classes Replacement, Perception Encoder Embedding Model, Moondream2, SIFT Comparison, Halo Visualization, Florence-2 Model, Blur Visualization, Twilio SMS/MMS Notification, Label Visualization, Ellipse Visualization, OpenAI, SIFT, Single-Label Classification Model, OpenAI, Image Threshold, Background Color Visualization, Model Comparison Visualization, OpenAI, Keypoint Detection Model, Gaze Detection, SAM 3, Polygon Visualization, Twilio SMS Notification, Bounding Box Visualization, OCR Model, Icon Visualization, Google Gemini, Florence-2 Model, Roboflow Dataset Upload, Anthropic Claude, Dynamic Zone, Dynamic Crop, CLIP Embedding Model, VLM As Detector, Google Gemini, Image Blur, Byte Tracker, SmolVLM2, Stability AI Inpainting, Template Matching, Image Contours, Morphological Transformation, Triangle Visualization, Detections Stitch, Relative Static Crop, Camera Calibration, Grid Visualization, Detections Stabilizer, Camera Focus, Image Slicer, LMM For Classification, Line Counter Visualization, Keypoint Detection Model, Llama 3.2 Vision, SIFT Comparison, Camera Focus, Dominant Color, Time in Zone, Background Subtraction, Image Slicer, Circle Visualization, Seg Preview, Identify Outliers, Qwen3-VL, Clip Comparison, Email Notification, QR Code Detection, Byte Tracker, Image Preprocessing, SAM 3, Depth Estimation, CogVLM, Absolute Static Crop, EasyOCR, Stitch OCR Detections, Perspective Correction, Qwen2.5-VL, Anthropic Claude, Pixelate Visualization, Reference Path Visualization, Stability AI Image Generation, Keypoint Visualization, VLM As Classifier, Polygon Zone Visualization, YOLO-World Model, Stitch OCR Detections, Crop Visualization, Pixel Color Count, Motion Detection, OpenAI, Barcode Detection
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds Image Contours in version v1 has.
Bindings
- input
    - image (image): Input image to detect contours from. Should be thresholded (binary or grayscale) for best results - thresholded images have distinct foreground and background regions that make contour detection more reliable. The image will be converted to grayscale automatically if it's in color format. Contours are detected as boundaries between different pixel intensity regions. Use thresholding blocks (e.g., Image Threshold) or segmentation blocks to prepare images before contour detection. The block detects external contours (outer boundaries) and draws them on the image.
    - line_thickness (integer): Thickness of the lines used to draw contours on the output image. Must be a positive integer. Thicker lines (e.g., 5-10) make contours more visible but may obscure fine details. Thinner lines (e.g., 1-2) show more detail but may be harder to see. Default is 3, which provides good visibility. Adjust based on image size and desired visibility. Use thicker lines for large images or when contours need to be highly visible, thinner lines for detailed analysis or small images.
- output
    - image (image): The input image with the detected contours drawn on it.
    - contours (contours): List of numpy arrays where each array represents contour points.
    - hierarchy (numpy_array): Numpy array describing the relationships between detected contours.
    - number_contours (integer): Total number of contours detected in the image.
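For orientation, when contours come from OpenCV's findContours (as described above), the output data is typically shaped as in the sketch below; the exact wrapping inside the workflow runtime is an assumption:

```python
import cv2
import numpy as np

# Tiny synthetic binary image with two white squares on a black background.
binary = np.zeros((64, 64), dtype=np.uint8)
binary[8:24, 8:24] = 255
binary[40:56, 40:56] = 255

contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# contours: list of arrays of shape (num_points, 1, 2) holding (x, y) coordinates.
# hierarchy: array of shape (1, num_contours, 4); each entry is
#            [next, previous, first_child, parent], with -1 meaning "none".
#            With RETR_EXTERNAL, first_child and parent are typically -1.
print(len(contours), hierarchy.shape)  # e.g. 2 (1, 2, 4)
```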
Example JSON definition of step Image Contours in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/contours_detection@v1",
"image": "$inputs.image",
"line_thickness": 3
}