Mask Edge Snap¶
Class: MaskEdgeSnapBlockV1
Source: inference.core.workflows.core_steps.classical_cv.mask_edge_snap.v1.MaskEdgeSnapBlockV1
Refine instance segmentation masks by snapping contour points to Sobel edges within a band around the predicted boundary. This block improves segmentation accuracy by adjusting mask edges to align with detected image features.
How This Block Works¶
This block refines segmentation masks through a multi-step pipeline:
- Edge Detection: Computes Sobel gradient magnitudes from the input image to detect edges
- Adaptive Thresholding: Uses per-pixel adaptive thresholding (local mean + sigma * local std) to identify significant edges
- Morphological Processing: Applies closing (dilation + erosion) to bridge small gaps in edge segments
- Thinning: Applies Zhang-Suen single-iteration thinning to reduce edge width to 1-2 pixels while preserving connectivity
- Boundary Band Creation: Builds a search band around each predicted mask's contour
- Area Filtering: Removes small edge components below a minimum area threshold
- Contour Snapping: For each original mask contour point, finds the strongest nearby edge within tolerance and snaps to it
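The first two stages (Sobel gradient magnitudes and per-pixel adaptive thresholding) can be sketched in NumPy as follows. This is an illustrative simplification, not the block's actual implementation; kernel handling and border behavior in the real code may differ.

```python
import numpy as np

def sobel_magnitude(gray):
    """Sobel gradient magnitude via padded shifts (no OpenCV dependency)."""
    g = np.pad(gray.astype(np.float64), 1, mode="edge")
    h, w = gray.shape
    # horizontal gradient: right column minus left column, weighted 1-2-1
    gx = (g[0:h, 2:2 + w] + 2 * g[1:1 + h, 2:2 + w] + g[2:2 + h, 2:2 + w]) \
       - (g[0:h, 0:w] + 2 * g[1:1 + h, 0:w] + g[2:2 + h, 0:w])
    # vertical gradient: bottom row minus top row, weighted 1-2-1
    gy = (g[2:2 + h, 0:w] + 2 * g[2:2 + h, 1:1 + w] + g[2:2 + h, 2:2 + w]) \
       - (g[0:h, 0:w] + 2 * g[0:h, 1:1 + w] + g[0:h, 2:2 + w])
    return np.hypot(gx, gy)

def box_mean(a, w):
    """Mean over a w x w window, computed with an integral image."""
    pad = w // 2
    p = np.pad(a, pad, mode="edge")
    c = np.pad(np.cumsum(np.cumsum(p, 0), 1), ((1, 0), (1, 0)))
    h, wd = a.shape
    return (c[w:w + h, w:w + wd] - c[:h, w:w + wd]
            - c[w:w + h, :wd] + c[:h, :wd]) / (w * w)

def adaptive_edge_mask(mag, window=41, sigma=0.5):
    """Keep pixels whose magnitude exceeds local_mean + sigma * local_std."""
    mean = box_mean(mag, window)
    std = np.sqrt(np.clip(box_mean(mag * mag, window) - mean ** 2, 0.0, None))
    return mag > mean + sigma * std
```

Because the threshold adapts to local statistics, a moderate edge in a dim region can survive while comparable noise in a busy region is rejected; a single global threshold cannot do both.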
Common Use Cases¶
- Medical Image Analysis: Refine organ/tumor segmentation masks to align with anatomical boundaries
- Industrial Quality Control: Improve part boundary detection for precise dimension measurement
- Autonomous Vehicles: Refine road/lane segmentation boundaries for improved path planning
- Agricultural Monitoring: Enhance crop boundary detection for yield estimation
- Microscopy Analysis: Refine cell/nuclei segmentation for morphological analysis
- Document Processing: Improve text region boundary detection for OCR
Input Parameters¶
image : Input image (color or grayscale)
- Can be single-channel, 3-channel (BGR), or 4-channel (BGRA)
- Preprocessing (blur, contrast enhancement) should be applied upstream if needed
segmentation : Initial instance segmentation predictions
- Source: from object detection or instance segmentation model
- Must contain populated mask field; if empty, passed through unchanged
pixel_tolerance : Maximum perpendicular distance (pixels) for edge snapping
- Range: 5-50 typically
- 5-15: tight predictions with minimal offset
- 20-50: rough predictions needing more forgiveness
sigma : Strictness multiplier for adaptive Sobel threshold
- Range: 0.1-2.0 typically
- 0.1-0.5: permissive, keeps weaker edges, good for low-contrast boundaries
- 1.0-2.0: strict, only strongest edges survive, good for high-contrast images
min_contour_area : Minimum enclosed-polygon area for edge components
- Range: 10-1000 typically
- Small (10-50): keeps fragmented edges
- Large (200-1000): aggressive noise rejection
dilation_iterations : Number of morphological closing iterations
- Range: 0-10 typically
- 0: no closing, only thresholded edges
- 1-2: bridges hairline gaps
- 3-5: bridges visible dashes
- 10+: aggressive, can merge unrelated edges
boundary_band_width : Half-width of search band around mask contour (default: 15)
- Sets maximum distance between predicted and true boundary that can be corrected
adaptive_window_size : Side length of local-statistics window (default: 41)
- Should be roughly 5-10% of smaller image dimension
- Smaller (15-25): fine local contrast sensitivity, can pick up noise
- Larger (81-121): smooth threshold field, closer to global thresholding
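To illustrate how pixel_tolerance gates the snapping step, the sketch below moves each contour point to the nearest edge pixel when one lies within tolerance, and leaves it in place otherwise. This is a brute-force simplification: the actual block selects the strongest nearby edge inside the boundary band, not simply the nearest one.

```python
import numpy as np

def snap_contour(contour, edge_mask, pixel_tolerance=10):
    """Snap each (x, y) contour point to the nearest edge pixel within
    pixel_tolerance; points with no nearby edge keep their position.
    Illustrative only; not the block's actual snapping logic."""
    ys, xs = np.nonzero(edge_mask)
    if len(xs) == 0:
        return contour.astype(np.float64)
    edge_pts = np.stack([xs, ys], axis=1).astype(np.float64)  # (x, y) order
    snapped = contour.astype(np.float64).copy()
    for i, p in enumerate(snapped):
        d2 = np.sum((edge_pts - p) ** 2, axis=1)  # squared distances
        j = np.argmin(d2)
        if d2[j] <= pixel_tolerance ** 2:
            snapped[i] = edge_pts[j]
    return snapped
```

The tolerance check is the trade-off described above: too small and real edges just outside the range are missed, too large and the snap can jump to an unrelated edge.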
Outputs¶
refined_segmentation : Same detections with snapped mask contours
edges : Single detection containing union of all surviving edge pixels (debug/visualization)
Preprocessing¶
Preprocessing is usually critical for success. This block does no preprocessing — what you feed in is what Sobel sees. For challenging imagery, chain Roboflow image-processing blocks upstream:
Gaussian Blur For grainy or noisy surfaces (welds, machined metal, biological tissue), blur before edge detection to suppress per-pixel noise. A 5x5 kernel with sigma 1.0 is a sensible default; increase to 7x7 or 9x9 for very noisy imagery. Don't over-blur — strong blur rounds off corners and softens real boundaries, leading to boundary positions that are biased inward.
Bilateral Blur Better than Gaussian when the image has both noise AND important sharp edges (e.g. textured fabric on a clean background). Slower, but preserves edges while denoising flat regions.
Contrast Enhancement Use when boundary contrast is genuinely too low to threshold reliably. The Contrast Enhancement block normalizes the histogram to use the full range, improving edge detection sensitivity without the noise amplification of aggressive methods. Follow with blur to suppress any remaining noise. Avoid on already-high-contrast images.
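For intuition, histogram normalization can be sketched as global histogram equalization in NumPy. The actual Contrast Enhancement block may use a different method; this is an illustrative sketch that assumes a non-constant uint8 grayscale image.

```python
import numpy as np

def equalize(gray):
    """Global histogram equalization of a uint8 image.
    Assumes the image is not constant-valued (cdf[-1] > cdf_min)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[np.nonzero(hist)[0][0]]  # cdf at the darkest occupied bin
    # map the occupied intensity range onto the full 0-255 range
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    return lut[gray].astype(np.uint8)
```

A low-contrast input occupying a narrow band of intensities comes out spanning the full 0-255 range, which is what raises Sobel's sensitivity at weak boundaries.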
Morphological Opening then Closing
Opening (erode then dilate) removes small bright specks and thin protrusions from the input before edge detection — useful when the surface has fine debris or hot pixels that would otherwise generate spurious edges. Closing (dilate then erode) fills small dark holes/gaps in bright regions; less commonly needed as preprocessing, since gap filling on the edge map itself is what the dilation_iterations parameter already does. Use the Morphological Transformation v2 block with the "Opening then Closing" operation for this preprocessing.
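As a quick illustration of what opening does to an input, here is a minimal NumPy sketch with a full 3x3 structuring element (in a real workflow, use the Morphological Transformation v2 block rather than hand-rolled code):

```python
import numpy as np

def dilate3(mask):
    """Binary dilation with a full 3x3 structuring element."""
    p = np.pad(mask, 1)  # pads with False
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode3(mask):
    """Binary erosion with a full 3x3 structuring element."""
    p = np.pad(mask, 1, constant_values=True)
    h, w = mask.shape
    out = np.ones_like(mask)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def opening3(mask):
    """Opening = erosion then dilation: removes specks smaller than 3x3
    while restoring the extent of larger regions."""
    return dilate3(erode3(mask))
```

An isolated speck disappears under erosion and never comes back, while a solid region shrinks by one pixel and is then restored by the dilation.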
Order matters: blur first, then contrast adjustment if needed. Reversing the order lets the contrast adjustment amplify the noise before the blur can suppress it.
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/mask_edge_snap@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| pixel_tolerance | int | Maximum perpendicular distance (pixels) from each contour point to candidate edges during snapping. Typical: 5-15 for tight predictions, 20-50 for rough ones. Too small: real edges outside range get missed. Too large: snap can wander to unrelated edges. | ✅ |
| sigma | float | Strictness multiplier for adaptive Sobel threshold (local_mean + sigma * local_std). Lower (0.1-0.5): permissive, good for low-contrast. Higher (1.0-2.0): strict, only strongest edges. Tune this AFTER other parameters. | ✅ |
| min_contour_area | float | Minimum enclosed-polygon area for edge components to keep. Small (10-50): keeps fragmented edges. Large (200-1000): aggressive noise rejection. Scales roughly with dilation_iterations. | ✅ |
| dilation_iterations | int | Morphological closing iterations to bridge gaps in thresholded edge map. Each iteration bridges ~2px gaps. 0: no closing. 1-2: hairline gaps. 3-5: visible dashes. 10+: aggressive merging. | ✅ |
| boundary_band_width | int | Half-width (pixels) of search band around segmentation contour. Sets maximum distance between predicted boundary and true boundary that can be corrected. Should generally be >= pixel_tolerance. | ✅ |
| adaptive_window_size | int | Side length of local-statistics window for adaptive threshold. Small (15-25): fine local sensitivity, can pick noise. Default 41: balanced. Large (81-121): smooth field, closer to global thresholding. Should be ~5-10% of smaller image dimension. | ✅ |
The Refs column marks whether a property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Mask Edge Snap in version v1.
- inputs:
Detections Stitch,Detections Stabilizer,Line Counter Visualization,Stability AI Outpainting,Mask Edge Snap,Gaze Detection,Image Slicer,Image Preprocessing,Instance Segmentation Model,Distance Measurement,Color Visualization,Detections Combine,Cosine Similarity,SAM2 Video Tracker,Bounding Rectangle,Ellipse Visualization,Polygon Visualization,Detection Event Log,ByteTrack Tracker,Relative Static Crop,Detections Consensus,Detections Classes Replacement,Time in Zone,Model Comparison Visualization,Trace Visualization,Camera Focus,Detection Offset,SAM 3,Instance Segmentation Model,Detections List Roll-Up,Image Threshold,Mask Area Measurement,Template Matching,Stitch Images,Heatmap Visualization,SORT Tracker,SIFT Comparison,Morphological Transformation,Halo Visualization,Detections Transformation,Instance Segmentation Model,Crop Visualization,Camera Calibration,Path Deviation,Time in Zone,Dot Visualization,OC-SORT Tracker,Path Deviation,SAM 3,Seg Preview,Icon Visualization,Detections Filter,Dynamic Zone,Image Contours,Pixelate Visualization,Line Counter,Time in Zone,Polygon Zone Visualization,Reference Path Visualization,Blur Visualization,Background Subtraction,Text Display,Stability AI Image Generation,Perspective Correction,Line Counter,Bounding Box Visualization,Velocity,Depth Estimation,Pixel Color Count,Classification Label Visualization,Image Slicer,Absolute Static Crop,Image Blur,Stability AI Inpainting,Identify Changes,Polygon Visualization,Image Convert Grayscale,SAM 3,SIFT,Label Visualization,Corner Visualization,Grid Visualization,Dynamic Crop,Contrast Equalization,Keypoint Visualization,Triangle Visualization,Per-Class Confidence Filter,QR Code Generator,Halo Visualization,Circle Visualization,Camera Focus,Segment Anything 2 Model,Mask Visualization,Morphological Transformation,Contrast Enhancement,Background Color Visualization,SIFT Comparison - outputs:
Detections Stabilizer,Detections Stitch,Roboflow Dataset Upload,Mask Edge Snap,Distance Measurement,Color Visualization,Detections Combine,SAM2 Video Tracker,Detection Event Log,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Byte Tracker,Bounding Rectangle,Byte Tracker,Time in Zone,Detections Classes Replacement,Detections Consensus,Model Comparison Visualization,Trace Visualization,Camera Focus,Roboflow Custom Metadata,Detection Offset,Detections List Roll-Up,Size Measurement,Mask Area Measurement,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,Detections Transformation,Crop Visualization,Florence-2 Model,Path Deviation,Time in Zone,Dot Visualization,OC-SORT Tracker,Path Deviation,Model Monitoring Inference Aggregator,Detections Filter,Icon Visualization,Roboflow Dataset Upload,Dynamic Zone,Pixelate Visualization,Line Counter,Time in Zone,Blur Visualization,Detections Merge,Perspective Correction,Overlap Filter,Line Counter,Velocity,Bounding Box Visualization,Byte Tracker,Stability AI Inpainting,Polygon Visualization,Roboflow Vision Events,Label Visualization,Corner Visualization,Dynamic Crop,Per-Class Confidence Filter,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,Background Color Visualization,PTZ Tracking (ONVIF)
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Mask Edge Snap in version v1 has.
Bindings
-
input
- image (image): Input image (color or grayscale) for edge detection and snapping. Can be single-channel grayscale, BGR, or BGRA. No preprocessing is applied internally; use upstream blocks for blur or contrast enhancement if needed.
- segmentation (instance_segmentation_prediction): Instance segmentation predictions with mask field populated. Each mask contour will be snapped to detected edges. If empty, segmentation is passed through unchanged. Can be a reference string like '$steps.segmentation_model.predictions' or a supervision.Detections object.
- pixel_tolerance (integer): Maximum perpendicular distance (pixels) from each contour point to candidate edges during snapping. Typical: 5-15 for tight predictions, 20-50 for rough ones. Too small: real edges outside range get missed. Too large: snap can wander to unrelated edges.
- sigma (float): Strictness multiplier for adaptive Sobel threshold (local_mean + sigma * local_std). Lower (0.1-0.5): permissive, good for low-contrast. Higher (1.0-2.0): strict, only strongest edges. Tune this AFTER other parameters.
- min_contour_area (float): Minimum enclosed-polygon area for edge components to keep. Small (10-50): keeps fragmented edges. Large (200-1000): aggressive noise rejection. Scales roughly with dilation_iterations.
- dilation_iterations (integer): Morphological closing iterations to bridge gaps in thresholded edge map. Each iteration bridges ~2px gaps. 0: no closing. 1-2: hairline gaps. 3-5: visible dashes. 10+: aggressive merging.
- boundary_band_width (integer): Half-width (pixels) of search band around segmentation contour. Sets maximum distance between predicted boundary and true boundary that can be corrected. Should generally be >= pixel_tolerance.
- adaptive_window_size (integer): Side length of local-statistics window for adaptive threshold. Small (15-25): fine local sensitivity, can pick noise. Default 41: balanced. Large (81-121): smooth field, closer to global thresholding. Should be ~5-10% of smaller image dimension.
-
output
- refined_segmentation (instance_segmentation_prediction): Input detections with mask contours snapped to detected edges, in form of sv.Detections(...) object.
- edges (instance_segmentation_prediction): Single detection containing the union of all surviving edge pixels, for debugging/visualization, in form of sv.Detections(...) object.
Example JSON definition of step Mask Edge Snap in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/mask_edge_snap@v1",
"image": "$inputs.image",
"segmentation": "$steps.segmentation_model.predictions",
"pixel_tolerance": "<block_does_not_provide_example>",
"sigma": "<block_does_not_provide_example>",
"min_contour_area": "<block_does_not_provide_example>",
"dilation_iterations": "<block_does_not_provide_example>",
"boundary_band_width": "<block_does_not_provide_example>",
"adaptive_window_size": "<block_does_not_provide_example>"
}
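Since the block provides no example parameter values, here is a hypothetical filled-in definition using values drawn from the typical ranges documented above. Only boundary_band_width (15) and adaptive_window_size (41) are documented defaults; the other numbers are illustrative choices, not block defaults.

```python
# Hypothetical step definition; numeric values are illustrative picks
# from the documented typical ranges, not block-provided examples.
step = {
    "name": "mask_edge_snap",
    "type": "roboflow_core/mask_edge_snap@v1",
    "image": "$inputs.image",
    "segmentation": "$steps.segmentation_model.predictions",
    "pixel_tolerance": 10,      # 5-15 suits tight predictions
    "sigma": 0.5,               # permissive end, for lower-contrast boundaries
    "min_contour_area": 50.0,   # keeps moderately fragmented edges
    "dilation_iterations": 2,   # bridges hairline gaps
    "boundary_band_width": 15,  # documented default; kept >= pixel_tolerance
    "adaptive_window_size": 41, # documented default
}
```

Tune sigma last, after the other parameters are settled, as the property table advises.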