Crop Visualization¶
Class: CropVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.crop.v1.CropVisualizationBlockV1
Display scaled-up, zoomed-in views of detected objects overlaid on the original image, allowing detailed inspection of small or distant objects while maintaining context with the full scene.
How This Block Works¶
This block takes an image and detection predictions and creates scaled-up, zoomed-in crops of each detected object, then displays these enlarged crops on the original image. The block:
- Takes an image and predictions as input
- Identifies detected regions from bounding boxes or segmentation masks
- Extracts the image region for each detected object (crops the object from the original image)
- Scales up each crop by the specified scale factor (e.g., 2x makes objects twice as large)
- Applies color styling to the crop border based on the selected color palette, with colors assigned by class, index, or track ID
- Positions the scaled crop on the image at the specified anchor point relative to the original detection location using Supervision's CropAnnotator
- Draws a colored border around the scaled crop with the specified thickness
- Returns an annotated image with scaled-up object crops overlaid on the original image
The block works with both object detection predictions (using bounding boxes) and instance segmentation predictions (using masks). When masks are available, it crops the exact shape of detected objects; otherwise, it crops rectangular bounding box regions. The scale factor allows you to zoom in on objects, making small or distant objects more visible and easier to inspect. The scaled crops are positioned relative to their original detection locations, allowing you to see both the zoomed-in detail and the object's position in the full scene context.
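The crop–scale–overlay steps above can be sketched in plain NumPy. This is an illustrative approximation only (the function name and the nearest-neighbour resize are ours): the real block delegates to Supervision's CropAnnotator and additionally handles masks, color lookup, and edge cases.

```python
import numpy as np

def overlay_scaled_crop(image: np.ndarray, xyxy: tuple, scale: float = 2.0,
                        border=(0, 0, 255), thickness: int = 2) -> np.ndarray:
    """Sketch: crop a detection, enlarge it, and paste it back centered on
    the detection (i.e., the CENTER anchor)."""
    x1, y1, x2, y2 = xyxy
    crop = image[y1:y2, x1:x2]
    # Nearest-neighbour resize via integer indexing (real code would use cv2.resize).
    new_h, new_w = int(crop.shape[0] * scale), int(crop.shape[1] * scale)
    rows = (np.arange(new_h) / scale).astype(int)
    cols = (np.arange(new_w) / scale).astype(int)
    scaled = crop[rows][:, cols]
    # Draw a solid border of the given thickness around the scaled crop.
    scaled[:thickness, :] = border
    scaled[-thickness:, :] = border
    scaled[:, :thickness] = border
    scaled[:, -thickness:] = border
    # Paste centered on the detection's center, clipped to the image bounds.
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    top, left = max(0, cy - new_h // 2), max(0, cx - new_w // 2)
    bottom = min(image.shape[0], top + new_h)
    right = min(image.shape[1], left + new_w)
    out = image.copy()
    out[top:bottom, left:right] = scaled[: bottom - top, : right - left]
    return out
```

With `scale=2.0`, a 20×20 detection becomes a 40×40 inset centered on the original box, which is the visual effect the block produces for each detection.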
Common Use Cases¶
- Small Object Inspection: Zoom in on small detected objects (e.g., defects, small products, distant objects) to make them more visible and easier to inspect while maintaining scene context
- Detail Visualization: Display enlarged views of detected objects for detailed analysis, quality control, or inspection workflows where fine details need to be visible
- Multi-Scale Object Display: Show both the full scene and zoomed-in object details simultaneously, useful for applications where context and detail are both important
- Quality Control and Inspection: Inspect detected defects, products, or components at higher magnification while keeping the original detection location visible for reference
- Presentation and Reporting: Create visualizations that highlight detected objects with zoomed-in views for reports, documentation, or presentations where both overview and detail are needed
- User Interface Enhancement: Provide zoomed-in object views in user interfaces, dashboards, or interactive applications where users need to see object details without losing scene context
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Label Visualization, Bounding Box Visualization, Polygon Visualization) to combine scaled crops with additional annotations for comprehensive visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with scaled crops for documentation, reporting, or analysis
- Webhook blocks to send visualized results with scaled crops to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with scaled crops as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with scaled crops for live monitoring, detailed inspection, or post-processing analysis
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/crop_visualization@v1.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| `name` | `str` | Enter a unique identifier for this step. | ❌ |
| `copy_image` | `bool` | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| `color_palette` | `str` | Select a color palette for the visualised elements. | ✅ |
| `palette_size` | `int` | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| `custom_colors` | `List[str]` | Define a list of custom colors for bounding boxes in HEX format. | ✅ |
| `color_axis` | `str` | Choose how bounding box colors are assigned. | ✅ |
| `position` | `str` | Anchor position for placing the scaled crop relative to the original detection's bounding box: `CENTER`, a corner (`TOP_LEFT`, `TOP_RIGHT`, `BOTTOM_LEFT`, `BOTTOM_RIGHT`), an edge midpoint (`TOP_CENTER`, `CENTER_LEFT`, `CENTER_RIGHT`, `BOTTOM_CENTER`), or `CENTER_OF_MASS` (center of mass of the object). | ✅ |
| `scale_factor` | `float` | Factor by which to scale (zoom) the cropped object region: `2.0` doubles the crop size, `1.0` shows it at original size, and higher values (e.g., `3.0`, `4.0`) create more zoomed-in views, useful for inspecting small or distant objects. Lower values (e.g., `1.5`) provide subtle magnification. | ✅ |
| `border_thickness` | `int` | Thickness, in pixels, of the border outline drawn around the scaled crop. Higher values create thicker, more visible borders that help distinguish the crop from the background. | ✅ |
A ✅ in the Refs column indicates that the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
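For intuition about the `position` property, the following hypothetical helper (not part of the block's API) shows how a few of the supported anchor names map to pixel coordinates on a detection's bounding box; the scaled crop is placed so that it sits at the chosen anchor point.

```python
# Hypothetical helper illustrating `position` anchor semantics on an
# (x1, y1, x2, y2) bounding box. Only a subset of the anchors is shown.
def anchor_point(xyxy, position):
    x1, y1, x2, y2 = xyxy
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # box center
    return {
        "CENTER": (cx, cy),
        "TOP_LEFT": (x1, y1),
        "BOTTOM_RIGHT": (x2, y2),
        "TOP_CENTER": (cx, y1),
        "CENTER_LEFT": (x1, cy),
    }[position]
```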
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Crop Visualization in version v1.
- inputs:
Roboflow Dataset Upload,Line Counter Visualization,Mask Edge Snap,OCR Model,Image Slicer,Gaze Detection,Instance Segmentation Model,Distance Measurement,Color Visualization,Bounding Rectangle,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Relative Static Crop,Byte Tracker,Detections Consensus,Detections Classes Replacement,Webhook Sink,Trace Visualization,Object Detection Model,Camera Focus,Stitch OCR Detections,Qwen 3.5 API,OpenAI,Buffer,SAM 3,Size Measurement,Image Threshold,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,Detections Transformation,Path Deviation,GLM-OCR,Dot Visualization,S3 Sink,Path Deviation,Twilio SMS Notification,Seg Preview,Model Monitoring Inference Aggregator,Google Gemini,Roboflow Dataset Upload,Dynamic Zone,Clip Comparison,VLM As Classifier,Pixelate Visualization,Line Counter,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,CSV Formatter,Stability AI Image Generation,Detections Merge,Perspective Correction,Overlap Filter,Anthropic Claude,Bounding Box Visualization,Velocity,Depth Estimation,Line Counter,Stability AI Inpainting,Polygon Visualization,SIFT,Roboflow Vision Events,VLM As Detector,Google Gemini,Label Visualization,Grid Visualization,Qwen3.5-VL,Contrast Equalization,Per-Class Confidence Filter,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,Slack Notification,Detections Stitch,Detections Stabilizer,Object Detection Model,Stability AI Outpainting,Email Notification,Google Gemma API,Google Vision OCR,Identify Outliers,Image Preprocessing,Google Gemini,EasyOCR,Detections Combine,Object Detection Model,Cosine Similarity,SAM2 Video Tracker,Detection Event Log,Byte Tracker,OpenAI,Anthropic Claude,Time in Zone,Model Comparison Visualization,Roboflow Custom Metadata,YOLO-World Model,Detection 
Offset,Instance Segmentation Model,Single-Label Classification Model,VLM As Classifier,Detections List Roll-Up,Template Matching,Mask Area Measurement,Stitch Images,Qwen 3.6 API,SIFT Comparison,Morphological Transformation,Instance Segmentation Model,CogVLM,Crop Visualization,Camera Calibration,Florence-2 Model,Time in Zone,OC-SORT Tracker,SAM 3,Icon Visualization,Local File Sink,Detections Filter,Image Contours,JSON Parser,Keypoint Detection Model,Time in Zone,Reference Path Visualization,Dimension Collapse,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Pixel Color Count,Identify Changes,Classification Label Visualization,Image Slicer,Absolute Static Crop,Image Blur,Byte Tracker,Multi-Label Classification Model,Image Convert Grayscale,SAM 3,OpenAI,Corner Visualization,Dynamic Crop,Moondream2,Keypoint Visualization,Keypoint Detection Model,QR Code Generator,Camera Focus,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization,PTZ Tracking (ONVIF),Stitch OCR Detections,SIFT Comparison
- outputs:
Roboflow Dataset Upload,Line Counter Visualization,Mask Edge Snap,OCR Model,Image Slicer,Gaze Detection,Qwen2.5-VL,Instance Segmentation Model,Color Visualization,Multi-Label Classification Model,Ellipse Visualization,Polygon Visualization,ByteTrack Tracker,Single-Label Classification Model,Relative Static Crop,Barcode Detection,Trace Visualization,Object Detection Model,Qwen 3.5 API,Camera Focus,OpenAI,Buffer,SAM 3,Image Threshold,Heatmap Visualization,SORT Tracker,Florence-2 Model,Halo Visualization,GLM-OCR,Dot Visualization,Semantic Segmentation Model,Seg Preview,Google Gemini,Roboflow Dataset Upload,Clip Comparison,VLM As Classifier,Pixelate Visualization,Twilio SMS/MMS Notification,Polygon Zone Visualization,Motion Detection,Blur Visualization,Background Subtraction,Text Display,Stability AI Image Generation,Perspective Correction,Anthropic Claude,Bounding Box Visualization,Depth Estimation,Stability AI Inpainting,Polygon Visualization,SmolVLM2,SIFT,Roboflow Vision Events,VLM As Detector,Google Gemini,Label Visualization,Qwen3.5-VL,Contrast Equalization,Triangle Visualization,Halo Visualization,Circle Visualization,Segment Anything 2 Model,Mask Visualization,Dominant Color,OpenAI,MoonshotAI Kimi,Llama 3.2 Vision,Email Notification,CLIP Embedding Model,Detections Stabilizer,Detections Stitch,Object Detection Model,Stability AI Outpainting,Google Gemma API,Google Vision OCR,Google Gemini,Image Preprocessing,EasyOCR,Object Detection Model,OpenAI,SAM2 Video Tracker,Byte Tracker,Anthropic Claude,Qwen3-VL,Model Comparison Visualization,YOLO-World Model,Instance Segmentation Model,Perception Encoder Embedding Model,Semantic Segmentation Model,Single-Label Classification Model,VLM As Classifier,Template Matching,Stitch Images,Qwen 3.6 API,SIFT Comparison,Morphological Transformation,Instance Segmentation Model,CogVLM,Crop Visualization,Florence-2 Model,Camera Calibration,Multi-Label Classification Model,Time in Zone,OC-SORT Tracker,SAM 3,QR Code Detection,Icon 
Visualization,Image Contours,Keypoint Detection Model,Reference Path Visualization,Anthropic Claude,Clip Comparison,VLM As Detector,LMM,Pixel Color Count,Multi-Label Classification Model,Image Slicer,Absolute Static Crop,Classification Label Visualization,Image Blur,Image Convert Grayscale,SAM 3,Single-Label Classification Model,OpenAI,Corner Visualization,Dynamic Crop,Keypoint Detection Model,Moondream2,Keypoint Visualization,Camera Focus,LMM For Classification,Morphological Transformation,Keypoint Detection Model,Contrast Enhancement,Background Color Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. The binding kinds for
Crop Visualization in version v1 are listed below.
Bindings
-
input
- `image` (`image`): The image to visualize on.
- `copy_image` (`boolean`): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
- `predictions` (`Union[object_detection_prediction, rle_instance_segmentation_prediction, keypoint_detection_prediction, instance_segmentation_prediction]`): Model predictions to visualize.
- `color_palette` (`string`): Select a color palette for the visualised elements.
- `palette_size` (`integer`): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
- `custom_colors` (`list_of_values`): Define a list of custom colors for bounding boxes in HEX format.
- `color_axis` (`string`): Choose how bounding box colors are assigned.
- `position` (`string`): Anchor position for placing the scaled crop relative to the original detection's bounding box: `CENTER`, a corner (`TOP_LEFT`, `TOP_RIGHT`, `BOTTOM_LEFT`, `BOTTOM_RIGHT`), an edge midpoint (`TOP_CENTER`, `CENTER_LEFT`, `CENTER_RIGHT`, `BOTTOM_CENTER`), or `CENTER_OF_MASS` (center of mass of the object). The scaled crop will be positioned at this anchor point relative to the original detection location.
- `scale_factor` (`float`): Factor by which to scale (zoom) the cropped object region: `2.0` doubles the crop size, `1.0` shows it at original size, and higher values (e.g., `3.0`, `4.0`) create more zoomed-in views, useful for inspecting small or distant objects. Lower values (e.g., `1.5`) provide subtle magnification.
- `border_thickness` (`integer`): Thickness, in pixels, of the border outline drawn around the scaled crop. Higher values create thicker, more visible borders that help distinguish the crop from the background.
-
output
image(image): Image in workflows.
Example JSON definition of step Crop Visualization in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/crop_visualization@v1",
"image": "$inputs.image",
"copy_image": true,
"predictions": "$steps.object_detection_model.predictions",
"color_palette": "DEFAULT",
"palette_size": 10,
"custom_colors": [
"#FF0000",
"#00FF00",
"#0000FF"
],
"color_axis": "CLASS",
"position": "CENTER",
"scale_factor": 2.0,
"border_thickness": 2
}