Crop Visualization¶
Class: CropVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.crop.v1.CropVisualizationBlockV1
Display scaled-up, zoomed-in views of detected objects overlaid on the original image, allowing detailed inspection of small or distant objects while maintaining context with the full scene.
How This Block Works¶
This block takes an image and detection predictions and creates scaled-up, zoomed-in crops of each detected object, then displays these enlarged crops on the original image. The block:
- Takes an image and predictions as input
- Identifies detected regions from bounding boxes or segmentation masks
- Extracts the image region for each detected object (crops the object from the original image)
- Scales up each crop by the specified scale factor (e.g., 2x makes objects twice as large)
- Applies color styling to the crop border based on the selected color palette, with colors assigned by class, index, or track ID
- Positions the scaled crop on the image at the specified anchor point relative to the original detection location using Supervision's CropAnnotator
- Draws a colored border around the scaled crop with the specified thickness
- Returns an annotated image with scaled-up object crops overlaid on the original image
The block works with both object detection predictions (using bounding boxes) and instance segmentation predictions (using masks). When masks are available, it crops the exact shape of detected objects; otherwise, it crops rectangular bounding box regions. The scale factor allows you to zoom in on objects, making small or distant objects more visible and easier to inspect. The scaled crops are positioned relative to their original detection locations, allowing you to see both the zoomed-in detail and the object's position in the full scene context.
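The crop-scale-overlay pipeline described above can be sketched in plain NumPy. This is a minimal illustration of the steps, not the actual Supervision CropAnnotator implementation the block uses; it assumes an integer scale factor, a single-channel image, the CENTER anchor, and a rectangular (bounding-box) crop:

```python
import numpy as np

def overlay_scaled_crop(image, box, scale_factor=2, border_thickness=2,
                        border_value=255):
    """Illustrative re-implementation of the block's core steps:
    crop a detection, upscale it, draw a border, and paste it back
    centered on the detection's bounding box (CENTER anchor)."""
    x1, y1, x2, y2 = box
    crop = image[y1:y2, x1:x2]                          # 1. extract the detected region
    k = int(scale_factor)                               # 2. nearest-neighbor upscale
    scaled = crop.repeat(k, axis=0).repeat(k, axis=1)
    scaled[:border_thickness, :] = border_value         # 3. draw a border around the crop
    scaled[-border_thickness:, :] = border_value
    scaled[:, :border_thickness] = border_value
    scaled[:, -border_thickness:] = border_value
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2             # 4. paste, centered on the box
    h, w = scaled.shape[:2]
    top, left = max(cy - h // 2, 0), max(cx - w // 2, 0)
    out = image.copy()                                  # leave the input image untouched
    out[top:top + h, left:left + w] = scaled[:image.shape[0] - top,
                                             :image.shape[1] - left]
    return out
```

With a 10x10 detection and `scale_factor=2`, the overlay drawn on the output is a 20x20 bordered patch, while the output image keeps the input's dimensions.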
Common Use Cases¶
- Small Object Inspection: Zoom in on small detected objects (e.g., defects, small products, distant objects) to make them more visible and easier to inspect while maintaining scene context
- Detail Visualization: Display enlarged views of detected objects for detailed analysis, quality control, or inspection workflows where fine details need to be visible
- Multi-Scale Object Display: Show both the full scene and zoomed-in object details simultaneously, useful for applications where context and detail are both important
- Quality Control and Inspection: Inspect detected defects, products, or components at higher magnification while keeping the original detection location visible for reference
- Presentation and Reporting: Create visualizations that highlight detected objects with zoomed-in views for reports, documentation, or presentations where both overview and detail are needed
- User Interface Enhancement: Provide zoomed-in object views in user interfaces, dashboards, or interactive applications where users need to see object details without losing scene context
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Label Visualization, Bounding Box Visualization, Polygon Visualization) to combine scaled crops with additional annotations for comprehensive visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with scaled crops for documentation, reporting, or analysis
- Webhook blocks to send visualized results with scaled crops to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with scaled crops as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with scaled crops for live monitoring, detailed inspection, or post-processing analysis
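For example, chaining into another visualization block means feeding this block's output image in as the next block's input image, so later annotations are drawn on top of the scaled crops. The fragment below is an illustrative sketch following this page's conventions; the step names and the `roboflow_core/label_visualization@v1` identifier are assumptions, not taken from this page:

```json
[
  {
    "name": "crop_visualization",
    "type": "roboflow_core/crop_visualization@v1",
    "image": "$inputs.image",
    "predictions": "$steps.object_detection_model.predictions"
  },
  {
    "name": "label_visualization",
    "type": "roboflow_core/label_visualization@v1",
    "image": "$steps.crop_visualization.image",
    "predictions": "$steps.object_detection_model.predictions"
  }
]
```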
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/crop_visualization@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| color_palette | str | Select a color palette for the visualized elements. | ✅ |
| palette_size | int | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| custom_colors | List[str] | Define a list of custom colors for bounding boxes in HEX format. | ✅ |
| color_axis | str | Choose how bounding box colors are assigned. | ✅ |
| position | str | Anchor position for placing the scaled crop relative to the original detection's bounding box. Options include: CENTER (center of box), corners (TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT), edge midpoints (TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, BOTTOM_CENTER), or CENTER_OF_MASS (center of mass of the object). The scaled crop will be positioned at this anchor point relative to the original detection location. | ✅ |
| scale_factor | float | Factor by which to scale (zoom) the cropped object region. A factor of 2.0 doubles the size of the crop, making objects twice as large. A factor of 1.0 shows the crop at original size. Higher values (e.g., 3.0, 4.0) create more zoomed-in views, useful for inspecting small or distant objects. Lower values (e.g., 1.5) provide subtle magnification. | ✅ |
| border_thickness | int | Thickness of the border outline around the scaled crop in pixels. Higher values create thicker, more visible borders that help distinguish the scaled crop from the background. | ✅ |
The Refs column indicates whether the property can be parametrized with dynamic values available at workflow runtime. See Bindings for more info.
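To make the scale_factor, custom_colors, and color_axis semantics concrete, here is a minimal sketch. It illustrates the behavior documented above; the helper names are hypothetical, and the palette-cycling detail is an assumption rather than the block's actual code:

```python
def scaled_crop_size(box_w, box_h, scale_factor):
    # A crop's on-image footprint grows linearly with scale_factor:
    # a 64x48 box drawn at scale_factor=2.0 becomes a 128x96 overlay.
    return int(box_w * scale_factor), int(box_h * scale_factor)

def border_color(palette, detection_index, class_id, color_axis="CLASS"):
    # The lookup key depends on color_axis: CLASS keys by class id,
    # INDEX keys by the detection's position in the results.
    # Colors are assumed to cycle when keys outnumber palette entries.
    key = class_id if color_axis == "CLASS" else detection_index
    return palette[key % len(palette)]

palette = ["#FF0000", "#00FF00", "#0000FF"]  # custom_colors in HEX format
print(scaled_crop_size(64, 48, 2.0))                          # (128, 96)
print(border_color(palette, detection_index=3, class_id=1))   # "#00FF00"
```

With color_axis="CLASS", every detection of class 1 shares "#00FF00"; with color_axis="INDEX", the same detection (index 3) would instead cycle back to "#FF0000".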
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Crop Visualization in version v1.
- inputs: Image Threshold, Email Notification, Corner Visualization, Roboflow Dataset Upload, Object Detection Model, Stitch OCR Detections, Gaze Detection, Dimension Collapse, Stability AI Image Generation, Time in Zone, Grid Visualization, Dynamic Crop, Image Slicer, Image Preprocessing, Instance Segmentation Model, SIFT, Line Counter Visualization, Detections Combine, Trace Visualization, Halo Visualization, ByteTrack Tracker, Roboflow Custom Metadata, Pixelate Visualization, Circle Visualization, S3 Sink, Detections Classes Replacement, Keypoint Detection Model, Twilio SMS Notification, Halo Visualization, SIFT Comparison, Anthropic Claude, OC-SORT Tracker, Polygon Visualization, Detections Consensus, Cosine Similarity, Identify Changes, Crop Visualization, Roboflow Dataset Upload, Mask Visualization, Detection Offset, Heatmap Visualization, Webhook Sink, Detections List Roll-Up, Google Vision OCR, Florence-2 Model, Florence-2 Model, VLM As Classifier, Overlap Filter, Anthropic Claude, OpenAI, VLM As Detector, OpenAI, PTZ Tracking (ONVIF), Bounding Rectangle, Background Color Visualization, Template Matching, Anthropic Claude, Background Subtraction, SIFT Comparison, Multi-Label Classification Model, Keypoint Visualization, Time in Zone, Detections Filter, Stitch OCR Detections, LMM, Detections Merge, Detections Transformation, Identify Outliers, SAM 3, Motion Detection, Dynamic Zone, Seg Preview, Single-Label Classification Model, Object Detection Model, Roboflow Vision Events, VLM As Classifier, Detections Stitch, Triangle Visualization, Distance Measurement, Google Gemini, Path Deviation, Image Slicer, Image Contours, Model Comparison Visualization, Stability AI Outpainting, Stitch Images, Image Blur, Ellipse Visualization, OpenAI, Time in Zone, Depth Estimation, EasyOCR, Absolute Static Crop, JSON Parser, CogVLM, Google Gemini, Velocity, Relative Static Crop, Morphological Transformation, LMM For Classification, Detection Event Log, Dot Visualization, GLM-OCR, Model Monitoring Inference Aggregator, Keypoint Detection Model, Pixel Color Count, Image Convert Grayscale, Icon Visualization, QR Code Generator, Detections Stabilizer, Camera Focus, SAM 3, OCR Model, Text Display, Reference Path Visualization, Instance Segmentation Model, Llama 3.2 Vision, CSV Formatter, SORT Tracker, Byte Tracker, Label Visualization, Classification Label Visualization, Byte Tracker, Segment Anything 2 Model, Polygon Zone Visualization, Stability AI Inpainting, Google Gemini, SAM 3, Perspective Correction, Camera Calibration, Qwen3.5-VL, Size Measurement, Email Notification, Contrast Equalization, Line Counter, Path Deviation, Byte Tracker, Line Counter, Color Visualization, OpenAI, Local File Sink, Mask Area Measurement, Twilio SMS/MMS Notification, YOLO-World Model, Clip Comparison, Clip Comparison, Buffer, Blur Visualization, Bounding Box Visualization, Camera Focus, Polygon Visualization, Moondream2, VLM As Detector, Slack Notification
- outputs: Stitch Images, Image Threshold, Corner Visualization, Image Blur, Ellipse Visualization, OpenAI, Roboflow Dataset Upload, Object Detection Model, Barcode Detection, Depth Estimation, Gaze Detection, EasyOCR, Absolute Static Crop, Multi-Label Classification Model, CogVLM, Google Gemini, Stability AI Image Generation, Time in Zone, Dynamic Crop, Instance Segmentation Model, Image Slicer, Image Preprocessing, SIFT, Relative Static Crop, Morphological Transformation, Line Counter Visualization, Trace Visualization, LMM For Classification, Halo Visualization, Dot Visualization, ByteTrack Tracker, GLM-OCR, Keypoint Detection Model, Pixel Color Count, Pixelate Visualization, Circle Visualization, Semantic Segmentation Model, Image Convert Grayscale, Icon Visualization, Keypoint Detection Model, Detections Stabilizer, Halo Visualization, Camera Focus, Anthropic Claude, SAM 3, OC-SORT Tracker, OCR Model, Polygon Visualization, Text Display, Qwen2.5-VL, Reference Path Visualization, Instance Segmentation Model, Llama 3.2 Vision, Qwen3-VL, Crop Visualization, Roboflow Dataset Upload, Mask Visualization, Moondream2, CLIP Embedding Model, SORT Tracker, Heatmap Visualization, Label Visualization, Google Vision OCR, Byte Tracker, Classification Label Visualization, Florence-2 Model, Segment Anything 2 Model, Florence-2 Model, VLM As Detector, Polygon Zone Visualization, Stability AI Inpainting, VLM As Classifier, Google Gemini, SAM 3, Perspective Correction, Anthropic Claude, OpenAI, Camera Calibration, VLM As Detector, OpenAI, Qwen3.5-VL, Template Matching, Background Color Visualization, Anthropic Claude, Email Notification, Background Subtraction, SIFT Comparison, Contrast Equalization, Multi-Label Classification Model, Keypoint Visualization, LMM, SmolVLM2, Single-Label Classification Model, Perception Encoder Embedding Model, SAM 3, Motion Detection, Seg Preview, Single-Label Classification Model, Color Visualization, Object Detection Model, OpenAI, Dominant Color, Roboflow Vision Events, QR Code Detection, VLM As Classifier, Detections Stitch, Clip Comparison, YOLO-World Model, Triangle Visualization, Buffer, Clip Comparison, Twilio SMS/MMS Notification, Bounding Box Visualization, Blur Visualization, Camera Focus, Polygon Visualization, Google Gemini, Image Slicer, Image Contours, Model Comparison Visualization, Stability AI Outpainting
Input and Output Bindings¶
The connections available to this block depend on its binding kinds. Check what binding kinds Crop Visualization in version v1 has.
Bindings
- input:
    - image (image): The image to visualize on.
    - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
    - predictions (Union[keypoint_detection_prediction, rle_instance_segmentation_prediction, instance_segmentation_prediction, object_detection_prediction]): Model predictions to visualize.
    - color_palette (string): Select a color palette for the visualized elements.
    - palette_size (integer): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
    - custom_colors (list_of_values): Define a list of custom colors for bounding boxes in HEX format.
    - color_axis (string): Choose how bounding box colors are assigned.
    - position (string): Anchor position for placing the scaled crop relative to the original detection's bounding box. Options include: CENTER (center of box), corners (TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT), edge midpoints (TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, BOTTOM_CENTER), or CENTER_OF_MASS (center of mass of the object). The scaled crop will be positioned at this anchor point relative to the original detection location.
    - scale_factor (float): Factor by which to scale (zoom) the cropped object region. A factor of 2.0 doubles the size of the crop, making objects twice as large. A factor of 1.0 shows the crop at original size. Higher values (e.g., 3.0, 4.0) create more zoomed-in views, useful for inspecting small or distant objects. Lower values (e.g., 1.5) provide subtle magnification.
    - border_thickness (integer): Thickness of the border outline around the scaled crop in pixels. Higher values create thicker, more visible borders that help distinguish the scaled crop from the background.
- output:
    - image (image): Image in workflows.
Example JSON definition of step Crop Visualization in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/crop_visualization@v1",
"image": "$inputs.image",
"copy_image": true,
"predictions": "$steps.object_detection_model.predictions",
"color_palette": "DEFAULT",
"palette_size": 10,
"custom_colors": [
"#FF0000",
"#00FF00",
"#0000FF"
],
"color_axis": "CLASS",
"position": "CENTER",
"scale_factor": 2.0,
"border_thickness": 2
}