Crop Visualization¶
Class: CropVisualizationBlockV1
Source: inference.core.workflows.core_steps.visualizations.crop.v1.CropVisualizationBlockV1
Display scaled-up, zoomed-in views of detected objects overlaid on the original image, allowing detailed inspection of small or distant objects while maintaining context with the full scene.
How This Block Works¶
This block takes an image and detection predictions and creates scaled-up, zoomed-in crops of each detected object, then displays these enlarged crops on the original image. The block:
- Takes an image and predictions as input
- Identifies detected regions from bounding boxes or segmentation masks
- Extracts the image region for each detected object (crops the object from the original image)
- Scales up each crop by the specified scale factor (e.g., 2x makes objects twice as large)
- Applies color styling to the crop border based on the selected color palette, with colors assigned by class, index, or track ID
- Positions the scaled crop on the image at the specified anchor point relative to the original detection location using Supervision's CropAnnotator
- Draws a colored border around the scaled crop with the specified thickness
- Returns an annotated image with scaled-up object crops overlaid on the original image
The block works with both object detection predictions (using bounding boxes) and instance segmentation predictions (using masks). When masks are available, it crops the exact shape of detected objects; otherwise, it crops rectangular bounding box regions. The scale factor allows you to zoom in on objects, making small or distant objects more visible and easier to inspect. The scaled crops are positioned relative to their original detection locations, allowing you to see both the zoomed-in detail and the object's position in the full scene context.
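Under the hood this behaviour corresponds to Supervision's CropAnnotator, which the block wraps. The hedged sketch below reproduces the same effect outside of Workflows; the model ID, image path, and output file name are placeholder assumptions, not values used by the block itself:

```python
# Minimal sketch of crop-and-scale annotation using supervision's CropAnnotator
# directly. The model ID and file paths are placeholders.
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread("scene.jpg")                      # original scene
model = get_model(model_id="yolov8n-640")            # any detection model
detections = sv.Detections.from_inference(model.infer(image)[0])

crop_annotator = sv.CropAnnotator(
    border_color=sv.ColorPalette.DEFAULT,            # palette for crop borders
    border_color_lookup=sv.ColorLookup.CLASS,        # color by class / index / track
    position=sv.Position.TOP_CENTER,                 # anchor for the scaled crop
    scale_factor=2.0,                                 # 2x zoom on each detection
    border_thickness=2,                               # border width in pixels
)

annotated = crop_annotator.annotate(scene=image.copy(), detections=detections)
cv2.imwrite("annotated.jpg", annotated)
```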
Common Use Cases¶
- Small Object Inspection: Zoom in on small detected objects (e.g., defects, small products, distant objects) to make them more visible and easier to inspect while maintaining scene context
- Detail Visualization: Display enlarged views of detected objects for detailed analysis, quality control, or inspection workflows where fine details need to be visible
- Multi-Scale Object Display: Show both the full scene and zoomed-in object details simultaneously, useful for applications where context and detail are both important
- Quality Control and Inspection: Inspect detected defects, products, or components at higher magnification while keeping the original detection location visible for reference
- Presentation and Reporting: Create visualizations that highlight detected objects with zoomed-in views for reports, documentation, or presentations where both overview and detail are needed
- User Interface Enhancement: Provide zoomed-in object views in user interfaces, dashboards, or interactive applications where users need to see object details without losing scene context
Connecting to Other Blocks¶
The annotated image from this block can be connected to:
- Other visualization blocks (e.g., Label Visualization, Bounding Box Visualization, Polygon Visualization) to combine scaled crops with additional annotations for comprehensive visualization
- Data storage blocks (e.g., Local File Sink, CSV Formatter, Roboflow Dataset Upload) to save images with scaled crops for documentation, reporting, or analysis
- Webhook blocks to send visualized results with scaled crops to external systems, APIs, or web applications for display in dashboards or monitoring tools
- Notification blocks (e.g., Email Notification, Slack Notification) to send annotated images with scaled crops as visual evidence in alerts or reports
- Video output blocks to create annotated video streams or recordings with scaled crops for live monitoring, detailed inspection, or post-processing analysis
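For illustration, a hedged sketch of such a chain is shown below; the step names, the detection step, and the Label Visualization block version are assumptions for the example. The Label Visualization step draws its annotations on top of the image produced by Crop Visualization by referencing $steps.crop_visualization.image:

```json
{
  "steps": [
    {
      "type": "roboflow_core/crop_visualization@v1",
      "name": "crop_visualization",
      "image": "$inputs.image",
      "predictions": "$steps.object_detection_model.predictions",
      "scale_factor": 2.0
    },
    {
      "type": "roboflow_core/label_visualization@v1",
      "name": "label_visualization",
      "image": "$steps.crop_visualization.image",
      "predictions": "$steps.object_detection_model.predictions"
    }
  ]
}
```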
Type identifier¶
Use the following identifier in the step "type" field: roboflow_core/crop_visualization@v1 to add the block as a step in your workflow.
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| copy_image | bool | Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations. | ✅ |
| color_palette | str | Select a color palette for the visualised elements. | ✅ |
| palette_size | int | Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes. | ✅ |
| custom_colors | List[str] | Define a list of custom colors for bounding boxes in HEX format. | ✅ |
| color_axis | str | Choose how bounding box colors are assigned. | ✅ |
| position | str | Anchor position for placing the scaled crop relative to the original detection's bounding box. Options include CENTER (center of box), corners (TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT), edge midpoints (TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, BOTTOM_CENTER), and CENTER_OF_MASS (center of mass of the object). The scaled crop is positioned at this anchor point relative to the original detection location. | ✅ |
| scale_factor | float | Factor by which to scale (zoom) the cropped object region. A factor of 2.0 doubles the size of the crop, making objects twice as large; a factor of 1.0 shows the crop at original size. Higher values (e.g., 3.0, 4.0) create more zoomed-in views, useful for inspecting small or distant objects, while lower values (e.g., 1.5) provide subtle magnification. | ✅ |
| border_thickness | int | Thickness of the border outline around the scaled crop in pixels. Higher values create thicker, more visible borders that help distinguish the scaled crop from the background. | ✅ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
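For example, any property marked ✅ can be bound to a workflow input instead of a literal value. In the hedged sketch below, zoom_level is an assumed workflow parameter name supplied at runtime:

```json
{
  "name": "crop_visualization",
  "type": "roboflow_core/crop_visualization@v1",
  "image": "$inputs.image",
  "predictions": "$steps.object_detection_model.predictions",
  "scale_factor": "$inputs.zoom_level",
  "border_thickness": 2
}
```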
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Crop Visualization in version v1.
- inputs: Image Convert Grayscale, Image Slicer, Image Blur, Ellipse Visualization, Halo Visualization, Camera Focus, Detection Offset, Line Counter, Detection Event Log, Stability AI Inpainting, Reference Path Visualization, OpenAI, Slack Notification, Circle Visualization, Background Subtraction, Stability AI Image Generation, Roboflow Dataset Upload, VLM as Classifier, YOLO-World Model, LMM For Classification, Pixel Color Count, Clip Comparison, Detections Merge, Anthropic Claude, Line Counter, Pixelate Visualization, Buffer, Byte Tracker, Email Notification, Image Contours, JSON Parser, Stitch OCR Detections, VLM as Detector, Relative Static Crop, Detections Consensus, Camera Focus, Byte Tracker, Multi-Label Classification Model, LMM, Dot Visualization, Anthropic Claude, Stitch OCR Detections, Dimension Collapse, Keypoint Visualization, Anthropic Claude, Trace Visualization, Detections Transformation, Crop Visualization, Absolute Static Crop, Google Gemini, Segment Anything 2 Model, Byte Tracker, Overlap Filter, Image Preprocessing, Gaze Detection, Instance Segmentation Model, Identify Changes, Perspective Correction, Email Notification, Motion Detection, Cosine Similarity, Halo Visualization, VLM as Classifier, SIFT Comparison, Path Deviation, Local File Sink, EasyOCR, Depth Estimation, CogVLM, Polygon Visualization, OpenAI, QR Code Generator, Bounding Box Visualization, Size Measurement, Corner Visualization, Label Visualization, Clip Comparison, CSV Formatter, SIFT Comparison, Florence-2 Model, OCR Model, Google Gemini, Webhook Sink, Single-Label Classification Model, Contrast Equalization, Stability AI Outpainting, Stitch Images, Model Comparison Visualization, Detections Filter, Distance Measurement, Polygon Visualization, Object Detection Model, Detections Stabilizer, OpenAI, Detections List Roll-Up, Path Deviation, Icon Visualization, Twilio SMS/MMS Notification, Model Monitoring Inference Aggregator, Object Detection Model, Color Visualization, SAM 3, Mask Visualization, Roboflow Dataset Upload, Time in Zone, Detections Classes Replacement, Image Slicer, Template Matching, OpenAI, Bounding Rectangle, Instance Segmentation Model, Keypoint Detection Model, Dynamic Zone, Google Gemini, Text Display, Blur Visualization, Roboflow Custom Metadata, Triangle Visualization, Google Vision OCR, Identify Outliers, Detections Combine, Classification Label Visualization, SAM 3, Image Threshold, PTZ Tracking (ONVIF), Camera Calibration, Time in Zone, Background Color Visualization, Seg Preview, Polygon Zone Visualization, Grid Visualization, Dynamic Crop, Keypoint Detection Model, SAM 3, Line Counter Visualization, Florence-2 Model, Time in Zone, Detections Stitch, Moondream2, Twilio SMS Notification, SIFT, Morphological Transformation, Velocity, Llama 3.2 Vision, VLM as Detector
- outputs: Corner Visualization, Image Convert Grayscale, Label Visualization, Clip Comparison, Image Slicer, SmolVLM2, Image Blur, Florence-2 Model, SIFT Comparison, Google Gemini, OCR Model, Ellipse Visualization, Halo Visualization, Single-Label Classification Model, Stability AI Outpainting, Contrast Equalization, Perception Encoder Embedding Model, Qwen3-VL, Camera Focus, Stitch Images, Model Comparison Visualization, Polygon Visualization, Object Detection Model, Stability AI Inpainting, Reference Path Visualization, OpenAI, Detections Stabilizer, OpenAI, Circle Visualization, Background Subtraction, Roboflow Dataset Upload, Stability AI Image Generation, Icon Visualization, LMM For Classification, YOLO-World Model, VLM as Classifier, Pixel Color Count, Twilio SMS/MMS Notification, Object Detection Model, Multi-Label Classification Model, Color Visualization, Clip Comparison, SAM 3, Barcode Detection, Mask Visualization, Roboflow Dataset Upload, Anthropic Claude, Image Slicer, Template Matching, Buffer, Pixelate Visualization, OpenAI, Byte Tracker, CLIP Embedding Model, Instance Segmentation Model, Keypoint Detection Model, Email Notification, Image Contours, Google Gemini, Text Display, Blur Visualization, Triangle Visualization, Google Vision OCR, VLM as Detector, Relative Static Crop, Llama 3.2 Vision, Camera Focus, SAM 3, Classification Label Visualization, Multi-Label Classification Model, Image Threshold, LMM, Dot Visualization, Anthropic Claude, Camera Calibration, Background Color Visualization, Qwen2.5-VL, Dominant Color, Seg Preview, Polygon Zone Visualization, Keypoint Visualization, Anthropic Claude, Dynamic Crop, Keypoint Detection Model, Trace Visualization, SAM 3, Crop Visualization, Absolute Static Crop, Line Counter Visualization, Florence-2 Model, Google Gemini, Time in Zone, Detections Stitch, Segment Anything 2 Model, Moondream2, Image Preprocessing, Gaze Detection, Instance Segmentation Model, SIFT, Perspective Correction, Motion Detection, Halo Visualization, VLM as Classifier, EasyOCR, Depth Estimation, CogVLM, Morphological Transformation, OpenAI, Polygon Visualization, Single-Label Classification Model, QR Code Detection, VLM as Detector, Bounding Box Visualization
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check which binding kinds
Crop Visualization in version v1 has.
Bindings
- input
    - image (image): The image to visualize on.
    - copy_image (boolean): Enable this option to create a copy of the input image for visualization, preserving the original. Use this when stacking multiple visualizations.
    - predictions (Union[instance_segmentation_prediction, rle_instance_segmentation_prediction, object_detection_prediction, keypoint_detection_prediction]): Model predictions to visualize.
    - color_palette (string): Select a color palette for the visualised elements.
    - palette_size (integer): Specify the number of colors in the palette. This applies when using custom or Matplotlib palettes.
    - custom_colors (list_of_values): Define a list of custom colors for bounding boxes in HEX format.
    - color_axis (string): Choose how bounding box colors are assigned.
    - position (string): Anchor position for placing the scaled crop relative to the original detection's bounding box. Options include CENTER (center of box), corners (TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT), edge midpoints (TOP_CENTER, CENTER_LEFT, CENTER_RIGHT, BOTTOM_CENTER), and CENTER_OF_MASS (center of mass of the object). The scaled crop is positioned at this anchor point relative to the original detection location.
    - scale_factor (float): Factor by which to scale (zoom) the cropped object region. A factor of 2.0 doubles the size of the crop, making objects twice as large; a factor of 1.0 shows the crop at original size. Higher values (e.g., 3.0, 4.0) create more zoomed-in views, useful for inspecting small or distant objects, while lower values (e.g., 1.5) provide subtle magnification.
    - border_thickness (integer): Thickness of the border outline around the scaled crop in pixels. Higher values create thicker, more visible borders that help distinguish the scaled crop from the background.
- output
    - image (image): Image in workflows.
Example JSON definition of step Crop Visualization in version v1:

```json
{
    "name": "<your_step_name_here>",
    "type": "roboflow_core/crop_visualization@v1",
    "image": "$inputs.image",
    "copy_image": true,
    "predictions": "$steps.object_detection_model.predictions",
    "color_palette": "DEFAULT",
    "palette_size": 10,
    "custom_colors": [
        "#FF0000",
        "#00FF00",
        "#0000FF"
    ],
    "color_axis": "CLASS",
    "position": "CENTER",
    "scale_factor": 2.0,
    "border_thickness": 2
}
```
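As a usage sketch, a workflow containing this step can be executed with the inference SDK; the API URL, API key, workspace name, and workflow ID below are placeholders for your own values:

```python
# Hedged sketch: run a saved workflow containing this step via the inference SDK.
# The API URL, API key, workspace name, and workflow ID are placeholders.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://serverless.roboflow.com",
    api_key="<YOUR_API_KEY>",
)

result = client.run_workflow(
    workspace_name="<your_workspace>",
    workflow_id="<your_workflow_id>",
    images={"image": "path/to/image.jpg"},
)

# The annotated image produced by the Crop Visualization step is returned under
# whatever output name is configured in the workflow definition.
print(result)
```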