Camera Focus¶
v2¶
Class: CameraFocusBlockV2 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.classical_cv.camera_focus.v2.CameraFocusBlockV2
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Calculate focus quality scores using the Tenengrad focus measure (Sobel gradient magnitudes) to assess image sharpness, detect blur, and evaluate camera focus performance. The block supports auto-focus systems, image quality assessment, and per-region focus measures for detected objects, and provides visualization overlays (zebra pattern exposure warnings, focus peaking, heads-up display, composition grid, and center marker) for professional camera control and image analysis workflows.
How This Block Works¶
This block calculates the Tenengrad focus measure, which quantifies image sharpness by measuring gradient magnitudes using Sobel operators. The block:
- Receives an input image (color or grayscale, automatically converts color to grayscale for processing)
- Optionally receives detection bounding boxes to compute focus measures within specific regions
- Converts the image to grayscale if it's in color format (Tenengrad measure works on single-channel images)
- Calculates horizontal and vertical Sobel gradients:
- Applies Sobel operator in horizontal direction (gradient X) to detect vertical edges
- Applies Sobel operator in vertical direction (gradient Y) to detect horizontal edges
- Uses 3x3 Sobel kernels for gradient computation
- Computes the squared gradient magnitude gx² + gy² at each pixel (the square root of the true magnitude sqrt(gx² + gy²) is skipped for efficiency; it does not change the ranking of focus values)
- Calculates the focus measure:
- Squares the horizontal and vertical gradient components
- Sums the squared gradients to create a focus measure matrix
- Higher values indicate stronger edges and finer detail (sharper, more focused regions)
- Lower values indicate weaker edges and less detail (blurrier, less focused regions)
- Computes overall focus value:
- Calculates mean of focus measure matrix across entire image
- Returns a single numerical focus score for the whole image
- Computes per-region focus measures (if detections provided):
- Extracts bounding box coordinates from detection predictions
- Clips bounding boxes to image boundaries
- Calculates mean focus measure within each bounding box region
- Returns a list of focus values, one per detection region
- Applies optional visualization overlays:
- Zebra Pattern Warnings: Diagonal stripe overlay on under/overexposed regions (blue for underexposed, red for overexposed) to identify exposure issues
- Focus Peaking: Green overlay highlighting in-focus areas (regions above focus threshold) to visualize sharp regions
- Heads-Up Display (HUD): Semi-transparent overlay showing focus value, brightness histogram (for each color channel and grayscale), and exposure information in top-left corner
- Composition Grid: Overlay grid lines for composition assistance (2x2, 3x3 rule of thirds, 4x4, or 5x5 divisions)
- Center Marker: Crosshair marker at frame center for alignment and framing reference
- Preserves image structure and metadata
- Returns the visualization image (if overlays enabled), overall focus measure value, and per-bounding-box focus measures list
The Tenengrad focus measure quantifies image sharpness by analyzing edge strength and gradient magnitudes. In-focus images contain many sharp edges with strong gradients, resulting in high Tenengrad scores. Out-of-focus images have blurred edges with weak gradients, resulting in low Tenengrad scores. The measure uses Sobel operators to compute gradients efficiently and is robust to noise. Higher Tenengrad values indicate better focus, with typical ranges varying based on image content, resolution, and edge density. The visualization overlays provide professional camera control aids, helping identify focus issues, exposure problems, and composition opportunities in real-time or during analysis.
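The computation described above can be sketched in NumPy. This is a minimal illustration of the Tenengrad measure and per-box averaging, not the block's actual source; the helper names are hypothetical:

```python
import numpy as np

# Minimal Tenengrad sketch (hypothetical helper names; not the block's source).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T  # vertical Sobel kernel detects horizontal edges

def tenengrad_map(gray: np.ndarray) -> np.ndarray:
    """Per-pixel focus measure: gx**2 + gy**2 from 3x3 Sobel gradients."""
    h, w = gray.shape
    padded = np.pad(gray.astype(np.float64), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            window = padded[i:i + h, j:j + w]
            gx += SOBEL_X[i, j] * window
            gy += SOBEL_Y[i, j] * window
    return gx ** 2 + gy ** 2

def bbox_focus_measures(focus_map: np.ndarray, boxes) -> list:
    """Mean focus measure per (x1, y1, x2, y2) box, clipped to the image."""
    h, w = focus_map.shape
    means = []
    for x1, y1, x2, y2 in boxes:
        x1, x2 = max(0, x1), min(w, x2)
        y1, y2 = max(0, y1), min(h, y2)
        means.append(float(focus_map[y1:y2, x1:x2].mean()))
    return means

# A sharp vertical edge scores higher than a flat (zero-gradient) region.
sharp = np.zeros((16, 16)); sharp[:, 8:] = 255.0
flat = np.full((16, 16), 128.0)
print(tenengrad_map(sharp).mean() > tenengrad_map(flat).mean())  # True
```

The overall focus_measure corresponds to `tenengrad_map(gray).mean()`, and a box that contains the edge scores higher than one over the flat area, mirroring the per-detection output.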
Common Use Cases¶
- Auto-Focus Systems: Assess focus quality to enable automatic camera focus adjustment with per-region focus analysis (e.g., evaluate focus during auto-focus operations, detect optimal focus position for specific objects, trigger focus adjustments based on Tenengrad scores), enabling advanced auto-focus workflows
- Image Quality Assessment: Evaluate image sharpness and detect blurry images with visualization overlays for quality control (e.g., assess image quality in capture pipelines with HUD display, detect out-of-focus images with focus peaking, filter low-quality images using focus thresholds), enabling comprehensive quality assessment workflows
- Professional Camera Control: Provide real-time focus and exposure feedback for manual camera operation (e.g., display focus peaking for manual focus, show zebra warnings for exposure adjustment, use composition grid for framing), enabling professional camera control workflows
- Object-Specific Focus Analysis: Evaluate focus quality for specific detected objects within images (e.g., assess focus on detected objects, analyze focus per bounding box region, optimize focus for specific object classes), enabling object-focused analysis workflows
- Camera Calibration: Evaluate focus performance during camera setup and calibration with comprehensive visualization (e.g., assess focus during camera calibration with overlays, optimize focus settings using HUD feedback, evaluate camera performance with visualization aids), enabling enhanced camera calibration workflows
- Video Focus Tracking: Monitor focus quality across video frames with per-object focus measures (e.g., track focus for moving objects, monitor focus quality in video streams, analyze focus consistency across frames), enabling video focus tracking workflows
Connecting to Other Blocks¶
This block receives an image (and optionally detections) and produces a visualization image, overall focus_measure float value, and bbox_focus_measures list:
- After object detection or instance segmentation blocks to compute focus measures for detected objects (e.g., assess focus on detected objects, analyze focus per detection region, evaluate object-specific focus quality), enabling detection-to-focus workflows
- After image capture or preprocessing blocks to assess focus quality of captured or processed images (e.g., evaluate focus after image capture, assess sharpness after preprocessing with visualization, measure focus in image pipelines with overlays), enabling enhanced focus assessment workflows
- Before logic blocks like Continue If to make decisions based on focus quality (e.g., continue if focus is good, filter images based on Tenengrad scores, make decisions using focus measures or per-object focus values), enabling focus-based decision workflows
- Before analysis blocks to assess image quality before analysis (e.g., evaluate focus before analysis with HUD display, assess sharpness for processing, measure quality before analysis), enabling quality-based analysis workflows
- In auto-focus systems where focus measurement is part of a feedback loop with per-object analysis (e.g., measure focus for auto-focus with object prioritization, assess focus in feedback systems, evaluate focus in control loops), enabling advanced auto-focus system workflows
- Before visualization blocks to display focus quality information (e.g., visualize focus scores with overlays, display focus measures, show focus quality with professional camera aids), enabling comprehensive focus visualization workflows
Version Differences¶
Enhanced from v1:
- Different Focus Algorithm: Uses Tenengrad focus measure (Sobel gradient magnitudes) instead of Brenner measure, providing more robust edge detection and focus assessment
- Visualization Overlays: Includes comprehensive visualization features including zebra pattern exposure warnings, focus peaking (green highlight on sharp areas), heads-up display with focus values and brightness histogram, composition grid overlays (2x2, 3x3, 4x4, 5x5), and center crosshair marker for professional camera control
- Per-Region Focus Analysis: Supports optional detection bounding boxes to compute focus measures within specific object regions, enabling object-specific focus assessment
- Enhanced Outputs: Returns three outputs - visualization image, overall focus_measure float, and bbox_focus_measures list (per-detection focus values)
- Configurable Visualization: All visualization overlays are configurable (zebra warnings, HUD, focus peaking, grid, center marker can be enabled/disabled independently)
- Exposure Analysis: Includes exposure assessment with configurable thresholds for underexposed/overexposed regions with visual zebra pattern warnings
- Professional Camera Aids: Provides tools similar to professional camera displays including focus peaking, histogram display, and composition guides
Requirements¶
This block works on color or grayscale input images. Color images are automatically converted to grayscale before processing (Tenengrad measure works on single-channel images). The block outputs a visualization image (with optional overlays), an overall focus_measure float value, and a bbox_focus_measures list (if detections are provided). Higher Tenengrad values indicate better focus and sharper images, while lower values indicate blur and poor focus. The focus measure is sensitive to image content, resolution, and edge density, so threshold values for "good" focus should be calibrated based on specific use cases and image characteristics. All visualization overlays are optional and can be enabled or disabled independently based on workflow needs.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/camera_focus@v2
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| underexposed_threshold_percent | float | Brightness percentage threshold below which pixels are marked as underexposed. Must be between 0.0 and 100.0. Default is 3.0%, meaning pixels with brightness below 3% (approximately value 8 in 0-255 range) are considered underexposed. Pixels below this threshold will show blue zebra pattern overlay when show_zebra_warnings is enabled. Lower values are stricter (fewer pixels marked as underexposed), higher values are more lenient (more pixels marked as underexposed). Adjust based on exposure tolerance and image requirements. | ❌ |
| overexposed_threshold_percent | float | Brightness percentage threshold above which pixels are marked as overexposed. Must be between 0.0 and 100.0. Default is 97.0%, meaning pixels with brightness above 97% (approximately value 247 in 0-255 range) are considered overexposed. Pixels above this threshold will show red zebra pattern overlay when show_zebra_warnings is enabled. Higher values are stricter (fewer pixels marked as overexposed), lower values are more lenient (more pixels marked as overexposed). Adjust based on exposure tolerance and image requirements. | ❌ |
| show_zebra_warnings | bool | Display diagonal zebra pattern overlay on under/overexposed regions. When enabled (default True), pixels below underexposed_threshold_percent show blue zebra stripes, and pixels above overexposed_threshold_percent show red zebra stripes. This provides visual feedback for exposure issues similar to professional camera zebra pattern displays. The zebra pattern helps identify regions with exposure problems (too dark or too bright) that may need adjustment. Disable if you don't want exposure warnings or want cleaner visualization. | ❌ |
| grid_overlay | str | Composition grid overlay for framing assistance. Options: 'None' (no grid), '2x2' (four quadrants), '3x3' (default, rule of thirds with 9 sections), '4x4' (16 sections), or '5x5' (25 sections). The grid helps with composition and framing by dividing the image into sections. The 3x3 grid (rule of thirds) is commonly used for balanced composition. Grid lines are drawn in gray color. Choose based on composition needs: rule of thirds (3x3) for general use, 2x2 for simple quadrant composition, or higher divisions for more detailed composition guides. | ❌ |
| show_hud | bool | Display heads-up display (HUD) overlay with focus scores and brightness histogram. When enabled (default True), shows a semi-transparent black overlay in the top-left corner displaying: focus value (labeled 'TFM Focus' with numerical score), brightness histogram showing distribution for each color channel (red, green, blue) and grayscale, and exposure label. The HUD provides comprehensive focus and exposure information for professional camera control. Disable if you don't need the HUD display or want cleaner visualization. | ❌ |
| show_focus_peaking | bool | Display green overlay highlighting in-focus areas (focus peaking). When enabled (default True), regions with focus measures above a threshold (top 30% by default) are highlighted with a semi-transparent green overlay. This helps visualize which areas of the image are in sharp focus, similar to professional camera focus peaking displays. The green highlight makes it easy to see sharp regions at a glance. Disable if you don't want focus peaking overlay or want cleaner visualization. | ❌ |
| show_center_marker | bool | Display crosshair marker at the center of the frame. When enabled (default True), shows a white crosshair at the image center for alignment and framing reference. The crosshair size scales with image dimensions for visibility. This helps with composition alignment and center framing, similar to professional camera center markers. Disable if you don't need the center marker or want cleaner visualization. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
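The percentage thresholds map onto 0-255 pixel values as roughly percent / 100 × 255. A hypothetical sketch of that conversion and of selecting zebra-warning candidate pixels (not the block's actual code):

```python
import numpy as np

# Hypothetical sketch of mapping the percent thresholds to 0-255 pixel
# values and selecting zebra-warning regions; not the block's actual code.
def percent_to_pixel(percent: float) -> int:
    return round(percent / 100.0 * 255)

under_px = percent_to_pixel(3.0)    # ~8, matching the default description
over_px = percent_to_pixel(97.0)    # ~247, matching the default description

gray = np.array([[0, 5, 128], [250, 255, 100]], dtype=np.uint8)
underexposed = gray < under_px      # candidates for blue zebra stripes
overexposed = gray > over_px        # candidates for red zebra stripes
print(underexposed.sum(), overexposed.sum())  # 2 2
```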
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Camera Focus in version v2.
- inputs:
Icon Visualization,Image Preprocessing,Blur Visualization,Moondream2,Morphological Transformation,Stitch Images,Detections Filter,Detections Classes Replacement,Color Visualization,Contrast Equalization,Detections Merge,Circle Visualization,Stability AI Image Generation,Image Blur,Reference Path Visualization,Template Matching,Velocity,SIFT,Detections List Roll-Up,Bounding Rectangle,SAM 3,Halo Visualization,Detection Event Log,EasyOCR,Trace Visualization,Detections Transformation,Instance Segmentation Model,VLM as Detector,Classification Label Visualization,Detection Offset,Image Convert Grayscale,Google Vision OCR,Path Deviation,Background Color Visualization,Camera Calibration,VLM as Detector,Segment Anything 2 Model,Triangle Visualization,Text Display,Dynamic Zone,SAM 3,Ellipse Visualization,Seg Preview,Mask Visualization,Polygon Zone Visualization,Polygon Visualization,Absolute Static Crop,Model Comparison Visualization,Label Visualization,Time in Zone,Line Counter Visualization,Perspective Correction,Overlap Filter,Image Slicer,QR Code Generator,Detections Consensus,Instance Segmentation Model,Stability AI Outpainting,Object Detection Model,Grid Visualization,Relative Static Crop,Image Slicer,Byte Tracker,Image Contours,Stability AI Inpainting,Camera Focus,Path Deviation,Line Counter,Time in Zone,Motion Detection,Object Detection Model,SIFT Comparison,Detections Stitch,Byte Tracker,Dot Visualization,Camera Focus,YOLO-World Model,Crop Visualization,Bounding Box Visualization,OCR Model,Background Subtraction,Byte Tracker,SAM 3,Detections Stabilizer,Dynamic Crop,Keypoint Visualization,PTZ Tracking (ONVIF),Image Threshold,Time in Zone,Detections Combine,Corner Visualization,Depth Estimation,Pixelate Visualization
- outputs:
Icon Visualization,Image Preprocessing,LMM,Blur Visualization,Color Visualization,Contrast Equalization,Cache Set,Llama 3.2 Vision,Velocity,Reference Path Visualization,SIFT,OpenAI,Buffer,SAM 3,Halo Visualization,Roboflow Dataset Upload,Trace Visualization,Twilio SMS/MMS Notification,Single-Label Classification Model,VLM as Detector,Image Convert Grayscale,Path Deviation,Background Color Visualization,Multi-Label Classification Model,Qwen2.5-VL,Single-Label Classification Model,Camera Calibration,VLM as Detector,Triangle Visualization,Dynamic Zone,Ellipse Visualization,Seg Preview,Absolute Static Crop,Google Gemini,Time in Zone,Webhook Sink,Line Counter Visualization,Florence-2 Model,Dominant Color,Detections Consensus,Object Detection Model,Anthropic Claude,Keypoint Detection Model,Byte Tracker,Image Slicer,Pixel Color Count,VLM as Classifier,Image Contours,Stability AI Inpainting,Line Counter,Google Gemini,Motion Detection,SIFT Comparison,Line Counter,Keypoint Detection Model,Dot Visualization,Camera Focus,YOLO-World Model,Bounding Box Visualization,OCR Model,Background Subtraction,OpenAI,SAM 3,Qwen3-VL,Dynamic Crop,Distance Measurement,Keypoint Visualization,Email Notification,Image Threshold,Anthropic Claude,Corner Visualization,Pixelate Visualization,SmolVLM2,Moondream2,Morphological Transformation,Stitch Images,Stability AI Image Generation,Template Matching,Image Blur,Circle Visualization,Detections List Roll-Up,Email Notification,Perception Encoder Embedding Model,Continue If,Detection Event Log,EasyOCR,Google Gemini,Instance Segmentation Model,Classification Label Visualization,Clip Comparison,CogVLM,Google Vision OCR,QR Code Detection,Segment Anything 2 Model,LMM For Classification,Text Display,CLIP Embedding Model,SAM 3,Multi-Label Classification Model,Barcode Detection,OpenAI,Mask Visualization,Anthropic Claude,Polygon Zone Visualization,Polygon Visualization,Model Comparison Visualization,Label Visualization,Perspective Correction,Image Slicer,Instance Segmentation Model,Stability AI Outpainting,VLM as Classifier,Grid Visualization,Relative Static Crop,Size Measurement,Path Deviation,Camera Focus,Time in Zone,Florence-2 Model,Object Detection Model,Detections Stitch,Crop Visualization,Clip Comparison,Detections Stabilizer,Roboflow Dataset Upload,PTZ Tracking (ONVIF),Time in Zone,Depth Estimation,OpenAI,Gaze Detection
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Camera Focus in version v2 has.
Bindings
-
input
image(image): Input image (color or grayscale) to calculate focus quality for. Color images are automatically converted to grayscale before processing (Tenengrad focus measure works on single-channel images). The block calculates the Tenengrad focus measure using Sobel gradient magnitudes to assess image sharpness. The output includes a visualization image (with optional overlays if enabled), an overall focus_measure float value, and a bbox_focus_measures list (per-detection focus values if detections are provided). Higher Tenengrad values indicate better focus and sharper images (stronger edges and gradients), while lower values indicate blur and poor focus (weaker gradients). The focus measure uses Sobel operators to compute gradient magnitudes efficiently. Original image metadata is preserved. Use this block to assess focus quality, detect blur, enable auto-focus systems, perform object-specific focus analysis, or perform image quality assessment with professional camera control visualization aids.
detections(Union[object_detection_prediction,instance_segmentation_prediction]): Optional detection predictions (object detection or instance segmentation) to compute focus measures within bounding box regions. When provided, the block calculates a separate focus measure for each detection's bounding box region and returns them in the bbox_focus_measures list output. This enables object-specific focus analysis, allowing you to assess focus quality for individual detected objects rather than just the overall image. Useful for evaluating focus on specific objects of interest, analyzing focus per object class, or optimizing focus for detected regions. Each bbox_focus_measure value corresponds to the mean Tenengrad focus measure within that object's bounding box. Leave as None if you only need overall image focus assessment.
-
output
image(image): Image in workflows.
focus_measure(float): Float value.
bbox_focus_measures(list_of_values): List of values of any type.
Example JSON definition of step Camera Focus in version v2
{
"name": "<your_step_name_here>",
"type": "roboflow_core/camera_focus@v2",
"image": "$inputs.image",
"underexposed_threshold_percent": "<block_does_not_provide_example>",
"overexposed_threshold_percent": "<block_does_not_provide_example>",
"show_zebra_warnings": "<block_does_not_provide_example>",
"grid_overlay": "<block_does_not_provide_example>",
"show_hud": "<block_does_not_provide_example>",
"show_focus_peaking": "<block_does_not_provide_example>",
"show_center_marker": "<block_does_not_provide_example>",
"detections": "$steps.object_detection_model.predictions"
}
v1¶
Class: CameraFocusBlockV1 (there are multiple versions of this block)
Source: inference.core.workflows.core_steps.classical_cv.camera_focus.v1.CameraFocusBlockV1
Warning: This block has multiple versions. Please refer to the specific version for details. You can learn more about how versions work here: Versioning
Calculate focus quality scores using the Brenner function to assess image sharpness, detect blur, and evaluate camera focus performance. The block supports auto-focus systems, image quality assessment, and determining optimal focus settings for camera calibration and image capture workflows.
How This Block Works¶
This block calculates the Brenner focus measure, which quantifies image sharpness by measuring texture detail at fine scales. The block:
- Receives an input image (color or grayscale, automatically converts color to grayscale)
- Converts the image to grayscale if it's in color format (Brenner measure works on single-channel images)
- Converts the grayscale image to 16-bit integer format for precise calculations
- Calculates horizontal and vertical intensity differences:
- Computes horizontal differences by comparing pixels 2 positions apart horizontally: pixel[x+2] - pixel[x]
- Computes vertical differences by comparing pixels 2 positions apart vertically: pixel[y+2] - pixel[y]
- Uses clipping to keep only positive differences (highlights sharp edges and details)
- Measures rapid intensity changes that indicate fine-scale texture and sharp edges
- Calculates the focus measure matrix:
- Takes the maximum of horizontal and vertical differences at each pixel location
- Squares the differences to emphasize larger variations (stronger response to sharp edges)
- Creates a matrix where higher values indicate sharper, more focused regions
- Normalizes and converts to visualization format:
- Normalizes the focus measure matrix to 0-255 range for display
- Converts to 8-bit format for visualization
- Creates a visual representation showing focus quality across the image
- Overlays the Brenner value text on the image:
- Displays the mean focus measure value on the top-left of the image
- Shows focus quality as a numerical score for easy assessment
- Preserves image structure and metadata
- Returns the visualization image and the mean focus measure value as a float
The Brenner focus measure quantifies image sharpness by analyzing fine-scale texture and edge detail. In-focus images contain many sharp edges and fine texture details, resulting in large intensity differences between nearby pixels and high Brenner scores. Out-of-focus images have blurred edges and lack fine detail, resulting in small intensity differences and low Brenner scores. The measure uses a 2-pixel spacing to detect fine-scale texture while being robust to noise. Higher Brenner values indicate better focus, with typical ranges varying based on image content and resolution. The visualization shows focus quality distribution across the image, helping identify well-focused and blurred regions.
Common Use Cases¶
- Auto-Focus Systems: Assess focus quality to enable automatic camera focus adjustment (e.g., evaluate focus during auto-focus operations, detect optimal focus position, trigger focus adjustments based on Brenner scores), enabling auto-focus workflows
- Image Quality Assessment: Evaluate image sharpness and detect blurry images for quality control (e.g., assess image quality in capture pipelines, detect out-of-focus images, filter low-quality images), enabling quality assessment workflows
- Camera Calibration: Evaluate focus performance during camera setup and calibration (e.g., assess focus during camera calibration, optimize focus settings, evaluate camera performance), enabling camera calibration workflows
- Blur Detection: Detect blurry images in image processing pipelines (e.g., identify blurry images for rejection, detect focus issues, assess image sharpness), enabling blur detection workflows
- Focus Optimization: Determine optimal focus settings for image capture systems (e.g., find best focus position, optimize focus parameters, evaluate focus across settings), enabling focus optimization workflows
- Image Analysis: Assess image sharpness as part of image analysis workflows (e.g., evaluate image quality before processing, assess focus for analysis tasks, measure image sharpness metrics), enabling focus analysis workflows
Connecting to Other Blocks¶
This block receives an image and produces a focus measure visualization image and a focus_measure float value:
- After image capture or preprocessing blocks to assess focus quality of captured or processed images (e.g., evaluate focus after image capture, assess sharpness after preprocessing, measure focus in image pipelines), enabling focus assessment workflows
- Before logic blocks like Continue If to make decisions based on focus quality (e.g., continue if focus is good, filter images based on focus scores, make decisions using focus measures), enabling focus-based decision workflows
- Before analysis blocks to assess image quality before analysis (e.g., evaluate focus before analysis, assess sharpness for processing, measure quality before analysis), enabling quality-based analysis workflows
- In auto-focus systems where focus measurement is part of a feedback loop (e.g., measure focus for auto-focus, assess focus in feedback systems, evaluate focus in control loops), enabling auto-focus system workflows
- Before visualization blocks to display focus quality information (e.g., visualize focus scores, display focus measures, show focus quality), enabling focus visualization workflows
- In image quality control pipelines where focus assessment is part of quality checks (e.g., assess focus in quality pipelines, evaluate sharpness in QC workflows, measure focus for quality control), enabling quality control workflows
Requirements¶
This block works on color or grayscale input images. Color images are automatically converted to grayscale before processing (Brenner measure works on single-channel images). The block outputs both a visualization image (with focus measure displayed) and a numerical focus_measure value. Higher Brenner values indicate better focus and sharper images, while lower values indicate blur and poor focus. The focus measure is sensitive to image content and resolution, so threshold values for "good" focus should be calibrated based on specific use cases and image characteristics.
Type identifier¶
Use the following identifier in the step "type" field to add the block as a step in your workflow: roboflow_core/camera_focus@v1
Properties¶
| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
The Refs column marks whether the property can be parametrised with dynamic values available at workflow runtime. See Bindings for more info.
Available Connections¶
Compatible Blocks
Check what blocks you can connect to Camera Focus in version v1.
- inputs:
Icon Visualization,Line Counter Visualization,Perspective Correction,Image Preprocessing,Blur Visualization,Morphological Transformation,Image Slicer,QR Code Generator,Pixelate Visualization,Stitch Images,Color Visualization,Contrast Equalization,Stability AI Outpainting,Circle Visualization,Stability AI Image Generation,Grid Visualization,Relative Static Crop,Image Blur,Reference Path Visualization,SIFT,Image Slicer,Image Contours,Stability AI Inpainting,Camera Focus,Halo Visualization,Polygon Zone Visualization,Trace Visualization,SIFT Comparison,Dot Visualization,Camera Focus,Image Convert Grayscale,Classification Label Visualization,Crop Visualization,Background Color Visualization,Bounding Box Visualization,Background Subtraction,Camera Calibration,Dynamic Crop,Triangle Visualization,Polygon Visualization,Text Display,Keypoint Visualization,Ellipse Visualization,Image Threshold,Corner Visualization,Mask Visualization,Depth Estimation,Absolute Static Crop,Model Comparison Visualization,Label Visualization
- outputs:
Icon Visualization,Image Preprocessing,LMM,Blur Visualization,Moondream2,Morphological Transformation,Stitch Images,Color Visualization,Contrast Equalization,Gaze Detection,Llama 3.2 Vision,Stability AI Image Generation,Template Matching,Image Blur,Reference Path Visualization,Circle Visualization,Velocity,SIFT,Detections List Roll-Up,OpenAI,Buffer,SAM 3,Perception Encoder Embedding Model,Halo Visualization,Continue If,Detection Event Log,EasyOCR,Google Gemini,Roboflow Dataset Upload,Trace Visualization,Twilio SMS/MMS Notification,Instance Segmentation Model,Single-Label Classification Model,VLM as Detector,Classification Label Visualization,Clip Comparison,Image Convert Grayscale,CogVLM,Google Vision OCR,Background Color Visualization,Multi-Label Classification Model,Qwen2.5-VL,QR Code Detection,Single-Label Classification Model,Camera Calibration,VLM as Detector,Segment Anything 2 Model,LMM For Classification,Triangle Visualization,Text Display,CLIP Embedding Model,SAM 3,Dynamic Zone,Ellipse Visualization,Seg Preview,Multi-Label Classification Model,Barcode Detection,OpenAI,Mask Visualization,Anthropic Claude,Absolute Static Crop,Google Gemini,Time in Zone,Polygon Zone Visualization,Polygon Visualization,Model Comparison Visualization,Label Visualization,Webhook Sink,Line Counter Visualization,Perspective Correction,Florence-2 Model,Dominant Color,Image Slicer,Instance Segmentation Model,Stability AI Outpainting,Object Detection Model,Anthropic Claude,Keypoint Detection Model,VLM as Classifier,Relative Static Crop,Byte Tracker,Image Slicer,Pixel Color Count,VLM as Classifier,Image Contours,Stability AI Inpainting,Camera Focus,Google Gemini,Motion Detection,Florence-2 Model,Object Detection Model,SIFT Comparison,Detections Stitch,Keypoint Detection Model,Dot Visualization,Camera Focus,YOLO-World Model,Crop Visualization,Clip Comparison,Bounding Box Visualization,OCR Model,Background Subtraction,OpenAI,SAM 3,Detections Stabilizer,Roboflow Dataset Upload,Qwen3-VL,Dynamic Crop,Distance Measurement,Keypoint Visualization,Email Notification,PTZ Tracking (ONVIF),Image Threshold,Anthropic Claude,Corner Visualization,Depth Estimation,OpenAI,Pixelate Visualization,SmolVLM2
Input and Output Bindings¶
The available connections depend on the block's binding kinds. Check what binding kinds Camera Focus in version v1 has.
Bindings
-
input
image(image): Input image (color or grayscale) to calculate focus quality for. Color images are automatically converted to grayscale before processing (Brenner focus measure works on single-channel images). The block calculates the Brenner function score which measures fine-scale texture and edge detail to assess image sharpness. The output includes both a visualization image (with focus measure value displayed) and a numerical focus_measure float value. Higher Brenner values indicate better focus and sharper images (more fine-scale texture and sharp edges), while lower values indicate blur and poor focus (lacking fine detail). The focus measure uses intensity differences between pixels 2 positions apart to detect fine-scale texture. Original image metadata is preserved. Use this block to assess focus quality, detect blur, enable auto-focus systems, or perform image quality assessment.
-
output
image(image): Image in workflows.
focus_measure(float): Float value.
Example JSON definition of step Camera Focus in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/camera_focus@v1",
"image": "$inputs.image"
}