YOLO-World Model
Class: YoloWorldModelBlockV1
Source: inference.core.workflows.core_steps.models.foundation.yolo_world.v1.YoloWorldModelBlockV1
Run YOLO-World, a zero-shot object detection model, on an image.
YOLO-World accepts one or more text classes that you want to identify in an image. The model returns the location of any objects it finds that match the specified classes.
We recommend experimenting with YOLO-World to evaluate the model on your use case before using this block in production. For an example of how to effectively prompt YOLO-World, refer to the Roboflow YOLO-World prompting guide.
Type identifier
Use the following identifier in the step "type" field to add the block as a step in your workflow: `roboflow_core/yolo_world_model@v1`
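A step on its own is not a complete workflow. The sketch below, written as a Python dict, shows one way this step might be embedded in a full Workflows specification. The surrounding wiring (the WorkflowImage input, the JsonField output, and the `$inputs.*` / `$steps.*` selectors) follows the general Workflows specification format; the step and output names used here are illustrative choices, not required by the block.

```python
# A minimal sketch of a workflow specification embedding this step.
# Step/output names ("yolo_world", "detections") are illustrative.
workflow_definition = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "roboflow_core/yolo_world_model@v1",
            "name": "yolo_world",
            "images": "$inputs.image",         # bind the workflow image input
            "class_names": ["person", "car"],  # classes to detect
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "detections",
            # expose the block's predictions output as a workflow output
            "selector": "$steps.yolo_world.predictions",
        },
    ],
}
```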
Properties

| Name | Type | Description | Refs |
|---|---|---|---|
| name | str | Enter a unique identifier for this step. | ❌ |
| class_names | List[str] | One or more classes that you want YOLO-World to detect. The model accepts any string as an input, though does best with short descriptions of common objects. | ✅ |
| version | str | Variant of the YOLO-World model. | ✅ |
| confidence | float | Confidence threshold for detections. | ✅ |
The Refs column indicates whether the property can be parametrised with dynamic values available
at workflow runtime. See Bindings for more info.
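The properties marked ✅ above can therefore take selectors instead of literal values. A hedged sketch of what that might look like, reusing the specification format from the earlier sketch; the parameter names ("classes", "conf"), the default values, and the use of default_value on WorkflowParameter are illustrative assumptions:

```python
# Illustrative: bind the parametrisable properties to workflow inputs
# instead of hard-coding them in the step definition.
parametrised_inputs = [
    {"type": "WorkflowImage", "name": "image"},
    {"type": "WorkflowParameter", "name": "classes", "default_value": ["person", "car"]},
    {"type": "WorkflowParameter", "name": "conf", "default_value": 0.05},
]

parametrised_step = {
    "type": "roboflow_core/yolo_world_model@v1",
    "name": "yolo_world",
    "images": "$inputs.image",
    "class_names": "$inputs.classes",  # resolved at runtime from the "classes" input
    "confidence": "$inputs.conf",      # resolved at runtime from the "conf" input
}
```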
Available Connections
Compatible Blocks
Check what blocks you can connect to YOLO-World Model in version v1.
- inputs: VLM as Detector, Google Vision OCR, Classification Label Visualization, Circle Visualization, Image Contours, Relative Static Crop, Image Preprocessing, LMM For Classification, VLM as Classifier, Ellipse Visualization, Stitch Images, Triangle Visualization, Stability AI Inpainting, QR Code Generator, Image Slicer, Background Color Visualization, Model Monitoring Inference Aggregator, OCR Model, Dot Visualization, Florence-2 Model, SIFT, Morphological Transformation, EasyOCR, Reference Path Visualization, Halo Visualization, SIFT Comparison, Buffer, Polygon Visualization, Image Slicer, Florence-2 Model, Slack Notification, Clip Comparison, Image Convert Grayscale, Instance Segmentation Model, OpenAI, Color Visualization, Keypoint Detection Model, Google Gemini, Label Visualization, Email Notification, Llama 3.2 Vision, Trace Visualization, Dynamic Zone, Size Measurement, Email Notification, Corner Visualization, Mask Visualization, CogVLM, Stability AI Outpainting, OpenAI, Roboflow Custom Metadata, Stitch OCR Detections, Blur Visualization, CSV Formatter, Crop Visualization, OpenAI, Grid Visualization, Perspective Correction, Twilio SMS Notification, Absolute Static Crop, Clip Comparison, Single-Label Classification Model, Contrast Equalization, Roboflow Dataset Upload, Roboflow Dataset Upload, Polygon Zone Visualization, Stability AI Image Generation, Webhook Sink, Depth Estimation, Dimension Collapse, Bounding Box Visualization, Camera Focus, Line Counter Visualization, Multi-Label Classification Model, Icon Visualization, Image Blur, Pixelate Visualization, Image Threshold, Anthropic Claude, LMM, Google Gemini, Identify Outliers, Dynamic Crop, Detections Consensus, Model Comparison Visualization, Camera Calibration, Local File Sink, Keypoint Visualization, Identify Changes, Object Detection Model
- outputs: Byte Tracker, Overlap Filter, Blur Visualization, Time in Zone, Circle Visualization, Detections Stabilizer, Crop Visualization, Detections Filter, Detections Classes Replacement, Perspective Correction, Ellipse Visualization, Triangle Visualization, Roboflow Dataset Upload, Detections Combine, Roboflow Dataset Upload, Stitch OCR Detections, Background Color Visualization, Model Monitoring Inference Aggregator, Segment Anything 2 Model, Velocity, Distance Measurement, Dot Visualization, Florence-2 Model, Bounding Box Visualization, Detections Transformation, Icon Visualization, Florence-2 Model, Time in Zone, Detection Offset, Pixelate Visualization, Path Deviation, Byte Tracker, PTZ Tracking (ONVIF), Color Visualization, Line Counter, Detections Merge, Label Visualization, Byte Tracker, Trace Visualization, Dynamic Crop, Path Deviation, Line Counter, Detections Consensus, Model Comparison Visualization, Size Measurement, Corner Visualization, Time in Zone, Roboflow Custom Metadata, Detections Stitch
Input and Output Bindings
The available connections depend on the block's binding kinds. Check what binding kinds
YOLO-World Model in version v1 has.
Bindings
- input
    - images (image): The image to infer on.
    - class_names (list_of_values): One or more classes that you want YOLO-World to detect. The model accepts any string as an input, though does best with short descriptions of common objects.
    - version (string): Variant of the YOLO-World model.
    - confidence (float_zero_to_one): Confidence threshold for detections.
- output
    - predictions (object_detection_prediction): Prediction with detected bounding boxes in the form of an sv.Detections(...) object.
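Downstream blocks (and custom Python blocks) receive this output as a supervision sv.Detections object. Below is a small sketch of reading it with the supervision API; the assumption that class labels sit under the "class_name" key of detections.data follows common inference conventions but is not stated on this page.

```python
import supervision as sv


def summarize_detections(predictions: sv.Detections) -> None:
    """Print one line per detected box: class name, confidence, xyxy coordinates."""
    # "class_name" is an assumed data key; fall back to None if it is absent.
    class_names = predictions.data.get("class_name", [None] * len(predictions))
    for xyxy, confidence, class_name in zip(
        predictions.xyxy, predictions.confidence, class_names
    ):
        print(f"{class_name}: {confidence:.2f} at {xyxy.tolist()}")
```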
Example JSON definition of step YOLO-World Model in version v1
{
"name": "<your_step_name_here>",
"type": "roboflow_core/yolo_world_model@v1",
"images": "$inputs.image",
"class_names": [
"person",
"car",
"license plate"
],
"version": "v2-s",
"confidence": 0.005
}
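One way to run a workflow containing this step is against an inference server via the inference-sdk HTTP client, sketched below. This assumes a server reachable at the given URL, a client version that accepts an ad-hoc specification, and the workflow_definition dict from the earlier sketch on this page; the server URL, API key placeholder, and image path are illustrative.

```python
from inference_sdk import InferenceHTTPClient

# Illustrative values: point the client at your own inference server and API key.
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)

result = client.run_workflow(
    specification=workflow_definition,      # the dict sketched earlier on this page
    images={"image": "path/to/image.jpg"},  # keyed by the WorkflowImage input name
)

# One result entry per input image, keyed by the workflow's declared outputs
# (here: "detections").
print(result[0]["detections"])
```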