Inference servers have a number of configurable parameters that can be set using environment variables. To set an environment variable with the docker run command, use the -e flag with a NAME=value argument, like this:
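```bash
# The variable name and image below are placeholders; substitute the
# variable you want to set and the image for the server you run.
docker run -e VARIABLE_NAME=value your-inference-server-image
```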
Sets the default non-maximum suppression (NMS) behavior for detection-type models (object detection, instance segmentation, etc.). If True, the default NMS behavior is class-agnostic, meaning overlapping detections from different classes may be removed based on the IoU threshold. If False, only overlapping detections from the same class are considered for removal by NMS.
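For example, assuming the variable is named CLASS_AGNOSTIC_NMS (an assumed name; check your server's documentation for the exact variable):

```bash
# CLASS_AGNOSTIC_NMS is an assumed variable name; the image is a placeholder.
docker run -e CLASS_AGNOSTIC_NMS=True your-inference-server-image
```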
Sets the allow_origins property on the CORSMiddleware used with FastAPI for HTTP interfaces. Multiple values can be provided, separated by commas (e.g. ALLOW_ORIGINS=orig1.com,orig2.com).
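For example, to allow requests from two origins (the image name is a placeholder):

```bash
docker run -e ALLOW_ORIGINS=orig1.com,orig2.com your-inference-server-image
```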
Sets the OpenAI CLIP version used by all /clip routes. Available model versions are: RN101, RN50, RN50x16, RN50x4, RN50x64, ViT-B-16, ViT-B-32, ViT-L-14-336px, and ViT-L-14.
Sets the maximum number of models the internal model manager will hold in memory at one time. By default, the model manager evicts the least recently accessed model when making space for a new one.
Sets the container path to the TensorRT cache directory. Setting this path in conjunction with mounting a host volume can reduce the cold-start time of TensorRT-based servers.
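A minimal sketch of that pattern, assuming the variable is named TENSORRT_CACHE_PATH (an assumed name) and using placeholder paths and image name: the host directory is mounted into the container and the variable points at the mount.

```bash
# The host path, the variable name TENSORRT_CACHE_PATH, and the image
# name are assumptions/placeholders; adjust them for your setup.
docker run \
    -v /path/on/host/trt-cache:/trt-cache \
    -e TENSORRT_CACHE_PATH=/trt-cache \
    your-inference-server-image
```

Because the cache persists on the host across container restarts, previously built TensorRT engines can be reloaded instead of rebuilt.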