This will provision a GPU-capable instance in GCP.
The latest version of Roboflow Inference will be automatically installed on the machine.
When the command has run, you should see a message like:
```
Deployed Roboflow Inference to gcp on gpu, deployment name is ...
To get a list of your deployments: inference status
To delete your deployment: inference undeploy ...
To ssh into the deployed server: ssh ...
The Roboflow Inference Server is running at http://34.66.116.66:9001
```
You can then use your server's API endpoint to run models.
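As a minimal sketch, here is how you might call the deployed server from Python with the `inference-sdk` package. The server URL matches the example output above; the API key and model ID are placeholders you would replace with your own:

```python
# pip install inference-sdk
from inference_sdk import InferenceHTTPClient

# Point the client at the remote Inference Server provisioned above.
# The API key and model ID are placeholders -- substitute your own values.
client = InferenceHTTPClient(
    api_url="http://34.66.116.66:9001",
    api_key="YOUR_ROBOFLOW_API_KEY",
)

# Run a model from your Roboflow workspace on a local image.
result = client.infer("image.jpg", model_id="your-model/1")

print(result)
```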
You can run any model that Inference supports, including object detection, segmentation, classification, and keypoint models that you have available on Roboflow, as well as foundation models like CLIP, PaliGemma, SAM-2, and more.
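For foundation models, the same client can be used. As a rough sketch, assuming your installed version of `inference-sdk` includes the CLIP comparison helper, comparing an image against text prompts might look like this (the image path and prompts are illustrative placeholders):

```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://34.66.116.66:9001",
    api_key="YOUR_ROBOFLOW_API_KEY",
)

# Compare an image against a set of text prompts with CLIP.
similarity = client.clip_compare(
    subject="image.jpg",
    prompt=["a forklift", "an empty warehouse"],
)

print(similarity)
```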