The Vultr model list isn't being updated to the latest version; here is the current list from Vultr.
Vultr Serverless Inference is billed based on the input and output tokens used. All prices shown are the cost per 1M tokens.
List of supported chat completion models:
MiniMax-M2.7 ($0.30 input, $1.20 output)
Qwen3.5-397B-A17B-FP8 ($0.30 input, $1.20 output)
DeepSeek-V4-Pro ($0.55 input, $1.65 output)
Kimi-K2.6 ($0.15 input, $0.60 output)
DeepSeek-V3.2-NVFP4 ($0.55 input, $1.65 output)
Llama-3.1-Nemotron-Safety-Guard-8B-v3 ($0.01 input, $0.01 output)
Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16 ($0.13 input, $0.38 output)
Nemotron-Cascade-2-30B-A3B ($0.15 input, $0.60 output)
GLM-5.1-FP8 ($0.85 input, $3.10 output)
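Since billing is per 1M tokens with separate input and output rates, the cost of a request can be estimated with simple arithmetic. A minimal sketch (the `estimate_cost` helper and its dictionary of prices are illustrative, not part of any Vultr SDK; prices are copied from the list above):

```python
# Illustrative sketch: estimating Vultr Serverless Inference cost
# from token counts. Prices are USD per 1M tokens (input, output),
# taken from the pricing list above. Not an official Vultr API.

PRICES = {
    "Kimi-K2.6": (0.15, 0.60),
    "DeepSeek-V4-Pro": (0.55, 1.65),
    "GLM-5.1-FP8": (0.85, 3.10),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1_000_000) * in_price \
         + (output_tokens / 1_000_000) * out_price

# Example: 200k input tokens + 50k output tokens on Kimi-K2.6
cost = estimate_cost("Kimi-K2.6", 200_000, 50_000)
print(f"${cost:.4f}")  # 0.2 * 0.15 + 0.05 * 0.60 = $0.0600
```

The same arithmetic applies to any model in the list; only the two per-million rates change.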