Note
The context length for all fine-tuning models is 8K tokens.
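Because every fine-tuning model here shares the 8k context limit, it can be worth screening a training dataset for over-long examples before uploading it. The sketch below is illustrative only: the ~4 characters-per-token heuristic and the drop-instead-of-truncate behavior are assumptions, not something this page specifies.

```python
# Rough pre-flight check for a fine-tuning dataset.
# The 8k limit comes from the note above; the ~4 chars/token heuristic
# and the choice to drop (rather than truncate) long examples are
# assumptions for illustration.
MAX_CONTEXT_TOKENS = 8192

def approx_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def filter_examples(examples: list[str]) -> list[str]:
    # Keep only examples that plausibly fit in the 8k context window.
    return [e for e in examples if approx_tokens(e) <= MAX_CONTEXT_TOKENS]
```

For an exact count, use the tokenizer of the specific base model instead of a character heuristic.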

DeepSeek

| Name | Supported fine-tuning type | License |
| --- | --- | --- |
| deepseek-ai/DeepSeek-V3-0324 (Model card) | Full fine-tuning | MIT License |

Meta

Nebius AI Studio and the Meta models hosted in the service are built with Meta Llama 3.1, 3.2 and 3.3. For more information, see the Acceptable Use Policy for each Llama version.
| Name | Supported fine-tuning type | License |
| --- | --- | --- |
| meta-llama/Llama-3.2-1B-Instruct (Model card) | LoRA and full fine-tuning | Llama 3.2 Community License Agreement |
| meta-llama/Llama-3.2-3B-Instruct (Model card) | LoRA and full fine-tuning | Llama 3.2 Community License Agreement |
| meta-llama/Llama-3.1-8B-Instruct (Model card) | LoRA and full fine-tuning | Llama 3.1 Community License Agreement |
| meta-llama/Llama-3.1-70B (Model card) | LoRA and full fine-tuning | Llama 3.1 Community License Agreement |
| meta-llama/Llama-3.3-70B-Instruct (Model card) | LoRA and full fine-tuning | Llama 3.3 Community License Agreement |

Qwen

Nebius AI Studio and the Qwen models hosted in the service are built with Qwen.
| Name | Supported fine-tuning type | License |
| --- | --- | --- |
| Qwen/Qwen3-14B (Model card) | LoRA and full fine-tuning | Apache License 2.0 |
| Qwen/Qwen3-32B (Model card) | LoRA and full fine-tuning | Apache License 2.0 |

Base LoRA adapter models available for deployment

You can deploy serverless LoRA adapter models in Nebius AI Studio. To deploy an adapter, first prepare it or fine-tune it from one of the base models below:
| Name | Supported fine-tuning type | License |
| --- | --- | --- |
| meta-llama/Llama-3.1-8B-Instruct (Model card) | LoRA and full fine-tuning | Llama 3.1 Community License Agreement |
| meta-llama/Llama-3.3-70B-Instruct (Model card) | LoRA | Llama 3.3 Community License Agreement |
| Qwen/Qwen3-32B (Model card) | LoRA | Apache License 2.0 |
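Assuming the service exposes an OpenAI-style fine-tuning endpoint (an assumption; the exact endpoint paths and request fields are not confirmed by this page), a LoRA fine-tuning request for one of the base models above might be sketched as a JSON body like this. The file ID and the `lora` flag are hypothetical placeholders for illustration.

```python
import json

# Hypothetical helper: builds the JSON body for a fine-tuning job.
# Field names mirror the common OpenAI fine-tuning API shape; whether
# Nebius AI Studio accepts a "lora" flag like this is an assumption,
# not confirmed by this page.
def build_finetune_job(base_model: str, training_file_id: str,
                       use_lora: bool = True) -> str:
    body = {
        "model": base_model,                 # one of the base models listed above
        "training_file": training_file_id,   # ID of a previously uploaded dataset
        "hyperparameters": {"n_epochs": 3},  # illustrative value
    }
    if use_lora:
        body["lora"] = True                  # assumed switch: LoRA vs full fine-tuning
    return json.dumps(body)

# "file-abc123" is a hypothetical file ID, not a real one.
payload = build_finetune_job("meta-llama/Llama-3.1-8B-Instruct", "file-abc123")
```

The resulting JSON would be POSTed to the service's fine-tuning jobs endpoint with your API key; check the service reference for the actual URL and accepted parameters.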