Overview

By default, a text generation model responds to your requests with text in natural language. You can also force the model to respond with JSON instead, which makes it easier to work with outputs programmatically in your application. For example, you can give a model the name of an actor or actress and get a response with details of a film they have starred in, in a format like this:
{
  "title": "The Shining",
  "year": 1980,
  "director": "Stanley Kubrick",
  "cast": ["Jack Nicholson", "Shelley Duvall", "Danny Lloyd"],
}
Use the response_format parameter to get JSON output. Depending on the settings in your request, the response can be:
  • JSON schema following. The output follows the specific schema you provide: {"type": "json_schema"} in the response_format parameter.
  • Arbitrary JSON object. Forces valid JSON output with a structure the model decides on itself: {"type": "json_object"} in the response_format parameter.
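For orientation, the two response_format payloads might look like this (the schema here is a minimal illustrative placeholder; the full examples later in this section show realistic schemas):

```python
# Strict schema following: the output must conform to the schema you supply.
schema_format = {
    "type": "json_schema",
    "json_schema": {
        "type": "object",
        "properties": {"title": {"type": "string"}},
        "required": ["title"],
    },
}

# Arbitrary JSON object: the model picks its own structure.
object_format = {"type": "json_object"}
```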
Supported models
Some models are better at producing JSON than others due to their training data or limitations of our inference engine. Use the JSON mode tag on a model card to find a model that supports structured output.
1

Choose response format type

Select between strict "json_schema" and arbitrary "json_object"
2

Optionally: provide JSON schema

Provide the schema in a JSON Schema Specification compliant format.
3

Add instructions to the system or user prompt

It’s always better to also state the JSON output requirement and the schema in your model’s system or user prompt; this increases output quality and consistency.
4

Test several models

Different models have different structured output capabilities. Always test and compare a few to obtain the best result.

JSON Schema following

JSON output that follows a schema provided in your request. Make a request to the Nebius AI Studio API with {"type": "json_schema"} in response_format and provide the desired schema in a JSON Schema Specification compliant format.
Many practical tests show that it’s better to provide the schema both in the text prompt for the model and in the "json_schema" parameter.
import os
import json
from openai import OpenAI
from typing import List, Literal
from pydantic import BaseModel

# 1. Define a schema using Pydantic's `BaseModel`. You can also define 
# a JSON Schema directly; see details after this code example

class Film(BaseModel):
    title: str
    year: int
    director: str
    cast: List[str]
    genre: Literal[
        "drama", "thriller", "sci-fi",
        "comedy", "horror", "fantasy"
    ]

# 2. Add the schema to your request

client = OpenAI(
    base_url="https://api.studio.nebius.com/v1/",
    api_key=os.environ.get("NEBIUS_API_KEY"),
)

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",
    response_format={
        "type": "json_schema",
        "json_schema": Film.model_json_schema()
    },
    messages=[
        {
            "role": "system",
            "content": (
                "I will give you an actor or actress, and you will respond "
                "with details of a real film they have starred in, according "
                "to the provided structure."
            )
        },
        {
            "role": "user",
            "content": "Jack Nicholson"
        }
    ],

)

# 3. Work with the JSON output or a refusal

output = completion.choices[0].message
if output.refusal:
    # Handle refusal
    print(output.refusal)
elif output.content:
    try:
        output_json = json.loads(output.content)
        print(output_json)
        print("Film: {} ({})".format(output_json['title'], 
                                     output_json['year']))
        # etc.
    except Exception as e:
        # Handle possible exceptions, e.g. invalid JSON
        print(e)
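As noted above, it helps to duplicate the schema in the prompt itself. A minimal sketch of building such a system prompt (the schema here is a hypothetical, trimmed-down example; the full Film schema from the code above could be used the same way via Film.model_json_schema()):

```python
import json

# Hypothetical schema, duplicated in the prompt in addition to being
# passed in the `response_format` parameter of the request
film_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "year": {"type": "integer"},
    },
    "required": ["title", "year"],
}

system_prompt = (
    "I will give you an actor or actress, and you will respond with details "
    "of a real film they have starred in. Respond only with JSON matching "
    "this schema:\n" + json.dumps(film_schema, indent=2)
)

# `system_prompt` then goes into the "system" message of the request,
# alongside response_format={"type": "json_schema", "json_schema": film_schema}
```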
Example of a valid JSON schema:
{
  "name": "meal_nutrition",
  "schema": {
    "type": "object",
    "properties": {
      "meal_name": {
        "type": "string",
        "description": "Name or description of the meal",
        "minLength": 1
      },
      "serving_size_g": {
        "type": "number",
        "description": "Serving size in grams",
        "minimum": 1
      },
      "nutrients": {
        "type": "object",
        "description": "Macronutrient and micronutrient content per serving",
        "properties": {
          "calories": {
            "type": "number",
            "description": "Energy in kilocalories (kcal)",
            "minimum": 0
          },
          "protein_g": {
            "type": "number",
            "description": "Protein in grams",
            "minimum": 0
          },
          "carbohydrates_g": {
            "type": "number",
            "description": "Carbohydrates in grams",
            "minimum": 0
          },
          "fat_g": {
            "type": "number",
            "description": "Total fat in grams",
            "minimum": 0
          },
          "fiber_g": {
            "type": "number",
            "description": "Dietary fiber in grams",
            "minimum": 0
          },
          "sugars_g": {
            "type": "number",
            "description": "Sugars in grams",
            "minimum": 0
          },
          "sodium_mg": {
            "type": "number",
            "description": "Sodium in milligrams",
            "minimum": 0
          }
        },
        "required": [
          "calories",
          "protein_g",
          "carbohydrates_g",
          "fat_g",
          "fiber_g",
          "sugars_g",
          "sodium_mg"
        ],
        "additionalProperties": false
      }
    },
    "required": [
      "meal_name",
      "serving_size_g",
      "nutrients"
    ],
    "additionalProperties": false
  },
  "strict": true
}
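Once a response comes back, you can at minimum check that the top-level fields a schema like the one above marks as required are actually present. A sketch using only the standard library (the model output shown is fabricated for illustration; a full validator such as the third-party jsonschema package would be stricter):

```python
import json

# Hypothetical model output for the meal_nutrition schema above
response_text = json.dumps({
    "meal_name": "Oatmeal with banana",
    "serving_size_g": 250,
    "nutrients": {"calories": 320, "protein_g": 9, "carbohydrates_g": 58,
                  "fat_g": 6, "fiber_g": 7, "sugars_g": 18, "sodium_mg": 120},
})

data = json.loads(response_text)

# Minimal sanity check: the top-level fields the schema marks as required
required = ["meal_name", "serving_size_g", "nutrients"]
missing = [key for key in required if key not in data]
assert not missing, f"missing fields: {missing}"
```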

Arbitrary JSON object

The model will produce a valid JSON object without following any specific schema.
Make a request to the Nebius AI Studio API with {"type": "json_object"} in response_format:
import os
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://api.studio.nebius.com/v1/",
    api_key=os.environ.get("NEBIUS_API_KEY"),
)

completion = client.chat.completions.create(
    model="mistralai/Mistral-Nemo-Instruct-2407",
    # 1. Instruct the model to produce JSON. In this example, we do it 
    # in the system prompt; you can do it in a user message instead
    messages=[
        {
            "role": "system",
            "content": (
                "I will give you an actor or actress, and you will respond "
                "with details of a real film they have starred in, using JSON "
                "as the format."
            )
        },
        {
            "role": "user",
            "content": "Jack Nicholson"
        }
    ],
    # 2. Set the response format to `json_object`
    response_format={
        "type": "json_object"
    }
)

# 3. Work with the JSON output (it should be valid but is not guaranteed 
# to have any predetermined fields) or a refusal

output = completion.choices[0].message
if output.refusal:
    # Handle refusal
    print(output.refusal)
elif output.content:
    try:
        output_json = json.loads(output.content)
        print(output_json)
    except Exception as e:
        # Handle possible exceptions, e.g. invalid JSON
        print(e)
Use a JSON schema instead of an arbitrary JSON object whenever you need a strict JSON structure.

Try it in our Cookbook

Structured output - Cookbook

Structured output in Playground

You can explore the structured output capabilities of models directly in the Playground:
  1. Go to the “Model Parameters” section.
  2. In the "Response format" dropdown, select JSON object/schema.
  3. (Optional) Provide a strict JSON schema if you want the model to follow a specific structure.
  4. Enter your prompt and run it to test whether the model returns the expected JSON output for your use case.