You can use the Helicone observability platform to gather statistics on your requests to Nebius AI Studio. Helicone routes the requests through its Gateway, collects usage data, and visualizes the statistics and costs on its dashboard. The integration works with all Nebius AI Studio inference models, including text-to-text, text-to-image, and other modalities. To use the integration, do the following:
  1. Create a Nebius API key for authentication.
  2. Save the API key to the NEBIUS_API_KEY environment variable:
    export NEBIUS_API_KEY=<API_key>
    
  3. Create a Helicone account.
  4. Log in to Helicone.
  5. Go to Settings → API Keys and generate a Helicone API key.
  6. Save the API key to the HELICONE_API_KEY environment variable:
    export HELICONE_API_KEY="<Helicone_API_key>"
    
  7. Configure the integration:
  1. Create the following helicone-test.py script:
    import openai
    import os
    
    # Create an OpenAI client
    client = openai.Client(
        base_url="https://nebius.helicone.ai/v1",
        api_key=os.environ.get("NEBIUS_API_KEY"),
        default_headers={
            "Helicone-Auth": f"Bearer {os.environ.get("HELICONE_API_KEY")}",
        }
    )
    
    # Use the client to create a multi-message request and print the response
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.3-70B-Instruct",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"},
        ]
    )
    
    print(response.choices[0].message.content)
    
    In this script, the following settings are specific to the Helicone integration:
    • base_url contains the address of the Helicone API Gateway for the Nebius AI Studio provider.
    • default_headers contains a Helicone-Auth header with the Helicone API key.
  2. Run the script:
    python helicone-test.py
    
    The output is similar to the following:
    Hello. How can I assist you today?
    
You can change this code to fit your needs:
  • To work with a different model, change the model value. You can copy the model ID from the model card or look it up in this list.
  • Modify messages to specify your own prompts. A sketch with both changes follows this list.
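For example, the following sketch reuses the client created in helicone-test.py with a different model and prompt. The model ID here is only an illustration; copy the actual ID from the model card before you run it:
  # Reuses the client configured in helicone-test.py
  # The model ID below is an example; replace it with an ID copied from the model card
  response = client.chat.completions.create(
      model="Qwen/Qwen2.5-72B-Instruct",  # example model ID
      messages=[
          {"role": "system", "content": "You are a concise technical assistant."},
          {"role": "user", "content": "Explain what an API gateway does in two sentences."},
      ]
  )
  
  print(response.choices[0].message.content)
  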
To work with models of different modalities, you can use the same OpenAI client and call other methods, for example, client.images.generate for image generation, as in the sketch below. See also examples.
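Here is a rough sketch of an image request made with the same client. The model ID and the response_format value are assumptions; check the model card for the parameters that your chosen image model actually supports:
  import base64
  
  # Reuses the client configured in helicone-test.py
  image_response = client.images.generate(
      model="black-forest-labs/flux-dev",  # example image model ID, verify in the model card
      prompt="A watercolor painting of a lighthouse at sunrise",
      response_format="b64_json"  # assumes base64-encoded output is supported
  )
  
  # Decode the base64 payload and save it as a PNG file
  with open("lighthouse.png", "wb") as f:
      f.write(base64.b64decode(image_response.data[0].b64_json))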