Prerequisites
- Create an API key for authentication.
- Save the API key to an environment variable.
- Install the LlamaIndex packages.
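The original commands are not reproduced here; a minimal sketch for a Unix-like shell, assuming the `llama-index-llms-nebius` integration package (the variable name and package names are assumptions to verify against the Nebius AI Studio and LlamaIndex documentation):

```shell
# Save the API key to an environment variable (replace the placeholder value).
export NEBIUS_API_KEY="<your_api_key>"

# Install the LlamaIndex core package and the Nebius integration.
# The integration package name follows the usual LlamaIndex naming scheme.
pip install llama-index llama-index-llms-nebius
```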
Prepare a script
- Copy the following part of the script:
- Add one of the following methods, depending on your use case:
| Use case | Description | How to implement |
| --- | --- | --- |
| Prompt | Ask a question as a prompt. | Call the `llm.complete("<prompt>")` method. |
| Multi-message request | Include system prompts and a chat history in your request so that Nebius AI Studio returns more precise output. | Call the `llm.chat(messages)` method and pass it a list of messages. To prepare `messages`, call the `ChatMessage()` method. For more details, see Examples. |
| Streaming output | The output is printed word by word. This can be helpful for chats: the user can watch the answer being typed gradually. | Call the `llm.stream_complete("<prompt>")` method, then print the response: `for r in response: print(r.delta, end="")` |
| Multi-message request with streaming output | Add system prompts and a chat history and receive streaming output. | Call the `llm.stream_chat(messages)` method and pass it a list of messages. |
Examples
Multi-message request
The script below adds a chat to the request and asks in the last message to write a poem:

Prompt request with streaming output
The script below sends the prompt "Amsterdam is the capital of" and returns the output, printed word by word: