| Prompt | Ask a question as a prompt. | Call the `mm_llm.complete()` method with `prompt` and `image_documents`: `mm_llm.complete(prompt="Describe the image as an alternative text", image_documents=image_documents)` |
| Streaming output | Output is printed word by word. This can be helpful for chats, so the user can watch the answer being typed gradually. | Call the `mm_llm.stream_complete()` method and pass `prompt` and `image_documents` to it. Then print the response: `for r in response: print(r.delta, end="")` |
| Multi-message request | Include system prompts and a chat history in your request, so Nebius AI Studio returns more precise output. | Make an array of messages and pass it to the `mm_llm.chat()` method. For more details, see Examples. |
| Multi-message request with streaming output | Add system prompts and a chat history, and receive streaming output. | Make an array of messages and pass it to the `mm_llm.stream_chat()` method. |
| Asynchronous request | Call a method asynchronously, so subsequent calls do not wait for it to finish. | Call the `await mm_llm.acomplete()` method and pass `prompt` and `image_documents` to it, as for a regular prompt. |
| Asynchronous request with streaming output | Call a method asynchronously and have the output typed word by word. | Call the `await mm_llm.astream_complete()` method with `prompt` and `image_documents`. Then print the response asynchronously with `async for`. For more details, see Examples. |
| Asynchronous request with a multi-message request | Call a method asynchronously and add system prompts and a chat history. | Make an array of messages and pass it to the `await mm_llm.achat()` method. |
| Asynchronous request with a multi-message request and streaming output | Combine asynchronous behavior, system prompts, a chat history, and streaming output. | Make an array of messages and pass it to the `await mm_llm.astream_chat()` method. Then print the response asynchronously with `async for`. |
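The synchronous rows above share one calling convention: `complete()` and `stream_complete()` take `prompt` and `image_documents`, while `chat()` takes an array of messages. The sketch below is a minimal, runnable illustration of that pattern only; `StubMultiModalLLM` is a hypothetical stand-in (the real methods need an initialized client and API access), and the message dicts are illustrative, not the library's exact message type:

```python
from types import SimpleNamespace

class StubMultiModalLLM:
    """Hypothetical stand-in for mm_llm; replace with your real client."""

    def complete(self, prompt, image_documents=None):
        # A real call would send the prompt and images to the model.
        return SimpleNamespace(text="A red bicycle leaning against a brick wall.")

    def stream_complete(self, prompt, image_documents=None):
        # A real call would yield deltas as the model generates them.
        for delta in ["A red bicycle ", "leaning against ", "a brick wall."]:
            yield SimpleNamespace(delta=delta)

    def chat(self, messages):
        # A real call would use the system prompt and history for context.
        return SimpleNamespace(message=SimpleNamespace(content="It is a bicycle."))

mm_llm = StubMultiModalLLM()
image_documents = []  # would normally hold loaded image documents

# Plain prompt
response = mm_llm.complete(
    prompt="Describe the image as an alternative text",
    image_documents=image_documents,
)
print(response.text)

# Streaming output: print each delta as it arrives
stream = mm_llm.stream_complete(
    prompt="Describe the image as an alternative text",
    image_documents=image_documents,
)
for r in stream:
    print(r.delta, end="")
print()

# Multi-message request: an array of messages with roles
messages = [
    {"role": "system", "content": "You write concise alt text."},
    {"role": "user", "content": "What is shown in the image?"},
]
chat_response = mm_llm.chat(messages)
print(chat_response.message.content)
```

With a real client, only the construction of `mm_llm` changes; the call sites stay the same.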
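The asynchronous rows follow the same pattern with `a`-prefixed methods: `await` the call, and when streaming, iterate the awaited result with `async for`. Again a runnable sketch with a hypothetical stand-in class (`AsyncStubMultiModalLLM`), since the real methods require API access:

```python
import asyncio
from types import SimpleNamespace

class AsyncStubMultiModalLLM:
    """Hypothetical stand-in for mm_llm's asynchronous interface."""

    async def acomplete(self, prompt, image_documents=None):
        return SimpleNamespace(text="A red bicycle leaning against a brick wall.")

    async def astream_complete(self, prompt, image_documents=None):
        async def gen():
            for delta in ["A red bicycle ", "leaning against ", "a brick wall."]:
                yield SimpleNamespace(delta=delta)
        return gen()  # the awaited call returns an async generator

    async def achat(self, messages):
        return SimpleNamespace(message=SimpleNamespace(content="It is a bicycle."))

    async def astream_chat(self, messages):
        async def gen():
            for delta in ["It is ", "a bicycle."]:
                yield SimpleNamespace(delta=delta)
        return gen()

async def main():
    mm_llm = AsyncStubMultiModalLLM()
    image_documents = []  # would normally hold loaded image documents

    # Asynchronous request: other coroutines can run while this awaits
    response = await mm_llm.acomplete(
        prompt="Describe the image as an alternative text",
        image_documents=image_documents,
    )
    print(response.text)

    # Asynchronous streaming: await the call, then iterate with async for
    stream = await mm_llm.astream_complete(
        prompt="Describe the image as an alternative text",
        image_documents=image_documents,
    )
    async for r in stream:
        print(r.delta, end="")
    print()

    messages = [
        {"role": "system", "content": "You write concise alt text."},
        {"role": "user", "content": "What is shown in the image?"},
    ]

    # Asynchronous multi-message request
    chat_response = await mm_llm.achat(messages)
    print(chat_response.message.content)

    # Asynchronous multi-message request with streaming output
    stream = await mm_llm.astream_chat(messages)
    async for r in stream:
        print(r.delta, end="")
    print()

asyncio.run(main())
```

The key difference from the synchronous variants is structural: the calls must run inside an event loop (here via `asyncio.run()`), which is what lets other work proceed while a request is in flight.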