Endpoint Documentation
Streamlining LLM Inference API Deployment on Akash Network via Ollama, vLLM, and llama.cpp
Endpoints
Conventions
Model names
Model names follow a `model:tag` format, where `model` can have an optional namespace such as `example/model`. Some examples are `phi3` and `llama3:70b`. The tag is optional and, if not provided, defaults to `latest`. The tag is used to identify a specific version.
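As a sketch of this convention, a client could normalize a bare model name by appending the default tag (the helper name here is hypothetical, not part of the API):

```python
def normalize_model_name(name: str) -> str:
    """Append the default ':latest' tag when no tag is given.

    The optional namespace ('example/model') never contains a colon,
    so checking the segment after the last '/' is sufficient.
    """
    # Only the part after an optional namespace can carry a tag.
    last_segment = name.rsplit("/", 1)[-1]
    if ":" not in last_segment:
        return name + ":latest"
    return name

# Examples from the conventions above:
print(normalize_model_name("phi3"))           # phi3:latest
print(normalize_model_name("llama3:70b"))     # llama3:70b
print(normalize_model_name("example/model"))  # example/model:latest
```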
Durations
All durations are returned in nanoseconds.
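Since every duration field is in nanoseconds, converting to seconds is a single division; a small illustration (the example value is made up):

```python
NS_PER_SECOND = 1_000_000_000

def ns_to_seconds(duration_ns: int) -> float:
    """Convert an API duration (nanoseconds) to seconds."""
    return duration_ns / NS_PER_SECOND

# e.g. an illustrative total_duration of 5_043_500_000 ns is about 5.04 s
print(round(ns_to_seconds(5_043_500_000), 2))  # 5.04
```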
Streaming responses
Certain endpoints stream responses as JSON objects and can optionally return non-streamed responses.
Generate a completion
Generate a response for a given prompt with a provided model. This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.
Parameters
`model`: (required) the model name
`prompt`: the prompt to generate a response for
`images`: (optional) a list of base64-encoded images (for multimodal models such as `llava`)
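Putting the parameters above together, a generate request body might look like the following sketch (the `POST /api/generate` path follows the usual Ollama convention and is an assumption here; adjust to your deployment):

```python
import json

# Assumed endpoint for this request: POST /api/generate
payload = {
    "model": "llama3:70b",             # required
    "prompt": "Why is the sky blue?",  # prompt to generate a response for
    # "images": ["<base64-encoded image>"],  # optional, multimodal models only
}
body = json.dumps(payload)
print(body)
```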
Examples
Generate request (Streaming)
Request
Response
A stream of JSON objects is returned:
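Each streamed chunk is a standalone JSON object, one per line, and a client can accumulate the partial `response` fields as they arrive. A minimal sketch using a simulated stream (the chunk contents are illustrative):

```python
import json

# Simulated stream: newline-delimited JSON objects, as the endpoint returns.
raw_stream = (
    '{"model":"phi3","response":"The","done":false}\n'
    '{"model":"phi3","response":" sky","done":false}\n'
    '{"model":"phi3","response":"","done":true}\n'
)

full_text = ""
for line in raw_stream.splitlines():
    chunk = json.loads(line)
    full_text += chunk["response"]
    if chunk["done"]:
        break  # the final object carries stats rather than more text

print(full_text)  # The sky
```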
The final response in the stream also includes additional data about the generation:
`total_duration`: time spent generating the response
`load_duration`: time spent in nanoseconds loading the model
`prompt_eval_count`: number of tokens in the prompt
`prompt_eval_duration`: time spent in nanoseconds evaluating the prompt
`eval_count`: number of tokens in the response
`eval_duration`: time in nanoseconds spent generating the response
`context`: an encoding of the conversation used in this response; this can be sent in the next request to keep a conversational memory
`response`: empty if the response was streamed; if not streamed, this will contain the full response
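These fields make throughput easy to derive; for example, tokens per second follows from `eval_count` and `eval_duration` (the numbers below are illustrative, not from a real run):

```python
# Example final-response fields (illustrative values).
final = {
    "eval_count": 290,               # tokens in the response
    "eval_duration": 4_709_213_000,  # nanoseconds spent generating
}

# eval_duration is in nanoseconds, so scale back up to per-second.
tokens_per_second = final["eval_count"] / final["eval_duration"] * 1_000_000_000
print(round(tokens_per_second, 1))  # 61.6
```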
Request (No streaming)
Request
A response can be received in one reply when streaming is off.
Response
If `stream` is set to `false`, the response will be a single JSON object:
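A sketch of the same request with streaming disabled (endpoint path assumed as above):

```python
import json

# Assumed endpoint: POST /api/generate, with streaming disabled.
payload = {
    "model": "phi3",
    "prompt": "Why is the sky blue?",
    "stream": False,  # request a single JSON object instead of a stream
}
print(json.dumps(payload))
```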
Request (JSON mode)
When `format` is set to `json`, the output will always be a well-formed JSON object. It is important to also instruct the model to respond in JSON.
Request
Response
The value of `response` will be a string containing JSON similar to:
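Because `response` arrives as a string of JSON rather than a JSON object, it needs a second parse on the client side. A sketch, with an illustrative response value:

```python
import json

# Assumed endpoint: POST /api/generate with format set to "json".
payload = {
    "model": "phi3",
    # Instruct the model to answer in JSON *and* set format, per the note above.
    "prompt": "What color is the sky? Respond using JSON.",
    "format": "json",
    "stream": False,
}

# The response field is a *string* containing JSON, so parse it a second time.
simulated_response = '{"sky_color": "blue"}'
parsed = json.loads(simulated_response)
print(parsed["sky_color"])  # blue
```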
Generate request (With options)
If you want to set custom options for the model at runtime rather than in the Modelfile, you can do so with the `options` parameter. This example sets every available option, but you can set any of them individually and omit the ones you do not want to override.
Request
Response
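A sketch of a request that overrides a few runtime options; the option names shown (`temperature`, `num_ctx`, `seed`) follow common Ollama Modelfile parameters and should be treated as assumptions for your deployment:

```python
import json

# Assumed endpoint: POST /api/generate. Only the options you set are overridden.
payload = {
    "model": "phi3",
    "prompt": "Why is the sky blue?",
    "stream": False,
    "options": {
        "temperature": 0.8,  # sampling temperature
        "num_ctx": 4096,     # context window size
        "seed": 42,          # fixed seed for reproducible sampling
    },
}
print(json.dumps(payload, indent=2))
```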
Load a model
If an empty prompt is provided, the model will be loaded into memory.
Request
Response
A single JSON object is returned:
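A sketch of the load request, which simply omits the prompt (endpoint path assumed as above):

```python
import json

# Assumed endpoint: POST /api/generate with no prompt, which loads the model
# into memory without generating anything.
payload = {"model": "phi3"}
print(json.dumps(payload))
```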
Generate a chat completion
Generate the next message in a chat with a provided model. This is a streaming endpoint, so there will be a series of responses. Streaming can be disabled using `"stream": false`. The final response object will include statistics and additional data from the request.
Parameters
`model`: (required) the model name
`messages`: the messages of the chat; this can be used to keep a chat memory

The `message` object has the following fields:

`role`: the role of the message, either `system`, `user`, or `assistant`
`content`: the content of the message
`images` (optional): a list of images to include in the message (for multimodal models such as `llava`)
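Putting the fields above together, a chat request body might look like this sketch (the `POST /api/chat` path follows the usual Ollama convention and is an assumption here). Note that chat memory is kept by resending prior messages:

```python
import json

# Assumed endpoint: POST /api/chat. Resend earlier messages to keep memory.
payload = {
    "model": "phi3",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Why is the sky blue?"},
        # {"role": "user", "content": "...", "images": ["<base64 image>"]},
    ],
}
print(json.dumps(payload))
```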
Examples
Chat Request (Streaming)
Request
Send a chat message with a streaming response.
Response
A stream of JSON objects is returned:
Final response:
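As with the generate endpoint, each chunk is a standalone JSON object; here the partial text lives under `message.content`. A minimal sketch over a simulated stream (chunk contents are illustrative):

```python
import json

# Simulated chat stream: each chunk carries a partial assistant message.
raw_stream = (
    '{"message":{"role":"assistant","content":"Hello"},"done":false}\n'
    '{"message":{"role":"assistant","content":" there"},"done":false}\n'
    '{"message":{"role":"assistant","content":""},"done":true}\n'
)

reply = ""
for line in raw_stream.splitlines():
    chunk = json.loads(line)
    reply += chunk["message"]["content"]
    if chunk["done"]:
        break  # the final object carries stats rather than more text

print(reply)  # Hello there
```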
Chat request (No streaming)
Request
Response
List Local Models
List models that are available locally.
Examples
Request
Response
A single JSON object will be returned.
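A sketch of handling that object on the client side; the `GET /api/tags` path and the `models` field follow the usual Ollama convention and are assumptions here, and the body below is simulated:

```python
import json

# Assumed endpoint: GET /api/tags returns the locally available models.
# Simulated response body for illustration:
simulated = '{"models": [{"name": "phi3:latest"}, {"name": "llama3:70b"}]}'
names = [m["name"] for m in json.loads(simulated)["models"]]
print(names)  # ['phi3:latest', 'llama3:70b']
```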
Show Model Information
Show information about a model including details, modelfile, template, parameters, license, and system prompt.
Parameters
`name`: name of the model to show
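A sketch of the request body for this endpoint; the `POST /api/show` path follows the usual Ollama convention and is an assumption here:

```python
import json

# Assumed endpoint: POST /api/show with the model name in the body.
payload = {"name": "phi3"}
print(json.dumps(payload))
```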