LLMs#
A large language model (LLM) is a reasoning engine that can complete text, chat with users, and follow instructions.
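The interface supports two calling conventions: text completion (a prompt in, text out) and chat (a sequence of role-tagged messages in, a reply out). A minimal sketch of the two conventions, using a hypothetical `EchoLLM` stand-in rather than any llama_index class:

```python
# Illustrative sketch only: EchoLLM is a hypothetical stand-in, not a
# llama_index class. It shows the two calling conventions an LLM exposes.
class EchoLLM:
    def complete(self, prompt: str) -> str:
        # Completion convention: a raw prompt in, generated text out.
        return f"completed: {prompt}"

    def chat(self, messages: list) -> str:
        # Chat convention: a sequence of role-tagged messages in, reply out.
        last = messages[-1]["content"]
        return f"reply to: {last}"

llm = EchoLLM()
completion = llm.complete("Hello")                         # completion style
reply = llm.chat([{"role": "user", "content": "Hi"}])      # chat style
```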
LLM Implementations#
LLM Interface#
Schemas#
- pydantic model llama_index.llms.base.ChatMessage#
Chat message.
JSON schema:

```json
{
  "title": "ChatMessage",
  "description": "Chat message.",
  "type": "object",
  "properties": {
    "role": {
      "default": "user",
      "allOf": [{ "$ref": "#/definitions/MessageRole" }]
    },
    "content": { "title": "Content", "default": "" },
    "additional_kwargs": { "title": "Additional Kwargs", "type": "object" }
  },
  "definitions": {
    "MessageRole": {
      "title": "MessageRole",
      "description": "Message role.",
      "enum": ["system", "user", "assistant", "function", "tool", "chatbot"],
      "type": "string"
    }
  }
}
```
- Fields
additional_kwargs (dict)
content (Optional[Any])
role (llama_index.core.llms.types.MessageRole)
- field additional_kwargs: dict [Optional]#
- field content: Optional[Any] = ''#
- field role: MessageRole = MessageRole.USER#
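The schema above maps to a simple structure. A minimal stdlib sketch of it (a stand-in mirroring the defaults shown in the schema, not the library class itself):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Optional

class MessageRole(str, Enum):
    # The roles from the MessageRole enum in the schema above.
    SYSTEM = "system"
    USER = "user"
    ASSISTANT = "assistant"
    FUNCTION = "function"
    TOOL = "tool"
    CHATBOT = "chatbot"

@dataclass
class ChatMessage:
    # Defaults mirror the JSON schema: role="user", content="".
    role: MessageRole = MessageRole.USER
    content: Optional[Any] = ""
    additional_kwargs: dict = field(default_factory=dict)

msg = ChatMessage(role=MessageRole.SYSTEM, content="You are a helpful assistant.")
```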
- pydantic model llama_index.llms.base.ChatResponse#
Chat response.
JSON schema:

```json
{
  "title": "ChatResponse",
  "description": "Chat response.",
  "type": "object",
  "properties": {
    "message": { "$ref": "#/definitions/ChatMessage" },
    "raw": { "title": "Raw", "type": "object" },
    "delta": { "title": "Delta", "type": "string" },
    "additional_kwargs": { "title": "Additional Kwargs", "type": "object" }
  },
  "required": ["message"],
  "definitions": {
    "MessageRole": {
      "title": "MessageRole",
      "description": "Message role.",
      "enum": ["system", "user", "assistant", "function", "tool", "chatbot"],
      "type": "string"
    },
    "ChatMessage": {
      "title": "ChatMessage",
      "description": "Chat message.",
      "type": "object",
      "properties": {
        "role": {
          "default": "user",
          "allOf": [{ "$ref": "#/definitions/MessageRole" }]
        },
        "content": { "title": "Content", "default": "" },
        "additional_kwargs": { "title": "Additional Kwargs", "type": "object" }
      }
    }
  }
}
```
- Fields
additional_kwargs (dict)
delta (Optional[str])
message (llama_index.core.llms.types.ChatMessage)
raw (Optional[dict])
- field additional_kwargs: dict [Optional]#
- field delta: Optional[str] = None#
- field message: ChatMessage [Required]#
- field raw: Optional[dict] = None#
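Only `message` is required; `raw` can carry the provider's unparsed payload and `delta` holds the newest chunk when streaming. A minimal stdlib sketch (stand-ins mirroring the schema above, not the library classes):

```python
from dataclasses import dataclass, field
from typing import Optional

# Stand-ins mirroring the schema above (not the library classes).
@dataclass
class ChatMessage:
    role: str = "user"
    content: str = ""

@dataclass
class ChatResponse:
    message: ChatMessage                    # required field
    raw: Optional[dict] = None              # provider's raw payload, if kept
    delta: Optional[str] = None             # newest chunk when streaming
    additional_kwargs: dict = field(default_factory=dict)

# A streamed response carries the newest chunk in `delta` while `message`
# holds the text accumulated so far.
resp = ChatResponse(
    message=ChatMessage(role="assistant", content="Hello, wo"),
    delta="wo",
)
```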
- pydantic model llama_index.llms.base.CompletionResponse#
Completion response.
- Fields:
  - text: Text content of the response if not streaming, or, if streaming, the current extent of streamed text.
  - additional_kwargs: Additional information on the response (e.g. token counts, function calling information).
  - raw: Optional raw JSON that was parsed to populate text, if relevant.
  - delta: New text that just streamed in (only relevant when streaming).
JSON schema:

```json
{
  "title": "CompletionResponse",
  "description": "Completion response.\n\nFields:\n    text: Text content of the response if not streaming, or if streaming,\n        the current extent of streamed text.\n    additional_kwargs: Additional information on the response (e.g. token\n        counts, function calling information).\n    raw: Optional raw JSON that was parsed to populate text, if relevant.\n    delta: New text that just streamed in (only relevant when streaming).",
  "type": "object",
  "properties": {
    "text": { "title": "Text", "type": "string" },
    "additional_kwargs": { "title": "Additional Kwargs", "type": "object" },
    "raw": { "title": "Raw", "type": "object" },
    "delta": { "title": "Delta", "type": "string" }
  },
  "required": ["text"]
}
```
- Fields
additional_kwargs (dict)
delta (Optional[str])
raw (Optional[dict])
text (str)
- field additional_kwargs: dict [Optional]#
- field delta: Optional[str] = None#
- field raw: Optional[dict] = None#
- field text: str [Required]#
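The documented streaming semantics, `delta` holding the newest chunk while `text` holds the current extent of streamed text, can be sketched with a stdlib stand-in (not the library class itself):

```python
from dataclasses import dataclass, field
from typing import Optional

# Minimal stand-in for CompletionResponse (not the library class itself).
@dataclass
class CompletionResponse:
    text: str                       # full text so far (required)
    additional_kwargs: dict = field(default_factory=dict)
    raw: Optional[dict] = None
    delta: Optional[str] = None     # newest chunk when streaming

def stream(chunks):
    # Simulates a streaming generator: each response's `delta` is the new
    # chunk, and `text` is everything streamed so far.
    so_far = ""
    for chunk in chunks:
        so_far += chunk
        yield CompletionResponse(text=so_far, delta=chunk)

responses = list(stream(["Hel", "lo ", "world"]))
final = responses[-1]  # final.text == "Hello world", final.delta == "world"
```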
- pydantic model llama_index.llms.base.LLMMetadata#
JSON schema:

```json
{
  "title": "LLMMetadata",
  "type": "object",
  "properties": {
    "context_window": {
      "title": "Context Window",
      "description": "Total number of tokens the model can take as input and output for one response.",
      "default": 3900,
      "type": "integer"
    },
    "num_output": {
      "title": "Num Output",
      "description": "Number of tokens the model can output when generating a response.",
      "default": 256,
      "type": "integer"
    },
    "is_chat_model": {
      "title": "Is Chat Model",
      "description": "Set True if the model exposes a chat interface (i.e. can be passed a sequence of messages, rather than text), like OpenAI's /v1/chat/completions endpoint.",
      "default": false,
      "type": "boolean"
    },
    "is_function_calling_model": {
      "title": "Is Function Calling Model",
      "description": "Set True if the model supports function calling messages, similar to OpenAI's function calling API. For example, converting 'Email Anya to see if she wants to get coffee next Friday' to a function call like `send_email(to: string, body: string)`.",
      "default": false,
      "type": "boolean"
    },
    "model_name": {
      "title": "Model Name",
      "description": "The model's name used for logging, testing, and sanity checking. For some models this can be automatically discerned. For other models, like locally loaded models, this must be manually specified.",
      "default": "unknown",
      "type": "string"
    },
    "system_role": {
      "description": "The role this specific LLM provider expects for the system prompt. E.g. 'SYSTEM' for OpenAI, 'CHATBOT' for Cohere.",
      "default": "system",
      "allOf": [{ "$ref": "#/definitions/MessageRole" }]
    }
  },
  "definitions": {
    "MessageRole": {
      "title": "MessageRole",
      "description": "Message role.",
      "enum": ["system", "user", "assistant", "function", "tool", "chatbot"],
      "type": "string"
    }
  }
}
```
- Fields
context_window (int)
is_chat_model (bool)
is_function_calling_model (bool)
model_name (str)
num_output (int)
system_role (llama_index.core.llms.types.MessageRole)
- field context_window: int = 3900#
Total number of tokens the model can take as input and output for one response.
- field is_chat_model: bool = False#
Set True if the model exposes a chat interface (i.e. can be passed a sequence of messages, rather than text), like OpenAI’s /v1/chat/completions endpoint.
- field is_function_calling_model: bool = False#
Set True if the model supports function calling messages, similar to OpenAI’s function calling API. For example, converting ‘Email Anya to see if she wants to get coffee next Friday’ to a function call like send_email(to: string, body: string).
- field model_name: str = 'unknown'#
The model’s name used for logging, testing, and sanity checking. For some models this can be automatically discerned. For other models, like locally loaded models, this must be manually specified.
- field num_output: int = 256#
Number of tokens the model can output when generating a response.
- field system_role: MessageRole = MessageRole.SYSTEM#
The role this specific LLM provider expects for the system prompt. E.g. ‘SYSTEM’ for OpenAI, ‘CHATBOT’ for Cohere.
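Because context_window counts input and output tokens together, the prompt budget is context_window minus num_output. A sketch of that arithmetic, using a stdlib stand-in with the defaults from the schema above (not the library class itself):

```python
from dataclasses import dataclass

# Stand-in with the defaults from the schema above (not the library class).
@dataclass
class LLMMetadata:
    context_window: int = 3900
    num_output: int = 256
    is_chat_model: bool = False
    is_function_calling_model: bool = False
    model_name: str = "unknown"
    system_role: str = "system"

def max_input_tokens(meta: LLMMetadata) -> int:
    # context_window covers input *and* output, so the prompt must leave
    # room for the num_output tokens the model may generate.
    return meta.context_window - meta.num_output

budget = max_input_tokens(LLMMetadata())  # 3900 - 256 = 3644
```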