OpenAI

pydantic model llama_index.llms.openai.OpenAI

JSON schema
{
   "title": "OpenAI",
   "description": "LLM interface.",
   "type": "object",
   "properties": {
      "callback_manager": {
         "title": "Callback Manager"
      },
      "model": {
         "title": "Model",
         "description": "The OpenAI model to use.",
         "default": "gpt-3.5-turbo",
         "type": "string"
      },
      "temperature": {
         "title": "Temperature",
         "description": "The temperature to use during generation.",
         "default": 0.1,
         "minimum": 0.0,
         "maximum": 1.0,
         "type": "number"
      },
      "max_tokens": {
         "title": "Max Tokens",
         "description": "The maximum number of tokens to generate.",
         "exclusiveMinimum": 0,
         "type": "integer"
      },
      "additional_kwargs": {
         "title": "Additional Kwargs",
         "description": "Additional kwargs for the OpenAI API.",
         "type": "object"
      },
      "max_retries": {
         "title": "Max Retries",
         "description": "The maximum number of API retries.",
         "default": 3,
         "minimum": 0,
         "type": "integer"
      },
      "timeout": {
         "title": "Timeout",
         "description": "The timeout, in seconds, for API requests.",
         "default": 60.0,
         "minimum": 0,
         "type": "number"
      },
      "default_headers": {
         "title": "Default Headers",
         "description": "The default headers for API requests.",
         "type": "object",
         "additionalProperties": {
            "type": "string"
         }
      },
      "reuse_client": {
         "title": "Reuse Client",
         "description": "Reuse the OpenAI client between requests. When making large volumes of async API calls, setting this to false can improve stability.",
         "default": true,
         "type": "boolean"
      },
      "api_key": {
         "title": "Api Key",
         "description": "The OpenAI API key.",
         "type": "string"
      },
      "api_base": {
         "title": "Api Base",
         "description": "The base URL for the OpenAI API.",
         "type": "string"
      },
      "api_version": {
         "title": "Api Version",
         "description": "The API version for the OpenAI API.",
         "type": "string"
      },
      "class_name": {
         "title": "Class Name",
         "type": "string",
         "default": "openai_llm"
      }
   },
   "required": [
      "api_base",
      "api_version"
   ]
}

Config
  • arbitrary_types_allowed: bool = True

Validators
  • _validate_callback_manager » callback_manager

Fields

field additional_kwargs: Dict[str, Any] [Optional]

Additional kwargs for the OpenAI API.

field api_base: str [Required]

The base URL for the OpenAI API.

field api_key: str = None

The OpenAI API key.

field api_version: str [Required]

The API version for the OpenAI API.

field default_headers: Dict[str, str] = None

The default headers for API requests.

field max_retries: int = 3

The maximum number of API retries.

field max_tokens: Optional[int] = None

The maximum number of tokens to generate.

Constraints
  • exclusiveMinimum = 0

field model: str = 'gpt-3.5-turbo'

The OpenAI model to use.

field reuse_client: bool = True

Reuse the OpenAI client between requests. When making large volumes of async API calls, setting this to false can improve stability.

field temperature: float = 0.1

The temperature to use during generation.

field timeout: float = 60.0

The timeout, in seconds, for API requests.
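
A minimal construction sketch (it assumes the package is installed and OPENAI_API_KEY is set in the environment; api_base and api_version, though marked required in the schema, are normally resolved from library defaults):

from llama_index.llms.openai import OpenAI

# Defaults mirror the schema above; pass only the overrides you need.
llm = OpenAI(
    model="gpt-3.5-turbo",
    temperature=0.1,       # constrained to the 0.0-1.0 range
    max_tokens=256,        # must be greater than 0 when set
    timeout=60.0,
    # reuse_client=False,  # consider for large volumes of async calls
)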

async achat(messages: Sequence[ChatMessage], **kwargs: Any) → Any

Async chat endpoint for LLM.

async acomplete(*args: Any, **kwargs: Any) → Any

Async completion endpoint for LLM.

async astream_chat(messages: Sequence[ChatMessage], **kwargs: Any) → Any

Async streaming chat endpoint for LLM.

async astream_complete(*args: Any, **kwargs: Any) → Any

Async streaming completion endpoint for LLM.
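
A hedged sketch of the async variants (the ChatMessage import path is an assumption based on common llama_index layouts; note that astream_chat is awaited first, then iterated with async for):

import asyncio

from llama_index.llms import ChatMessage
from llama_index.llms.openai import OpenAI

async def main() -> None:
    llm = OpenAI(model="gpt-3.5-turbo")

    # Async completion; the response object exposes the generated text.
    completion = await llm.acomplete("1 + 1 =")
    print(completion.text)

    # Async streaming chat: the coroutine resolves to an async generator.
    gen = await llm.astream_chat([ChatMessage(role="user", content="Count to three.")])
    async for chunk in gen:
        print(chunk.delta, end="")

asyncio.run(main())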

chat(messages: Sequence[ChatMessage], **kwargs: Any) → Any

Chat endpoint for LLM.
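
A short usage sketch (llm is the OpenAI instance from the construction example; the ChatMessage import path is an assumption):

from llama_index.llms import ChatMessage

messages = [
    ChatMessage(role="system", content="You are a terse assistant."),
    ChatMessage(role="user", content="Name one prime number."),
]
response = llm.chat(messages)
print(response.message.content)  # the assistant's reply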

classmethod class_name() → str

Get the class name, used as a unique ID in serialization.

This provides a key that makes serialization robust against actual class name changes.
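
A sketch of where this key surfaces; to_dict and from_dict are assumed here to be the shared base-component serialization helpers:

from llama_index.llms.openai import OpenAI

llm = OpenAI()
data = llm.to_dict()  # assumed serialization helper
assert data["class_name"] == OpenAI.class_name()  # "openai_llm"
restored = OpenAI.from_dict(data)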

complete(*args: Any, **kwargs: Any) → Any

Completion endpoint for LLM.
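
A one-line usage sketch (llm as above):

response = llm.complete("Paul Graham is ")
print(response.text)  # the generated continuation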

stream_chat(messages: Sequence[ChatMessage], **kwargs: Any) → Any

Streaming chat endpoint for LLM.
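
A streaming sketch; each yielded chunk is expected to carry the incremental text in its delta attribute:

for chunk in llm.stream_chat(messages):  # messages as in the chat example
    print(chunk.delta, end="", flush=True)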

stream_complete(*args: Any, **kwargs: Any) → Any

Streaming completion endpoint for LLM.
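
The completion-style analogue of the streaming sketch above:

for chunk in llm.stream_complete("The capital of France is "):
    print(chunk.delta, end="", flush=True)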

property metadata: LLMMetadata

LLM metadata.
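
A brief sketch of inspecting the metadata; the attribute names follow the LLMMetadata model:

meta = llm.metadata
print(meta.model_name, meta.context_window, meta.num_output, meta.is_chat_model)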