Gradient Model Adapter
- pydantic model llama_index.llms.gradient.GradientModelAdapterLLM
JSON schema:

```json
{
  "title": "GradientModelAdapterLLM",
  "description": "Simple abstract base class for custom LLMs.\n\nSubclasses must implement the `__init__`, `_complete`,\n`_stream_complete`, and `metadata` methods.",
  "type": "object",
  "properties": {
    "callback_manager": {"title": "Callback Manager"},
    "system_prompt": {"title": "System Prompt", "description": "System prompt for LLM calls.", "type": "string"},
    "messages_to_prompt": {"title": "Messages To Prompt"},
    "completion_to_prompt": {"title": "Completion To Prompt"},
    "output_parser": {"title": "Output Parser"},
    "pydantic_program_mode": {"default": "default", "allOf": [{"$ref": "#/definitions/PydanticProgramMode"}]},
    "query_wrapper_prompt": {"title": "Query Wrapper Prompt"},
    "max_tokens": {"title": "Max Tokens", "description": "The number of tokens to generate.", "default": 256, "exclusiveMinimum": 0, "exclusiveMaximum": 512, "type": "integer"},
    "access_token": {"title": "Access Token", "description": "The Gradient access token to use.", "type": "string"},
    "host": {"title": "Host", "description": "The url of the Gradient service to access.", "type": "string"},
    "workspace_id": {"title": "Workspace Id", "description": "The Gradient workspace id to use.", "type": "string"},
    "is_chat_model": {"title": "Is Chat Model", "description": "Whether the model is a chat model.", "default": false, "type": "boolean"},
    "model_adapter_id": {"title": "Model Adapter Id", "description": "The id of the model adapter to use.", "type": "string"},
    "class_name": {"title": "Class Name", "type": "string", "default": "custom_llm"}
  },
  "required": ["model_adapter_id"],
  "definitions": {
    "PydanticProgramMode": {
      "title": "PydanticProgramMode",
      "description": "Pydantic program mode.",
      "enum": ["default", "openai", "llm", "guidance", "lm-format-enforcer"],
      "type": "string"
    }
  }
}
```
- Config
arbitrary_types_allowed: bool = True
- Fields
callback_manager (llama_index.callbacks.base.CallbackManager)
completion_to_prompt (llama_index.llms.llm.CompletionToPromptType)
messages_to_prompt (llama_index.llms.llm.MessagesToPromptType)
output_parser (Optional[llama_index.types.BaseOutputParser])
pydantic_program_mode (llama_index.types.PydanticProgramMode)
query_wrapper_prompt (Optional[llama_index.prompts.base.BasePromptTemplate])
system_prompt (Optional[str])
- Validators
_validate_callback_manager » callback_manager
set_completion_to_prompt » completion_to_prompt
set_messages_to_prompt » messages_to_prompt
- field access_token: Optional[str] = None
The Gradient access token to use.
- field host: Optional[str] = None
The URL of the Gradient service to access.
- field is_chat_model: bool = False
Whether the model is a chat model.
- field max_tokens: Optional[int] = 256
The maximum number of tokens to generate.
- Constraints
exclusiveMinimum = 0
exclusiveMaximum = 512
- field model_adapter_id: str [Required]
The id of the model adapter to use.
- field workspace_id: Optional[str] = None
The Gradient workspace id to use.
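Only model_adapter_id is required; access_token, host, and workspace_id may be left unset if configured elsewhere. A minimal construction sketch (every id and credential value below is a placeholder, not a real value):

```python
from llama_index.llms.gradient import GradientModelAdapterLLM

# All values below are placeholders; substitute your own Gradient credentials.
llm = GradientModelAdapterLLM(
    model_adapter_id="<model-adapter-id>",   # the only required field
    access_token="<gradient-access-token>",
    workspace_id="<gradient-workspace-id>",
    max_tokens=400,                          # must lie strictly between 0 and 512
)
```

Call close() on the instance when you are finished with it.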
- async acomplete(*args: Any, **kwargs: Any) → Any
Async completion endpoint for LLM.
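A short async sketch, reusing the llm instance from the construction example above. Although the signature is annotated with *args/**kwargs and a return type of Any, in practice the endpoint is awaited with a prompt string and returns a CompletionResponse, like its synchronous counterpart:

```python
import asyncio

async def main() -> None:
    # Awaits the completion; the result carries the generated text.
    response = await llm.acomplete("Summarize what a model adapter is.")
    print(response.text)

asyncio.run(main())
```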
- close() → None
- complete(*args: Any, **kwargs: Any) → Any
Completion endpoint for LLM.
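The *args/**kwargs signature comes from a wrapper; in practice the endpoint takes a prompt string, as elsewhere in llama_index. A sketch using the instance from above:

```python
response = llm.complete("Explain LoRA fine-tuning in one sentence.")
print(response.text)  # the generated completion text
```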
- stream_complete(prompt: str, formatted: bool = False, **kwargs: Any) → Generator[CompletionResponse, None, None]
Streaming completion endpoint for LLM.
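Each yielded CompletionResponse carries the accumulated text plus the newest chunk in its delta attribute. A streaming sketch with the same llm instance:

```python
for chunk in llm.stream_complete("Write a two-line poem."):
    # chunk.delta holds only the newly streamed text; chunk.text is cumulative.
    print(chunk.delta, end="", flush=True)
print()
```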
- property metadata: LLMMetadata
LLM metadata.
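A sketch reading the metadata. is_chat_model mirrors the field documented above; num_output is a standard LLMMetadata attribute, assumed here to reflect max_tokens for this adapter:

```python
meta = llm.metadata
print(meta.is_chat_model)  # False unless is_chat_model was set at construction
print(meta.num_output)     # assumed to mirror max_tokens for this adapter
```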