PaLM
- pydantic model llama_index.llms.palm.PaLM
PaLM LLM.
JSON schema:
{
  "title": "PaLM",
  "description": "PaLM LLM.",
  "type": "object",
  "properties": {
    "callback_manager": { "title": "Callback Manager" },
    "system_prompt": {
      "title": "System Prompt",
      "description": "System prompt for LLM calls.",
      "type": "string"
    },
    "messages_to_prompt": { "title": "Messages To Prompt" },
    "completion_to_prompt": { "title": "Completion To Prompt" },
    "output_parser": { "title": "Output Parser" },
    "pydantic_program_mode": {
      "default": "default",
      "allOf": [ { "$ref": "#/definitions/PydanticProgramMode" } ]
    },
    "query_wrapper_prompt": { "title": "Query Wrapper Prompt" },
    "model_name": {
      "title": "Model Name",
      "description": "The PaLM model to use.",
      "default": "models/text-bison-001",
      "type": "string"
    },
    "num_output": {
      "title": "Num Output",
      "description": "The number of tokens to generate.",
      "default": 256,
      "exclusiveMinimum": 0,
      "type": "integer"
    },
    "generate_kwargs": {
      "title": "Generate Kwargs",
      "description": "Kwargs for generation.",
      "type": "object"
    },
    "class_name": {
      "title": "Class Name",
      "type": "string",
      "default": "PaLM_llm"
    }
  },
  "definitions": {
    "PydanticProgramMode": {
      "title": "PydanticProgramMode",
      "description": "Pydantic program mode.",
      "enum": [ "default", "openai", "llm", "guidance", "lm-format-enforcer" ],
      "type": "string"
    }
  }
}
- Config
arbitrary_types_allowed: bool = True
- Fields
- Validators
_validate_callback_manager » callback_manager
set_completion_to_prompt » completion_to_prompt
set_messages_to_prompt » messages_to_prompt
- field generate_kwargs: dict [Optional]
Kwargs for generation.
- field model_name: str = 'models/text-bison-001'
The PaLM model to use.
- field num_output: int = 256
The number of tokens to generate.
- Constraints
exclusiveMinimum = 0
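The fields above can be set at construction time. A minimal usage sketch, assuming the PaLM integration and `google-generativeai` are installed and that `"YOUR_API_KEY"` is a placeholder for a real PaLM API key (the `api_key` keyword is an assumption based on common llama_index integration conventions; check your installed version's constructor):

```python
from llama_index.llms.palm import PaLM

# "YOUR_API_KEY" is a placeholder; supply a real PaLM API key.
llm = PaLM(
    api_key="YOUR_API_KEY",                 # assumed keyword; forwarded to google.generativeai
    model_name="models/text-bison-001",     # default model
    num_output=256,                         # must be > 0 (exclusiveMinimum constraint)
    generate_kwargs={"temperature": 0.2},   # extra kwargs passed through to generation
)

response = llm.complete("Write a haiku about the sea.")
print(response.text)
```

Note that `num_output=0` would fail pydantic validation, since the field declares `exclusiveMinimum = 0`.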
- classmethod class_name() → str
Get the class name, used as a unique ID in serialization.
This provides a key that makes serialization robust against actual class name changes.
- complete(*args: Any, **kwargs: Any) → Any
Completion endpoint for LLM.
- stream_complete(*args: Any, **kwargs: Any) → Any
Streaming completion endpoint for LLM.
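A hedged sketch of the streaming endpoint, under the same setup assumptions as the completion example (installed integration, valid API key, assumed `api_key` keyword): each yielded chunk carries the incremental text in its `delta` attribute.

```python
from llama_index.llms.palm import PaLM

# "YOUR_API_KEY" is a placeholder; supply a real PaLM API key.
llm = PaLM(api_key="YOUR_API_KEY")

# stream_complete yields partial responses as they arrive;
# chunk.delta holds only the newly generated text.
for chunk in llm.stream_complete("List three uses of text embeddings."):
    print(chunk.delta, end="", flush=True)
print()
```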
- property metadata: LLMMetadata
Get LLM metadata.