NVIDIA
Bases: OpenAILike
NVIDIA's API Catalog Connector.
Source code in llama-index-integrations/llms/llama-index-llms-nvidia/llama_index/llms/nvidia/base.py
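A minimal usage sketch of the connector (assuming the `llama-index-llms-nvidia` package is installed, `NVIDIA_API_KEY` is exported, and the model name below is one available in the API catalog):

```python
from llama_index.llms.nvidia import NVIDIA

# Defaults to "nvidia" mode, which talks to NVIDIA's hosted API catalog.
# The API key is read from the NVIDIA_API_KEY environment variable
# when the api_key parameter is not passed explicitly.
llm = NVIDIA(model="meta/llama3-8b-instruct")  # model name is an example

response = llm.complete("Write a haiku about GPUs.")
print(response.text)
```

Because the class derives from `OpenAILike`, the usual llama-index LLM surface (`complete`, `chat`, streaming variants) is available unchanged.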
mode
```python
mode(mode: Optional[Literal["nvidia", "nim"]] = "nvidia", *, base_url: Optional[str] = None, model: Optional[str] = None, api_key: Optional[str] = None) -> NVIDIA
```
Change the mode.
There are two modes, "nvidia" and "nim". The default "nvidia" mode talks to NIMs hosted in NVIDIA's API catalog. The "nim" mode talks to NVIDIA NIM endpoints that you host yourself, typically on-premises.

In "nvidia" mode, the "api_key" parameter specifies your API key; if it is not given, the NVIDIA_API_KEY environment variable is used.

In "nim" mode, the "base_url" parameter is required and the "model" parameter may also be needed. Set "base_url" to the URL of your NIM endpoint, for instance "https://localhost:9999/v1", and set "model" to the name of a model served inside the NIM.
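As a sketch of switching modes (assuming a NIM is reachable at the example URL and serves the example model name, both of which are placeholders):

```python
from llama_index.llms.nvidia import NVIDIA

# Start in the default "nvidia" mode against the hosted API catalog.
llm = NVIDIA(model="meta/llama3-8b-instruct")  # example model name

# Switch to "nim" mode: base_url is required, and model should name
# a model served by that NIM instance.
local_llm = llm.mode(
    "nim",
    base_url="https://localhost:9999/v1",   # example endpoint
    model="meta/llama3-8b-instruct",        # example model name
)

response = local_llm.complete("Hello from a local NIM.")
print(response.text)
```

Note that `mode()` returns an `NVIDIA` instance, so the call can be chained directly onto the constructor.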
Source code in llama-index-integrations/llms/llama-index-llms-nvidia/llama_index/llms/nvidia/base.py