ASI
ASI LLM package.
ASI
Bases: OpenAILike
ASI LLM - Integration for ASI models.
Currently supported models:
- asi1-mini
Examples:
pip install llama-index-llms-asi
from llama_index.llms.asi import ASI
# Set up the ASI class with the required model and API key
llm = ASI(model="asi1-mini", api_key="your_api_key")
# Call the complete method with a query
response = llm.complete("Explain the importance of AI")
print(response)
Source code in llama-index-integrations/llms/llama-index-llms-asi/llama_index/llms/asi/base.py
class_name (classmethod)
class_name() -> str
Get class name.
Source code in llama-index-integrations/llms/llama-index-llms-asi/llama_index/llms/asi/base.py
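A minimal usage sketch; the returned identifier (assumed here to be "ASI") is the name used when serializing and deserializing the LLM:

from llama_index.llms.asi import ASI

# class_name is a classmethod, so no instance is required.
print(ASI.class_name())  # assumed output: "ASI"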
stream_chat
stream_chat(messages: Sequence[ChatMessage], **kwargs: Any) -> ChatResponseGen
Override stream_chat to handle ASI's unique streaming format.
ASI's streaming format includes many empty content chunks during the "thinking" phase before delivering the final response.
This implementation filters out empty chunks and only yields chunks with actual content.
Source code in llama-index-integrations/llms/llama-index-llms-asi/llama_index/llms/asi/base.py
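A minimal streaming sketch, assuming a valid API key (the prompt and key are placeholders):

from llama_index.core.llms import ChatMessage
from llama_index.llms.asi import ASI

llm = ASI(model="asi1-mini", api_key="your_api_key")
messages = [ChatMessage(role="user", content="Explain the importance of AI")]

# Empty "thinking" chunks are filtered out, so every yielded
# chunk carries actual content in its delta.
for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="", flush=True)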
astream_chat (async)
astream_chat(messages: Sequence[ChatMessage], **kwargs: Any) -> ChatResponseAsyncGen
Override astream_chat to handle ASI's unique streaming format.
ASI's streaming format includes many empty content chunks during the "thinking" phase before delivering the final response.
This implementation filters out empty chunks and only yields chunks with actual content.
Source code in llama-index-integrations/llms/llama-index-llms-asi/llama_index/llms/asi/base.py
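The async counterpart, as a sketch under the same assumptions; awaiting astream_chat yields an async generator:

import asyncio

from llama_index.core.llms import ChatMessage
from llama_index.llms.asi import ASI

async def main() -> None:
    llm = ASI(model="asi1-mini", api_key="your_api_key")
    messages = [ChatMessage(role="user", content="Explain the importance of AI")]
    # As with stream_chat, only chunks with real content are yielded.
    async for chunk in await llm.astream_chat(messages):
        print(chunk.delta, end="", flush=True)

asyncio.run(main())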