Condense Question Chat Engine
- class llama_index.chat_engine.condense_question.CondenseQuestionChatEngine(query_engine: BaseQueryEngine, condense_question_prompt: BasePromptTemplate, memory: BaseMemory, service_context: ServiceContext, verbose: bool = False, callback_manager: Optional[CallbackManager] = None)
First condenses the conversation history and the latest user message into a standalone question, then queries the underlying query engine for a response.
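For example, a minimal sketch of standing the engine up over a vector index (the `./data` path and the index construction are illustrative assumptions, not part of this API; any `BaseQueryEngine` works):

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.chat_engine import CondenseQuestionChatEngine

# Load documents and build a query engine (./data is an assumed folder).
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Wrap the query engine in a chat engine that condenses each follow-up
# message plus the chat history into a standalone question.
chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine,
    verbose=True,  # print the condensed question at each turn
)
```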
- async achat(*args: Any, **kwargs: Any) → Any
Async version of the main chat interface.
- async astream_chat(*args: Any, **kwargs: Any) → Any
Async version of the streaming chat interface.
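A sketch of both async variants, reusing the `chat_engine` built above; it assumes the streaming response exposes `async_response_gen()` as `llama_index.chat_engine.types.StreamingAgentChatResponse` does:

```python
import asyncio

async def main() -> None:
    # achat: await the complete response.
    response = await chat_engine.achat("What is the essay about?")
    print(response)

    # astream_chat: consume tokens as they are produced.
    streaming_response = await chat_engine.astream_chat("Tell me more.")
    async for token in streaming_response.async_response_gen():
        print(token, end="")

asyncio.run(main())
```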
- chat(*args: Any, **kwargs: Any) → Any
Main chat interface.
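For example, a multi-turn exchange with the `chat_engine` built above; the follow-up message is rewritten into a standalone question before it reaches the query engine:

```python
response = chat_engine.chat("What did the author do growing up?")
print(response)

# The follow-up is condensed together with the history of the first turn.
response = chat_engine.chat("What did they do after that?")
print(response)
```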
- property chat_history: List[ChatMessage]
Get chat history.
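The history is a plain list of `ChatMessage` objects, so it can be inspected directly, e.g.:

```python
for message in chat_engine.chat_history:
    print(f"{message.role}: {message.content}")
```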
- chat_repl() → None
Enter interactive chat REPL.
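For quick manual testing from a terminal, reusing the engine above:

```python
# Starts an interactive prompt loop on stdin/stdout.
chat_engine.chat_repl()
```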
- classmethod from_defaults(query_engine: BaseQueryEngine, condense_question_prompt: Optional[BasePromptTemplate] = None, chat_history: Optional[List[ChatMessage]] = None, memory: Optional[BaseMemory] = None, memory_cls: Type[BaseMemory] = ChatMemoryBuffer, service_context: Optional[ServiceContext] = None, verbose: bool = False, system_prompt: Optional[str] = None, prefix_messages: Optional[List[ChatMessage]] = None, **kwargs: Any) → CondenseQuestionChatEngine
Initialize a CondenseQuestionChatEngine from default parameters.
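A sketch of overriding the defaults with a custom condense prompt (the default template uses the `{chat_history}` and `{question}` variables) and a pre-seeded history; the prompt wording and the seeded messages are illustrative:

```python
from llama_index.llms import ChatMessage, MessageRole
from llama_index.prompts import PromptTemplate

custom_prompt = PromptTemplate(
    "Given a conversation (between Human and Assistant) and a follow-up "
    "message from Human, rewrite the message as a standalone question that "
    "captures all relevant context from the conversation.\n\n"
    "<Chat History>\n{chat_history}\n\n"
    "<Follow Up Message>\n{question}\n\n"
    "<Standalone question>\n"
)

custom_chat_history = [
    ChatMessage(
        role=MessageRole.USER,
        content="We are discussing Paul Graham's essay on what he worked on.",
    ),
    ChatMessage(role=MessageRole.ASSISTANT, content="Okay, sounds good."),
]

chat_engine = CondenseQuestionChatEngine.from_defaults(
    query_engine=query_engine,  # any BaseQueryEngine, as built earlier
    condense_question_prompt=custom_prompt,
    chat_history=custom_chat_history,
    verbose=True,
)
```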
- reset() → None
Reset conversation state.
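Between unrelated conversations, clear the memory so earlier turns no longer influence question condensation:

```python
chat_engine.reset()  # empties the chat history held in memory
```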
- stream_chat(*args: Any, **kwargs: Any) → Any
Stream chat interface.
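A sketch of synchronous streaming; the returned streaming response exposes a `response_gen` token generator (per `llama_index.chat_engine.types.StreamingAgentChatResponse`):

```python
streaming_response = chat_engine.stream_chat("Summarize the discussion so far.")
for token in streaming_response.response_gen:
    print(token, end="")
print()
```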