Condense Plus Context Chat Engine

class llama_index.chat_engine.condense_plus_context.CondensePlusContextChatEngine(retriever: BaseRetriever, llm: LLM, llm_predictor: LLMPredictor, memory: BaseMemory, context_prompt: Optional[str] = None, condense_prompt: Optional[str] = None, system_prompt: Optional[str] = None, skip_condense: bool = False, node_postprocessors: Optional[List[BaseNodePostprocessor]] = None, callback_manager: Optional[CallbackManager] = None, verbose: bool = False)

Condensed Conversation & Context Chat Engine.

First, condense the conversation history and the latest user message into a standalone question. Then, build a context for the standalone question using the retriever. Finally, pass the context along with the prompt and user message to the LLM to generate a response.
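For illustration, a minimal sketch of building and querying the engine, assuming documents live in a hypothetical ./data directory:

    from llama_index import SimpleDirectoryReader, VectorStoreIndex
    from llama_index.chat_engine import CondensePlusContextChatEngine

    # Build an index over local documents ("./data" is a hypothetical path)
    documents = SimpleDirectoryReader("./data").load_data()
    index = VectorStoreIndex.from_documents(documents)

    # Construct the engine from the index's retriever
    chat_engine = CondensePlusContextChatEngine.from_defaults(
        retriever=index.as_retriever(),
        verbose=True,  # log the condensed question and retrieved context
    )

    response = chat_engine.chat("What did the author work on?")
    print(response)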

async achat(*args: Any, **kwargs: Any) → Any

Async version of main chat interface.

async astream_chat(*args: Any, **kwargs: Any) → Any

Async version of main streaming chat interface.
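A minimal sketch of the async variants, assuming the chat_engine built above; async_response_gen() is the token generator on the streaming response object:

    import asyncio

    async def main() -> None:
        # achat: awaits a single complete response
        response = await chat_engine.achat("Summarize the main points.")
        print(response)

        # astream_chat: the awaited call returns a streaming response;
        # iterate its async generator to consume tokens as they arrive
        streaming_response = await chat_engine.astream_chat("Tell me more.")
        async for token in streaming_response.async_response_gen():
            print(token, end="")

    asyncio.run(main())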

chat(*args: Any, **kwargs: Any) → Any

Main chat interface.
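Because the engine condenses the history into a standalone question, follow-up turns can use pronouns; a sketch continuing the conversation above:

    # "they" is resolved against the chat history by the condense step
    response = chat_engine.chat("What did they do after that?")
    print(response)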

property chat_history: List[ChatMessage]

Get chat history.
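The history is a list of ChatMessage objects; for example, to print each turn:

    for message in chat_engine.chat_history:
        print(f"{message.role}: {message.content}")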

chat_repl() → None

Enter interactive chat REPL.

classmethod from_defaults(retriever: BaseRetriever, service_context: Optional[ServiceContext] = None, chat_history: Optional[List[ChatMessage]] = None, memory: Optional[BaseMemory] = None, system_prompt: Optional[str] = None, context_prompt: Optional[str] = None, condense_prompt: Optional[str] = None, skip_condense: bool = False, node_postprocessors: Optional[List[BaseNodePostprocessor]] = None, verbose: bool = False, **kwargs: Any) → CondensePlusContextChatEngine

Initialize a CondensePlusContextChatEngine from default parameters.
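A sketch of from_defaults with an explicit memory buffer and system prompt; the prompt text is illustrative, and index is assumed from the first example:

    from llama_index.memory import ChatMemoryBuffer

    memory = ChatMemoryBuffer.from_defaults(token_limit=3000)

    chat_engine = CondensePlusContextChatEngine.from_defaults(
        retriever=index.as_retriever(),
        memory=memory,
        system_prompt=(
            "You are a helpful assistant that answers strictly from the "
            "retrieved context."
        ),
        skip_condense=False,  # set True to retrieve on the raw message instead
    )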

reset() → None

Reset conversation state.
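reset() clears the underlying memory, so the next turn starts a fresh conversation:

    chat_engine.chat("First question about the documents.")
    chat_engine.reset()
    # After reset, no prior turns remain
    print(len(chat_engine.chat_history))  # expected: 0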

stream_chat(*args: Any, **kwargs: Any) → Any

Stream chat interface.
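A sketch of synchronous streaming, reusing the same engine; response_gen yields tokens as they are produced:

    streaming_response = chat_engine.stream_chat("What are the key takeaways?")
    for token in streaming_response.response_gen:
        print(token, end="")
    print()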