# Response Synthesis Modules
Detailed inputs/outputs for each response synthesizer are found below.
## API Example
The following shows the setup for utilizing all kwargs:

- `response_mode` specifies which response synthesizer to use
- `service_context` defines the LLM and related settings for synthesis
- `text_qa_template` and `refine_template` are the prompts used at various stages
- `use_async` is used only for the `tree_summarize` response mode right now, to asynchronously build the summary tree
- `streaming` configures whether to return a streaming response object or not (see the sketch after this list)
- `structured_answer_filtering` enables the active filtering of text chunks that are not relevant to a given question
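
As a quick illustration of the `streaming` flag, here is a minimal sketch that builds a synthesizer with `streaming=True` and iterates over tokens as they are generated. The `response_mode` chosen and the use of a `response_gen` token generator on the returned object are assumptions based on common usage, not part of the example below.

```python
from llama_index.core import get_response_synthesizer
from llama_index.core.data_structs import Node
from llama_index.core.schema import NodeWithScore

# With streaming=True, synthesize() returns a streaming response
# object instead of a fully materialized one.
streaming_synthesizer = get_response_synthesizer(
    response_mode="compact",  # assumed mode; any streaming-capable mode works
    streaming=True,
)

streaming_response = streaming_synthesizer.synthesize(
    "query string",
    nodes=[NodeWithScore(node=Node(text="text"), score=1.0)],
)

# Print tokens as they arrive (assumes the streaming object exposes
# a `response_gen` generator).
for token in streaming_response.response_gen:
    print(token, end="", flush=True)
```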
In the `synthesize`/`asynthesize` functions, you can optionally provide additional source nodes, which will be added to the `response.source_nodes` list.
```python
from llama_index.core.data_structs import Node
from llama_index.core.schema import NodeWithScore
from llama_index.core import get_response_synthesizer

response_synthesizer = get_response_synthesizer(
    response_mode="refine",
    service_context=service_context,
    text_qa_template=text_qa_template,
    refine_template=refine_template,
    use_async=False,
    streaming=False,
)

# synchronous
response = response_synthesizer.synthesize(
    "query string",
    nodes=[NodeWithScore(node=Node(text="text"), score=1.0), ...],
    additional_source_nodes=[
        NodeWithScore(node=Node(text="text"), score=1.0),
        ...,
    ],
)

# asynchronous
response = await response_synthesizer.asynthesize(
    "query string",
    nodes=[NodeWithScore(node=Node(text="text"), score=1.0), ...],
    additional_source_nodes=[
        NodeWithScore(node=Node(text="text"), score=1.0),
        ...,
    ],
)
```
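
To confirm that the extra nodes were attached, you can iterate over `response.source_nodes` afterwards. This is a sketch assuming the standard `NodeWithScore` accessors (`score` and `node.get_content()`):

```python
# Inspect the combined source list; the additional nodes appear
# alongside the nodes that were actually synthesized over.
for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.node.get_content())
```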
You can also directly return a string, using the lower-level `get_response` and `aget_response` functions:
```python
response_str = response_synthesizer.get_response(
    "query string", text_chunks=["text1", "text2", ...]
)
```
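
For completeness, a hedged sketch of the asynchronous variant, assuming `aget_response` mirrors the signature of `get_response` above:

```python
# Assumed async counterpart with the same signature as get_response.
response_str = await response_synthesizer.aget_response(
    "query string", text_chunks=["text1", "text2", ...]
)
```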