Chat Prompts Customization¶
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]:
%pip install llama-index-llms-openai
In [ ]:
!pip install llama-index
Prompt Setup¶
Below, we take the default prompts and customize them to always answer, even if the context is not helpful.
We show two ways of setting up the prompts:
- Explicitly define ChatMessage and MessageRole objects.
- Call ChatPromptTemplate.from_messages
In [ ]:
qa_prompt_str = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the question: {query_str}\n"
)

refine_prompt_str = (
    "We have the opportunity to refine the original answer "
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{context_msg}\n"
    "------------\n"
    "Given the new context, refine the original answer to better "
    "answer the question: {query_str}. "
    "If the context isn't useful, output the original answer again.\n"
    "Original Answer: {existing_answer}"
)
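These prompt strings are plain Python format strings: at query time, the query engine substitutes the retrieved context for {context_str} (or {context_msg}) and the user's question for {query_str}. Purely as an illustration (not a step you need to perform yourself), you can render the QA prompt with made-up values using str.format:
In [ ]:
# Illustration only: the query engine fills these placeholders automatically
# at query time; here we use plain str.format with made-up values to show
# what a rendered QA prompt looks like.
print(
    qa_prompt_str.format(
        context_str="Paul Graham is a programmer, essayist, and investor.",
        query_str="Who is Paul Graham?",
    )
)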
1. Explicitly Define ChatMessage and MessageRole objects¶
In [ ]:
from llama_index.core.llms import ChatMessage, MessageRole
from llama_index.core import ChatPromptTemplate

# Text QA Prompt
chat_text_qa_msgs = [
    ChatMessage(
        role=MessageRole.SYSTEM,
        content=(
            "Always answer the question, even if the context isn't helpful."
        ),
    ),
    ChatMessage(role=MessageRole.USER, content=qa_prompt_str),
]
text_qa_template = ChatPromptTemplate(chat_text_qa_msgs)

# Refine Prompt
chat_refine_msgs = [
    ChatMessage(
        role=MessageRole.SYSTEM,
        content=(
            "Always answer the question, even if the context isn't helpful."
        ),
    ),
    ChatMessage(role=MessageRole.USER, content=refine_prompt_str),
]
refine_template = ChatPromptTemplate(chat_refine_msgs)
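Before wiring a template into a query engine, it can help to preview how it renders. A minimal sketch, assuming ChatPromptTemplate.format_messages (which fills the template variables and returns the resulting ChatMessage list); the sample context and question below are made up:
In [ ]:
# Preview the rendered chat messages (system + user) for sample inputs.
# format_messages is assumed to fill the template variables and return
# a list of ChatMessage objects.
rendered = text_qa_template.format_messages(
    context_str="Paul Graham is a programmer and essayist.",
    query_str="Who is Paul Graham?",
)
for message in rendered:
    print(f"{message.role}: {message.content}")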
2. Call ChatPromptTemplate.from_messages¶
from_messages is syntactic sugar that allows you to define a chat prompt template as a list of tuples, with each tuple corresponding to a chat message ("role", "message").
In [ ]:
from llama_index.core import ChatPromptTemplate

# Text QA Prompt
chat_text_qa_msgs = [
    (
        "system",
        "Always answer the question, even if the context isn't helpful.",
    ),
    ("user", qa_prompt_str),
]
text_qa_template = ChatPromptTemplate.from_messages(chat_text_qa_msgs)

# Refine Prompt
chat_refine_msgs = [
    (
        "system",
        "Always answer the question, even if the context isn't helpful.",
    ),
    ("user", refine_prompt_str),
]
refine_template = ChatPromptTemplate.from_messages(chat_refine_msgs)
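Both construction styles should produce equivalent templates. As a quick check (this sketch assumes ChatPromptTemplate exposes its underlying messages as a message_templates list), you can confirm that the ("role", "message") tuples were mapped to the expected roles:
In [ ]:
# Confirm the roles assigned from the ("role", "message") tuples.
# Assumes ChatPromptTemplate stores its messages in `message_templates`;
# we expect a system message followed by a user message.
print([message.role for message in text_qa_template.message_templates])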
Using the Prompts¶
Now, we use the prompts in an index query!
In [ ]:
import openai
import os
os.environ["OPENAI_API_KEY"] = "sk-..."
openai.api_key = os.environ["OPENAI_API_KEY"]
Download Data¶
In [ ]:
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
In [ ]:
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
# Create an index using a chat model, so that we can use the chat prompts!
llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
index = VectorStoreIndex.from_documents(documents)
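If you want to see which prompt slots a query engine exposes before overriding them, you can call get_prompts() on the engine. The dictionary keys are the names you can override; the exact names (e.g. "response_synthesizer:text_qa_template") vary by engine and version, so treat the ones below as illustrative:
In [ ]:
# Inspect the prompts the default query engine currently uses.
# The keys (e.g. "response_synthesizer:text_qa_template") are the slots
# you can override; exact names may vary by engine/version.
default_prompts = index.as_query_engine(llm=llm).get_prompts()
print(list(default_prompts.keys()))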
Before Adding Templates¶
In [ ]:
print(index.as_query_engine(llm=llm).query("Who is Joe Biden?"))
I'm unable to provide an answer to that question based on the context information provided.
After Adding Templates¶
In [ ]:
print(
    index.as_query_engine(
        text_qa_template=text_qa_template,
        refine_template=refine_template,
        llm=llm,
    ).query("Who is Joe Biden?")
)
Joe Biden is the current President of the United States, having taken office in January 2021. He previously served as Vice President under President Barack Obama from 2009 to 2017.
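As an alternative to passing the templates when the query engine is constructed, you can also swap prompts on an existing engine with update_prompts. A minimal sketch, assuming the prompt keys follow the usual response_synthesizer: naming; verify the key names with get_prompts() on your version:
In [ ]:
# Alternative: update prompts on an already-built query engine.
# The key names below are assumed; confirm them via query_engine.get_prompts().
query_engine = index.as_query_engine(llm=llm)
query_engine.update_prompts(
    {
        "response_synthesizer:text_qa_template": text_qa_template,
        "response_synthesizer:refine_template": refine_template,
    }
)
print(query_engine.query("Who is Joe Biden?"))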