Llama Debug Handler
Here we showcase the capabilities of our LlamaDebugHandler in logging events as we run queries within LlamaIndex.
NOTE: This is a beta feature. The usage within different classes and the API interface for the CallbackManager and LlamaDebugHandler may change!
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
%pip install llama-index-agent-openai
%pip install llama-index-llms-openai
!pip install llama-index
from llama_index.core.callbacks import (
    CallbackManager,
    LlamaDebugHandler,
    CBEventType,
)
Download Data
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
from llama_index.core import SimpleDirectoryReader

docs = SimpleDirectoryReader("./data/paul_graham/").load_data()
Callback Manager Setup
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
callback_manager = CallbackManager([llama_debug])
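If you would rather not pass callback_manager to every component explicitly, you can also register it globally. This is a minimal sketch, assuming the Settings singleton exported by llama_index.core:

# Optional: make this callback manager the global default so that every
# component created afterwards reports its events to llama_debug.
from llama_index.core import Settings

Settings.callback_manager = callback_manager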
Trigger the callback with a query
from llama_index.core import VectorStoreIndex

index = VectorStoreIndex.from_documents(
    docs, callback_manager=callback_manager
)
query_engine = index.as_query_engine()
**********
Trace: index_construction
    |_node_parsing -> 0.134458 seconds
      |_chunking -> 0.132142 seconds
    |_embedding -> 0.329045 seconds
    |_embedding -> 0.357797 seconds
**********
response = query_engine.query("What did the author do growing up?")
**********
Trace: query
    |_query -> 2.198197 seconds
      |_retrieve -> 0.122185 seconds
        |_embedding -> 0.117082 seconds
      |_synthesize -> 2.075836 seconds
        |_llm -> 2.069724 seconds
**********
Explore the Debug Information
The callback manager will log several start and end events for the following types:
- CBEventType.LLM
- CBEventType.EMBEDDING
- CBEventType.CHUNKING
- CBEventType.NODE_PARSING
- CBEventType.RETRIEVE
- CBEventType.SYNTHESIZE
- CBEventType.TREE
- CBEventType.QUERY
The LlamaDebugHandler provides a few basic methods for exploring information about these events.
# Print info on the LLM calls during the vector index query
print(llama_debug.get_event_time_info(CBEventType.LLM))
EventStats(total_secs=2.069724, average_secs=2.069724, total_count=1)
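get_event_time_info works for any event type, so a quick way to summarize an entire trace is to loop over the types listed above. A minimal sketch, assuming get_event_pairs returns an empty list for types that never fired:

# Print a timing summary for every event type that actually occurred
for event_type in CBEventType:
    if llama_debug.get_event_pairs(event_type):
        print(event_type.value, llama_debug.get_event_time_info(event_type))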
# Print info on llm inputs/outputs - returns start/end events for each LLM call
event_pairs = llama_debug.get_llm_inputs_outputs()
print(event_pairs[0][0])
print(event_pairs[0][1].payload.keys())
print(event_pairs[0][1].payload["response"])
CBEvent(event_type=<CBEventType.LLM: 'llm'>, payload={<EventPayload.MESSAGES: 'messages'>: [ChatMessage(role=<MessageRole.SYSTEM: 'system'>, content="You are an expert Q&A system that is trusted around the world.\nAlways answer the query using the provided context information, and not prior knowledge.\nSome rules to follow:\n1. Never directly reference the given context in your answer.\n2. Avoid statements like 'Based on the context, ...' or 'The context information ...' or anything along those lines.", additional_kwargs={}), ChatMessage(role=<MessageRole.USER: 'user'>, content='Context information is below.\n---------------------\nWhat I Worked On\n\nFebruary 2021\n\nBefore college the two main things I worked on, outside of school, were writing and programming. I didn\'t write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. ...\n---------------------\nGiven the context information and not prior knowledge, answer the query.\nQuery: What did the author do growing up?\nAnswer: ', additional_kwargs={})]}, time='08/17/2023, 21:30:32.044216', id_='485696ef-9def-483b-a7e3-f4a1f1951aba')

dict_keys([<EventPayload.MESSAGES: 'messages'>, <EventPayload.RESPONSE: 'response'>])

assistant: The author worked on writing and programming outside of school before college. They wrote short stories and tried writing programs on an IBM 1401 computer. They also built a microcomputer kit and started programming on it, writing simple games and a word processor.
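The payload keys printed above are members of the EventPayload enum (a str-valued enum), which is why indexing the payload with the plain string "response" works. A minimal sketch of the equivalent enum-based access, assuming EventPayload is importable from llama_index.core.callbacks:

from llama_index.core.callbacks import EventPayload

# EventPayload.RESPONSE and the string "response" address the same entry
response_obj = event_pairs[0][1].payload[EventPayload.RESPONSE]
print(response_obj)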
# Get info on any event type
event_pairs = llama_debug.get_event_pairs(CBEventType.CHUNKING)
print(event_pairs[0][0].payload.keys())  # get first chunking start event
print(event_pairs[0][1].payload.keys())  # get first chunking end event
dict_keys([<EventPayload.CHUNKS: 'chunks'>])
dict_keys([<EventPayload.CHUNKS: 'chunks'>])
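The payloads carry the actual data, not just the keys, so you can pull the chunks out of the end event to see what the node parser produced. A minimal sketch, again assuming EventPayload is importable from llama_index.core.callbacks:

from llama_index.core.callbacks import EventPayload

# The chunking end event carries the chunks produced from the document
chunks = event_pairs[0][1].payload[EventPayload.CHUNKS]
print(f"{len(chunks)} chunks; chunk 0 starts with: {str(chunks[0])[:80]!r}")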
# Clear the currently cached events
llama_debug.flush_event_logs()
See Traces & Events for Agents
# First create a tool for the agent
from llama_index.core.tools import QueryEngineTool

tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="PaulGrahamQuestionAnswer",
    description="Given a question about Paul Graham, will return an answer.",
)
# Now construct the agent
from llama_index.agent.openai import OpenAIAgent

agent = OpenAIAgent.from_tools(
    tools=[tool], llm=llm, callback_manager=callback_manager
)
response = agent.chat("What did Paul do growing up?")
**********
Trace: chat
    |_llm -> 1.169013 seconds
    |_query -> 2.357469 seconds
      |_retrieve -> 0.107983 seconds
        |_embedding -> 0.099368 seconds
      |_synthesize -> 2.24932 seconds
        |_llm -> 2.239481 seconds
    |_llm -> 2.153333 seconds
**********
# works the same for async
response = await agent.achat("What did Paul do growing up?")
**********
Trace: chat
    |_llm -> 1.318663 seconds
    |_query -> 2.803533 seconds
      |_retrieve -> 0.121228 seconds
        |_embedding -> 0.116355 seconds
      |_synthesize -> 2.68217 seconds
        |_llm -> 2.676306 seconds
    |_llm -> 2.716374 seconds
**********
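The inspection methods from earlier work on the agent's events too; before flushing, you can, for example, aggregate the time spent in LLM calls across both chats. A minimal sketch:

# Aggregate timing across all LLM calls recorded for the two chats above
print(llama_debug.get_event_time_info(CBEventType.LLM))
print(len(llama_debug.get_llm_inputs_outputs()), "LLM call(s) recorded")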
# Clear the currently cached events
llama_debug.flush_event_logs()