Function Calling Mistral Agent
This notebook shows you how to use our Mistral agent, powered by function calling capabilities.
Initial Setup
Let's start by importing some simple building blocks.
The main things we need are:
- the MistralAI API (using our own llama_index LLM class)
- a place to keep conversation history
- a definition for tools that our agent can use
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]:
%pip install llama-index-llms-mistralai
%pip install llama-index-embeddings-mistralai
In [ ]:
!pip install llama-index
In [ ]:
import json
from typing import Sequence, List

from llama_index.llms.mistralai import MistralAI
from llama_index.core.llms import ChatMessage
from llama_index.core.tools import BaseTool, FunctionTool

import nest_asyncio

# allow nested event loops so async agent calls work inside the notebook
nest_asyncio.apply()
Let's define some very simple calculator tools for our agent.
In [ ]:
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result integer"""
    return a * b


multiply_tool = FunctionTool.from_defaults(fn=multiply)
In [ ]:
def add(a: int, b: int) -> int:
    """Add two integers and return the result integer"""
    return a + b


add_tool = FunctionTool.from_defaults(fn=add)
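Each FunctionTool can also be invoked directly, which is a quick way to verify the underlying function behaves as expected before handing it to an agent. A minimal sketch (the inputs are arbitrary):

# calling a FunctionTool directly returns a ToolOutput
print(multiply_tool.call(a=2, b=3))
print(add_tool.call(a=2, b=3))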
Make sure your MISTRAL_API_KEY environment variable is set. Otherwise, explicitly specify the api_key parameter.
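If the environment variable isn't already set, one option is to set it in the notebook before constructing the LLM; a minimal sketch (the key value is a placeholder):

import os

os.environ["MISTRAL_API_KEY"] = "<your-mistral-api-key>"
# or, equivalently: MistralAI(model="mistral-large-latest", api_key="<your-mistral-api-key>")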
In [ ]:
llm = MistralAI(model="mistral-large-latest")
Initialize Mistral Agent
Here we initialize a simple Mistral agent with calculator functions.
In [ ]:
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.agent import AgentRunner

agent_worker = FunctionCallingAgentWorker.from_tools(
    [multiply_tool, add_tool], llm=llm, verbose=True
)
agent = AgentRunner(agent_worker)
Chat
In [ ]:
response = agent.chat("What is (121 + 2) * 5?")
print(str(response))
Added user message to memory: What is (121 + 2) * 5?
=== Calling Function ===
Calling function: add with args: {"a": 121, "b": 2}
=== Calling Function ===
Calling function: multiply with args: {"a": 123, "b": 5}
assistant: The result of (121 + 2) * 5 is 615.
In [ ]:
# inspect sources
print(response.sources)
[ToolOutput(content='123', tool_name='add', raw_input={'args': (), 'kwargs': {'a': 121, 'b': 2}}, raw_output=123), ToolOutput(content='615', tool_name='multiply', raw_input={'args': (), 'kwargs': {'a': 123, 'b': 5}}, raw_output=615)]
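Each entry is a ToolOutput, so the tool_name and raw_output fields shown in the repr above can also be read individually:

for source in response.sources:
    # prints e.g. "add -> 123" and "multiply -> 615"
    print(f"{source.tool_name} -> {source.raw_output}")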
Async Chat
In [ ]:
response = await agent.achat("What is (121 * 3) + 5? Use one tool at a time.")
print(str(response))
Added user message to memory: What is (121 * 3) + 5? Use one tool at a time.
=== Calling Function ===
Calling function: multiply with args: {"a": 121, "b": 3}
=== Calling Function ===
Calling function: add with args: {"a": 363, "b": 5}
assistant: The result of (121 * 3) + 5 is 368.
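The bare await above works because the notebook already runs inside an event loop (re-entered via nest_asyncio). In a plain Python script, the same call can be driven with asyncio.run; a minimal sketch:

import asyncio


async def main():
    response = await agent.achat("What is (121 * 3) + 5? Use one tool at a time.")
    print(str(response))


asyncio.run(main())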
Mistral Agent over RAG Pipeline
Build a Mistral agent over a simple 10-K document. We use both Mistral embeddings and mistral-medium to construct the RAG pipeline, and pass it to the Mistral agent as a tool.
In [ ]:
!mkdir -p 'data/10k/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/uber_2021.pdf' -O 'data/10k/uber_2021.pdf'
--2024-03-23 11:13:41--  https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/uber_2021.pdf
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 2606:50c0:8003::154, 2606:50c0:8002::154, 2606:50c0:8001::154, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|2606:50c0:8003::154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1880483 (1.8M) [application/octet-stream]
Saving to: ‘data/10k/uber_2021.pdf’

data/10k/uber_2021. 100%[===================>]   1.79M  --.-KB/s    in 0.09s

2024-03-23 11:13:41 (19.3 MB/s) - ‘data/10k/uber_2021.pdf’ saved [1880483/1880483]
In [ ]:
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.mistralai import MistralAIEmbedding
from llama_index.llms.mistralai import MistralAI

embed_model = MistralAIEmbedding()
query_llm = MistralAI(model="mistral-medium")

# load data
uber_docs = SimpleDirectoryReader(
    input_files=["./data/10k/uber_2021.pdf"]
).load_data()

# build index
uber_index = VectorStoreIndex.from_documents(
    uber_docs, embed_model=embed_model
)
uber_engine = uber_index.as_query_engine(similarity_top_k=3, llm=query_llm)

query_engine_tool = QueryEngineTool(
    query_engine=uber_engine,
    metadata=ToolMetadata(
        name="uber_10k",
        description=(
            "Provides information about Uber financials for year 2021. "
            "Use a detailed plain text question as input to the tool."
        ),
    ),
)
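Before wiring the engine into an agent, it can be sanity-checked with a direct query (the question below is just an illustration):

# query the RAG pipeline directly, bypassing the agent
response = uber_engine.query("What was Uber's revenue for 2021?")
print(str(response))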
In [ ]:
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.agent import AgentRunner

agent_worker = FunctionCallingAgentWorker.from_tools(
    [query_engine_tool], llm=llm, verbose=True
)
agent = AgentRunner(agent_worker)
In [ ]:
response = agent.chat(
    "Tell me the risk factors for Uber? Use one tool at a time."
)
print(str(response))
Added user message to memory: Tell me the risk factors for Uber? Use one tool at a time.
=== Calling Function ===
Calling function: uber_10k with args: {"input": "What are the risk factors for Uber?"}
assistant: Uber faces several risk factors that could negatively impact its business. These include the potential failure to offer autonomous vehicle technologies on its platform, the loss of high-quality personnel due to attrition or unsuccessful succession planning, and security or data privacy breaches. Additionally, cyberattacks such as malware, ransomware, and phishing attacks could harm Uber's reputation and business. The company is also subject to climate change risks and legal and regulatory risks. Furthermore, Uber relies on third parties to maintain open marketplaces for distributing its platform and providing software, and any interference from these parties could adversely affect Uber's business. The company may also require additional capital to support its growth, and there is no guarantee that this capital will be available on reasonable terms or at all. Finally, Uber's business is subject to extensive government regulation and oversight relating to the provision of payment and financial services, and the company faces risks related to its collection, use, transfer, disclosure, and other processing of data.
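As in the calculator example, response.sources exposes the ToolOutput from the uber_10k call, which is useful for checking what the query engine actually returned to the agent:

# inspect the raw query engine tool output
print(response.sources)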