TiDB Property Graph Index
TiDB is a distributed, MySQL-compatible SQL database featuring horizontal scalability, strong consistency, and high availability. Vector Search is currently supported only on TiDB Cloud Serverless.
In this notebook, we will cover how to connect to a TiDB Serverless cluster and create a property graph index.
%pip install llama-index llama-index-graph-stores-tidb
Prepare a TiDB Serverless Cluster
Sign up for TiDB Cloud and create a TiDB Serverless cluster with Vector Search enabled.
Get the database connection string from the Cluster Details page. It looks like this:
mysql+pymysql://user:password@host:4000/dbname?ssl_verify_cert=true&ssl_verify_identity=true
TiDB Serverless requires a TLS connection when using the public endpoint.
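Rather than hardcoding credentials in your notebook, you may prefer to keep the connection string in an environment variable and sanity-check it before use. A minimal sketch; the `TIDB_CONNECTION_STRING` variable name and the placeholder host, user, and password are our own choices, not anything required by TiDB:

```python
import os
from urllib.parse import urlparse

# Hypothetical env var name; host, user, and password here are placeholders.
os.environ.setdefault(
    "TIDB_CONNECTION_STRING",
    "mysql+pymysql://user:password@host:4000/dbname"
    "?ssl_verify_cert=true&ssl_verify_identity=true",
)

conn_str = os.environ["TIDB_CONNECTION_STRING"]

# Sanity-check the driver scheme, port, and TLS flags before
# handing the string to the graph store.
parsed = urlparse(conn_str)
print(parsed.scheme)  # mysql+pymysql
print(parsed.port)    # 4000
```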
Env Setup
We need just a few environment variables and some sample data to get started.
import os
os.environ["OPENAI_API_KEY"] = "sk-proj-..."
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
import nest_asyncio
nest_asyncio.apply()
from llama_index.core import SimpleDirectoryReader
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
Index Construction
from llama_index.core import PropertyGraphIndex
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor
from llama_index.graph_stores.tidb import TiDBPropertyGraphStore
graph_store = TiDBPropertyGraphStore(
db_connection_string="mysql+pymysql://user:password@host:4000/dbname?ssl_verify_cert=true&ssl_verify_identity=true",
drop_existing_table=True,
)
# Note: indexing can take a while, especially with a large number of documents.
# If you are connecting to TiDB Serverless over the public endpoint, latency also
# depends on the distance between your machine and the TiDB Serverless region.
index = PropertyGraphIndex.from_documents(
documents,
embed_model=OpenAIEmbedding(model_name="text-embedding-3-small"),
kg_extractors=[
SchemaLLMPathExtractor(
llm=OpenAI(model="gpt-3.5-turbo", temperature=0.0)
)
],
property_graph_store=graph_store,
show_progress=True,
)
Parsing nodes: 0%| | 0/1 [00:00<?, ?it/s]
Extracting paths from text with schema: 100%|██████████| 22/22 [00:44<00:00, 2.02s/it]
Generating embeddings: 100%|██████████| 1/1 [00:01<00:00, 1.66s/it]
Generating embeddings: 100%|██████████| 3/3 [00:01<00:00, 1.51it/s]
Querying and Retrieval
retriever = index.as_retriever(
include_text=False,  # exclude source chunk text from returned nodes; defaults to True
)
nodes = retriever.retrieve("What happened at Interleaf and Viaweb?")
for node in nodes:
print(node.text)
Interleaf -> USED_FOR -> software for creating documents
Interleaf -> HAS -> scripting language
Interleaf -> HAS -> Lisp
Viaweb -> USED_FOR -> site builders
Viaweb -> USED_FOR -> ecommerce software
Viaweb -> USED_FOR -> retail
Viaweb -> USED_FOR -> business
Viaweb -> IS_A -> application service provider
Viaweb -> IS_A -> software as a service
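Each retrieved node's text is a `subject -> RELATION -> object` string. If you want structured tuples instead of strings, a small helper (our own, not part of the LlamaIndex API) can split them back apart:

```python
def parse_triple(line: str) -> tuple[str, str, str]:
    # Split on only the first two "->" separators so that an object
    # containing "->" stays intact.
    subject, relation, obj = (part.strip() for part in line.split("->", 2))
    return subject, relation, obj

print(parse_triple("Interleaf -> USED_FOR -> software for creating documents"))
# ('Interleaf', 'USED_FOR', 'software for creating documents')
```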
query_engine = index.as_query_engine(include_text=True)
response = query_engine.query("What happened at Interleaf and Viaweb?")
print(str(response))
Interleaf added a scripting language inspired by Emacs, which was a dialect of Lisp. The individual who worked at Interleaf found the Lisp implementation challenging due to their lack of knowledge in C. They also learned various lessons about technology companies and office dynamics during their time at Interleaf. On the other hand, Viaweb was used for site builders, ecommerce software, retail, and business purposes. The work on Viaweb and Y Combinator initially seemed unimpressive and lacked prestige, but the individual found success by working on less prestigious projects.
Loading from an existing Graph
If you have an existing graph (whether created with LlamaIndex or otherwise), we can connect to it and use it!
from llama_index.core import PropertyGraphIndex
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor
from llama_index.graph_stores.tidb import TiDBPropertyGraphStore
graph_store = TiDBPropertyGraphStore(
db_connection_string="mysql+pymysql://user:password@host:4000/dbname?ssl_verify_cert=true&ssl_verify_identity=true",
)
index = PropertyGraphIndex.from_existing(
property_graph_store=graph_store,
llm=OpenAI(model="gpt-3.5-turbo", temperature=0.3),
embed_model=OpenAIEmbedding(model_name="text-embedding-3-small"),
)
From here, we can still insert more documents!
from llama_index.core import Document
document = Document(text="LlamaIndex is great!")
index.insert(document)
nodes = index.as_retriever(include_text=False).retrieve("LlamaIndex")
print(nodes[0].text)
Llamaindex -> Is -> Great
For full details on construction, retrieval, and querying of a property graph, see the full docs page.