OceanBase Vector Store¶
OceanBase Database is a distributed relational database developed entirely by Ant Group. It runs on clusters of commodity servers and, through the Paxos protocol and its distributed architecture, provides high availability and linear scalability without depending on any specific hardware architecture.
This notebook describes in detail how to use the OceanBase vector store functionality in LlamaIndex.
Setup¶
In [ ]:
%pip install llama-index-vector-stores-oceanbase
%pip install llama-index

# DashScope is chosen here as the embedding and LLM provider; you can also use the default OpenAI models or any other provider to test
%pip install llama-index-embeddings-dashscope
%pip install llama-index-llms-dashscope
Deploy a standalone OceanBase server with Docker¶
In [ ]:
!docker run --name=ob433 -e MODE=slim -p 2881:2881 -d oceanbase/oceanbase-ce:4.3.3.0-100000142024101215
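Bootstrapping the container takes a couple of minutes. A minimal readiness probe, assuming the default test tenant (user root@test, empty password) and the 2881 port mapping above, is to poll the SQL endpoint until it accepts connections:

In [ ]:
import time

import pymysql

# Poll until the observer accepts SQL connections (assumes the
# default test tenant root@test with an empty password on port 2881)
for _ in range(60):
    try:
        pymysql.connect(
            host="127.0.0.1", port=2881, user="root@test", password=""
        ).close()
        print("OceanBase is ready")
        break
    except pymysql.err.OperationalError:
        time.sleep(5)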
Creating ObVecClient¶
In [ ]:
from pyobvector import ObVecClient

# With no arguments, the client connects to the local deployment above
client = ObVecClient()

# Allow vector indexes to use up to 30% of memory (the parameter defaults to 0)
client.perform_raw_text_sql(
    "ALTER SYSTEM ob_vector_memory_limit_percentage = 30"
)
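Calling ObVecClient() with no arguments assumes a local deployment like the Docker container above. For any other deployment, pass the connection details explicitly; the values below are pyobvector's defaults, so replace them with your own:

In [ ]:
# Connect to a non-default deployment (values shown are the defaults)
client = ObVecClient(
    uri="127.0.0.1:2881",  # host:port of the OBServer
    user="root@test",  # user@tenant
    password="",
    db_name="test",
)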
Configure the DashScope embedding model and LLM.
In [ ]:
# set the embedding model
import os

from llama_index.core import Settings
from llama_index.embeddings.dashscope import DashScopeEmbedding

# global settings
Settings.embed_model = DashScopeEmbedding()

# configure the LLM
from llama_index.llms.dashscope import DashScope, DashScopeGenerationModels

dashscope_llm = DashScope(
    model_name=DashScopeGenerationModels.QWEN_MAX,
    api_key=os.environ.get("DASHSCOPE_API_KEY", ""),
)
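Any other LlamaIndex embedding model and LLM work the same way. As a sketch, here are the OpenAI equivalents (model names are illustrative; OPENAI_API_KEY must be set, and text-embedding-ada-002 keeps the 1536-dimension embeddings assumed later):

In [ ]:
# Alternative: OpenAI models instead of DashScope
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

Settings.embed_model = OpenAIEmbedding(
    model="text-embedding-ada-002"  # 1536-dim, matching dim=1536 below
)
openai_llm = OpenAI(model="gpt-4o-mini")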
Load documents¶
In [ ]:
from llama_index.core import (
    SimpleDirectoryReader,
    load_index_from_storage,
    VectorStoreIndex,
    StorageContext,
)
from llama_index.vector_stores.oceanbase import OceanBaseVectorStore
Download and load the data.
In [ ]:
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
In [ ]:
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
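A quick sanity check on what was loaded; SimpleDirectoryReader attaches file metadata (name, path, and so on) to each document:

In [ ]:
# Inspect what was loaded
print(f"Loaded {len(documents)} document(s)")
print(documents[0].metadata)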
In [ ]:
# Create the OceanBase vector store; drop_old=True removes any existing
# table of the same name, and normalize=True normalizes embeddings
# before they are stored
oceanbase = OceanBaseVectorStore(
    client=client,
    dim=1536,  # dimension of the DashScope embeddings
    drop_old=True,
    normalize=True,
)

storage_context = StorageContext.from_defaults(vector_store=oceanbase)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
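Since drop_old=True recreates the backing table, use it only for the first ingestion. To reconnect to an already-populated table in a later session, a sketch (assuming the same client and embedding settings) is:

In [ ]:
# Rebuild the index from the existing table instead of re-ingesting
existing_store = OceanBaseVectorStore(
    client=client,
    dim=1536,
    drop_old=False,  # keep the previously ingested vectors
    normalize=True,
)
index = VectorStoreIndex.from_vector_store(vector_store=existing_store)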
Query Index¶
In [ ]:
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine(llm=dashscope_llm)
res = query_engine.query("What did the author do growing up?")
res.response
Out[ ]:
'Growing up, the author worked on two main activities outside of school: writing and programming. They wrote short stories, which they admit were not particularly good, lacking plot but containing characters with strong emotions. They also started programming at a young age, initially on an IBM 1401 computer using an early version of Fortran, though they found it challenging due to the limitations of punch card input and their lack of data to process. Their programming journey took off when microcomputers became available, allowing them to write more interactive programs such as games, a rocket flight predictor, and a simple word processor.'
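The query engine wraps retrieval plus LLM synthesis. To inspect the raw nearest-neighbor hits and their similarity scores without calling the LLM:

In [ ]:
# Retrieve raw nodes with similarity scores (no LLM call)
retriever = index.as_retriever(similarity_top_k=3)
for hit in retriever.retrieve("What did the author do growing up?"):
    print(hit.score, hit.get_content()[:80])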
Metadata Filtering¶
OceanBase Vector Store supports the following metadata filter operators at query time: =, >, <, !=, >=, <=, in, not in, like, and IS NULL.
In [ ]:
from llama_index.core.vector_stores import (
    MetadataFilters,
    MetadataFilter,
)

query_engine = index.as_query_engine(
    llm=dashscope_llm,
    filters=MetadataFilters(
        filters=[
            MetadataFilter(key="book", value="paul_graham", operator="!="),
        ]
    ),
    similarity_top_k=10,
)
res = query_engine.query("What did the author learn?")
res.response
Out[ ]:
'Empty Response'
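Multiple filters can also be combined under an explicit boolean condition. A sketch, where the year key is hypothetical (this dataset's metadata does not contain it):

In [ ]:
from llama_index.core.vector_stores import FilterCondition

# AND-combine two filters ("year" is an illustrative metadata key,
# not one this dataset actually has)
combined_filters = MetadataFilters(
    filters=[
        MetadataFilter(key="book", value="paul_graham", operator="=="),
        MetadataFilter(key="year", value=2020, operator=">="),
    ],
    condition=FilterCondition.AND,
)
query_engine = index.as_query_engine(
    llm=dashscope_llm, filters=combined_filters
)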
Delete Documents¶
In [ ]:
# Delete the ingested document from the store by its ref doc id
oceanbase.delete(documents[0].doc_id)

query_engine = index.as_query_engine(llm=dashscope_llm)
res = query_engine.query("What did the author do growing up?")
res.response
Out[ ]:
'Empty Response'