Pinecone Vector Store - Hybrid Search
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]:
%pip install llama-index-vector-stores-pinecone
In [ ]:
!pip install "llama-index>=0.9.31" "pinecone-client>=3.0.0" "transformers[torch]"
Creating a Pinecone Index
In [ ]:
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
In [ ]:
from pinecone import Pinecone, ServerlessSpec
In [ ]:
import os

os.environ["PINECONE_API_KEY"] = "<Your Pinecone API key, from app.pinecone.io>"
os.environ["OPENAI_API_KEY"] = "sk-..."

api_key = os.environ["PINECONE_API_KEY"]
pc = Pinecone(api_key=api_key)
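If you'd rather not hardcode credentials in the notebook, one option (not part of the original example) is to prompt for them at runtime with Python's standard getpass module:

import getpass

# Prompt for keys only when they aren't already set in the environment
if "PINECONE_API_KEY" not in os.environ:
    os.environ["PINECONE_API_KEY"] = getpass.getpass("Pinecone API key: ")
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key: ")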
In [ ]:
# delete if needed
# pc.delete_index("quickstart")
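Deleting an index that doesn't exist raises an error, so a guarded version (a small sketch on our part, using pinecone-client's list_indexes) is safer to re-run:

# Only delete the index if it actually exists
if "quickstart" in pc.list_indexes().names():
    pc.delete_index("quickstart")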
In [ ]:
# dimensions are for text-embedding-ada-002
# NOTE: hybrid search requires the dotproduct metric
pc.create_index(
    name="quickstart",
    dimension=1536,
    metric="dotproduct",
    spec=ServerlessSpec(cloud="aws", region="us-west-2"),
)

# If you need to create a pod-based Pinecone index, you could alternatively do this
# (note that hybrid search would still require metric='dotproduct'):
#
# from pinecone import Pinecone, PodSpec
#
# pc = Pinecone(api_key='xxx')
#
# pc.create_index(
#     name='my-index',
#     dimension=1536,
#     metric='cosine',
#     spec=PodSpec(
#         environment='us-east1-gcp',
#         pod_type='p1.x1',
#         pods=1
#     )
# )
In [ ]:
pinecone_index = pc.Index("quickstart")
Download Data
In [ ]:
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
Load documents, build the PineconeVectorStore
In [ ]:
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.pinecone import PineconeVectorStore
from IPython.display import Markdown, display
In [ ]:
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
In [ ]:
# set add_sparse_vector=True to compute sparse vectors during upsert
from llama_index.core import StorageContext

if "OPENAI_API_KEY" not in os.environ:
    raise EnvironmentError("Environment variable OPENAI_API_KEY is not set")

vector_store = PineconeVectorStore(
    pinecone_index=pinecone_index,
    add_sparse_vector=True,
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
Upserted vectors: 0%| | 0/22 [00:00<?, ?it/s]
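To confirm the upsert landed, you can check the index's vector count (a quick sanity check, not part of the original walkthrough; describe_index_stats is a standard pinecone-client method):

# Total vector count should match the number of upserted chunks (22 here)
stats = pinecone_index.describe_index_stats()
print(stats.total_vector_count)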
Query Index
You may need to wait a minute or two for the index to be ready.
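Rather than sleeping for a fixed interval, one option (a sketch assuming pinecone-client >= 3.0, whose describe_index reports a ready flag on the index status) is to poll until the index is ready:

import time

# Poll the index status until Pinecone reports it as ready
while not pc.describe_index("quickstart").status["ready"]:
    time.sleep(1)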
In [ ]:
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine(vector_store_query_mode="hybrid")
response = query_engine.query("What happened at Viaweb?")
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
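Hybrid retrieval can also be tuned. A sketch, assuming the alpha and sparse_top_k query kwargs that LlamaIndex passes through to the vector store (alpha weights dense vs. sparse scores, with 1.0 being purely dense):

# Blend dense and sparse scores evenly, and fetch 2 sparse candidates
query_engine = index.as_query_engine(
    vector_store_query_mode="hybrid",
    alpha=0.5,
    sparse_top_k=2,
)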
In [ ]:
display(Markdown(f"<b>{response}</b>"))
At Viaweb, Lisp was used as a programming language. The speaker gave a talk at a Lisp conference about how Lisp was used at Viaweb, and afterward, the talk gained a lot of attention when it was posted online. This led to a realization that publishing essays online could reach a wider audience than traditional print media. The speaker also wrote a collection of essays, which was later published as a book called "Hackers & Painters."
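To see which chunks hybrid retrieval actually surfaced, you can also inspect the response's source nodes (standard attributes on a LlamaIndex Response object):

# Print each retrieved chunk's similarity score and a snippet of its text
for node_with_score in response.source_nodes:
    print(f"{node_with_score.score:.4f}: {node_with_score.node.get_text()[:100]}...")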