PineconeVectorStore#
- pydantic model llama_index.vector_stores.PineconeVectorStore#
Pinecone Vector Store.
In this vector store, embeddings and docs are stored within a Pinecone index.
During query time, the index uses Pinecone to query for the top k most similar nodes.
- Parameters
pinecone_index (Optional[Union[pinecone.Pinecone.Index, pinecone.Index]]) – Pinecone index instance; pinecone.Pinecone.Index for clients >= 3.0.0, pinecone.Index for older clients.
insert_kwargs (Optional[Dict]) – insert kwargs during upsert call.
add_sparse_vector (bool) – whether to add a sparse vector to the index.
tokenizer (Optional[Callable]) – tokenizer to use to generate sparse vectors.
default_empty_query_vector (Optional[List[float]]) – default empty query vector. Defaults to None. If not None, this vector will be used as the query vector when the query is empty.
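Example (a minimal construction sketch, assuming pinecone-client >= 3.0.0 and an index that already exists; the API key, index name, and namespace below are placeholders):

```python
# Wrap an existing Pinecone index in a PineconeVectorStore.
# Assumes pinecone-client >= 3.0.0; credentials and names are placeholders.
from pinecone import Pinecone

from llama_index.vector_stores import PineconeVectorStore

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
pinecone_index = pc.Index("quickstart")  # a previously created index

vector_store = PineconeVectorStore(
    pinecone_index=pinecone_index,
    namespace="my-namespace",   # optional logical partition within the index
    add_sparse_vector=False,    # set True to also store sparse vectors for hybrid search
)
```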
Show JSON schema
{ "title": "PineconeVectorStore", "description": "Pinecone Vector Store.\n\nIn this vector store, embeddings and docs are stored within a\nPinecone index.\n\nDuring query time, the index uses Pinecone to query for the top\nk most similar nodes.\n\nArgs:\n pinecone_index (Optional[Union[pinecone.Pinecone.Index, pinecone.Index]]): Pinecone index instance,\n pinecone.Pinecone.Index for clients >= 3.0.0; pinecone.Index for older clients.\n insert_kwargs (Optional[Dict]): insert kwargs during `upsert` call.\n add_sparse_vector (bool): whether to add sparse vector to index.\n tokenizer (Optional[Callable]): tokenizer to use to generate sparse\n default_empty_query_vector (Optional[List[float]]): default empty query vector.\n Defaults to None. If not None, then this vector will be used as the query\n vector if the query is empty.", "type": "object", "properties": { "stores_text": { "title": "Stores Text", "default": true, "type": "boolean" }, "is_embedding_query": { "title": "Is Embedding Query", "default": true, "type": "boolean" }, "flat_metadata": { "title": "Flat Metadata", "default": false, "type": "boolean" }, "api_key": { "title": "Api Key", "type": "string" }, "index_name": { "title": "Index Name", "type": "string" }, "environment": { "title": "Environment", "type": "string" }, "namespace": { "title": "Namespace", "type": "string" }, "insert_kwargs": { "title": "Insert Kwargs", "type": "object" }, "add_sparse_vector": { "title": "Add Sparse Vector", "type": "boolean" }, "text_key": { "title": "Text Key", "type": "string" }, "batch_size": { "title": "Batch Size", "type": "integer" }, "remove_text_from_metadata": { "title": "Remove Text From Metadata", "type": "boolean" }, "class_name": { "title": "Class Name", "type": "string", "default": "PinconeVectorStore" } }, "required": [ "add_sparse_vector", "text_key", "batch_size", "remove_text_from_metadata" ] }
- Config
schema_extra: function = <function BaseComponent.Config.schema_extra>
- Fields
add_sparse_vector (bool)
api_key (Optional[str])
batch_size (int)
environment (Optional[str])
flat_metadata (bool)
index_name (Optional[str])
insert_kwargs (Optional[Dict])
namespace (Optional[str])
remove_text_from_metadata (bool)
stores_text (bool)
text_key (str)
- field add_sparse_vector: bool [Required]#
- field api_key: Optional[str] = None#
- field batch_size: int [Required]#
- field environment: Optional[str] = None#
- field flat_metadata: bool = False#
- field index_name: Optional[str] = None#
- field insert_kwargs: Optional[Dict] = None#
- field namespace: Optional[str] = None#
- field remove_text_from_metadata: bool [Required]#
- field stores_text: bool = True#
- field text_key: str [Required]#
- add(nodes: List[BaseNode], **add_kwargs: Any) List[str] #
Add nodes to index.
- Parameters
nodes (List[BaseNode]) – list of nodes with embeddings
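Example (a sketch continuing from the construction example above; nodes must already carry embeddings, and the dummy 1536-dimensional vector is only a stand-in that must match your index dimension):

```python
# Insert pre-embedded nodes into the Pinecone index.
from llama_index.schema import TextNode

nodes = [
    TextNode(
        text="Pinecone is a managed vector database.",
        embedding=[0.1] * 1536,  # dummy embedding; must match the index dimension
    )
]
inserted_ids = vector_store.add(nodes)  # ids of the upserted nodes
```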
- classmethod class_name() str #
Get the class name, used as a unique ID in serialization.
This provides a key that makes serialization robust against actual class name changes.
- delete(ref_doc_id: str, **delete_kwargs: Any) None #
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
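Example (a sketch; "source-doc-1" is a placeholder for the doc_id of the source document whose nodes should be removed):

```python
# Delete every node that was ingested from one source document.
vector_store.delete(ref_doc_id="source-doc-1")
```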
- classmethod from_params(api_key: Optional[str] = None, index_name: Optional[str] = None, environment: Optional[str] = None, namespace: Optional[str] = None, insert_kwargs: Optional[Dict] = None, add_sparse_vector: bool = False, tokenizer: Optional[Callable] = None, text_key: str = 'text', batch_size: int = 100, remove_text_from_metadata: bool = False, default_empty_query_vector: Optional[List[float]] = None, **kwargs: Any) PineconeVectorStore #
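Example (a sketch of the alternative construction path, where the store builds the Pinecone client itself from credentials; all values are placeholders):

```python
# Construct the store from raw parameters instead of a pre-built index object.
vector_store = PineconeVectorStore.from_params(
    api_key="YOUR_PINECONE_API_KEY",
    index_name="quickstart",
    namespace="my-namespace",
    batch_size=100,  # number of vectors sent per upsert request
)
```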
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult #
Query index for top k most similar nodes.
- Parameters
query_embedding (List[float]) – query embedding
similarity_top_k (int) – top k most similar nodes
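Example (a minimal query sketch; the dummy 1536-dimensional vector stands in for a real query embedding, which must come from the same embedding model used at ingest time):

```python
# Query for the two most similar nodes using a dense embedding.
from llama_index.vector_stores.types import VectorStoreQuery

query = VectorStoreQuery(
    query_embedding=[0.1] * 1536,  # placeholder; use a real query embedding
    similarity_top_k=2,
)
result = vector_store.query(query)
print(result.ids, result.similarities)
```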
- property client: Any#
Return Pinecone client.
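Example (a sketch; the returned object is the underlying Pinecone index, so Pinecone's own methods such as describe_index_stats can be called on it directly):

```python
# Access the raw Pinecone index behind the store, e.g. to inspect stats.
pinecone_index = vector_store.client
print(pinecone_index.describe_index_stats())
```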