
# Vector Stores

Vector stores contain embedding vectors of ingested document chunks (and sometimes the document chunks as well).

## Simple Vector Store

By default, LlamaIndex uses a simple in-memory vector store that's great for quick experimentation. It can be persisted to disk by calling vector_store.persist() and loaded back with SimpleVectorStore.from_persist_path(...).
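For example, here is a minimal sketch of persisting and reloading the default store. It assumes the llama_index.core namespace and an embedding model configured via Settings (OpenAI by default); the persist path is arbitrary.

```python
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.core.vector_stores import SimpleVectorStore

# The default in-memory store; building the index computes and stores embeddings here.
vector_store = SimpleVectorStore()
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    [Document(text="LlamaIndex uses a simple in-memory vector store by default.")],
    storage_context=storage_context,
)

# Persist the embeddings to disk ("./storage/vector_store.json" is an arbitrary path) ...
vector_store.persist(persist_path="./storage/vector_store.json")

# ... and load them back in a later session.
loaded_store = SimpleVectorStore.from_persist_path("./storage/vector_store.json")
```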

## Vector Store Options & Feature Support

LlamaIndex supports over 20 different vector store options, and we are actively adding more integrations and improving feature coverage (metadata filtering, hybrid search, delete, document storage, and async support) for each.

| Vector Store | Type |
| --- | --- |
| Apache Cassandra® | self-hosted / cloud |
| Astra DB | cloud |
| Azure Cognitive Search | cloud |
| Azure CosmosDB MongoDB | cloud |
| BaiduVectorDB | cloud |
| ChatGPT Retrieval Plugin | aggregator |
| Chroma | self-hosted |
| DashVector | cloud |
| Databricks | cloud |
| Deeplake | self-hosted / cloud |
| DocArray | aggregator |
| DuckDB | in-memory / self-hosted |
| DynamoDB | cloud |
| Elasticsearch | self-hosted / cloud |
| FAISS | in-memory |
| txtai | in-memory |
| Jaguar | self-hosted / cloud |
| LanceDB | cloud |
| Lantern | self-hosted / cloud |
| Metal | cloud |
| MongoDB Atlas | self-hosted / cloud |
| MyScale | cloud |
| Milvus / Zilliz | self-hosted / cloud |
| Neo4jVector | self-hosted / cloud |
| OpenSearch | self-hosted / cloud |
| Pinecone | cloud |
| Postgres | self-hosted / cloud |
| pgvecto.rs | self-hosted / cloud |
| Qdrant | self-hosted / cloud |
| Redis | self-hosted / cloud |
| Simple | in-memory |
| SingleStore | self-hosted / cloud |
| Supabase | self-hosted / cloud |
| Tair | cloud |
| TencentVectorDB | cloud |
| Timescale | |
| Typesense | self-hosted / cloud |
| Upstash | cloud |
| Weaviate | self-hosted / cloud |

For more details, see Vector Store Integrations.
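The integration pattern is the same regardless of backend: construct the store, wrap it in a StorageContext, and build the index on top of it. As one illustration (not the only option), the sketch below uses Chroma; it assumes the llama-index-vector-stores-chroma and chromadb packages are installed and an embedding model is configured.

```python
import chromadb
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore

# Create or open a local, self-hosted Chroma collection ("./chroma_db" is an arbitrary path).
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("quickstart")

# Wrap the collection in a LlamaIndex vector store and index documents into it.
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    [Document(text="Chroma is one of the self-hosted vector store options.")],
    storage_context=storage_context,
)

# Query as usual; retrieval now runs against Chroma instead of the in-memory store.
response = index.as_query_engine().query("Which vector store is used here?")
print(response)
```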

## Example Notebooks
