
Vector search in Python (Azure AI Search)

This repository contains multiple notebooks that demonstrate how to use Azure AI Search for vector and non-vector content in RAG patterns and in traditional search solutions.

See What's new in Azure AI Search for feature announcements by month and version.

| Sample | Description | Status |
| --- | --- | --- |
| New E2E RAG Demo | Notebook demonstrating how to integrate data from Google Cloud Platform storage and Amazon Web Services (AWS) S3 with Azure AI Search using the OneLake indexer and the latest AI vectorization techniques. | Beta (see the ChangeLog for a package version providing the OneLake data source type and skills that connect to the model catalog in Azure AI Studio). |
| New Multimodal RAG Demo | Notebook demonstrating how to create a multimodal (text + images) vector index in Azure AI Search. | Beta (see the ChangeLog for a package version providing the Azure AI Vision embedding skill and vectorizer). |
| New Cohere embeddings + Integrated Vectorization | Cohere Embed API integration with integrated vectorization. Includes Bicep templates for automatic deployment. | Beta (see the ChangeLog for a package version providing an updated AML skill and the Azure AI Studio model catalog vectorizer). |
| New Multimodal embeddings + Integrated Vectorization | Azure AI Vision API integration with integrated vectorization. Includes Bicep templates for automatic deployment. | Beta (see the ChangeLog for a package version providing the Azure AI Vision vectorizer and embedding skill). |
| New Phi 3 RAG Chat | Phi 3 SLM chat with your documents. Includes optional deployment of gpt-35-turbo and gpt-4 for comparison. Includes Bicep templates for automatic deployment. | Beta (see the ChangeLog for a package version providing the AzureOpenAIEmbeddingSkill). |
| New community-integration/cohere | Integration with the Cohere Embed API. | Beta (see the ChangeLog for a package version providing narrow data types). |
| advanced-workflow | Demonstrates how to rewrite queries for improved relevance. Examples include hybrid queries and a semanticQuery option. | Beta (see the ChangeLog for a package version providing semantic_query). |
| basic-vector-workflow | Start here if you're new to vectors in Azure AI Search. Basic vector indexing and queries using push model APIs. The code reads the data/text-sample.json file, which contains the input strings for which embeddings are generated. Output is a combination of human-readable text and embeddings that's pushed into a search index. A minimal query sketch follows this table. | Generally available |
| community-integration/hugging-face | Hugging Face integration using the E5-small-V2 embedding model. | Beta (see the ChangeLog for a package version providing this feature). |
| community-integration/langchain | LangChain integration using the Azure AI Search vector store integration module. | Beta (see the ChangeLog for a package version providing this feature). |
| community-integration/llamaindex | LlamaIndex integration using the llama_index.vector_stores.azureaisearch module. | Beta (see the ChangeLog for a package version providing this feature). |
| custom-vectorizer | Use an open source embedding model such as Hugging Face sentence-transformers all-MiniLM-L6-v2 to vectorize content and queries. This sample uses azd and Bicep to deploy Azure resources for a fully operational solution. It uses a custom skill with a function app that calls an embedding model. | Beta (see the ChangeLog for a package version providing this feature). |
| data-chunking | Examples used in the Chunk documents article in the documentation. | Beta (see the ChangeLog for a package version providing this feature). |
| index-backup-restore | Back up retrievable index fields and restore them on a new index on a different search service. | Generally available |
| integrated-vectorization | Demonstrates integrated data chunking and vectorization (preview) using skills to split text and call an Azure OpenAI embedding model. | Beta (see the ChangeLog for a package version providing this feature). |
| vector-quantization-and-storage-options | Shows how to use vector quantization and other storage options to reduce the storage consumed by vector fields. | Beta (see the ChangeLog for a package version providing this feature). |
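
If you're new to the push-model APIs covered by basic-vector-workflow, the sketch below shows the general shape of a pure vector query with the azure-search-documents library. The endpoint and key variable names, the index name, the contentVector/title/content field names, and the 1536-dimension query vector are placeholder assumptions rather than values taken from the samples; substitute the names defined in the notebook you're running.

```python
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

# Placeholder settings -- substitute your own values or the variables from
# your .env file (see "Set up your environment" below).
endpoint = os.environ["AZURE_SEARCH_SERVICE_ENDPOINT"]
credential = AzureKeyCredential(os.environ["AZURE_SEARCH_ADMIN_KEY"])
index_name = os.environ.get("AZURE_SEARCH_INDEX", "basic-vector-sample")

search_client = SearchClient(endpoint=endpoint, index_name=index_name, credential=credential)

# In the notebooks, the query vector comes from the same embedding model used at
# indexing time (text-embedding-ada-002 returns 1536 dimensions, assumed here).
query_vector = [0.0] * 1536

# k_nearest_neighbors controls how many nearest neighbors the vector search returns;
# "fields" must name a vector field defined in your index schema.
vector_query = VectorizedQuery(
    vector=query_vector,
    k_nearest_neighbors=3,
    fields="contentVector",
)

results = search_client.search(
    search_text=None,                 # pure vector query; pass text here for hybrid search
    vector_queries=[vector_query],
    select=["title", "content"],
)

for result in results:
    print(result["title"], result["@search.score"])
```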

Prerequisites

To run the Python samples in this folder, you should have:

  • An Azure subscription, with access to Azure OpenAI (or to another model provider used by a specific sample).
  • Azure AI Search, any tier, but choose a tier that can handle the workload. We recommend Basic or higher.
  • Azure OpenAI is used in most samples. A deployment of the text-embedding-ada-002 model is a common requirement.
  • Python (these instructions were tested with version 3.11.x).

You can use Visual Studio Code with the Python extension for these demos.

Set up your environment

  1. Clone this repository.

  2. Create a .env file based on the code/.env-sample file. Copy your new .env file to the folder containing your notebook and update the variables.

  3. If you're using Visual Studio Code with the Python extension, make sure you also have the Jupyter extension.
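
The notebooks read those settings at runtime, typically through python-dotenv. Below is a minimal sketch of loading and sanity-checking a .env file; the variable names shown are placeholders, so check code/.env-sample for the names each notebook actually expects.

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

# Reads the .env file in the current working directory (the notebook's folder).
load_dotenv(override=True)

# Placeholder variable names -- confirm them against code/.env-sample.
required = [
    "AZURE_SEARCH_SERVICE_ENDPOINT",
    "AZURE_SEARCH_ADMIN_KEY",
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_KEY",
]

missing = [name for name in required if not os.getenv(name)]
if missing:
    raise RuntimeError(f"Missing settings in .env: {', '.join(missing)}")
```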

Run the code

  1. Open the code folder and sample subfolder. Open an .ipynb file in Visual Studio Code.

  2. Optionally, create a virtual environment so that you can control which package versions are used. Press Ctrl+Shift+P to open the command palette, search for Python: Create Environment, and then select Venv to create an environment within the current workspace.

  3. Copy the .env file to the subfolder containing the notebook.

  4. Execute the cells one by one, or select Run, or press Shift+Enter.
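
Many of the samples above pin a specific beta of azure-search-documents (see the ChangeLog references in the table). If a notebook fails on an import, a quick check like the following sketch, run in a notebook cell, shows which package versions the active kernel is actually using:

```python
from importlib.metadata import PackageNotFoundError, version

# Check the package versions visible to the active kernel / virtual environment.
for package in ("azure-search-documents", "openai", "python-dotenv"):
    try:
        print(f"{package}=={version(package)}")
    except PackageNotFoundError:
        print(f"{package} is not installed in this environment")
```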

Troubleshoot errors

If you get error 429 from Azure OpenAI, it means the resource is over capacity:

  • Check the Activity Log of the Azure OpenAI service to see what else might be running.

  • Check the Tokens Per Minute (TPM) on the deployed model. On a system that isn't running other jobs, a TPM of 33K or higher should be sufficient to generate vectors for the sample data. You can try a model with more capacity if 429 errors persist.

  • Review these articles for information on rate limits: Understanding rate limits and A Guide to Azure OpenAI Service's Rate Limits and Monitoring.
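
If you can't raise capacity, retrying with exponential backoff is another option. The sketch below is not part of the samples; it assumes the openai 1.x client plus the tenacity package, and the deployment name and API version are placeholders to adjust for your resource.

```python
import os

from openai import AzureOpenAI, RateLimitError
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_random_exponential

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-02-01",  # placeholder; use the version your resource supports
)

# Back off for 1-60 seconds between attempts, up to 6 attempts, but only on 429s.
@retry(
    retry=retry_if_exception_type(RateLimitError),
    wait=wait_random_exponential(min=1, max=60),
    stop=stop_after_attempt(6),
)
def embed(texts):
    # "text-embedding-ada-002" is the deployment name assumed by most samples here.
    response = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return [item.embedding for item in response.data]
```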