Using Pinecone and Embeddings

Ian Macartney

Posted on July 10, 2023

Looking to implement semantic search or add on-demand context to a GPT prompt so it doesn’t just make shit up (as much)? Pinecone and Convex are a good match when you’re looking to build an app that leverages embeddings and also has user data. Pinecone stores and queries over vectors efficiently, and Convex stores relational & document data with strong transaction guarantees and nifty end-to-end data reactivity.

Let’s walk through how this shakes out in practice. If you want to see some code you can play around with, check out this GitHub repo, where you can add your own data, then compare and search over it using Convex and Pinecone.

High-level user flow

To start, what’s an example? With Pinecone and Convex, you can have a flow like this:

  1. A user submits a question and starts subscribing to the question’s results. Under the hood, Convex has stored the question in a document and kicked off an asynchronous action. If this question has been asked before, it might re-use previous results.
  2. The action creates an embedding using a service like OpenAI or Cohere. It can persist this embedding for posterity or to be able to search for similar questions.
  3. The action uses the embedding to query Pinecone (or any vector store) for related documents, products, or whatever your embeddings represent.
  4. The action stores the results in the question document, which automatically reflows to update the user’s client with the new data - potentially returning materialized data pulled from other documents in the database associated with the results.
  5. If this is part of a broader chain of operations, it might use the related documents to compose a prompt to an LLM like ChatGPT, using both the related documents and the question to get a more contextual answer.

A word on streaming updates

At every step, updates written to the Convex database will update all subscribed clients. Unlike raw HTTP streaming, Convex subscriptions can be trivially consumed by multiple clients in parallel and are resilient to network issues or page refreshes. The data received will be from a consistent snapshot of the database state, making it easier to reason about correctness.

Many of the code snippets below can be found in this GitHub repo which you’re welcome to play around with using your own data and API keys. If you are desperate for a hosted playground, let me know in Discord!

Adding data to Convex and Pinecone

Depending on the application, you may have a large mostly-static corpus of data, or be continually adding data — which I’ll refer to as a source below. The process looks something like the following:

Break up your source into bite-sized chunks.

This helps limit how much data you pass in each request (embedding models have context limits) and makes each embedding more targeted. You could imagine that an embedding of this whole post wouldn’t rank as highly against “How do you add data to Convex and Pinecone?” as an embedding of just this section.

To do this, you can split it yourself or use a library like LangChain’s RecursiveCharacterTextSplitter:

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// ChunkSize and pageContent come from your app (e.g. ChunkSize = 1000 characters).
const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: ChunkSize,
});
const splitTexts = await textSplitter.createDocuments([pageContent]);
const chunks = splitTexts.map((chunk) => ({
  text: chunk.pageContent,
  lines: chunk.metadata.loc.lines,
}));

You can tune the size and algorithm to your own needs. One tip is to add some overlap, so each chunk has some text from the previous and next sections.
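
For example, with LangChain’s splitter you can pass a chunkOverlap option; the numbers here are just placeholders to tune for your content:

const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000, // placeholder: max characters per chunk
  chunkOverlap: 100, // placeholder: characters shared with neighboring chunks
});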

Store the source in the database

The Convex database is a great place to store all the metadata that your app will want. For embeddings based on text, you’ll likely even want to store the chunk of text in the database, so you can quickly access it to return as part of queries for a client or as part of a pipeline. For larger data, like video, it makes more sense to store the data in file storage.

Importantly, you should not store the text chunk directly in Pinecone metadata: because all metadata is indexed by default in Pinecone, it can quickly fill up the index.
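
For reference, here’s a rough sketch of what a Convex schema for these tables might look like, based on the fields used in the snippets below (the actual repo may differ):

// convex/schema.ts (sketch)
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  sources: defineTable({
    name: v.string(),
    chunkIds: v.array(v.id("chunks")),
    saved: v.boolean(),
    // Bookkeeping fields patched after the embeddings are saved.
    totalTokens: v.optional(v.number()),
    embeddingMs: v.optional(v.number()),
  }),
  chunks: defineTable({
    text: v.string(),
    sourceId: v.id("sources"),
    chunkIndex: v.number(),
    lines: v.object({ from: v.number(), to: v.number() }),
  }),
  searches: defineTable({
    input: v.string(),
    count: v.number(),
    // Filled in by the search action once Pinecone returns matches.
    relatedChunks: v.optional(
      v.array(v.object({ id: v.id("chunks"), score: v.optional(v.number()) }))
    ),
  }),
});

Inserting a source and its chunks might then look like this: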

async function addSource(
  db: DatabaseWriter,
  name: string,
  chunks: { text: string; lines: { from: number; to: number } }[]
) {
  const sourceId = await db.insert("sources", {
    name,
    chunkIds: [],
    saved: false,
  });
  const chunkIds = await Promise.all(
    chunks.map(({ text, lines }, chunkIndex) =>
      db.insert("chunks", {
        text,
        sourceId,
        chunkIndex,
        lines,
      })
    )
  );
  await db.patch(sourceId, { chunkIds });
  return (await db.get(sourceId))!;
}

There are a few things to note here:

  • I’m saving both a back-reference from chunks to sources and a forward reference from a source to its many chunks. As discussed in this post on database relationships, this gives quick access in both directions without defining an extra index, as long as you have a small number of relations (technically up to 1024, but 100 is my rule of thumb).
  • I’m saving an empty array at first, then patching it with the chunk IDs once I insert them. Convex generates unique IDs on insertion; at this time you can’t pre-allocate or specify custom primary IDs for documents.
  • I’m creating the source with saved: false - we’ll update this once we’ve saved the embeddings into Pinecone. This lets the client know the insertion status and helps recover from transient failures, as we’ll see later on.

Kick off a background action

Mutations in Convex are transactions, so they’re prohibited from having non-transactional side effects like calling other cloud services. A Convex action is non-transactional and can talk to the outside world - but how do you “call” an action from a mutation if the mutation can’t have side effects? A pattern I really like is to schedule the action from the mutation, so the communication with the outside world only happens after the mutation commits:

await scheduler.runAfter(0, api.sources.addEmbedding, {
  source,
  texts: chunks.map(({ text }) => text),
});

Thanks to Convex’s strong transaction guarantees, the action is only invoked if the mutation successfully commits, so you’ll never have an action running for a source that doesn’t exist.

Create an embedding

From our action, we can fetch embeddings. See this post for more information on what embeddings are. See the code for fetchEmbeddingBatch here.

const { embeddings } = await fetchEmbeddingBatch(texts);
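
fetchEmbeddingBatch is a small helper in the repo; a rough sketch of it, calling the OpenAI embeddings HTTP API directly with fetch (the model choice and return shape here are assumptions), could look like:

async function fetchEmbeddingBatch(texts: string[]) {
  const result = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "text-embedding-ada-002", input: texts }),
  });
  const json = await result.json();
  // Each entry in data has an index and an embedding vector; keep them in input order.
  const embeddings = json.data
    .sort((a: { index: number }, b: { index: number }) => a.index - b.index)
    .map((d: { embedding: number[] }) => d.embedding);
  return { embeddings, totalTokens: json.usage.total_tokens };
}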

Upsert into Pinecone

Adding data to Pinecone is a straightforward operation. “Upsert,” for those unfamiliar, updates the vector if the specified ID already exists and inserts it otherwise.

await upsertVectors(
  "chunks", // namespace
  source.chunkIds.map((id, chunkIndex) => ({
    id,
    values: embeddings[chunkIndex],
    metadata: { sourceId: source._id, textLen: texts[chunkIndex].length },
  }))
);

Tips:

  • We aren’t including much metadata here - in general, you should only store metadata that you might want to use to filter Pinecone queries, such as keywords, categories, or, in this case, text length¹.
  • We’re re-using the Convex document ID for the Pinecone vector. This isn’t required (you could make up your own ID and store that in the Convex document), but I find it very handy. The results of a Pinecone query can be passed directly to db.get, which is wicked fast, without needing to return any metadata. It also means you can fetch or delete the Pinecone vector for a given chunk without storing an extra ID.
  • I used the table name as the Pinecone namespace for convenience, so queries for chunks wouldn’t return vectors for other data. This isn’t required but helped me with organization and naming fatigue.

Tip: Use the @pinecone-database/pinecone Pinecone client for the best experience in Convex.

There are two action runtimes in Convex: our optimized runtime and a generic Node.js environment. When possible I prefer using the optimized runtime, so I can keep the actions in the same file as the queries and mutations, along with some performance benefits. However, our runtime doesn’t support all npm libraries. Thankfully the pinecone package doesn’t depend on any incompatible packages and just uses the fetch standard under the hood. This is also why I prefer using fetch and the OpenAI HTTP API directly above. See here for more information on runtimes.
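
For context, upsertVectors is a thin wrapper in the repo around that client; a rough sketch using the client API of that era (the index name and environment variables are assumptions) might look like:

import { PineconeClient } from "@pinecone-database/pinecone";

async function pineconeIndex() {
  const client = new PineconeClient();
  await client.init({
    apiKey: process.env.PINECONE_API_KEY!,
    environment: process.env.PINECONE_ENVIRONMENT!,
  });
  return client.Index(process.env.PINECONE_INDEX_NAME!);
}

async function upsertVectors(
  namespace: string,
  vectors: { id: string; values: number[]; metadata?: object }[]
) {
  const index = await pineconeIndex();
  // Upsert in one request here; batch this if you have many vectors.
  return index.upsert({ upsertRequest: { namespace, vectors } });
}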

Mark the source as “saved”

All we need to do to notify the frontend that the data has been saved is to update the source. Any queries that reference the source document will be updated with the latest data automatically.

await runMutation(api.sources.patch, {
  id: source._id,
  patch: { saved: true, totalTokens, embeddingMs },
});
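
The api.sources.patch mutation used here is just a thin helper; a minimal version might look like this (argument types simplified):

export const patch = mutation(
  async ({ db }, { id, patch }: { id: Id<"sources">; patch: Partial<Doc<"sources">> }) => {
    await db.patch(id, patch);
  }
);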

At this point, our data is in Convex and an embedding vector is saved in Pinecone.

Extensions

Beyond saving chunks of the source, you might also consider:

  • Adding an embedding of a summary of the whole source.
  • Adding a hierarchy of searches, where you could first search for a category of documents and then provide that category as a metadata filter in a later query.
  • Namespacing or otherwise segmenting user data so you never accidentally leak context between users.

Searching for sources

Similarly to inserting data, to do a semantic search over your documents, you can:

  1. Insert the search into a table of searches. If there’s already an identical search, you could even decide to re-use those results. This is handy for iterating on large pipelines and keeping latency and costs low.

    const searchId = await db.insert("searches", { input, count });
    
  2. Kick off an action transactionally.

    await scheduler.runAfter(0, api.searches.search, {
      input,
      searchId,
      topK: count,
    });
    
  3. Create an embedding of the search. Aside: I have a hunch there’s a lot of opportunity here for ways of transforming the raw search into a better text input for the embedding.

    const { embedding } = await fetchEmbedding(input);
    
  4. Use the pinecone query to find nearby vectors representing chunks.

    const { matches } = await pinecone.query({
      queryRequest: {
        namespace: "chunks",
        topK,
        vector: embedding,
      },
    });
    
  5. Map the Pinecone matches to id/score pairs and update the search results by running a mutation.

    const relatedChunks = matches.map(({ id, score }) => ({ id, score }));
    await runMutation(api.searches.patch, {
      id: searchId,
      patch: { relatedChunks },
    });
    
  6. Optional: store the search embedding in Pinecone if you want to be able to search semantically over searches themselves!

Note: you could just do steps 2-4 directly in an action if you don’t care about caching results or storing the search vector.
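
Putting steps 3-5 together, the search action might look roughly like this (a sketch built from the helpers above; the repo version has more error handling):

export const search = action(
  async (
    { runMutation },
    { input, searchId, topK }: { input: string; searchId: Id<"searches">; topK: number }
  ) => {
    // 3. Embed the search input.
    const { embedding } = await fetchEmbedding(input);
    // 4. Find the nearest chunk vectors in Pinecone.
    const pinecone = await pineconeIndex();
    const { matches } = await pinecone.query({
      queryRequest: { namespace: "chunks", topK, vector: embedding },
    });
    // 5. Store the match IDs and scores back on the search document.
    const relatedChunks = (matches ?? []).map(({ id, score }) => ({ id, score }));
    await runMutation(api.searches.patch, {
      id: searchId,
      patch: { relatedChunks },
    });
  }
);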

Returning results to the client

The client can subscribe to the search document’s ID:

const results = useQuery(api.searches.semanticSearch, { searchId });

The query looks for the search and returns the related chunks along with their source’s name:

export const semanticSearch = query(
  async ({ db }, { searchId }: { searchId: Id<"searches"> }) => {
    const search = (await db.get(searchId))!;
    if (!search.relatedChunks) return null;
    return pruneNull(
      await Promise.all(
        search.relatedChunks.map(async ({ id, score }) => {
          const chunk = await db.get(id);
          if (!chunk) return null;
          const source = await db.get(chunk.sourceId);
          return { ...chunk, score, sourceName: source!.name };
        })
      )
    );
  }
);

This is parallelized by using Promise.all and calls to db.get are cached.
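
Here, pruneNull is just a small helper that drops null entries from the results; a sketch:

const pruneNull = <T>(list: (T | null)[]): T[] =>
  list.filter((item): item is T => item !== null);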

Summary

In this post, we looked at using Pinecone and embeddings with Convex. One natural extension is to use the retrieved sources as part of a GPT prompt template, but I’ll leave that for a future post. Let us know in Discord what you think and what you’d like to see next!


  1. At the Pinecone hackathon, there was a discussion of semantic rankings sometimes behaving oddly - in my case, when I searched over a corpus of famous poems for “what is the meaning of life?”, one of the top hits was a “hello world” dummy text I had added. One participant mentioned that a useful filter - after listing a plethora of sophisticated ranking strategies - was to just exclude text shorter than 200 characters. Intuitively this makes some sense: short phrases probably have higher semantic variance than longer passages, so very short chunks can match queries they have little to do with.
