A Chat with Edo Liberty, Founder and CEO of API-First Database Pinecone

Many companies today are pouring money into their AI capabilities, but some are clearly doing it better than others. Tech giants like Google, Spotify, Pinterest, Facebook, and Amazon are way ahead, and their AI/ML seems to understand what customers want far better than that of their nearest competitors. They all share one common secret: “vector search”, which lets them understand what a user wants to a much higher degree and therefore offer better content choices.

Pinecone is an API-first vector database that makes it easy for developers to add the same vector search capabilities those tech giants rely on. We make it possible for anyone to benefit from these advanced capabilities without having to build the complicated underlying infrastructure.

How did you come up with the idea for the company?

I saw the amazing power of vector search at AWS and Yahoo!, where I led research teams, and the impact it made on products. It was obvious to me that countless other companies could make their applications better by switching to vector-based systems. Yet I also saw the immense engineering efforts required to implement this.

So when we created Pinecone, we made ease-of-use and production-readiness a top priority. Instead of creating another convoluted library or exposing loads of parameters for engineers to mess with, we built a highly performant and scalable solution and put it behind a super simple API that any developer can figure out in a few minutes. Even today, this ease of use is what makes Pinecone unique.
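
To give a sense of what that looks like in practice, here is a minimal sketch using Pinecone's Python client: create an index, upsert a few vectors, and run a nearest-neighbor query. The index name, dimension, and vector values are placeholders, and the exact calls differ between client versions (newer releases replace pinecone.init with a Pinecone class), so read it as an illustration rather than a canonical quickstart.

```python
import pinecone

# Placeholder credentials for the managed service (classic pinecone-client interface).
pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")

# Create an index whose dimension matches the output of your embedding model.
pinecone.create_index("quickstart", dimension=384, metric="cosine")
index = pinecone.Index("quickstart")

# Upsert a few (id, vector, metadata) records.
index.upsert(vectors=[
    ("doc-1", [0.1] * 384, {"category": "news"}),
    ("doc-2", [0.2] * 384, {"category": "blog"}),
])

# Query with the embedding of a user's request; the top_k nearest neighbors come back.
results = index.query(vector=[0.15] * 384, top_k=2, include_metadata=True)
print(results)
```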

How has the company evolved over the last couple of years?

On the product side, since introducing the first vector database we’ve seen interest and demand grow faster than we even dreamed of. And with that demand we’ve seen common use cases emerge, such as semantic search, question answering, and image similarity search, to name a few.

That led us to develop critical features for those use cases, such as filtering. We also saw customers with a wide range of performance, capacity, and cost requirements, which led us to invest in building a highly performant, scalable, and cost-efficient solution.
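
As an illustration of the filtering feature, the sketch below shows a query that combines vector similarity with a metadata constraint, reusing the placeholder index from the earlier example and the classic Python client; the field name and values are invented for illustration.

```python
# Return the nearest neighbors whose metadata matches the filter.
# Filters use MongoDB-style operators such as $eq and $in.
results = index.query(
    vector=[0.15] * 384,
    top_k=5,
    filter={"category": {"$eq": "news"}},
    include_metadata=True,
)
```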

On the company side, the wave of customer demand has validated our vision and led us to raise $38M and build a world-class team.

What can we hope to see from Pinecone in the future?

We’re still only scratching the surface. We’re growing our R&D team to continue making it easier for engineers of all ML skill levels to take advantage of vector search. We’re also investing in core research in machine learning (ML), information retrieval, and natural language processing (NLP) to better support the common use cases mentioned above.