A Conversation with Aravind Srinivas, CEO of Perplexity AI: On Search, Truth, Curiosity, and the Future of Knowledge

Lex Fridman: Perplexity is part search engine, part LLM. How does it work, and what role does each part play in serving the final result?

Aravind Srinivas: Perplexity is best described as an answer engine. You ask it a question and you get an answer, but the difference is that every answer is backed by sources, much like an academic paper. That referencing, the sourcing, is where the search engine comes in. We use traditional search to retrieve results relevant to the user's query, read those links, extract the relevant paragraphs, and feed them into a large language model (LLM). The LLM then looks at the query and the retrieved paragraphs and generates a well-formatted answer with a footnote for every sentence.
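The retrieve-then-generate loop Aravind describes can be sketched roughly as follows. The search backend and the LLM are stubbed with trivial stand-ins here; all function names and URLs are illustrative, not Perplexity's actual internals.

```python
# Minimal sketch of an answer-engine pipeline: search, extract relevant
# paragraphs, then compose a cited answer. Every component is a toy stand-in.

def search(query):
    # Stand-in for a web search API: returns (url, text) result pairs.
    return [
        ("https://example.com/a", "Perplexity is an answer engine. "
                                  "It cites sources for every claim."),
        ("https://example.com/b", "Answer engines combine search with LLMs."),
    ]

def extract_relevant(results, query, max_paragraphs=4):
    # Stand-in for passage ranking: keep paragraphs sharing words with the query.
    terms = set(query.lower().replace("?", "").split())
    scored = []
    for url, text in results:
        for para in text.split(". "):
            overlap = len(terms & set(para.lower().split()))
            if overlap:
                scored.append((overlap, url, para.strip()))
    scored.sort(reverse=True)
    return scored[:max_paragraphs]

def answer(query):
    # The LLM step is faked: we stitch retrieved paragraphs into a cited answer,
    # numbering each source the way a footnote would.
    passages = extract_relevant(search(query), query)
    sources, sentences = [], []
    for _, url, para in passages:
        if url not in sources:
            sources.append(url)
        sentences.append(f"{para} [{sources.index(url) + 1}]")
    return " ".join(sentences), sources

text, sources = answer("What is Perplexity?")
print(text)
print(sources)
```

The real system replaces each stub with heavy machinery (a crawler-backed index, a passage ranker, a fine-tuned LLM), but the orchestration shape is the same: the model only sees what retrieval hands it.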

The magic is all of this working together in one orchestrated product, and that's what we built Perplexity for.

Lex: So the model is explicitly instructed to write like an academic: it finds a bunch of stuff on the internet and generates something coherent that humans will appreciate, citing sources in the narrative it creates.

Aravind: Exactly. When I wrote my first paper, senior researchers told me that every sentence in a paper should be backed by a citation from another peer-reviewed paper or an experimental result. Anything else is just an opinion. It's a simple statement, but profound in how it forces you to say things that are accurate.

We took this principle and asked ourselves, how do we make chatbots accurate? Force them to only say things they can find on the internet, corroborated by multiple sources. This need drove the creation of Perplexity.

Lex: Fundamentally, it's about search. First, there's the search element, then the storytelling element via the LLM, and the citation element. But it's about search first. So, do you think of Perplexity as a search engine?

Aravind: I think of Perplexity as a knowledge discovery engine. Of course, we call it an answer engine, but everything matters here. The journey doesn't end once you get an answer. In my opinion, it begins there. You see related questions at the bottom, suggested questions to ask, inviting you to dig deeper. That's why our search bar says, "Where knowledge begins," because there's no end to knowledge; you can only expand and grow.

Lex: I see it as a discovery process. You start with a question, see the answers, and then explore related questions, expanding your knowledge.

[Lex and Aravind use Perplexity to answer the question, "Is Perplexity a search engine or an answer engine?" and discuss the generated results.]

Lex: The generation of related searches, the next step in the curiosity journey, is really interesting.

Aravind: Exactly! David Deutsch, in his book The Beginning of Infinity, argues that the creation of new knowledge starts from the spark of curiosity, leading to seeking explanations, finding new phenomena, and gaining more depth on existing knowledge. I love how Perplexity fosters this process.

[Lex and Aravind continue exploring related questions on Perplexity, discussing its strengths and weaknesses compared to Google.]

Lex: Can Perplexity take on and beat Google or Bing in search?

Aravind: We do not have to beat them, nor do we have to take them on. We never tried to play Google at their own game. Disrupting the search space means rethinking the entire UI. Why should links occupy the most prominent real estate on a search engine? We flipped that.

Lex: Let's talk about Google's business model, where they make money by showing ads as part of the 10 blue links. Can you explain your understanding of that model and why it doesn't work for Perplexity?

Aravind: Google's AdWords model is brilliant: the greatest business model of the last 50 years. They created a platform with the largest real estate on the internet, where advertisers bid for their links to be ranked as high as possible for relevant searches. It's a dynamic auction system that ensures high margins for Google.
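The auction mechanism behind this can be sketched as a toy generalized second-price (GSP) auction, the style of auction Google's ad system popularized: advertisers bid per click, and each slot winner pays the next-highest bid. The bid values below are made up for illustration.

```python
# Toy generalized second-price auction: rank bidders by bid, award slots in
# order, charge each winner the bid immediately below theirs.

def gsp_auction(bids, slots=2):
    # bids: {advertiser: bid_per_click}; returns (advertiser, price_paid) pairs.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(slots, len(ranked))):
        winner = ranked[i][0]
        # Price is the next-highest bid, or 0 if no bidder remains below.
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((winner, price))
    return results

results = gsp_auction({"a": 3.0, "b": 1.5, "c": 2.5})
print(results)  # "a" wins slot 1 paying 2.5; "c" wins slot 2 paying 1.5
```

Real ad auctions also weight bids by predicted click-through rate and quality score, but the second-price structure is what makes truthful-ish bidding and high margins coexist.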

However, any ad unit less profitable than a link click doesn't make sense for Google to pursue aggressively. This is where Perplexity has an opportunity. We prioritize answers, not links, making traditional ad units less relevant.

We can experiment with different ad models, perhaps incorporating relevant ads seamlessly into the user experience without sacrificing user trust and accuracy.

[Lex and Aravind further discuss the intricacies of Google's ad model and how Perplexity might integrate ads differently.]

Lex: You looked up to Larry Page and Sergey Brin. What do you find inspiring about Google and those two guys?

Aravind: They disrupted search by flipping the table. They said, let's ignore the text and look at the link structure to extract ranking signals, leading to the invention of PageRank. This deep academic grounding and focus on different ranking signals differentiated them from other search engines at the time.
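The "look at the link structure, not the text" idea reduces to the PageRank power iteration, which can be sketched in a few lines. The four-page link graph below is made up for illustration.

```python
# Toy PageRank by power iteration: a page's rank is the stationary share of
# a random surfer who follows links with probability `damping` and jumps to
# a uniformly random page otherwise.

def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:                      # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:                             # split rank across outlinks
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # "c" is linked by a, b, and d, so it wins
```

Note that nothing about the pages' content enters the computation; the ranking signal comes entirely from who links to whom, which is exactly the flip Aravind is describing.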

I admire their obsession with latency, their philosophy of "the user is never wrong," and their contrarian insights like hiring PhDs instead of building a business team in the early days. They focused on building core infrastructure and deeply grounded research.

[Lex and Aravind discuss lessons learned from Larry Page and Sergey Brin, including the importance of minimizing user effort, obsessively reducing latency, and learning from user behavior.]

Lex: What other entrepreneurs inspire you on your journey of starting the company?

Aravind: I've taken inspiration from various entrepreneurs. From Bezos, I learned the importance of clarity of thought, operational excellence, customer obsession, and relentless pursuit of goals. From Elon Musk, I draw inspiration for his grit, first-principles thinking, and focus on distribution. From Jensen Huang, I admire his obsession with constantly improving systems and questioning conventional wisdom. And from Zuckerberg, I appreciate his commitment to open source and pushing the boundaries of AI.

[Lex and Aravind delve deeper into the specific lessons learned from these entrepreneurs and how those lessons are applied to building Perplexity.]

Lex: How surprising was it to you how effective attention, particularly self-attention, was in leading to the Transformer and the explosion of intelligence we've seen?

Aravind: Attention wasn't entirely new. Yoshua Bengio and Dzmitry Bahdanau introduced soft attention, and DeepMind's PixelRNN paper showed that convolutional models could do autoregressive modeling with masked convolutions.

Google Brain's Transformer paper combined the power of attention with parallel computation, leading to a breakthrough. This, combined with OpenAI's insights on the importance of unsupervised pre-training, data scaling, and constant improvements, led to the explosion of LLM capabilities we see today.

[Lex and Aravind discuss the evolution of LLMs, from RNNs to Transformers, emphasizing the key breakthroughs and how they built upon each other.]

Lex: How important is RLHF (Reinforcement Learning from Human Feedback) to you?

Aravind: RLHF is crucial. While it might seem like just the icing on the cake, it plays a vital role in making these systems controllable and well-behaved. The pre-training phase lays the foundation of general common sense, but it's the post-training phase, including RLHF and supervised fine-tuning, that shapes the models into usable products.

[Lex and Aravind explore the concept of decoupling reasoning from facts, discussing Microsoft's work on small language models (SLMs) trained for reasoning and how Perplexity might leverage these advancements.]

Lex: You recently posted about the paper "STaR: Bootstrapping Reasoning With Reasoning." Can you explain Chain of Thought reasoning and how useful it is?

Aravind: Chain of Thought is a simple yet powerful idea where, instead of training on prompt and completion alone, you force the model to go through a reasoning step, providing explanations before arriving at an answer. This prevents overfitting and enables better generalization to new questions.

The STaR paper takes this further by training the model on explanations even for problems it initially answers incorrectly: the model is shown the correct answer as a hint and asked to produce a rationale that reaches it. It's a fascinating way to bootstrap the model's reasoning abilities using natural language explanations.
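One round of that bootstrapping loop can be sketched schematically. The model here is a deliberately silly stub (it can only add small numbers unless hinted); the function names are illustrative of the paper's idea, not its actual code.

```python
# Schematic of one STaR round: keep rationales that reach the right answer,
# "rationalize" failures by hinting the answer, then fine-tune on the result.

def star_round(model, problems, answers):
    train_set = []
    for q, gold in zip(problems, answers):
        rationale, pred = model.generate(q)          # chain-of-thought attempt
        if pred == gold:
            train_set.append((q, rationale, gold))   # correct: keep as-is
        else:
            # Rationalization: give the answer as a hint and ask for a
            # rationale that reaches it.
            rationale, pred = model.generate(q, hint=gold)
            if pred == gold:
                train_set.append((q, rationale, gold))
    model.finetune(train_set)                        # train on kept rationales
    return train_set

class StubModel:
    # Toy "model": solves small sums unaided, bigger ones only with a hint.
    def generate(self, q, hint=None):
        a, b = map(int, q.split("+"))
        if a + b < 10 or hint is not None:
            return f"{a} plus {b} makes {a + b}", a + b
        return "I am unsure", None

    def finetune(self, data):
        self.data = data

model = StubModel()
kept = star_round(model, ["2+3", "40+50"], [5, 90])
print(len(kept))  # both problems yield usable rationales after rationalization
```

The key property is that the training signal comes from the model's own generated explanations, filtered by answer correctness, rather than from human-written rationales.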

[Lex and Aravind discuss the potential of Chain of Thought reasoning, its limitations, and how it might contribute to building more capable AI agents.]

Lex: Do you think we live in a world where we can get an intelligence explosion from self-supervised post-training? AI systems talking to each other and learning from each other.

Aravind: It's possible. We haven't cracked recursive self-improvement yet, but there's no reason to believe it's impossible. The challenge lies in finding ways to create new signals for the AI without relying heavily on human annotation.

[Lex and Aravind speculate on the possibility of an intelligence explosion, discussing the need for new signal generation, RL sandboxes, and the role of human feedback in this process.]

Lex: What's the origin story of Perplexity?

Aravind: My co-founders and I wanted to build cool products with LLMs. We were inspired by GitHub Copilot and its success as an AI-first product. We sought to build a company that was "AI complete," meaning that advancements in AI directly translated to product improvements, creating a positive feedback loop.

Our initial idea was to disrupt Google by allowing users to search visually through a glass device, but we realized the need to start with something more manageable. We experimented with searching over relational databases using SQL queries generated by LLMs, focusing on Twitter data.

This led to the creation of a demo showcasing search capabilities for social graphs, which garnered attention from prominent figures in AI. This validation and the insights gained from this experiment paved the way for Perplexity's focus on general web search.
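That early text-to-SQL experiment can be sketched in miniature. The "LLM" is stubbed as a template lookup, and the schema and question are made up for illustration; the real system generated SQL with an LLM over actual Twitter data.

```python
# Toy version of LLM-generated SQL over a relational table: translate a
# natural-language question into SQL, then execute it.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (handle TEXT, followers INTEGER)")
db.executemany("INSERT INTO users VALUES (?, ?)",
               [("alice", 1200), ("bob", 300), ("carol", 5400)])

def llm_to_sql(question):
    # Stand-in for the LLM translation step.
    templates = {
        "who has the most followers":
            "SELECT handle FROM users ORDER BY followers DESC LIMIT 1",
    }
    return templates[question.lower()]

sql = llm_to_sql("Who has the most followers")
top = db.execute(sql).fetchone()[0]
print(top)
```

The appeal of the setup is that the database, not the model, holds the facts; the model's only job is translating intent into a query, which is the same division of labor Perplexity later applied to web search.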

[Lex and Aravind recount the early days of Perplexity, their initial focus on Twitter search, the pivot to general web search, and the importance of demonstrating magic and practicality to attract users and investors.]

Lex: Can you speak to the technical details of how Perplexity works? You mentioned RAG (Retrieval Augmented Generation). How does the search happen? What does the LLM do?

Aravind: RAG is a framework where you retrieve relevant documents and paragraphs for a given query and use that information to generate an answer. Perplexity goes a step further by enforcing strict adherence to the retrieved information, ensuring factual grounding and reducing hallucinations.
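The "strict adherence" step can be illustrated with a toy grounding filter: keep only answer sentences whose content words appear in some retrieved passage, and treat everything else as a potential hallucination. The tokenization and threshold below are arbitrary choices for the sketch, not Perplexity's method.

```python
# Toy grounding check: drop generated sentences not supported by any
# retrieved passage.

def grounded_sentences(answer, passages, min_overlap=0.5):
    passage_words = set()
    for p in passages:
        passage_words |= set(p.lower().split())
    kept = []
    for sent in answer.split(". "):
        # Crude "content words": anything longer than three characters.
        words = [w for w in sent.lower().split() if len(w) > 3]
        if not words:
            continue
        overlap = sum(w in passage_words for w in words) / len(words)
        if overlap >= min_overlap:
            kept.append(sent.strip(". "))
    return kept

passages = ["Perplexity cites sources for every sentence it generates"]
answer = "Perplexity cites sources. It was founded on the moon"
out = grounded_sentences(answer, passages)
print(out)  # the unsupported second sentence is filtered out
```

Production systems do this with the model itself (instructing it to answer only from the retrieved context, and attributing each sentence to a source), but the principle is the same: generation is constrained by retrieval.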

[Lex and Aravind break down the different components of Perplexity's architecture, including crawling, indexing, ranking, and answer generation, highlighting the importance of balancing retrieval quality, index freshness, and model capabilities to ensure accuracy and minimize hallucinations.]

Lex: How much of search is a science, how much of it is an art?

Aravind: It's a good amount of science, but a lot of user-centric thinking is baked in. We constantly analyze user queries and identify systemic issues to improve search quality and user experience at scale.

[Lex and Aravind discuss the challenges of scaling search, the importance of tracking tail latency, and making architectural decisions to handle increasing user traffic and complex queries.]

Lex: What about the query stage? I type in a poorly structured query. What kind of processing can be done to make it usable? Is that an LLM type of problem?

Aravind: LLMs are crucial for understanding poorly structured queries. They can identify the needle in the haystack even when the initial retrieval doesn't have perfect precision, allowing us to focus on improving both the retrieval stage and the model's comprehension capabilities.

[Lex and Aravind explore the trade-offs between improving retrieval and model capabilities, highlighting the benefits of being model agnostic and how Perplexity leverages different LLMs like GPT-4, Claude, and their own Sonar model.]

Lex: What advice would you give to people looking to start a company?

Aravind: Start with an idea you love, a product you would use. Make sure you are genuinely passionate about it. The market will guide you towards building a successful business, but don't chase market trends without personal conviction.

[Lex and Aravind discuss the importance of founder-product fit, the role of dopamine systems in driving motivation, and the need for relentless dedication and a strong support system to overcome the challenges of building a company.]

Lex: What advice would you give to a young person about work-life balance?

Aravind: If there's an idea that truly consumes you, it's worth dedicating yourself to it, especially in your younger years. It's a time for exploration, learning, and building a foundation for your future. Don't be afraid to work hard and surround yourself with passionate people who drive you to be better.

[Lex and Aravind share their experiences on the value of hard work, finding mentors and peers who inspire you, and the importance of pursuing your passions early on.]

Lex: What do you think the future of search, the internet, and the web browser looks like? What is this evolving towards?

Aravind: The internet has always been about the transmission of knowledge. Search is just one tool. We're moving towards a future of knowledge discovery, where AI empowers us to ask deeper questions, explore complex topics, and gain a more profound understanding of the world. This knowledge discovery can be facilitated through various interfaces, from chatbots to voice assistants, but the core mission is to cater to human curiosity and guide users towards new discoveries.

[Lex and Aravind discuss the future of knowledge discovery, the role of AI in democratizing knowledge, and the potential of Perplexity Pages as a platform for sharing curated and personalized knowledge journeys.]

Lex: How many alien civilizations are in the universe?

[Lex and Aravind use Perplexity to explore this question, highlighting the platform's ability to provide concise answers, cite sources, and suggest relevant follow-up questions.]

Lex: You mentioned the value of sharing knowledge and learning from each other's experiences. How does Perplexity Pages contribute to this vision?

Aravind: Perplexity Pages allows users to curate and share their knowledge journeys, transforming personal explorations into valuable resources for others. It's a way to broadcast the insights gained from Q&A sessions, enabling a collaborative approach to knowledge discovery.

[Lex and Aravind discuss the potential of Perplexity Pages as a collaborative platform for knowledge creation, emphasizing the value of sharing discoveries and breaking out of echo chambers to foster a deeper understanding of diverse perspectives.]

Lex: What gives you hope about the future?

Aravind: Human curiosity. I believe that by empowering people with tools to explore their curiosities, we can foster a more knowledgeable, truth-seeking, and understanding world.

[Lex and Aravind conclude their conversation by reflecting on the importance of curiosity, the potential for AI to contribute to human flourishing, and the challenges and opportunities that lie ahead in shaping a future where knowledge empowers us to build a better world.]

Lex: Thank you for this incredible conversation. Thank you for being an inspiration to me and to all the kids out there that love building stuff. Thank you for building Perplexity.

Aravind: Thank you, Lex. Thanks for talking today.

[Lex concludes the podcast with a quote from Albert Einstein about the importance of questioning and contemplating the mysteries of reality.]