Transcript of EP 265 – Aravind Srinivas on Perplexity AI

The following is a rough transcript which has not been revised by The Jim Rutt Show or Aravind Srinivas. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Aravind Srinivas. He is the co-founder and CEO of Perplexity AI, an artificial intelligence startup that has developed an AI-powered search engine, or as some people call it, an answer engine. Aravind is a graduate of IIT, that very famous Indian technology college, which has produced an amazing number of good people, and he has a PhD in computer science from the University of California at Berkeley. Prior to co-founding Perplexity, Aravind worked as a research scientist at OpenAI, contributing to projects like the text-to-image generator DALL-E. Welcome.

Aravind: Thank you so much for having me here. Super excited to be here.

Jim: Yeah, and I’m actually super excited to have you here. I have to confess to being quite a fanboy of Perplexity.

Aravind: Thank you.

Jim: Regular listeners know I probably mention it once every three episodes, at least. I use it every day. It’s probably replaced 50 to 60% of my Googling and a significant amount of my use of ChatGPT and Claude as answer engines. And I keep expanding my usage, which is interesting, mostly taking share from my Google searches. Though I will say, for people who are thinking about making the switch, it gives a different kind of answer, better in many ways, different in others. One minor trade-off against Google is speed. Google’s faster. And so, I always find myself doing an optimization problem in my head when I think about which one to use, so I always have both open in my browser. If it’s relatively simple and speed is the priority, then I’ll go with Google.

If it’s something where you could easily spend time weeding through links trying to figure out what’s spam and what’s not, then I go with Perplexity. And as the difficulty of the question increases, I’m much more likely to go with Perplexity. For simpler, local queries, I still find Google better and faster. In fact, I tested it again this afternoon just to make sure that it was still the case. For something like “Best Mexican restaurants in Lawrenceville, Pittsburgh, Pennsylvania,” Google was better in several ways. And one of the nice things is that it had a map feature, which is quite a plus.

On the other hand, for many, many things, Perplexity is definitely the answer, anything that I think of as research-oriented. I’m going to give a few of the searches I’ve done in the last three days, which are indicative of the kinds of things I use it for. These are by no means the only searches I’ve done. Hilariously, the last search I did on Perplexity is “What do you know about Aravind Srinivas, CEO of Perplexity?” And it gave me the best answer that anybody had, so that was good.

Aravind: Or even other things like “Tell me about how the costs of AI have gone down in the past year” or “When is Hurricane Milton going to hit Florida, and where?” Or “Compare the stock returns of Berkshire Hathaway and the S&P 500 over the last 5, 10 years,” or “How many queries per year does Google serve?” Or “How to download a particular piece of software,” or “How to fix this particular software install on my Mac,” or even “What is the national game of India?” or “When is the vice presidential debate happening?” or “Is there another presidential debate?” These are just the sort of questions that Google is never going to give you a proper answer for, but Perplexity would just nail it.

Jim: Yeah, I’m amazed that Google doesn’t get the “What time’s the debate tonight?” They’re just terrible. That’s idiocy. They should be able to fix that.

Aravind: Do you know why?

Jim: No.

Aravind: It’s because they’re afraid of a mistake there. A mistake would cost them so much in terms of brand reputation and stock price. Whereas, sure, a mistake costs Perplexity too, but it’s more that we are very confident in our technology, and we are okay with making mistakes if it helps us improve and get it right. We are very transparent in our communication to the users that we are still a startup figuring this thing out.

Whereas for Google, they’re the giant, they’re the 800-pound gorilla, and they think, “Okay, when Google rolls out something, it should always be top-notch, accurate, and reliable.” When they did a live demo of Google Bard, back when they used to call Gemini Bard, not a lot of people might remember this, the live demo failed, and then Google stock went down by 5 to 7% in a single day. So that’s basically the thing that we are taking advantage of, which is the asymmetry, the innovator’s dilemma: us getting 8 out of 10 queries right is impressing people like you, Jim.

But Google getting 2 out of 10 queries wrong is making people feel like Google is losing, and the stock price goes down, and Wall Street panics, and Google is no longer seen as a leader in AI. So that is the opening for a new entrant like us.

Jim: Interesting. Yeah, that’s always fun to be the fast-maneuvering guy against the behemoth. I was able to do that a few times in my business career. Another thing that I really like about Perplexity versus Google, OpenAI, Gemini, any of them, is your Perplexity engine’s not afraid to give an opinion. Let me give you two examples that I just tested this afternoon as I was doing my research.

The first one I asked, “Which is the better band, the Beatles or the Doors?” And OpenAI gave a very equivocal, “Yeah, on the left hand, on the right hand,” blah, blah. Gemini, something very similar. Perplexity said, “Beatles, and here’s why.” Boom, boom, boom, boom, boom, boom. I go, that’s great. Well, if I ask a specific question about which is better, I want an opinion. Here’s another one. This is a little bit more controversial, probably. I asked Perplexity, “Is Donald Trump mentally ill?” And it gave quite a long readout of opinions of people that thought he was, and then it, of course, warned at the end that we can’t really diagnose without an actual blah, blah, but it told pretty much what it thought.

OpenAI, when I asked it, just gave a completely bland non-answer, obviously deflecting the question. I’ve seen that before. That wasn’t the only time I’ve seen that, where Perplexity is not afraid to say what it thinks and doesn’t feel like it has to wrap the answer in all this equivocation like the other engines do. Is that by design, or is that just an emergent result?

Aravind: I mean, look, you’re going to get qualitatively very different answers if you’re designing a system to always go search the web, read what other people are saying, and then arrive at an opinion based on that, versus just answering the way the AI model builder engineered it to, which is the model generating its own opinions about what you’re asking. And because there are several guardrails built into the AI models, or different kinds of RLHF techniques reflecting the human preferences in your dataset, different AI models will tend to answer in very different ways and sometimes give you non-answers, because that’s what they’ve been trained to do, versus telling you, “Okay, this part of the web thinks this is what it is, that part of the web thinks this is what it is, but you’ve got to arrive at your own conclusions.”

So giving you diverse perspectives about things, actual perspective rather than verbiage that amounts to nothing, is possible because of the way we built the system using this technique called Retrieval-Augmented Generation, or RAG, where we pull results from the web, ingest that as a part of the prompt, and then ask the model to think, “Okay, using all this as additional context, not just what you think on your own as an AI model, can you try to answer the user’s question? And also, cite it. Cite the sources, make sure that the user can even go and fact-check what you’re saying.” So that gives this qualitatively different experience on Perplexity compared to ChatGPT.
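
To make the RAG pattern Aravind describes concrete, here is a minimal sketch. It is an illustration only, not Perplexity’s actual stack: the retriever is a stub, and the OpenAI-compatible client and model name are assumptions for the example. The shape of the idea is to retrieve passages, pack them into the prompt with numbered sources, and ask the model to answer with citations.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def retrieve(query: str, top_k: int = 5) -> list[dict]:
    # Stand-in for a real web/index retriever; returns url + snippet records.
    return [{"url": "https://example.com/doc1",
             "snippet": "Example passage relevant to the query."}][:top_k]

def answer(query: str) -> str:
    docs = retrieve(query)
    # Number the sources so the model can cite them as [1], [2], ...
    context = "\n\n".join(f"[{i+1}] {d['url']}\n{d['snippet']}" for i, d in enumerate(docs))
    prompt = (
        "Answer the question using ONLY the sources below, and cite them as [1], [2], ... "
        "so the user can fact-check.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model, not what Perplexity uses
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```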

Jim: And I love the citations. I will say, more often than I’d like, the citations are to some bland review article or something, but then you can go from there to the real source. It’s very, very good to have those citations. And OpenAI does not yet have citations, and as you laid out, it’s going to be extremely difficult to get them directly out of a large language model, while with RAG, of course, it’s much simpler because you know what the RAG pulled back.

And you can go through those and sort those out and say, “Which of these do I want to present as the source?” So I think that’s also another big, big difference, particularly when I’m doing more serious research because you and I both know, as you said, 8 out of 10, I think these days it’s more like 9 out of 10, you get a decent answer, but it’s not 10 out of 10, so you always have to check the answer, right?

Aravind: Correct. Correct.

Jim: Check it for sense, right? And having citations makes it vastly simpler to do that second check that you really need to do if you’re going to use it for anything important. Let me give you two other examples that just show you the kind of thing that I really like Perplexity for. I had a question for a podcast prep the other day, very nerdy: “What are the two-way correlations amongst the categories in the Big Five personality model?” I tried it on Google, and it was just a hopeless mess. I gave it to Perplexity, and it came out with the exact answer. Done. Perfect.

The other is, I’m a hobbyist who buys and sells and fixes vintage audio systems, ’70s, ’80s vintage speakers and amplifiers and stuff. And so I’m constantly having questions about some of this really old and sometimes quite obscure equipment. I just bought today a big old set of early ’90s vintage speakers, and I wanted to know how big they were. And Google, it says this, it says that. I ask Perplexity, and it just gives me the answer straight out, exactly what I want. Nothing more. Extremely impressive.

Aravind: Yeah. I was asking, “Did Indian billionaire Ratan Tata die yesterday?” And did you know that the car Waymo actually runs is called a Jaguar?

Jim: Yep.

Aravind: And Jaguar is actually owned by Tata. So I was actually asking, “How did Tata end up owning Jaguar?” And then Perplexity just tells me that Tata acquired Jaguar in 2008 from this company called… Basically, Ford used to own Jaguar cars and Land Rover, and then Tata bought it for $2 billion, and then it became a subsidiary owned by Tata. And then they did a deal with Google so that Waymo could use these Jaguar cars. And it’s pretty cool. Have you tried a Waymo here?

Jim: I haven’t been to San Francisco in seven years. One of these days, I’ll get back out there.

Aravind: Okay. Yeah, it’s pretty cool.

Jim: I keep getting emails from my friends and videos shot through the windows. And it seems like it’s really easy to get a Waymo ride now.

Aravind: Yeah, exactly. And I’m asking it all these questions. I’m giving a talk at this particular event, and these are the questions they’ve asked me, so I tell it, “Go check out all the interviews I’ve already done and try to tell me what I should say to these questions in a way that I haven’t spoken about before.” And it’s pretty good at that, so good that I’m almost feeling like I can be replaced.

And then “Tell me about the NVIDIA Blackwell chips. It seems like there’s a recent surge in demand. How should I think about the stock, or what does this particular group have to do with this investor?”

San Francisco had this whole heatwave thing recently. It only cooled down today, and I was trying to understand why that whole heatwave was going on there, and it really explained very nicely how a whole heat dome was formed because of a high-pressure system, and then it basically blocks all the cool breeze from the bay. So it’s amazing, right? Google built the whole Google Assistant thing so that you can go and ask these questions, but they never really delivered on that experience.

And then a whole inflection moment happened in AI at the end of 2022, with ChatGPT’s entrance, of course, where AI models became capable of good summarization, good formatting, a good ability to follow context across multiple questions in a chat thread. And all these abilities could be harnessed into building very new experiences, and search is the biggest software category of all. And so we were like, “Okay, why don’t we try to use them for search?” And that ended up being the company.

Jim: Yeah. And people have always said Google is vulnerable because they don’t have that big of a moat, really. And in fact, my biggest, well, not my biggest, my second biggest investment mistake was some friends of mine had gotten a commitment to get a big chunk of Google IPO stock, and they asked me if I wanted to come in for a chunk of it, and I said, “Nah.”

Aravind: So you should have done it.

Jim: Of course, I should have done it, 20X, 50X. But my thesis was: search, no moat, right? I used to actually write search engines, so I knew the technology wasn’t that deep.

Aravind: Oh, cool, cool.

Jim: And I said, “Where’s the moat?” And what I missed was seeing all the searches as the moat, which I think Google actually has. But they’ve always been susceptible to a quantum leap in capacity, and I think the AI front ends to search have at least a significant possibility of taking some serious market share from Google unless they respond. So far, they’ve been totally lame.

The only one worse is Microsoft. At first, their AI chat front end for Bing was okay. I went and tried Copilot today, and it totally sucked. It was amazingly bad. I don’t know what the hell they’re doing up there in Redmond these days. So you guys have some opportunity if you hustle.

Aravind: Yeah. So you said the word “hustle,” right? Hustle is usually associated with a startup, not with a big company. I remember going to Redmond and meeting the Bing folks very early on in our journey, and I think they had just launched Bing Chat. They were like, “This is the hardest we’ve ever worked in the last 10 years.” And that’s when I knew: look, if it’s all about this amount of work, then sustaining the hustle over a long period of time is easier for a startup to do than for a big company to do because, simply, you don’t have the muscle.

One or two months you could grind, but can you grind for five years? That’s the question, right? And then the incentives are not there. For a big company, shipping a new product doesn’t necessarily move the market cap. For a startup, it’s your only product. You keep on improving it, you keep getting new users, you keep seeing the metrics go up, you figure out monetization models, new models, new business models. Everything’s exciting, everything’s new, everything’s growing. And then you feel the adrenaline, and that helps you do the grind for multiple years instead of one or two months, and that’s what gives you the product edge and the user experience edge over a big company.

Jim: Absolutely. And I’ve sat on both sides of that line. I’ve been a C-level executive at multinational corporations, CEO of publicly traded companies. I’ve also done five startups, and I’ve also advised 17 startups. So I absolutely get your take. In fact, it’s amazing that big business survives at all. It’s only because of its scale economies. Its actual productivity per person is way lower than a startup’s, and that’s real important. That actually is a good pivot to my next question.

One of the things that I’ve been digging into, doing research or at least finding what people are saying about Perplexity, and it may not be true, right? Is that you guys have been pretty smart in leveraging things that already exist out in the world versus reinventing everything. On the other hand, it sounds like you have reinvented a few things. So why don’t you talk a little bit about how you’ve decided what to use from the world versus what to build yourself? Always a key question in a startup.

Aravind: Yeah, I mean, the way I think about it is: get users. That’s the number one thing. Anything that delays getting users by trying to reinvent things that already exist is not in service of your mission, not in service of your company’s business and brand and distribution. You told me, right? Search doesn’t have moats. I think there’s one moat, which is brand and distribution, and of course, technology, the latency, the scale at which you can operate, the infrastructure, attracting high-quality employees, the revenue, all that’s part of it. But everything is in service of distribution.

When you are a startup and you’re irrelevant and nobody even knows about you, even if you have a cool demo that you built completely on your own, why should anyone care? So we always were pragmatic in that sense. Even though we had a pretty good background on AI research, we never really felt like we had to go train our own models or we had to go build the entire index ourselves. It’s all about showing users the completely new experience.

What is the company even building? What is the core product experience? And what is the thesis that this experience is going to be the next generation way people search? How do you validate that? That is your only job. And after you validate that, and after you get a certain amount of users, and after you figure out a way to monetize reasonably, then try to start building stuff on your own and taking advantage of all the data you’re collecting on a daily basis. And try to see if there’s some unique models or unique orchestration, unique indexing or ranking that you can do to make your product better. That was the approach we took, and it served us pretty well.

Jim: Yeah, this is what I read off the internet, including stuff Perplexity gave me. It sounds like you use various large language models; GPT, Claude, Llama, and Mixtral are ones that I saw mentioned for you guys. I also saw that you used both Bing and Google as your index engines. Any other outside main pieces that you use? Are those correct, by the way?

Aravind: Yeah, so we use a bunch of ranking signals from many search providers. We build our own index, actually, but we also rely on ranking signals from a bunch of data providers. And then we also rely on third-party data providers for certain web domains that we don’t scrape ourselves, we don’t crawl ourselves, which only gives us the high-level summary snippet and the metadata related to the URL, but not the actual contents.

And we use a bunch of LLMs, open-source ones. We take them, and we train on top of them to customize them for our product, which basically means making them better at summarization, formatting, conciseness, citations, referencing, long context, and all those kinds of abilities. And the closed-source ones, we just take as they are and do a little bit of custom post-training with our data, but a lot of prompt engineering.

And we build routers and orchestration across all these sources of data, models, and APIs for specific licensed data, third-party data providers, ranking signals. Everything basically goes into this giant orchestration router and then produces the end result for the user.
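
A toy sketch of that routing idea, purely illustrative and using made-up handlers rather than Perplexity’s real components: classify the query, send it to the appropriate data source or model, and return the result.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    name: str
    matches: Callable[[str], bool]   # does this route apply to the query?
    handler: Callable[[str], str]    # produce an answer fragment for the query

# Hypothetical handlers standing in for licensed finance data, web index + LLM, etc.
def finance_handler(q: str) -> str:
    return f"(answer built from licensed finance data for: {q})"

def general_handler(q: str) -> str:
    return f"(answer built from web index + LLM summary for: {q})"

ROUTES = [
    Route("finance", lambda q: any(w in q.lower() for w in ("stock", "ticker", "earnings")), finance_handler),
    Route("general", lambda q: True, general_handler),  # catch-all fallback
]

def orchestrate(query: str) -> str:
    # First matching route wins; a real router would score routes and merge results.
    for route in ROUTES:
        if route.matches(query):
            return route.handler(query)
    raise RuntimeError("no route matched")

print(orchestrate("Compare the stock returns of Berkshire Hathaway and the S&P 500"))
```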

Jim: Yeah, that’s quite interesting. I like those kinds of businesses because, essentially, you’re rewarded for the skill at which you combine the pieces and then identifying what pieces you can improve or what little shims you can put in that the market’s not providing.

Aravind: Yeah, exactly.

Jim: Talk to me a little bit about how you do this orchestration, because clearly there are some dynamic decisions to be made as you’re choosing what sources to use, how to weigh them, how to prioritize them, and then how to cook them. Talk a little bit about that engine, the orchestration engine.

Aravind: Yeah, 100%. So there are a lot of things you do at the retrieval stage itself, which is depending on the user’s query, you try to pick the most relevant top 10 or 20 different links, and then even that has multiple stages of ranking. First thing is just based on query word matches, which is traditional retrieval, TF-IDF style retrieval, and then that goes through n-gram overlap as a second stage. And then that goes through embedding-based similarity, which is more semantic and fine-grained, that actually takes the contextual meaning of the documents, and then picks 10 or 20 different paragraphs across a bunch of links as the most relevant paragraphs to answer the user’s query.

And then all those paragraphs, with the corresponding URLs, go into the LLM, and the LLM decides, in the context of the user’s query, how to piece together all these different sources and write a concise or detailed answer, depending on the intent of the user, that’s well-formatted, easily readable, and well-referenced to each of those paragraphs retrieved. And then we collect a lot of data on which domains and subdomains are worthy of citations in terms of how much people trust those domains, some notion of page rank, essentially. That’s also used to influence the ranking. And then the final answer is rendered in the product.
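
To make the coarse-to-fine idea concrete, here is a small two-stage ranking sketch, an editor’s illustration rather than Perplexity’s actual code; the libraries and the embedding model named here are assumptions. A cheap TF-IDF pass keeps a broad candidate set, and an embedding-based rerank pushes the most relevant passages to the top before they are handed to the LLM.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer, util

def staged_rank(query: str, paragraphs: list[str], coarse_k: int = 50, fine_k: int = 10) -> list[str]:
    # Stage 1: cheap keyword/TF-IDF similarity, kept broad so the right passage
    # is almost certainly in the candidate set (recall-oriented).
    vec = TfidfVectorizer().fit(paragraphs + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(paragraphs))[0]
    candidates = sorted(range(len(paragraphs)), key=lambda i: -scores[i])[:coarse_k]

    # Stage 2: semantic rerank with a sentence-embedding model to push the best
    # passages to the top (precision-oriented). Model name is just an example.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    q_emb = model.encode(query, convert_to_tensor=True)
    p_emb = model.encode([paragraphs[i] for i in candidates], convert_to_tensor=True)
    sims = util.cos_sim(q_emb, p_emb)[0].tolist()
    reranked = sorted(zip(candidates, sims), key=lambda t: -t[1])[:fine_k]

    # The winning paragraphs (with their URLs, in the real system) then go to the
    # LLM, which composes the cited answer.
    return [paragraphs[i] for i, _ in reranked]
```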

And we collect a lot of data in terms of user feedback on answers. We go and debug whether the answer is corrupted because the AI model itself made a mistake but the sources were all fine. Or the AI model was not really at fault, but it was because of poor sourcing, either an incorrect source, or the index was not fresh but the source was fine. Or there were two or three different sources saying different pieces of information about the same thing, and the AI model wasn’t smart enough to figure out which is right and which is wrong.

So all these long-tail issues exist, and we try to really understand them based on a lot of user logs and complaints from people on a daily basis, and then try to go and improve the product at different parts of the stack in a general way so that we address thousands of bugs in a day. And then you compound this. Each day, we learn something new about where the product fails. As you said, it’s only 9 out of 10. It used to be 7 out of 10, then 8 out of 10, then 9 out of 10, eventually 99 out of 100, then 990 out of 1,000. The error rate should go down in an exponential way. And that basically allows us to fix the long tail of issues.

AI is all about… It starts as the 80/20, right? You create a completely new, unique experience that wasn’t possible before. You achieve a certain sweet spot, the 80 in the 80/20, that gets users pretty excited about it, but the grind through the long tail of the remaining 20% is what differentiates the great products from the good products. And that’s basically our journey today, to climb the last mile.

Jim: That makes a hell of a lot of sense. Now, as part of this retrieval process, are you using semantic vector databases to do embeddings on pieces of content for your RAG?

Aravind: Yeah, we use a vector database, an open-source project called Qdrant, and then we run some custom versions on top of that. And I should say that vector databases are misunderstood. Like I said, there are three or four stages in the ranking process, and the traditional elastic-style keyword matches and n-gram overlaps, the TF-IDF-based ranking, all these things do a lot of the heavy lifting already. You already get a really good high-recall set of documents. The precision is lower, but the recall is pretty good. And then the vector-embedding-based similarity score optimizes the precision even more. And then the LLM finally optimizes the precision the most because it’s extremely fine-grained in terms of trying to decide which sources to use for writing the answer.

So basically, think about it this way: search is all about making sure your results have high recall and high precision. Recall basically means the right document is in the set of documents you retrieve, but not necessarily ranked right at the top. Precision means the right document is ranked right at the top, so even the first two or three links contain what you want.

And for a 10-blue-link search engine, you really need to nail precision, otherwise the experience is going to be really bad. But for an LLM-based answer engine, the traditional-style retrieval can just optimize for recall. Of course, not poor precision, but reasonably good precision is enough, and the LLM that ingests the final set of 10 or 20 paragraphs can weed out the irrelevant ones way better than ranking trained on user click logs. So that’s the major advantage over the legacy system that Google has built.
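
A tiny worked example of those two definitions, using made-up document IDs. High recall with only moderate precision is fine for an answer engine because the LLM does the final weeding; a 10-blue-link page gets no such second chance.

```python
def recall_at_k(relevant: set[str], retrieved: list[str], k: int) -> float:
    # Fraction of the truly relevant documents that show up anywhere in the top k.
    return len(relevant & set(retrieved[:k])) / len(relevant)

def precision_at_k(relevant: set[str], retrieved: list[str], k: int) -> float:
    # Fraction of the top k that are actually relevant.
    return len(relevant & set(retrieved[:k])) / k

relevant = {"doc_a", "doc_b"}
retrieved = ["doc_x", "doc_a", "doc_y", "doc_b", "doc_z"]

print(recall_at_k(relevant, retrieved, 5))     # 1.0  -> both right docs are in the top 5
print(precision_at_k(relevant, retrieved, 5))  # 0.4  -> but most of the top 5 is noise
```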

Jim: And also the legacy system that OpenAI runs, right? They’re just the LLM. Well, with some other stuff on top, but not that much.

Aravind: Exactly. So they build the AI. They definitely are the pioneers of making AI very good conversationalists.

Jim: One thing I’m going to insert here, this is one of my pet peeves. More and more people are saying AI when they actually mean LLMs or generative AI, and AI is a much bigger field than just deep learning and LLMs. So let’s be really clear here that we’re not talking about AI in general. We’re talking about this particular transformer-based generative AI, which is where all the action is right now, but it may not be where all the action is forever.

Aravind: Yeah, exactly. So, yeah, call them LLMs. LLMs are a completely fine thing to call them. My feeling is that what you call AI is probably the purest sense of it. You think about AI as agents that go and act, the AI in game engines that’s part of a game, the AI that you write for the ghosts in Pac-Man when you’re trying to build a new game. Or when you’re trying to build a chess game, you’re writing software to build a chess simulator, and you have to make an AI for the opponent, right?

So that’s what computer scientists traditionally thought about as AI, and that’s why a lot of the milestones in AI were all about “Can I train an AI to beat the best human player at chess or the best human player at Go?” That used to be the thing, Atari and so on. And now, anything that automates some kind of human activity is an AI. That’s the working definition these days.

So, in some sense, calling LLMs AI is not completely wrong either, because you can argue that LLMs take over a lot of the heavy lifting from translators, people who take notes in a meeting, people who reformat meeting notes into actionable items for whoever attended it, people who do grammar correction, speechwriters. All these basic tasks are getting automated with just a single LLM call. And I think you can say to some extent that’s an AI, but I see what you’re trying to say. It’s a pet peeve, and totally, it’s fine to say that.

Jim: Yeah, it’s just one of my pet peeves. I’ve been working with and advising AI companies for a while, particularly some of the AGI-oriented things, and some of the other approaches still have a lot of merit. LLMs will be part of the solution, I’m convinced, because they solve, as you point out, a huge raft of problems which even four years ago we thought were 10 years away from being solved, particularly the language problem. But there are other problems. For instance, there’s a reason that LLMs suck at doing math, and they’re never going to be good at math by themselves, while other kinds of systems will be great at math.

Working back and forth, using the LLMs to figure out what the math should solve and then using a solver AI to actually solve the math, that’s a question for another day. But they are very impressive. I use them all the time for all kinds of stuff. I wrote a program last year that writes movie scripts using LLMs. It has 40 different places where the human touches the script, but the human can touch it for just 30 seconds or a minute, and then the LLMs go off and do their thing.

And in about eight hours of human time, you can write a 90-minute feature movie, which is cool and could not have been done without LLMs. And it seems like magic, but it’s still not the full AGI, the human-capable system, but it’ll probably be an important part of that system.

I’m just going to mention one other thing I did with Perplexity, and I think I’m going to actually… This is funny, I don’t think it’s illegal, but anyway, it might be dangerous, so don’t do this at home, children. I’ve long had this investment idea, a rather complicated one, that’s essentially like a little hedge fund. And so I described it as precisely as I could to Perplexity and said, “All right, choose the investments to execute this strategy.” And it did, and they were very sensible. I’m going to go ahead and test them. I’m going to put together a little mini hedge fund, put $50,000 in it or something, and test the strategy and see what happens.

And if it works, if it doesn’t… It’s a highly hedged strategy, so it can’t go, no shit, too far one way or the other. But if it works, I’m just going to keep running that query once a month and adjust the weights in the portfolio based on what Perplexity has to say, on the grounds that Perplexity has some big sense of what’s happening now, more so than anybody else, and it seems to be perfectly capable of understanding this rather complicated strategy. And to the point of not being afraid, I’m sure if I asked OpenAI, they’d say, “Oh, I can’t give investment advice. I’m not a registered investment advisor, meh, meh, meh, meh.” But Perplexity was perfectly willing to put together a model portfolio that correctly implements the strategy.

Aravind: I mean, look, Perplexity is not a financial advisor, but it’s such a great tool to do your financial research. And I encourage everybody to not blindly listen to someone else’s advice on what to invest in, and to go and study all these stocks. Earlier, you would go and pay a market analyst to do this for you. Now it’s just one Perplexity query to actually understand, “Oh, should I go and invest in NVIDIA? I don’t quite get it. Has everything been priced in already? What about the Blackwell chip delays? How is the demand for training GPUs? Who’s NVIDIA’s competitor right now? Does it still have no competitors? What’s the market five years from now? How does this impact Amazon Web Services revenue? How do the margins of NVIDIA ever get squeezed? Who’s going to likely do it?”

All these kinds of things are how I try to understand things that I don’t have exposure to or I don’t have access to an expert on that topic. And it’s pretty amazing the amount of knowledge you can gather in a few queries of Perplexity. It’s pretty incredible, and that’s why the product has been built so that anybody can ask any question without any fear of getting judged. I feel like it’s a pretty incredible superpower now.

Jim: Yeah, let me give you one thing that maybe you can improve on because I use it all the time to provide useful summaries of things that I need for my thinking about something else. A piece part. If thinking is chunking, being able to get these pre-processed chunks quicker and better is really huge as opposed to reading a 500-page book. So for instance, the other day, in an online argument, I needed to have a better understanding than I did of Hegel’s dialectic. It’s a philosophical thing from the German Romantic era in the 19th century.

And so today, I reran the query, and Perplexity did a pretty good job. ChatGPT 4.0 did a considerably better job. Bing Copilot totally sucked, and I used the identical query. Where ChatGPT was better was it gave quite a bit more detail. Even though the prompt was “provide a detailed explanation of Hegel’s dialectic,” it might well be good for you all to have some form of slider or a second method to get more detail when someone… And I know you have to take a shot at what they mean when they say “detailed.” But in my case, at least, ChatGPT was closer in reading what “detailed” meant to me. Do you have any thoughts about that?

Aravind: You mean like Perplexity answers getting more detail at certain things?

Jim: Yeah, being able to get more detail out of Perplexity than you currently can. Here was the prompt I used, “Provide a detailed explanation of Hegel’s dialectic,” and OpenAI gave about twice the density of information as Perplexity.

Aravind: Did you do this query on your mobile app or on the web?

Jim: On the web.

Aravind: Okay. Yeah, so I think… I just tried your thing again. For me, it seems pretty detailed. I guess the difference could have been a particular A/B test we might be doing, trying to see if some people prefer a concise answer or a detailed answer. But clearly, if your prompt contains “detailed,” there’s no reason not to provide that for you. Sometimes we do bias towards not outputting too much text in the mobile apps because people have limited screen space and limited attention span, so we try to optimize.

Jim: I hardly ever use mobile apps. I hate mobile apps.

Aravind: Interesting.

Jim: The app was the Empire striking back against the real internet. And so, of course, I have to use them when I’m traveling, and they’re handy for certain things like Uber. You can’t live without your Uber. But I would never… I shouldn’t say never. Sometimes when I’m sitting there reading a book, I’ll use Perplexity on my phone, but 90% of the time I’ll use it on the web. Do you guys have an API? If you had an API, I’d start building it into some of my little PC tools. That would be real cool.

Aravind: Yeah, we do have an API.
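
For reference, a minimal sketch of calling it. The base URL comes from Perplexity’s public API docs, which expose an OpenAI-compatible chat endpoint; the model name below is a placeholder, so check the current documentation for valid model identifiers.

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",      # generated in your Perplexity account settings
    base_url="https://api.perplexity.ai",   # OpenAI-compatible endpoint per Perplexity's docs
)

resp = client.chat.completions.create(
    model="sonar",  # placeholder; see Perplexity's docs for current model names
    messages=[{"role": "user", "content": "Provide a detailed explanation of Hegel's dialectic."}],
)
print(resp.choices[0].message.content)
```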

Jim: I’ll have to check that out. One of the questions we passed on previously, which I’d love to get your perspective on, was that the moats for search seem relatively low. And the one I missed when I didn’t invest in Google at the IPO, goddammit, was that having seen all the queries, they see now maybe 85% of the queries, that in itself is a sustainable competitive advantage. Not a huge one, but it’s not zero either. What kind of a moat do you see Perplexity being able to develop over time?

Aravind: I think just the ability to handle so many different types of queries, so many different custom UIs per query, learning how to monetize a good chunk of them, and maintaining the user experience, even at the scale of usage we’re potentially going to scale to, is going to be very, very hard and challenging. So the infrastructure needed for that, the considerations you need to have for correctness and speed and readability, orchestrating so many different tools and data sources, this is just going to be pretty difficult.

I can totally see hundreds of things or thousands of things that can go wrong. And just handling the dynamic space of AI models constantly changing in capabilities, accuracy, and latency is going to be pretty challenging. So I feel like if we surmount that challenge, that in itself would be our moat. The distribution that we accumulate as a result would be pretty tremendous.

Jim: And you will have some first-mover advantage if you break through to being the thing that everybody thinks of when they want a better search than Google. Then you’ll have mindshare and room on people’s app pages. Goddamn apps. Hate apps. But that indeed does become a moat, essentially a channel moat, where there’s a constriction on the channel. People can only think about so many search engines, and if you’re in the top three of mindshare, then you’ve got something pretty significant.

Aravind: Yeah.

Jim: Now, you mentioned something else in passing, which I did want to drill into, which is monetization. Did I actually see ads on one of my accounts recently? Are you guys testing ads or running ads?

Aravind: We are not running ads yet, but we intend to run ads.

Jim: I figured; some of the citations are packaged up to look like ads. And I wanted to give another one of my famous Ruttian opinions. I hate fucking advertising. I think it’s the ruination of the internet. And around 2002, when Moore’s law made networking and computation inexpensive enough that you could actually fund a pretty good-sized site with ads alone, that was the ruination of the internet right there. And I know you guys need a monetization model, but I do ask you very sincerely… Always offer a paid tier with no ads because there’s nothing I hate worse than ads.

Aravind: Yeah, a hundred percent.

Jim: For the free product, yeah, you’ve got to monetize it somehow, but you should give people the opportunity to preserve their cognitive sovereignty. I gave a talk on this recently at Zoo Village, Georgia, yeah, probably in Georgia, on one of the most important things, particularly for people who think of themselves as creative thinkers, which is to keep your cognitive sovereignty so others don’t hijack your brain. And everything about our world today wants to hijack your brain, and hijacking your brain in almost pure form is what advertising is. What else do you think is going to be in your mix for monetization?

Aravind: Enterprise, allowing you to search over your company’s files and data, and not just exclusively there; a mix of both internal and external data. I think that’s never been done before, because the kind of 10-blue-link search engine you build for the web and the kind you build for internal data are pretty different. So people could never build both in one place, but LLMs allow you to unify that in a pretty neat way because most of the heavy lifting is done by the LLM.

So I think we can build this out as the next-generation knowledge work platform for research, and that has a huge market. Several hundred million people are doing that kind of work on a daily basis in so many different parts of the world, and they could all become daily active users of such a product. I think the TAM is pretty tremendous there. So that is a clean monetization opportunity for us.

And we’re also going to do more of the API stuff, allowing people to build Perplexity-like things for specific verticals or things that they really care about. Are there some domains on the internet that you really want the results from, where you put in the effort to pick those and then rely on Perplexity’s capabilities to just stitch together a cool chat experience? And then we want to be part of many hardware devices. If there are people building new hardware devices, we would support them through our software to make really good voice-to-voice capabilities on them. So these are the current plans, and we are pretty nimble and flexible to keep experimenting here.

Jim: How is the pro version, the premium paid product, doing? Because again, one of my pet peeves, I know the horse has long since left the barn. But if people were willing to pay small amounts of money for real things and not be attacked by ads, the world would be better off. Unfortunately, psychologically, people prefer free. Are you seeing much take-up on your pro product?

Aravind: Yeah, a hundred percent. Several hundred thousand users are paying for it today.

Jim: Oh, that’s good. I hope that increases because I think it sends a different incentive to the company than an advertising-based model where you end up like Facebook, who ended up being totally corrupted because the only metric for them is how many times they can keep your eyeballs glued to the screen. When in reality, something like Perplexity, its biggest advantage is to get me on and off four times faster than Google because I don’t have to repeat the search. I don’t have to filter through a bunch of search spam crapola that turned up in the search, et cetera. And so, in some sense, advertising is potentially a corrupting attractor for a company like Perplexity. So keep that in mind as you guys are going forward.

Aravind: A hundred percent, a hundred percent.

Jim: Talk a little bit about what you need to succeed. Now, this may not be up-to-date, but this is what Perplexity said: your most recent funding round was in April, and you had a valuation of about $1 billion. Is that right?

Aravind: Yeah, that’s right.

Jim: And you raised $62 million, something like that. That doesn’t seem to be enough to fight Google.

Aravind: No, no. We raised other funding rounds before that too. We raised, like, a Series B before that of $70 million. And then before that, we raised another $20 million, and before that, we raised a seed round of $1 or $2 million. So we’ve raised more rounds than that. So certainly, we have a good amount of money, and I agree, it’s all a drop in the ocean for Google. Google has $100 billion in liquid cash lying in their bank, and they’re generating more profits every year.

So we will keep raising capital, at least one or two more financing rounds, before going public. And I think it’s all about making sure that the company’s healthy, there’s a lot of usage, there’s a good amount of revenue traction, and making sure that whatever amount we raise, we know exactly what to do with it in terms of translating it to the right growth and metrics. So that’s what we are really focused on right now.

Jim: Good. Sounds good. When I looked at your list of investors, it was quite surprising in that it was dominated mostly by individuals rather than institutions. I always considered entrepreneurial finance to be one of my sweet spots, and I’ve always been very interested in that field. I learned about it from Bill Sahlman, who was for a long time the lead professor of entrepreneurial finance at the Harvard Business School. And by mere luck, I got him on the board of my first company, so he gave me a private tutorial on entrepreneurial finance. How did you come to select having your lead investor and some of your other major investors be individuals rather than institutions?

Aravind: I mean, our rounds were always led by institutions. The lead investor, in the sense of the one who sets the price and anchors the round, has always been institutional. Series A was done by NEA, Series B was done by IVP, and Series C was anchored by, I mean, that one was slightly different, Daniel Gross, but he’s like an institutional investor given the amount of capital he allocates. So it’s not been completely individuals, but individuals have been a big part of our funding rounds too, along with institutions. NVIDIA is an investor, they’re an institution. Bezos is an investor, he’s an individual, but again, Bezos has so much money that-

Jim: You might as well be an institution, right?

Aravind: Exactly, right. So it’s all about what value they add to our company, their reputation, and the kind of introductions they can make for us, and the strategic advice they can provide for us, and collaborations.

Jim: Now, IVP, is that Insight Venture Partners, or that’s some other IVP?

Aravind: Institutional Venture Partners.

Jim: Okay. I don’t know them. Normally, Insight does a little bit later rounds. Your next round, they would be… They’re a really good company. NEA, of course, also a very nice company.

Aravind: Sure.

Jim: I’m going to give you my one complaint about Perplexity.

Aravind: Sure.

Jim: It’s so annoying that it’s unbelievable, and it’s no doubt easy to fix. At least once every other day, when I go to one of my Perplexity windows, and I have them up all over the place on my browser, I carefully type in a prompt, hit return, and nothing happens, but it eats the prompt. This must have happened to me 25 times over the last two months. There is something wrong with something going on there that should not be the user experience. It happened yesterday, and so I know it’s still current. OpenAI, on the other hand, occasionally will get screwed up and say you have to log in again, but it’ll never have you enter a prompt and then eat it and do nothing. Perplexity does that, not often, but regularly. I would send that note with a bullet to your team and say, “Fix that, goddammit, because it is truly annoying.”

Aravind: Okay, I’ll take a look. I’ll take a look.

Jim: And it’s on the web version. I’ve only seen it on the web, but it happens regularly. It happened as recently as yesterday. I tried to use your image generator. I mean, it just seemed to be total garbage. I must not have understood it, so don’t take that one to the bank yet. But if I spend enough time figuring out how to use it and it’s still not pleasant, I’ll send you my thoughts about that. There’s rumors running around that NVIDIA is going to buy you guys. You care to comment on that?

Aravind: I didn’t see this rumor anywhere. Where do you see it?

Jim: Type it into Perplexity: “Who’s talking about NVIDIA buying Perplexity?”

Aravind: Okay.

Jim: You’ll probably get some. I’ve seen it floating around on Twitter.

Aravind: Okay. No, no. I can confirm that it’s not real. They are investors in us. Jensen loves using Perplexity, and I think a lot of NVIDIA employees are daily active users of Perplexity, and we collaborate a lot on frameworks and software like TensorRT-LLM for inference. We use their Nemotron framework for training, but there are no actual acquisition talks.

Jim: And truthfully, putting on my strategist hat, it would be a stupid idea for NVIDIA because, in general, you do not want to get in the business of competing with your customers.

Aravind: Right. Right. I mean, NVIDIA is an infrastructure and platform company.

Jim: It’s always tempting to sneak into one of the markets, but it’s a bad strategic idea, and I think those guys are smart enough that they probably aren’t going to do anything as stupid as that. So finally, what do you see for the next year or two for Perplexity? What’s on your plate of things that you need to do over the next two years?

Aravind: Keep growing, keep the product amazing. Keep it fast, keep it accurate. Expand the use cases beyond just simple Q&A and fact checks and knowledge-related research to a lot of different verticals. Allow people to do a lot of their day-to-day activities here, even transact natively on Perplexity, and make the product even more personalized to people, almost like it should be their second brain. So if we do all this basic stuff, get it right, and translate that to good business and enterprise value, then we’ll be very successful. And that’s the only thing I’m laser-focused on.

Jim: That sounds sensible at your stage. Eventually, you’ll have to play a higher-level strategy game, but if you just execute the best you can for the next two years, that’s probably all you really need to do. I will tell you, on the edge of what the thing is useful for, just for giggles, I tried it out on a coding problem, even though it’s not what it’s designed for; I usually use either Claude or the new o1-preview from OpenAI, which are amazing for coding. And I tried out Perplexity, and it didn’t do a terrible job, surprisingly, because I know it’s not designed for that, but it probably wouldn’t take too much work to get it to be a decent coding assistant.

Aravind: Yeah. Yeah. I mean, look, a lot of people do use Perplexity for writing code, and I think it’s just pretty good at scouring pages of documentation on the web and trying to give you code that takes into account the latest developer libraries and updates and stuff, whereas the models on their own are not up-to-date. And some people prefer that for debugging a specific issue related to a specific library or figuring out exactly how to chain two dependencies. I think these are the things that people come to Perplexity for with their coding-related questions.

I feel the coding market is really bifurcated. A lot of people like their AI natively in their editor, VS Code, GitHub Copilot. Some people go to ChatGPT because it’s like muscle memory now, because that was the first big use case ChatGPT nailed, and it has the first-mover advantage. Some people like Claude because you can build an artifact out of the code you generate and visualize the code in terms of an actual front end for it.

And some people like Perplexity because it has access to documentation and libraries, and some people prefer that. So I feel like there’s no one winner-takes-all here in coding. People are going to go to different tools depending on what suits their current use case at that moment.

Jim: Very good. I want to thank Aravind for an extremely interesting conversation, and despite my little quibbles about this or that, all my listeners out there, use Perplexity. It is great. You will not regret it. If you do, you can get your money back from me, right? I really just think Perplexity is the most amazing product jump in capacity of any category I’ve seen in the last year. And I know you guys existed before that, but I only started using it maybe six or eight months ago. I just think it’s amazing. You guys are doing a wonderful job. Get rid of that prompt-eating problem, and you’ll have an A-plus product, in my opinion.

Aravind: Thank you, Jim. Thank you so much.

Jim: Bye-bye.