‘You don’t need to destroy Google. All you need to destroy is their monopoly.’

Srinivas believes Google will begin losing significant high-value traffic, and Google’s size and shareholder expectations will hamper its ability to innovate efficiently in the realm of search engine technology.

With over 1 million users in India, accounting for 10% of its global user base, Perplexity is also planning to open an office in India soon, with plans for a research and development (R&D) centre in the future.

Since its inception in August 2022, the company has raised $100 million from the likes of Amazon founder Jeff Bezos and Nvidia. In its last funding round, Perplexity was valued at $520 million.

In a video interview from his California office, the IIT-Madras alumnus also shared his company’s vision, learnings from his stints at DeepMind and OpenAI, his business plan, and thoughts on the evolution of search engines that are being challenged by AI answer bots. Edited excerpts:

What inspired you to start Perplexity?

Ever since I came to the US (2017), I wanted to be an entrepreneur, but did not know what type of company to start. I was an academic, and all the examples I knew of were from TV shows on Silicon Valley, or movies like ‘The Social Network’, which showed undergraduate dropouts like Mark Zuckerberg, Steve Jobs, and Bill Gates (starting their own companies). It was very inspiring, but I also felt that this was a closed chapter for me because I had already got my undergraduate degree, and so could not qualify as an undergrad dropout. But when I was an intern at Google DeepMind, I read books like ‘In The Plex: How Google Thinks, Works, and Shapes Our Lives’, and connected with the story of how Larry (Page) and Sergey (Brin) founded Google during their Ph.D. days, transitioning from an academic life to building a company, which is very rare. That resonated a lot with me—it was a proof point. And that made me madly passionate about search. But never did I dream that I would build a company working on search.

How did you get in touch with your co-founders to build Perplexity?

I knew Denis (Yarats) from my Ph.D. days. We wrote similar papers and released them just a day apart. Denis became a visiting student in my lab, and it’s here that we brainstormed many ideas. He brought in the other co-founder, Johnny (Ho), whom he knew because they worked together at Quora. We are very complementary people (co-founders – Yarats, Ho, and Andy Konwinski). Denis is a strong machine learning (ML) engineer and has worked at Microsoft (Bing) and Quora. Johnny is a very solid software engineer—he was a competitive programmer. And I’m very good at AI, at thinking about vision and products, at motivating employees, and at evangelizing the company, which makes me a natural fit for the CEO job.

Denis became the CTO and Johnny became, I would say, the chief architect of the product. It also helps that I’m very technical, unlike a CEO in a more business-facing role like Sam Altman. I can actually dig deep and understand engineering issues. So, I can add more intensity to the engineering processes by talking to the engineers—this is a trait I picked up from people like Elon Musk, and even Larry used to be like this when he was Google’s CEO.

So, how did you narrow down on search?

We initially wanted to just build generative AI (GenAI) products. We approached the narrow problem—searching over databases. Our first angel investor, Elad Gil, a former Googler himself (he started Google’s mobile team and worked on AdSense), suggested we explore what it would be like to run search over a SQL-based database. But we were passionate about search, so we built cool demos where we could search over Twitter (now rebranded as ‘X’), LinkedIn, etc. That elicited a lot of interest, even from people like Jeff Dean (now Alphabet chief scientist), who invested in our company (in his personal capacity) after seeing our Twitter demo.

What did you learn from your stints at OpenAI and DeepMind? What are you doing differently now?

I learned a lot from them, but I’m not doing many things very differently. Most of the things these guys like Altman (Sam Altman, CEO of OpenAI) or Demis (Demis Hassabis, CEO and co-founder of DeepMind) are doing are right. What I copy from them is the sense of urgency, being obsessed, moving really fast, focusing on signal over noise, and things like that. I would say the difference from Google is that we don’t have a bureaucratic management—just a few decision-makers. We also move fast and break things, like (Mark) Zuckerberg’s culture, as opposed to Google’s, which likes to think things through carefully.

We are also more product-focused than OpenAI, which did build the most successful consumer product in recent times—ChatGPT. But that was not by choice—they just happened to win. That’s the main difference for us—more product focus and consumer focus, as opposed to building a general technology and hoping people will use it in interesting ways without actually designing the product around users. We are also smaller (around 45 employees and 10 million users) and can move faster.

Our disadvantage is that we don’t have leading researchers to do the cutting-edge model development that OpenAI and Google can do, because they have the best researchers working there. They also have large compute in terms of GPUs (graphics processing units).

This also implies, as you acknowledge, that taking on the might of Google is a formidable task, especially because Google itself is using AI to continuously improve its search capabilities. How do you hope to stay competitive, since your company, too, is a kind of wrapper built on the GPT-3.5 and GPT-4 foundation models?

Even though Google has to keep innovating on search, they may no longer be the best-positioned company to do that, mainly because they have a market capitalisation of more than a trillion dollars to protect (Google parent Alphabet’s current market capitalisation is around $1.8 trillion). They also have to work around their ad business model to create a new user experience where people don’t care about ads or SEO (search engine optimisation), but directly seek answers that save them time and help them make better and faster decisions to improve their overall lives. That is what people are seeking, and that’s what they (Google) need to build. But it’s easier to build (this new model) if you have no business model to protect and are starting from zero, which we are.

Even a billion-dollar drop in Google’s ad revenue is going to upset Wall Street. It is very clear that they’re bundling their AI offering with Google One (a subscription plan with more storage across Google Drive, Gmail, and Google Photos) to grow their subscription revenue. But that revenue is tiny compared to Google’s overall ad revenue (about 78% of Alphabet’s total revenue) and generates lower margins, and Wall Street will not care, because in the subscription business you have a lot of competition from OpenAI and Microsoft, and companies like ours.

So (investors will reason), your whole position as a dominant monopoly is over and, therefore, I’m not going to invest in you alone as an AI stock anymore. In the future, people won’t need to click on ads since they will get their information from AI answer bots—and in this new economy of answer bots, they (Google) won’t have a monopoly position anymore, but will just be one of the dominant players, which will impact market perception. And the moment the stock price starts going down a little, institutional investors will hit the panic button and start selling and diversifying by buying other AI stocks like Microsoft, Meta, or Nvidia, or try to invest in OpenAI—and then employees also start panicking, because compensation is tied to the company stock.

Are you implying that the ad-driven search revenue model will eventually not be sustainable?

There will definitely be disruptions there. But it will happen in a way that’s very different from what people have been expecting so far—that somebody will come and take market share from Google, and that instead of a 95% market share, they will drop to 80%, which means advertising revenue moves to somebody else. The way it will actually happen is that, out of that 95%, Google will still have 90-94% market share. But what they lose will be a lot of the high-value traffic—from people who live in high-GDP countries and earn a lot of money, who value their time and are willing to pay for a service that helps them with the task.

The interesting conundrum is that these are the very users that advertisers also want to target. Hence, when these users choose another platform, even though the majority of the volume remains on the existing platform (in this case, Google), the value of that platform to the advertiser falls. And others will slowly follow suit. It’s not going to happen in one day, or even one year. But over time, the high-value traffic will slowly go elsewhere. Navigational traffic will remain on Google, but the value of that traffic is really low. So Google will become like a legacy platform that supports a lot of navigation services. They’re aware of this, and that’s why they’re building new platforms themselves, like Gemini. But they should use their existing user base directly, and not build a user base from scratch again. Otherwise, others can also build a big user base in the new sector of AI-powered information discovery, and they (Google) won’t be the monopoly. You do not need to disrupt (Google) or destroy them. All you need to destroy is their monopoly.

How do you see the future of search and AI evolving? What role do you see Perplexity playing in the future?

Think about a line—on the left is a pure navigational, link-based search engine, and on the right is an answer engine like Perplexity, which gives you answers instead of links. The sweet spot is somewhere in the middle, but more towards the right. It’s not like you want answers every time—you may sometimes still want to go to a website. I believe that for those use cases, you will still want to retain an interface where you do get links. The question is how many links you want to show. I believe the right interface is more like ours, where there are links and answers and you pick what you want. The speed at which we render the links and the answer is so amazing that you don’t even feel the need for an existing search engine.

On the other hand, what we are trying to do is go completely to an answer-first interface. For Google, that is more work because of their business model. So we think we probably have an edge here in terms of how fast we can execute—precisely because we don’t have a business model to protect. The ideal sweet spot will be more like what we’re doing, with support for navigation but largely answer-centric, even if it doesn’t become the default experience for most user needs.

Companies like OpenAI and Microsoft are facing copyright suits over the scraping of data (text, images, etc.) by their respective LLMs. How do you ensure the authenticity of your answers and links?

We believe attribution and citation constitute fair use—we’re not stealing anybody’s content. We provide very accurate in-line citations for every part of the answer, and we still drive traffic to content publishers.

And what’s your current business model?

Our business model, for now at least, remains subscriptions to our Pro plan, with features like Copilot (an interactive research assistant), unlimited file uploads, and a choice of the latest AI models (like GPT-4 and Claude 2.1), and giving developers API (application programming interface) credits to help them build things that are different from our core product. We have a lot of paying subscribers. In January, we told the Wall Street Journal in an interview that our revenue is $5-10 million.
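(As a rough illustration of what developers do with those API credits, here is a minimal sketch that assumes Perplexity’s OpenAI-compatible chat-completions endpoint at https://api.perplexity.ai; the model name below is a placeholder, and the current identifiers should be taken from the API documentation.)

```python
# Minimal sketch: querying the Perplexity API through the OpenAI Python client,
# assuming the OpenAI-compatible endpoint at https://api.perplexity.ai.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["PERPLEXITY_API_KEY"],  # key funded by the developer's API credits
    base_url="https://api.perplexity.ai",
)

response = client.chat.completions.create(
    model="sonar-medium-online",  # placeholder model name; check the current API docs
    messages=[
        {"role": "system", "content": "Answer concisely and cite your sources."},
        {"role": "user", "content": "What features are included in Perplexity Pro?"},
    ],
)

print(response.choices[0].message.content)
```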

Give us some examples of how Perplexity is being used across sectors. How has the response been so far?

We’ve seen use cases across a variety of sectors—finance, legal, shopping, travel, health, general trivia and knowledge-related searches, tech support, programming, market research, and so on. The demand from enterprises has been very high; in particular, they’re asking us to make it more secure and to ensure good data governance. They’ve also been interested in some internal search use cases on their company data.

How is your partnership with Rabbit R1 (generative AI-enabled gadget) shaping up?

The response has been good. As people increasingly start using the R1 (they’re still shipping them), Perplexity API usage will also keep going up. We are yet to announce this, but we will have similar partnerships with makers of other consumer devices.

How’s the future roadmap looking?

It’s actually simple. We just need to improve on three categories—speed, accuracy, and a delightful user experience in terms of how the answers are rendered. We should go beyond just giving you an answer—it should be presented as more than just one or two paragraphs, with minimal cognitive overload for the user and maximum information-consumption bandwidth. We are constantly improving speed and accuracy. If we do these three things right, and improve the experience on a few verticals even more, we can succeed big time this year.

What advice would you give other entrepreneurs looking to build AI-powered products and services, especially in a country like India?

I would just say ‘don’t overthink and don’t worry too much’. When you have product-market fit with a lot of users, you cannot just get killed in one day. So, don’t think too much about what OpenAI is doing. But do think about whether this is something OpenAI could easily roll out if they wanted to, or whether it would take them a lot of product engineering to do it. If it’s the latter, then you are betting on yourself: I can continue to make product engineering and iterations faster than OpenAI. That’s kind of a good thing, because they’re (OpenAI) doing a lot of things. Also, make sure that executing the product requires more than just LLMs or generative AI—in our case, that is search, web indexing, crawling, and the internal orchestration. And make sure that you’re working on a core product, accumulating as many users as possible, building a brand, and controlling your own distribution channels. Don’t rely on other people to distribute, because then you will not have a brand or real estate that you own. Don’t worry about being a wrapper (building a layer atop an LLM) in the beginning—the beginning stages are about answering whether you even need to scale this thing up, which means you need to have someone using your product every day.

Give us your perspective on the impact on jobs, given the incredible pace at which these technologies are moving. What kinds of skillsets should an employee—a young worker entering the workforce, a middle-level manager, or even a senior leader—have at this point in time?

Monotonous, repetitive tasks that do not require thinking will very likely be automated. These will require fewer humans, because most of the work will be done by a machine. And when we learn to use these machines, new jobs will be created. For example, when the workforce begins using copilots, someone will have to manage the annotation of workflows, check whether things are inaccurate, and flag and correct them—that is a new job that just got created.

This displacement will initially be hard, because every time there’s a new skill and a new kind of workflow, people take time to understand, adjust, and train for the new skills. The people who will handle this really well will be those who are nimble and flexible—those who can adapt very fast and are typically fast learners. People with fluid intelligence (the ability to reason and think flexibly, as opposed to ‘crystallised intelligence’) will learn faster.

Finally, with models like OpenAI’s Sora, Google’s Gemini 1.5 Pro, and many others under development, do you see AI acquiring reasoning abilities in the near future? What are your thoughts about artificial general intelligence (AGI) or artificial super intelligence (ASI)?

Capabilities will be the benchmark for AI in general. You can’t just suddenly call something ASI or AGI. So far, we’ve been able to achieve superintelligence on a single narrow task, like playing Go or chess at a superhuman level. We have not been able to achieve superhuman intelligence across the spectrum of tasks in any one system. When we are able to do that, that system will potentially be called artificial super intelligence. And if it is able to keep teaching itself to keep getting better, we may come up with a different name for that—something like recursive super intelligence.

It’s all about achieving some really important breakthrough, which should be lauded and celebrated. But in order to create a new paradigm around it, and bring even more excitement into it, they will want to build good branding for that type of model. And ASI is supposed to be that.

But still, that’s not exactly AGI in the true cognitive sense, because a lot of the physical tasks that we (humans) do also require a good level of spatial and geometric intelligence. People will keep aiming for certain capabilities and for human endeavours to be done by machines, and the debate over whether it is AGI, ASI, or narrow intelligence will continue, with us knocking off goals along the way.