The King of Shell Products: Perplexity

Video Channel: Huang Yi He
YouTube / Medium: huangyihe
The following is the text version of the video content.
I recently saw a very powerful statement:
A shell product with 100,000 users is more meaningful than having a proprietary model without any users.
If you have friends who are investing in large models or working on them, remember to pass this statement to them.
This provocative statement comes from the CEO of Perplexity. They have just completed a Series B round at a valuation of $520 million, with investors including Nvidia and big names like Jeff Bezos.
Perplexity’s product is a phenomenal AI-native application that could potentially replace traditional search engines with a Q&A engine.
What is a Q&A engine?
Currently, search engines return web pages. But are web pages really the results we want? What we want is the content inside those pages. This is where large models add value:
They go through all the pages found, extract the relevant content, organize it logically, and present a consolidated answer in one pass.
This is something traditional search engine technology cannot achieve, which is why search is a sure bet to be completely reshaped by large-model technology.
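To make the idea concrete, here is a minimal sketch of the search-then-synthesize pattern in Python, using the OpenAI chat API. The `search_web` helper is a hypothetical placeholder (Perplexity’s real retrieval pipeline is of course far more sophisticated); the point is only to show how an answer with inline citations gets assembled from search results.

```python
# Minimal sketch of a Q&A ("answer") engine: search, extract, synthesize.
# search_web() is a hypothetical placeholder; swap in any real search API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def search_web(query: str) -> list[dict]:
    """Hypothetical helper: return a few results as {"title", "url", "snippet"} dicts."""
    raise NotImplementedError("plug a real search API in here")


def answer(query: str) -> str:
    results = search_web(query)
    # Number the sources so the model can cite them inline as [1], [2], ...
    sources = "\n\n".join(
        f"[{i + 1}] {r['title']} ({r['url']})\n{r['snippet']}"
        for i, r in enumerate(results)
    )
    prompt = (
        "Answer the question using only the sources below, "
        "citing them inline as [n].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```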
Over the past year I have tried many AI applications, but only two have I kept using continuously and come to feel I cannot do without:
  • GitHub Copilot
  • Perplexity AI
I strongly recommend giving Perplexity a try; it is an AI application that is useful to just about everyone. After using it, you will likely no longer need Google, let alone Baidu.
I will demonstrate using the web version. This product also has mobile and iPad versions, which are very convenient.
With “Copilot” turned on, you get more accurate and in-depth answers, at the cost of being slightly slower. The free version appears to allow 5 Copilot uses every four hours, while the subscription allows 300 per day, which is generally plenty.
The “Focus” option is easy to understand: it tells the large model to search only one type of source, such as academic papers, Reddit discussions, or YouTube videos. If you choose Writing, it skips the web search entirely and just uses the large model’s capabilities directly.
Perplexity’s subscription costs $20 per month. From a practical standpoint, I’d say you can skip ChatGPT Plus, but you should definitely subscribe to this: search is a high-frequency need, and Perplexity’s search is stronger than ChatGPT’s. And if you need GPT-4 to generate text directly, just switch to Writing mode.
Let’s try a simple example, say, searching for “GitHub Copilot.” The large model first interprets the question or keywords, then expands on them based on that understanding.
Since we entered only “GitHub Copilot,” which is quite general, the model judges that the user probably wants a first-pass understanding: what it is, what it is used for, its advantages and disadvantages, and so on. So it makes a series of expansions, runs the searches, gathers a set of sources, and finally produces an answer.
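For illustration, this is roughly what such a query-expansion step could look like when sketched in Python. The prompt and model choice here are my own assumptions, not Perplexity’s actual implementation:

```python
# Sketch of query expansion: turn a bare keyword into concrete sub-questions
# before searching. Illustrative only, not Perplexity's actual prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set


def expand_query(keyword: str, n: int = 4) -> list[str]:
    # Ask the model to guess the user's intent and propose searchable questions.
    prompt = (
        f"A user typed only '{keyword}' into a search box. "
        f"Guess what they probably want to know and write {n} specific, "
        "searchable questions, one per line, with no numbering."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [line.strip() for line in lines if line.strip()]


print(expand_query("GitHub Copilot"))
# Likely output: "What is GitHub Copilot?", "What is it used for?",
# "What are its advantages and disadvantages?", ...
```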
After the first interaction, Perplexity will guide the user to either query related questions or continue asking.
Starting from a single question or keyword, the multi-turn interaction forms a thread that is archived in the Library, essentially a history you can look up or pick up again later. This is one of the things I really like about this product.
“Collection” is the newest feature: you can configure a specific topic in more detail through prompts and share it with friends.
As for Discover, it is officially curated trending topics, worth browsing when you are bored.
Perplexity is recognized as the AI Q&A engine with the best user experience and the highest result accuracy.
First, let’s talk about user experience.
“User-centered” is not just a slogan for them; they truly believe in it. I’ll give two examples.
First, why does Perplexity help users expand search keywords first?
Because the vast majority of users do not know how to ask questions.
Just like in the demonstration earlier, I provided only a single keyword. With a traditional search engine, the results would often be poor because the user typed too little or phrased it imprecisely.
So, is it the user’s fault?
No, it’s a problem with your technology and product design. This is the reality that application development must face.
As an aside, I think what this wave of large-model technology brings is not just natural-language interaction between humans and machines, but interaction at the level of intent. Many projects are heading in this direction; it is just a matter of who gets there first. Back to the point.
Second, since Perplexity has already provided the final answer, why list the sources?
Because users always have concerns.
They worry about the authority of your answers and whether the large model might hallucinate.
Especially if some viewpoints in the answer do not align with my expectations, I will definitely want to check the source webpage or video.
Perplexity is product-oriented; technology is merely a means to achieve it. But that doesn’t mean they lack technology.
The reason their CEO made the provocative statement quoted at the beginning is that, early on, Perplexity, like many other projects, used OpenAI’s large models and was therefore labeled a “shell project.”
But is it really as simple as just calling GPT-3.5 or GPT-4?
First, the GPT-3.5 that Perplexity uses is a version they fine-tuned themselves, with significantly improved performance; it costs less than GPT-4 and is faster.
Second, besides GPT, they also use other large models, such as Claude, which supports longer contexts and is therefore particularly well suited to users’ document-upload needs.
Finally, Perplexity knows it cannot rely on OpenAI forever. So they fine-tune open-source large models, producing two models of their own: pplx-7b-online and pplx-70b-online.
The former is based on Mistral-7B, the latter on Llama-2-70B. These two models are specifically designed to handle real-time data from the internet. The fine-tuning work will continue, steadily improving performance. The training data is also prepared by the team themselves, ensuring high quality and diversity.
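At the time of writing, these two models are also exposed through Perplexity’s own OpenAI-compatible API (pplx-api). A sketch of calling one of them looks like this; the endpoint and model names follow their public announcement and may change, so treat them as assumptions:

```python
# Sketch: calling a pplx-*-online model through Perplexity's OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",     # issued in the Perplexity account settings
    base_url="https://api.perplexity.ai",  # Perplexity's OpenAI-compatible endpoint
)

resp = client.chat.completions.create(
    model="pplx-70b-online",  # "online" models ground answers in fresh web data
    messages=[{"role": "user", "content": "What is GitHub Copilot?"}],
)
print(resp.choices[0].message.content)
```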
It is a safe bet that once open-source models match GPT-4’s performance, Perplexity will build entirely on open-source foundations and break away from its reliance on OpenAI altogether.
Having a large model customized for search is not enough; strong RAG (retrieval-augmented generation) technology is also required.
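RAG here simply means grounding the model’s answer in retrieved text rather than in whatever it memorized during training. Below is a minimal retrieve-then-generate sketch over already-fetched text chunks; the embedding model and prompt are illustrative assumptions, not Perplexity’s stack:

```python
# Minimal retrieve-then-generate (RAG) sketch: rank pre-fetched text chunks
# against the question, then answer from the top-ranked chunks only.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set


def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])


def rag_answer(question: str, chunks: list[str], k: int = 3) -> str:
    # Rank chunks by cosine similarity to the question.
    doc_vecs = embed(chunks)
    q_vec = embed([question])[0]
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    top = [chunks[i] for i in np.argsort(sims)[::-1][:k]]
    # Answer strictly from the retrieved context to limit hallucination.
    prompt = (
        "Answer using only the context below.\n\n"
        + "\n---\n".join(top)
        + f"\n\nQuestion: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```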
Therefore, Perplexity is definitely not a shell project; their technical strength is impressive. At the same time, Perplexity is not just a pure technology project; they know how to use technology to meet needs.
Moreover, search will definitely not be their only product. With the development of large model technology, this team will undoubtedly introduce more new products in the future. This is also one reason why I will continue to follow them.
