The Rise of Large Language Models: Transforming Society and Cybersecurity
The advent of large language models (LLMs) has revolutionized the way we communicate, conduct research, and enhance our productivity, transforming society as we know it. LLMs excel at natural language understanding and generate text that is more accurate and human-like than that of their predecessors. However, they also pose new risks to data privacy and the security of personal information.
Compared to narrow AI systems, LLMs raise more complex threats, such as sophisticated phishing attacks, manipulation of online content, and breaches of privacy controls. A recent study by MixMode analyzed data from the National Cyber Security Index (NCSI), the Global Cybersecurity Index (GCI), the Cybersecurity Exposure Index (CEI), and findings from Comparitech to assess cyber safety across 70 countries.
Cybersecurity Index Rankings
Findings indicate that countries with the most robust cybersecurity infrastructures include Finland, Norway, and Denmark. The United Kingdom, Sweden, Japan, and the United States also maintain strong defenses against cyber threats.
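MixMode has not published the exact aggregation method behind these rankings, so the following is only a minimal sketch of how scores from several indices might be combined into one composite ranking: min-max normalize each index so the scales are comparable (inverting the CEI, where lower exposure is better), then average per country. All scores below are made-up placeholders, not real index values.

```python
# Hypothetical sketch: combining several cybersecurity indices into one ranking.
# The values below are made-up placeholders, not real NCSI/GCI/CEI scores.

indices = {
    # country: (NCSI, GCI, CEI)  -- CEI measures exposure, so lower is better
    "Finland": (90.0, 95.0, 0.10),
    "Norway":  (88.0, 96.0, 0.12),
    "Denmark": (85.0, 92.0, 0.11),
}

def normalize(values, invert=False):
    """Min-max normalize to [0, 1]; invert for indices where lower is better."""
    lo, hi = min(values), max(values)
    scaled = [(v - lo) / (hi - lo) if hi > lo else 0.5 for v in values]
    return [1 - s for s in scaled] if invert else scaled

countries = list(indices)
ncsi = normalize([indices[c][0] for c in countries])
gci = normalize([indices[c][1] for c in countries])
cei = normalize([indices[c][2] for c in countries], invert=True)  # lower exposure is safer

composite = {c: (n + g + e) / 3 for c, n, g, e in zip(countries, ncsi, gci, cei)}
for country, score in sorted(composite.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {score:.2f}")
```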
Perplexity Pages
Perplexity AI has recently released a new feature called Pages, a tool that helps create rich content with the help of an LLM. Using LLMs for content creation is not new, but the experience in Perplexity Pages has been built from the ground up for users to collaborate with the AI at every stage: from the outline to the research, to writing the draft, editing the copy, and formatting the page.
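Perplexity has not documented how Pages works internally, but the staged workflow it describes maps naturally onto a chain of LLM calls. Here is a minimal sketch under that assumption; `call_llm` is a hypothetical placeholder, not Perplexity's actual API.

```python
# Hypothetical sketch of a staged content-creation pipeline like the one
# Pages describes. call_llm() is a placeholder, not Perplexity's real API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def create_page(topic: str) -> str:
    outline = call_llm(f"Write a section outline for an article about: {topic}")
    research = call_llm(f"Summarize key facts and sources for each section:\n{outline}")
    draft = call_llm(f"Write a full draft following this outline and research:\n{outline}\n{research}")
    edited = call_llm(f"Edit this draft for clarity and tone:\n{draft}")
    return call_llm(f"Format the edited draft as a web page with headings:\n{edited}")
```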
However, the tool is not without its limitations. It can hallucinate, draw information from unreliable sources, and still has some kinks to iron out. For example, content cannot be edited manually after the AI generates it; changes must be made through follow-up prompts, which becomes frustrating for complex topics that need a lot of editing and polishing.
Creating a Moat with LLMs
The success of Perplexity will hinge on creating a moat and delivering experiences that cannot be replicated. Pages can become such a feature, as it changes the LLM assistant from an information provider into a content creator. With enough critical mass, Perplexity can build a rich dataset that it can use to train its own models or to provide better answers to user queries. And it can direct user traffic to its own pages, which might provide opportunities for monetization in the future.
As LLMs become commoditized, many analysts point out that access to better data will be the differentiating factor between companies that create LLMs or build applications on top of them. It remains to be seen how reliable and successful Pages will be; for now, it is very convenient and easy to use. Perplexity will also need to make sure users don't exploit the feature to flood its platform with poorly curated pages, which would degrade its main product.
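How Perplexity might prevent that kind of flooding is not public. One plausible first line of defense is a set of simple pre-publication heuristics; the sketch below is purely illustrative, and its thresholds are arbitrary assumptions.

```python
# Hypothetical pre-publication quality gate; the thresholds are arbitrary
# illustrations, not anything Perplexity has disclosed.

MIN_WORDS = 300          # reject near-empty pages
MIN_SOURCES = 2          # require at least a couple of citations
MAX_PAGES_PER_DAY = 10   # rate-limit bulk generation per user

def passes_quality_gate(text: str, source_count: int, pages_today: int) -> bool:
    if pages_today >= MAX_PAGES_PER_DAY:
        return False  # likely bulk/spam behavior
    if len(text.split()) < MIN_WORDS:
        return False  # too thin to be useful
    return source_count >= MIN_SOURCES
```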
Google's AI Overview
Google has also been working on its AI Overview feature, which provides an AI-generated summary in response to a user's query, with links to the source web pages. However, the feature drew criticism for inaccurate or nonsensical answers, often drawn from satirical or unreliable sources. The company attributed the misleading results to 'information gaps': uncommon queries with few high-quality sources for the AI to draw on.
In a blog post, Google described how the feature works and the likely reasons for such failures. The company stated that the odd responses weren't 'hallucinations,' because AI Overviews work differently from other chatbots: the feature is built on a customized LLM integrated with Google's core search systems, which carry out traditional search tasks like identifying relevant information in the search index, and then provides a summary of the top results alongside links.
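Google has not published the implementation, but the behavior it describes (retrieve relevant results from the search index, then have a customized LLM summarize the top hits alongside links) matches the familiar retrieval-augmented generation pattern. The sketch below assumes that pattern; `search_index` and `summarize` are hypothetical placeholders, not Google internals.

```python
# Hypothetical retrieval-augmented summarization, loosely matching Google's
# public description of AI Overviews. Both helper functions are placeholders.

from typing import List, Tuple

def search_index(query: str, k: int = 5) -> List[Tuple[str, str]]:
    """Placeholder: return (url, snippet) pairs for the top-k search results."""
    raise NotImplementedError

def summarize(query: str, snippets: List[str]) -> str:
    """Placeholder: LLM call that summarizes snippets grounded in the query."""
    raise NotImplementedError

def ai_overview(query: str) -> dict:
    results = search_index(query)                        # traditional retrieval step
    summary = summarize(query, [s for _, s in results])  # grounded generation step
    return {"summary": summary, "links": [u for u, _ in results]}
```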
The post also alleged that several results shared on social media were faked images. In Google's words: 'Separately, there have been a large number of faked screenshots shared widely. Some of these faked results have been obvious and silly. Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression. Those AI Overviews never appeared.'
Google's AI Improvements
Google also revealed that it had found some cases where the AI surfaced information from non-serious forum responses or misinterpreted the context of some information, and stated that it had made the necessary improvements to its algorithms.
In conclusion, the rise of LLMs has transformed society in many ways, but it also poses new risks to data privacy and cybersecurity. As companies like Perplexity and Google continue to develop and refine their LLM-powered tools, it is essential to prioritize data security and ensure that these models are used responsibly.