Beyond the Hype: Debunking Misconceptions About Large Language Models in Contact Centers

Debunking common misconceptions about large language models in contact centers and exploring their potential to transform customer service.

As the use of large language models (LLMs) grows in contact centers, several misconceptions need to be addressed. Ping Wu, CEO of Cresta, identifies three misconceptions that contact-center leaders commonly hold before implementing LLMs.

Misconception 1: LLMs are Just End-to-End Text Generation Machines

Many people think that LLMs are simply machines that generate text from input. Wu explains that LLMs have two parts: comprehension and action (decoding). In many business contexts, LLMs add the most value on the comprehension side, understanding the user's intent, while the resulting action is guided by business logic rather than generated freely.
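The comprehension/action split can be sketched as follows. This is a minimal illustration, not Cresta's implementation: the LLM call is stubbed with keyword rules, and all function and intent names are hypothetical. The key point is structural: the model only classifies intent, and hard-coded business logic, not free-form generation, decides the response.

```python
def llm_classify_intent(utterance: str) -> str:
    """Stand-in for an LLM comprehension call that maps text to an intent label.
    A real system would prompt a model; keyword rules keep this sketch runnable."""
    text = utterance.lower()
    if "refund" in text or "money back" in text:
        return "refund_request"
    if "cancel" in text:
        return "cancellation"
    return "general_inquiry"

# Business logic, not the model, chooses the action for each intent.
BUSINESS_ACTIONS = {
    "refund_request": "Open a refund case and route to the billing queue.",
    "cancellation": "Trigger the retention workflow before processing.",
    "general_inquiry": "Route to the general support queue.",
}

def handle(utterance: str) -> str:
    intent = llm_classify_intent(utterance)  # comprehension (LLM)
    return BUSINESS_ACTIONS[intent]          # action (business rules)
```

Swapping the stub for a real model call changes nothing downstream: the action table still constrains what the system can do.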

Misconception 2: LLMs are Limited to Question-Answering

Another misconception is that LLMs are only useful for question-answering. However, Wu suggests that they are also strong in synthesis, extracting key concepts from large amounts of text. Case summarization is a mature example of this in the contact center.
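To make "synthesis" concrete, here is a toy sketch that condenses a transcript down to the sentences carrying the most repeated content words. A production case summarizer would hand the full transcript to an LLM with a summarization prompt; this frequency-based stand-in (entirely my own illustration) just shows the extract-key-concepts idea in runnable form.

```python
import re
from collections import Counter

def summarize(transcript: str, max_sentences: int = 2) -> list[str]:
    """Return the sentences whose words recur most across the transcript."""
    sentences = [s.strip() for s in re.split(r"[.!?]", transcript) if s.strip()]
    words = re.findall(r"[a-z']+", transcript.lower())
    freq = Counter(w for w in words if len(w) > 3)  # crude stop-word filter

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    # Stable sort keeps the original order among equally scored sentences.
    return sorted(sentences, key=score, reverse=True)[:max_sentences]
```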

Misconception 3: LLMs are Similar to Human Intelligence

The third misconception is that LLMs resemble human intelligence. Wu notes that there are differences business leaders must consider: some tasks that are hard for humans are easy for LLMs, and vice versa. For example, LLMs can pass Advanced Placement biology exams yet get simple customer-support questions wrong if not properly guided.

Implementing LLMs in Contact Centers

There are two common approaches to implementing LLMs in contact centers. The first is fine-tuning the LLM on use-case-specific data. The second is retrieval-augmented generation (RAG): relevant business documents are first searched and retrieved, then fed into the LLM, which synthesizes information across them to answer the question.
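The RAG pipeline described above can be sketched in a few lines. This is a hedged illustration with stand-ins throughout: the retriever is plain word overlap rather than embedding search, the document store is an in-memory list, and the generation step is a stub where a real system would call a hosted or local model. What matters is the shape: retrieve first, then generate from question plus retrieved context.

```python
# Illustrative in-memory "knowledge base"; a real one would be indexed.
DOCS = [
    "Refunds are processed within 5 business days of approval.",
    "Premium plans include 24/7 phone support.",
    "Passwords can be reset from the account settings page.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question (embedding search in practice)."""
    q_words = set(question.lower().split())
    def overlap(doc: str) -> int:
        return len(q_words & set(doc.lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

def llm_generate(prompt: str) -> str:
    """Stub for the generation step: echoes the top retrieved document.
    A real implementation would send the prompt to an LLM."""
    return prompt.splitlines()[1]

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question, DOCS))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return llm_generate(prompt)
```

Because the model only sees retrieved business data, answers stay grounded in the company's own documents rather than the model's training corpus.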

The Future of Contact Centers

Wu believes that future contact centers will be hybrid human-AI systems, where AI augments human agents, learns from them, and improves over time. This will lead to increased automation of conversations, enhanced human abilities, and more sophisticated AI capable of multi-modal tasks, interacting with both speech and screens.
