Large Language Models: The Art of Prompting
As large language models (LLMs) continue to evolve, it’s essential to understand how to effectively prompt them to get the desired results. In this article, we’ll explore three research-backed tips to improve your prompting strategies and get more out of LLMs.
Be Polite
It may seem surprising, but the tone of your prompt can significantly affect the quality of the response. Researchers at Waseda University and the RIKEN research institute in Japan found that LLM performance improves as the politeness of the prompt increases, while rude prompts tend to yield poorer results. A likely explanation is that the corpora LLMs are trained on reflect that humans respond better when addressed politely.
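As a concrete illustration, here is a minimal sketch that sends the same request twice, once tersely and once politely, so you can compare the replies. It assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name and the example prompts are placeholders, not part of the cited study.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The same request, phrased tersely and politely.
terse = "Explain gradient descent in two sentences. Don't get it wrong."
polite = "Could you please explain gradient descent in two sentences? Thank you!"

print(ask(terse))
print(ask(polite))
```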
Provide Context
Providing context in your prompt can reduce how often LLMs hallucinate, that is, fabricate information. Researchers at the University of Maryland found that supplying relevant context, such as an author's CV, leads to more accurate answers. Even with context, however, LLMs may still hallucinate some of the time.
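The sketch below shows one way to fold context into the prompt. It again assumes the OpenAI Python SDK; the author, the CV excerpt, and the model name are fictional placeholders used only for illustration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Fictional CV excerpt used as grounding context.
cv_excerpt = (
    "Jane Doe, Professor of Computer Science at Example University.\n"
    "Selected publications:\n"
    "- 'Prompting Strategies for LLMs' (2023)\n"
    "- 'Hallucination in Generative Models' (2022)"
)

# Inlining the context encourages the model to answer from the supplied
# text rather than invent publications.
prompt = (
    "Using only the context below, list the papers Jane Doe has published.\n\n"
    f"Context:\n{cv_excerpt}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute your own
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```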
Assign a Role
Prompting the LLM to assume a role can yield better and more specific results. For example, before asking an LLM to generate taglines for a new product, try prefacing the request with “You are an expert in marketing.” This approach can help LLMs provide more accurate and relevant responses.
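With a chat-style API, the role is usually assigned in a system message that precedes the user request. The sketch below follows the article's marketing example; it assumes the OpenAI Python SDK, and the product and model name are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute your own
    messages=[
        # The system message assigns the role before the actual request.
        {"role": "system", "content": "You are an expert in marketing."},
        # Placeholder product for illustration.
        {"role": "user", "content": "Generate five taglines for a new reusable water bottle."},
    ],
)
print(response.choices[0].message.content)
```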
By being polite, providing context, and assigning a role, you can improve your prompting strategies and get noticeably better results from LLMs.
“The way that a prompt is structured has a significant impact on the quality of the response provided.” - Researchers at Waseda University and the RIKEN research institute in Japan
LLMs are powerful tools, but they are only as useful as the prompts you give them. By following these research-backed tips, and by keeping up with new research and best practices as the models evolve, you can continue to get the most out of them.