The Rise of AI Chatbots as Ethicists: A New Era of Moral Advice?

The rise of AI chatbots as ethicists has sparked a debate about their potential to replace human moral advisors. This article explores the capabilities of large language models in generating human-like responses to complex moral dilemmas.

AI Chatbots: The New Ethicists?

The world of artificial intelligence has made tremendous progress in recent years, and one of the most significant advancements is the development of large language models (LLMs). These models have demonstrated an uncanny ability to generate human-like responses to complex moral dilemmas, raising questions about their potential to replace human ethicists.

In 1691, the Athenian Mercury published what is believed to be the world’s first advice column. That marked the beginning of a thriving genre that has produced renowned columnists such as Ann Landers and Kwame Anthony Appiah, who writes The Ethicist column in The New York Times Magazine. With the emergence of LLMs, however, human advice-givers may soon have competition.

“The moral reasoning of LLMs is way better than the moral reasoning of an average human.” - Thilo Hagendorff, computer scientist at the University of Stuttgart in Germany

Trained on vast amounts of text, including countless descriptions of moral quandaries, LLMs can generate reasonable answers to ethical problems. Two recent studies have found that the advice given by state-of-the-art LLMs is at least as good as that provided by human ethicists.

One study found no significant difference between the perceived value of advice given by OpenAI’s GPT-4 and that given by Appiah, as judged by university students, ethics experts, and online evaluators. In another study, GPT-4’s advice was rated as more moral, trustworthy, thoughtful, and correct than Appiah’s.

While these findings are impressive, some experts caution against anthropomorphizing AIs, emphasizing that they lack self-consciousness, emotion, and intention. However, others argue that LLMs’ ability to take in staggering amounts of ethical information could be a significant advantage.

The appeal of AI-generated moral advice

The appeal of AI-generated moral advice may also have to do with its persuasiveness. According to Christian Terwiesch, chair of the Operations, Information and Decisions department at the Wharton School of the University of Pennsylvania, the mode of delivery plays a significant role in the perceived value of the advice.

“If you have the skill to be persuasive, you will be able to also convince me, through persuasion, that the ethical advice you are giving me is good.” - Christian Terwiesch

As AI-generated moral advice becomes more prevalent, it raises important questions about the potential risks and benefits. While some worry about the perpetuation of biases and emergent behaviors, others see the potential for AI to revolutionize the way we approach moral dilemmas.

The future of moral advice

The rise of AI chatbots as ethicists is a development that warrants close attention. As AI-generated moral advice becomes more common, weighing its benefits against its risks will be essential. What seems certain is that the future of moral advice will look very different from its past.