Politeness with AI: A Waste of Resources, According to New Research

A new study from researchers at George Washington University suggests that being polite to AI models is a waste of computational resources.

Incorporating words like "please" and "thank you" into prompts has minimal impact on the quality of chatbot responses.

The researchers found that polite language is typically "orthogonal to substantive good and bad output tokens" and has "little effect on the dot product," meaning that polite words occupy a separate region of the model's internal embedding space and have almost no influence on the output.
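The quoted claim can be sketched with a toy example (the vectors and names below are invented for illustration and are not taken from the study): if a politeness token's embedding is orthogonal to the direction that steers substantive output, its dot product with that direction is zero, so it contributes nothing to an attention-style relevance score.

```python
import numpy as np

# Hypothetical 3-dimensional embeddings (illustrative only, not the study's model):
please_vec = np.array([0.0, 1.0, 0.0])   # assumed "politeness" direction
topic_vec = np.array([1.0, 0.0, 0.0])    # assumed "substantive content" direction
output_vec = np.array([0.8, 0.0, 0.6])   # assumed direction steering the answer

# Orthogonal vectors have a zero dot product, so the politeness token
# adds nothing to the score that determines the response:
print(np.dot(please_vec, output_vec))    # 0.0
print(np.dot(topic_vec, output_vec))     # 0.8
```

In a real LLM the geometry is far higher-dimensional, but the intuition is the same: tokens whose embeddings are orthogonal to the output-relevant directions barely move the result.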

The findings contradict a 2024 Japanese study claiming that politeness improves the performance of artificial intelligence. That research tested models including GPT-3.5, GPT-4, PaLM-2, and Claude-2.

David Acosta, AI Director at Arbo AI, attributed the conflicting findings to the overly simplified model used by the George Washington University team.

"The conflicting results regarding politeness and AI performance are often linked to cultural variations in training data, the subtleties of prompt design for specific tasks, and contextual interpretations of politeness, which necessitate cross-cultural experiments and task-specific evaluation systems to clarify the impacts," he commented.

The team behind the new study acknowledged that their model is "intentionally simplified" compared to commercial systems like ChatGPT. However, they believe that applying their approach to more complex neural networks would yield similar results.

"The adequacy of an AI response depends on the training of the LLM, which shapes the embeddings of tokens, as well as the content of the tokens in the prompt—rather than on whether we were polite or not," the study states.

It is worth noting that in April, OpenAI CEO Sam Altman said the company had spent tens of millions of dollars generating responses for users who include "please" and "thank you" in their prompts.