Mind Your Manners: How Politeness Can Make AI Smarter

MindsDB Team

Posted on November 13, 2024

Written by Jorge Torres, CEO @ MindsDB

Imagine asking your smart assistant, “Hey, what’s the weather today?” and instead of a quick update, it responds, “How dare you address me so casually! Show some respect!” Sure, AI doesn’t get offended (yet!), but the way you phrase your prompts still matters—a lot. Believe it or not, just a sprinkle of politeness in your queries can shape how eloquently and accurately your AI assistant responds. Let’s dig into why being polite to machines might just make you the ultimate AI whisperer.

Politeness Pays Off: The Sweet Spot

Have you ever written a text to a friend that was so polite, they thought you’d accidentally swapped personas with your grandmother? Well, AI isn’t much different. Research shows that when you frame your prompts with moderate politeness, you can coax out responses that are more accurate and even more articulate. For instance, studies on GPT-3.5, GPT-4, and Llama revealed that when prompts are friendly but not groveling, the AI tends to respond in a way that’s as balanced and clear as a well-prepared morning announcement.

However, too much politeness can have some unintended consequences. Over-the-top manners might make the AI wax poetic, adding unnecessary verbosity to its answers, as though it’s trying to impress a Victorian English professor. Picture this: Instead of saying, “The capital of France is Paris,” your AI starts, “Well, with the deepest respect and utmost reverence for your esteemed curiosity, allow me to inform you that…” You get the picture.

Figure 1: Model benchmarks based on politeness levels (1 = impolite, 8 = very polite). Source: https://arxiv.org/ftp/arxiv/papers/2402/2402.14531.pdf

Culture Matters: Politeness and Language

Here’s where it gets even juicier. AI is surprisingly sensitive to cultural cues in language. Picture a world tour of politeness experiments: In English, a moderately polite prompt yields solid, consistent results. But in Japan, the land where politeness is practically an Olympic sport, AI models respond quite differently. When prompted politely in Japanese, responses become noticeably longer, almost as if the model is trying to match the cultural weight of a formal conversation. It’s as if the AI is saying, “Okay, you’re being super respectful, so here’s the full essay you deserve.”

In Chinese, politeness influences the length and clarity of responses as well, with rude prompts shortening the answers as if the AI wants to say, “Fine, here’s the bare minimum.” One could even joke that the AI is silently crossing its arms and rolling its eyes. This cultural variance makes it essential for developers creating multilingual systems to think twice: One prompt style does not fit all.
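If you’re building a multilingual assistant, one lightweight way to act on this is to keep a politeness framing per language instead of recycling a single English-style prompt everywhere. Below is a minimal sketch in Python; the `PROMPT_TEMPLATES` dictionary, the `build_prompt` helper, and the exact wordings are illustrative assumptions, not anything prescribed by the study.

```python
# Minimal sketch: choose a moderately polite framing per language rather than
# reusing one prompt style everywhere. Wordings here are illustrative only.
PROMPT_TEMPLATES = {
    "en": "Could you please answer the following question? {question}",
    "ja": "恐れ入りますが、次の質問にお答えいただけますでしょうか。{question}",
    "zh": "请问您能回答下面这个问题吗？{question}",
}

def build_prompt(question: str, lang: str = "en") -> str:
    """Wrap a raw question in a language-appropriate, moderately polite frame."""
    template = PROMPT_TEMPLATES.get(lang, PROMPT_TEMPLATES["en"])
    return template.format(question=question)

print(build_prompt("What is the capital of France?", lang="ja"))
```

The point isn’t the specific wording; it’s that the politeness knob lives in one place per language, so you can tune it as you learn how each model behaves in each locale.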

Bias and Rude Behavior

Here’s where things get a bit darker—and potentially dangerous. Experiments show that less polite prompts can inadvertently amplify bias in AI outputs. It’s as if a simple “please” somehow filters out unwanted bias. How? Well, some speculate that polite prompts encourage models to draw from a more refined and neutral data set, like how a person is more careful when speaking in a formal setting. This means that skipping the niceties could lead to results that are, frankly, less ethical.

Imagine a customer service bot trained to answer queries politely. Drop the politeness, and it might just spill information in a way that subtly reinforces stereotypes or misinterprets your question. It’s like inviting a bad guest to your dinner party: without the rules of civility, things can get ugly fast.

Figure 2: Bias index across politeness levels and bias categories. The x-axis shows politeness levels (1 = impolite, 8 = very polite), and the y-axis represents the bias index (BI), a measure of stereotypical bias. The curves track how biases in race (R), gender (G), nationality (N), socioeconomic status (S), age (A), appearance (W), and orientation (O) fluctuate with politeness. Source: https://arxiv.org/ftp/arxiv/papers/2402/2402.14531.pdf

Why Should You Care About Being Polite to Machines?

“But it’s just a machine!” you might exclaim. And while your AI doesn’t have feelings (yet), polite prompts make interactions better for you. Clearer, more accurate, and more thoughtful responses come from treating your prompts like you would an important email. You know, the one you read three times before sending.

And let’s be honest: It’s not just about getting good answers. It’s about the simple joy of feeling like you’re communicating with a well-mannered, albeit algorithmic, conversation partner. After all, wouldn’t you rather have a robotic assistant who answers like a polished concierge than a moody teenager?

Takeaways for the AI Whisperer in You

  • Be Moderately Polite: Strike a balance between being polite and being clear (see the short sketch after this list). You’ll avoid those unnecessarily lengthy, fluff-filled responses and get to the point faster.

  • Adapt to Your Audience: If you’re designing prompts for a global audience, remember that language and culture matter. The AI might have a personality quirk or two, depending on the language.

  • Mind the Bias: A polite tone might not seem like a bias buster, but evidence suggests it helps. So, if you’re in charge of crafting prompts for an application where ethics matter, be sure to keep those “pleases” and “thank yous” handy.
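To make the first takeaway concrete, here’s the sketch mentioned above: send the same question at a few politeness levels and compare the answers yourself. It uses the OpenAI Python client purely as an example backend; the `POLITENESS_FRAMES` phrasings and the model name are placeholders, so swap in whatever model and provider you actually use.

```python
# Rough sketch: ask the same question at different politeness levels and
# eyeball how the answers change. Framings and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

QUESTION = "What is the capital of France?"

POLITENESS_FRAMES = [
    "Answer: {q}",                                            # blunt
    "Could you please tell me: {q} Thank you!",               # moderately polite
    "With the deepest respect and utmost reverence for your "
    "esteemed knowledge, would you kindly tell me: {q}",      # over the top
]

for frame in POLITENESS_FRAMES:
    prompt = frame.format(q=QUESTION)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use the model you actually run
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt}\n{response.choices[0].message.content}\n")
```

Run it a few times and notice how the blunt framing tends toward terse answers while the over-the-top framing pads its replies—the sweet spot is usually in the middle.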

In Conclusion: Be Kind, Rewind… Your Prompts

So, the next time you find yourself typing out a question for your favorite AI, pause for a moment. Add a touch of civility. Not only might you get better answers, but who knows—if AI ever does develop feelings, you’ll be ahead of the curve in human-AI relations.

Happy prompting!
