Oct 24, 2024

Politeness is a virtue when it comes to AI

Academic study suggests ‘moderate politeness’ for chatbot interactions

Have you ever snapped at your AI assistant? I must admit that when Alexa doesn’t understand me properly, my tendency is to repeat my request in a much louder and more direct fashion.

By contrast (and to my shame), one of my colleagues at IR Magazine gets everyone in her household to say ‘please’ when talking to Amazon’s voice service to instill good manners.

While the question of whether to be respectful to AI is as old as the science fiction genre, it’s becoming more pressing as human-like AI tools become ubiquitous in our lives.

The debate is partly ethical: does it make you a bad person if you get snarky with ChatGPT? For some, having that kind of interaction feels like it goes against who they are – even if they know deep down they are just talking to a machine.

But for IR teams – many of which are currently exploring the possibilities of generative AI for their work – a more pressing question may be whether a good AI attitude gets you better results.

One study, conducted by academics in Japan and published earlier this year, suggests it does. The researchers, from Waseda University and the RIKEN Center for Advanced Intelligence Project, put requests with varying levels of politeness to large language models (LLMs).

They find that impolite prompts degrade performance – producing ‘increased bias, incorrect answers or refusal of answers’ – but being overly respectful doesn’t always help, either. ‘In most conditions, moderate politeness is better, but the standard of moderation varies by language and LLM,’ the academics write.
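To make the study’s setup concrete, here is a minimal sketch of how one might phrase the same request at different politeness levels for comparison. The wordings and the variant labels below are invented for illustration – they are not the actual prompt templates from the Waseda/RIKEN paper:

```python
# Hypothetical example: one task phrased at three politeness levels,
# ready to be sent to an LLM and compared. The phrasings are invented
# illustrations, not the paper's actual templates.
TASK = "summarize the attached earnings report in three bullet points."

POLITENESS_VARIANTS = {
    "rude": f"Do it now, no excuses: {TASK}",
    "moderate": f"Please {TASK}",
    "overly_polite": (
        "If it is not too much trouble, I would be deeply grateful "
        f"if you could possibly {TASK}"
    ),
}

def build_prompts(variants):
    """Return (politeness level, prompt) pairs for side-by-side testing."""
    return list(variants.items())

for level, prompt in build_prompts(POLITENESS_VARIANTS):
    print(f"[{level}] {prompt}")
```

In practice, each variant would be sent to the same model and the answers scored for accuracy, bias and refusals – the comparison the researchers ran across languages and LLMs.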

Being civil may also lead chatbots to higher-quality information, according to an article in Scientific American. Speaking to the magazine, Nathan Bos, a senior research associate at Johns Hopkins University, says LLMs may try to match the tone of your request with the tone of the source information. Politer parts of the internet are potentially more credible, he notes.

It’s perhaps unsurprising that technology designed to mimic human interaction responds well to being treated the way most of us would treat another person in everyday life.

I’m reminded of the comments made by an IRO at our AI for IR Forum in London earlier this year. She said she gets the best results from ChatGPT when she speaks to it like a junior team member.

‘I usually think like I am talking to a recent graduate or intern,’ she explained. ‘I’m trying really patiently, in a clear way, to explain what I need in as much detail as I can.’

Beyond the ethics of human-machine interaction, you may simply be wondering what sort of prompts other IR professionals are using with generative AI tools.

To give you an idea, last week we published an interview with three IROs in which they detail the prompt text they use to help with daily activities. These instructions cover a wide variety of tasks, from writing investor newsletters to analyzing peer reports and brainstorming social media posts.

At IR Magazine, we also have a bigger project underway on the same topic. We are inviting you all to share your favorite AI prompts via an online form and we will release them in a report later this year. Many thanks to those of you who have already taken part.

Do you try to be nice to your chatbot? Get in touch and let us know at [email protected] or via LinkedIn.
