Artificial intelligence is advancing rapidly, with systems like chatbots able to have increasingly natural conversations. However, they still struggle with fundamental aspects of human interaction like politeness.
New research from Inria Paris published at a top AI conference explores how to make AI conversational agents more polite using a technique called "hedging." Hedging softens the impact of a statement by attenuating its strength. For example, a tutor might say "I think you could add 4 to both sides" rather than the more direct "Add 4 to both sides."
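As a toy illustration of that attenuation, the transformation can be sketched as a simple rewrite that reframes an imperative as a tentative suggestion. This is a minimal sketch only; the `hedge` function and its default opener are illustrative assumptions, not anything from the paper.

```python
def hedge(statement: str, opener: str = "I think you could") -> str:
    """Soften a direct instruction into a hedged suggestion.

    Example: 'Add 4 to both sides' -> 'I think you could add 4 to both sides.'
    """
    # Lowercase the imperative's first word and drop any trailing period,
    # then frame the whole thing as a suggestion.
    body = statement[0].lower() + statement[1:].rstrip(".")
    return f"{opener} {body}."

print(hedge("Add 4 to both sides"))
# -> I think you could add 4 to both sides.
```

Real hedging is far richer than a fixed opener, of course, which is precisely why the study turns to learned models rather than rules.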
Hedging helps avoid embarrassing or frustrating the other person, which is critical in contexts like education. The study analyzed real teen peer tutoring sessions, in which hedging occurred frequently. The goal was to train AI tutors to generate hedges appropriately.
Key findings:
- Fine-tuning large language models alone did not enable them to generate hedges reliably. The models struggled to learn the social context for hedging.
- However, a re-ranking method that screens multiple generated candidates and selects those matching the desired hedging label significantly improved performance.
- Linguistically, the AI tutors generated diverse, human-like hedges. But some key social cues for when to hedge were still missed.
- Errors showed an inherent conflict between the AI's goal of coherent responses and the social goals of polite interaction.
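The re-ranking finding above can be sketched as a simple select-from-candidates loop: sample several responses from the generator, run each through a separate hedge classifier, and keep the first one that carries the target label. The function names (`generate_candidates`, `classify_hedge`) and the fallback behavior are assumptions for illustration, not the paper's actual API.

```python
from typing import Callable, List

def rerank(
    generate_candidates: Callable[[str, int], List[str]],
    classify_hedge: Callable[[str], str],
    context: str,
    target_label: str,
    n: int = 10,
) -> str:
    """Pick the first generated candidate whose predicted hedging
    strategy matches target_label; otherwise fall back to the
    generator's top-ranked candidate."""
    candidates = generate_candidates(context, n)
    for cand in candidates:
        if classify_hedge(cand) == target_label:
            return cand
    return candidates[0]  # no match: keep the generator's first choice

# Toy stand-ins for a fine-tuned LM and a hedge classifier.
def toy_generate(context: str, n: int) -> List[str]:
    return ["Add 4 to both sides.", "I think you could add 4 to both sides."]

def toy_classify(text: str) -> str:
    return "hedge" if text.startswith("I think") else "direct"

print(rerank(toy_generate, toy_classify, "solve x - 4 = 2", "hedge"))
# -> I think you could add 4 to both sides.
```

The appeal of this design is that the generator only has to be fluent; the decision of *whether* to hedge is delegated to a classifier trained on the labeled tutoring data.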
These results suggest that AI still interprets dialogue narrowly, optimizing for factual accuracy alone. To advance, systems need to better incorporate social intelligence: knowing when to politely hedge a statement based on the human listener's state.
For businesses deploying AI conversational agents, this indicates that current technology has limited ability to manage complex social norms. Engineers should prioritize expanding AI's social capabilities beyond purely informational goals. Teaching human politeness remains an open challenge.