<?xml version="1.0" encoding="UTF-8" ?><!-- generator=Zoho Sites --><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><atom:link href="https://www.nownextlater.ai/Insights/tag/conversational-ai/feed" rel="self" type="application/rss+xml"/><title>Now Next Later AI - Blog #Conversational AI</title><description>Now Next Later AI - Blog #Conversational AI</description><link>https://www.nownextlater.ai/Insights/tag/conversational-ai</link><lastBuildDate>Wed, 26 Nov 2025 21:37:35 +1100</lastBuildDate><generator>http://zoho.com/sites/</generator><item><title><![CDATA[Enhancing AI's Compositional Language Skills]]></title><link>https://www.nownextlater.ai/Insights/post/enhancing-ai-s-compositional-language-skills</link><description><![CDATA[Enhancing AI's Compositional Language Skills]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_Ag9lOtL8TDaPl-p8m7SaIA" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_S7Dlm9VTR92NhgNiuAFiPw" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_5stMruKbRsmF702-Ogmm0Q" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_a1CmfiNpzvnL4RC9yR0LIw" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_a1CmfiNpzvnL4RC9yR0LIw"] .zpimage-container figure img { width: 1090px ; height: 467.34px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_a1CmfiNpzvnL4RC9yR0LIw"] .zpimage-container figure img { width:723px ; height:309.99px ; } } @media (max-width: 767px) { [data-element-id="elm_a1CmfiNpzvnL4RC9yR0LIw"] .zpimage-container figure img { width:415px ; height:177.93px ; } } [data-element-id="elm_a1CmfiNpzvnL4RC9yR0LIw"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-fit zpimage-tablet-fallback-fit zpimage-mobile-fallback-fit hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-08-12%20at%2010.07.55%20am.png" width="415" height="177.93" loading="lazy" size="fit" alt="Extracting a lexicon that relates words to their meanings in each dataset" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_nwsHHNOQTGmo-IdYY3B47w" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_nwsHHNOQTGmo-IdYY3B47w"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-left " data-editor="true"><div style="color:inherit;"><p>A major challenge in artificial intelligence is improving computers' ability to truly comprehend language. Humans readily grasp how the meaning of a sentence depends on the meanings of its component words and how they combine structurally. We intuitively rearrange language components while preserving overall meaning.</p><p><br></p><p>AI systems still struggle with this fluid, compositional reasoning. Mastering it would make conversational AI much more powerful and useful. For example, chatbots could handle varied questions and scenarios if they deeply understood how permutations of known linguistic elements construct meaning.</p><p><br></p><p>To advance AI capabilities in this area, researchers at MIT and IBM recently developed a novel technique called LEXSYM. Their key insight is that compositionality mathematically correlates with symmetries in how language data can be transformed while staying semantically valid.</p><p><br></p><p>For instance, swapping &quot;yellow&quot; and &quot;green&quot; in the sentence &quot;Pick up the yellow cube&quot; maintains its essential meaning. LEXSYM automatically detects such symmetries and uses them to synthesize new training examples by substituting related words and phrases.</p><p><br></p><p>In experiments, neural networks trained with LEXSYM-augmented data showed improved skills in executing new instruction combinations, answering compositional reasoning questions about images, and inferring the logical parse of unfamiliar sentences.</p><p><br></p><p>While limitations remain, LEXSYM provides a promising path toward stronger fluidity, generalization, and human-like compositional abilities in AI systems. As conversational interfaces proliferate, these skills will allow smooth, robust interactions.</p><p><br></p><p>For businesses leveraging AI, enhanced compositional language mastery can significantly increase the capability, utility, and linguistic versatility of chatbots, virtual assistants, recommendation systems, and other applications. LEXSYM offers useful foundations to make these AI agents more conversant, adaptive, and lifelike in communications.</p><div><br>Sources:</div><div><div><span style="color:inherit;"><a href="https://arxiv.org/pdf/2201.12926.pdf" title="LexSym: Compositionality as Lexical Symmetry" rel="">LexSym: Compositionality as Lexical Symmetry</a></span></div></div></div><p></p></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Sat, 12 Aug 2023 10:10:54 +1000</pubDate></item><item><title><![CDATA[Training Smarter AI Systems to Understand Natural Language]]></title><link>https://www.nownextlater.ai/Insights/post/Training-Smarter-AI-Systems-to-Understand-Natural-Language</link><description><![CDATA[Researchers are exploring new techniques to improve AI's ability to grasp diverse sentence structures and indirect meaning.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_1YarWTKxSpWFcYT1yEypiQ" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm__r-n6p0FTsCU2VN0Qht7Yw" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_N-CnlBB4S6GtTyMuX_7gIA" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_k92LbwDhYZdUrScfGjwNLA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_k92LbwDhYZdUrScfGjwNLA"] .zpimage-container figure img { width: 800px ; height: 325.50px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_k92LbwDhYZdUrScfGjwNLA"] .zpimage-container figure img { width:500px ; height:203.44px ; } } @media (max-width: 767px) { [data-element-id="elm_k92LbwDhYZdUrScfGjwNLA"] .zpimage-container figure img { width:500px ; height:203.44px ; } } [data-element-id="elm_k92LbwDhYZdUrScfGjwNLA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-large zpimage-tablet-fallback-large zpimage-mobile-fallback-large hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-08-12%20at%208.43.27%20am.png" width="500" height="203.44" loading="lazy" size="large" alt="The overall framework to construct PARAAMR based on AMR back-translation. " data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_R3yIVftWS3ezwM-jxnW1Uw" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_R3yIVftWS3ezwM-jxnW1Uw"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-left " data-editor="true"><div style="color:inherit;"><p style="text-align:left;">Artificial intelligence has come a long way in understanding human language, but it still struggles with the nuances and complexities of natural conversation. Researchers are exploring new techniques to improve AI's ability to grasp diverse sentence structures and indirect meaning.</p><p style="text-align:left;"><br></p><p>A team at Google, UCLA and USC recently made advances on this challenge by creating a large dataset of syntactically diverse sentence pairs with similar meaning. Their method relies on abstract meaning representations (AMRs).</p><p><br></p><p>AMRs capture the underlying semantics of sentences in a structured graph format. While two sentences can differ significantly in wording and syntax, their AMRs may convey largely the same meaning.</p><p><br></p><p>The researchers leveraged this insight for paraphrasing - generating sentences that communicate the same essence differently. First, they parsed over 15 million sentences into AMR graphs using an existing tool. Next, they systematically modified each graph's &quot;focus&quot; node and direction of connecting edges to reflect alternate ways of expressing the main idea.</p><p><br></p><p>The altered AMR graphs were then decoded back into English sentences. This yielded over 100 million novel paraphrases exhibiting substantial syntactic diversity like changes in word order, structure and focus.</p><p><br></p><p>Through both automatic metrics and human evaluation, the team showed their new corpus called PARAAMR has greater diversity than other popular paraphrasing datasets based on machine translation, while maintaining semantic similarity.</p><p><br></p><p>Unlike translating between languages, the AMR approach reliably preserves meaning without introducing errors. And forcing syntactic variations during decoding prompts more creative expression of ideas.</p><p><br></p><p>The researchers demonstrated PARAAMR's value on three NLP tasks. Using it to train systems for learning sentence embeddings, controlling paraphrase syntax, and low-shot text classification all led to improved performance over other datasets.</p><p><br></p><p>For businesses applying AI, better representing language semantics in machine learning models enables more natural interactions. Conversational systems like chatbots and voice assistants can understand users more precisely without strictly expecting fixed phrases and patterns.</p><p><br></p><p>PARAAMR shows the possibilities of graph-based semantic parsing for AI language understanding. But some limitations remain for real-world deployment:</p><ul><li>Performance depends heavily on upstream parsing and graph-to-text modules. Imperfect components propagate errors.</li><li>Many graph modifications yield unnatural outputs. The team filtered these, but some issues may remain.</li><li>Their English-only approach lacks linguistic and cultural diversity to cover all use cases.</li></ul><p><br></p><p>With smart engineering and expanded training data, AMR-based methods can make conversational AI more flexible and robust. 
By better grasping nuanced human language, systems can communicate more naturally across diverse applications.</p><p><br></p><p>Sources:</p><p><span style="color:inherit;"><a href="https://arxiv.org/pdf/2305.16585.pdf" title="ParaAMR: A Large-Scale Syntactically Diverse Paraphrase Dataset by AMR Back-Translation" rel="">ParaAMR: A Large-Scale Syntactically Diverse Paraphrase Dataset by AMR Back-Translation</a></span></p><p></p></div><p></p></div>
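<div class="zptext zptext-align-left " data-editor="true"><p>For readers who want a feel for the re-rooting step, below is a minimal sketch using the open-source penman library. It is an illustration under assumptions, not the paper's pipeline: the AMR is a standard textbook example, and turning the rotated graphs back into English requires a separate graph-to-text model, indicated only in a comment.</p><pre><code class="language-python">import penman  # pip install penman

# AMR graph for "The boy wants to go." (a standard textbook example)
amr = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
"""

graph = penman.decode(amr)

# Re-encode the same graph with each variable as the new top ("focus").
# Each rotation expresses the same content from a different vantage
# point, e.g. topicalizing the boy or the going event.
rotations = [penman.encode(graph, top=var) for var in graph.variables()]
for rotated in rotations:
    print(rotated, end="\n\n")

# A graph-to-text model would then decode each rotation into a sentence,
# e.g. with the amrlib package (assuming its generator model is installed):
#   import amrlib
#   gtos = amrlib.load_gtos_model()
#   sentences, _ = gtos.generate(rotations)
</code></pre><p>Each rotation keeps the semantics intact while changing which concept the sentence leads with, which is exactly the kind of syntactic diversity described above.</p></div>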
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Sat, 12 Aug 2023 08:46:52 +1000</pubDate></item><item><title><![CDATA[Making Conversational AI More Natural: Helping Systems Understand Indirect References]]></title><link>https://www.nownextlater.ai/Insights/post/making-conversational-ai-more-natural-helping-systems-understand-indirect-references</link><description><![CDATA[Making Conversational AI More Natural: Helping Systems Understand Indirect References]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_5TfRZxwRT3CFPbDWKN0bKA" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_oA957902T6Wqc-GfeFtp0g" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_FyZN0JCMREeIVQPLegkUhw" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_J_ikcM4Ft-ulWjirJHXomg" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_J_ikcM4Ft-ulWjirJHXomg"] .zpimage-container figure img { width: 500px ; height: 341.79px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_J_ikcM4Ft-ulWjirJHXomg"] .zpimage-container figure img { width:500px ; height:341.79px ; } } @media (max-width: 767px) { [data-element-id="elm_J_ikcM4Ft-ulWjirJHXomg"] .zpimage-container figure img { width:500px ; height:341.79px ; } } [data-element-id="elm_J_ikcM4Ft-ulWjirJHXomg"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-08-12%20at%208.15.29%20am.png" width="500" height="341.79" loading="lazy" size="medium" alt="Annotators were shown a cartoon in which they were asked to complete the final step of a conversation." data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_N4kn64LYvsu2o4FYmzMuEg" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_N4kn64LYvsu2o4FYmzMuEg"] .zpimage-container figure img { width: 200px ; height: 143.04px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_N4kn64LYvsu2o4FYmzMuEg"] .zpimage-container figure img { width:200px ; height:143.04px ; } } @media (max-width: 767px) { [data-element-id="elm_N4kn64LYvsu2o4FYmzMuEg"] .zpimage-container figure img { width:200px ; height:143.04px ; } } [data-element-id="elm_N4kn64LYvsu2o4FYmzMuEg"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-small zpimage-tablet-fallback-small zpimage-mobile-fallback-small hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-08-12%20at%208.15.03%20am.png" width="200" height="143.04" loading="lazy" size="small" alt="Actions annotators were encouraged (Do) or discouraged (Don’t) to take for the BOOKS domain." data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_NbHzOM2LTFS1HARt5M9q8g" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_NbHzOM2LTFS1HARt5M9q8g"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-left " data-editor="true"><div style="color:inherit;"><p>Artificial intelligence (AI) has made great strides in recent years, with systems able to hold conversations, answer questions, and make recommendations. However, these systems still struggle with the subtle complexities of natural human language. In particular, when people are choosing between options, they often refer indirectly to their choice rather than using the exact name. For example, when asked &quot;Do you want the chocolate or vanilla ice cream?&quot; someone may respond &quot;I'll have the darker one&quot; rather than saying &quot;chocolate.&quot; Teaching AI systems to understand such indirect references is an important next step to make interactions feel more natural.</p><p><br></p><p>Researchers at Google have developed a new dataset and models to tackle this problem, summarized in a recent paper. Their key innovation was creating a cartoon-style interface to collect natural conversational responses from regular people choosing between two options, such as recipes, books or songs. By framing it as a casual chat between friends looking back on options, they encouraged indirect references like &quot;the one with the green cover&quot; or &quot;the sweeter dessert&quot; rather than using item names directly.</p><p><br></p><p>After collecting a dataset of over 40,000 such indirect references across three categories, they tested different AI models at picking the intended option based on the reference. With no background knowledge beyond the item names, accuracy was just above random guessing. But given relevant textual descriptions of each item, accuracy reached over 80% with the best models. This is promising compared to previous results, but still leaves room for improvement to handle more subtle references.</p><p><br></p><p>The researchers also showed the models can learn general patterns that transfer between categories, rather than just memorizing item-specific clues. So training on books, songs and recipes enabled reasonably good performance on each area without needing new training data. This is important for applying the technology efficiently to new domains.</p><p><br></p><p>For business leaders, this research highlights both the progress and remaining challenges in making AI conversational interfaces feel natural. Indirect references are common in human conversations, so handling them well is key to users' comfort with AI systems. 
These results suggest current AI capabilities could support basic back-and-forth interactions, but with some limitations.</p><p><br></p><p>Looking ahead, there are several opportunities to build on this work:</p><ul><li>Expanding training data to cover more domains, languages and cultural references would make systems more robust.</li><li>Exploring different input modes beyond text, like images, audio and video, could improve understanding of indirect references.</li><li>Better reasoning capabilities would allow AI systems to make inferences about items, rather than relying completely on background knowledge descriptions.</li><li>Retrieval augmented models that proactively gather relevant information could improve disambiguation with limited initial knowledge.</li><li>Decomposing complex references into simpler concepts could enable understanding of indirect comparisons like &quot;the happier song.&quot;</li></ul><p><br></p><p>As conversational systems become integrated into more products and workflows, demand will grow for smooth and natural interactions. Investing in AI advances that unlock more human-like language understanding seems likely to offer strategic value across many industries. While current capabilities are promising, there is still plenty of work needed to truly reach the subtlety and flexibility of human conversation.</p><p><br></p><p>Sources</p><p><span style="color:inherit;"><a href="https://arxiv.org/pdf/2212.10933.pdf" title="Resolving Indirect Referring Expressions for Entity Selection" rel="">Resolving Indirect Referring Expressions for Entity Selection</a></span></p><p></p></div><p></p></div>
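<div class="zptext zptext-align-left " data-editor="true"><p>As a toy illustration of the description-matching setup, below is one minimal approach using off-the-shelf sentence embeddings. It is a sketch under assumptions, not the models evaluated in the paper: the items and descriptions are invented, and a simple cosine-similarity match stands in for the trained rankers.</p><pre><code class="language-python"># pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Candidate items with short textual descriptions (invented examples).
candidates = {
    "chocolate ice cream": "A rich, dark brown ice cream made from cocoa.",
    "vanilla ice cream": "A pale, cream-colored ice cream flavored with vanilla.",
}

def resolve(reference, items):
    """Pick the item whose description best matches an indirect reference."""
    ref_emb = model.encode(reference, convert_to_tensor=True)
    desc_embs = model.encode(list(items.values()), convert_to_tensor=True)
    scores = util.cos_sim(ref_emb, desc_embs)[0]
    return list(items.keys())[int(scores.argmax())]

print(resolve("I'll have the darker one", candidates))
# Expected to pick: chocolate ice cream
</code></pre><p>Embedding similarity handles cues like &quot;darker&quot; only when the descriptions mention them, which mirrors the paper's finding that relevant background text is what lifts accuracy well above chance.</p></div>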
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Sat, 12 Aug 2023 08:22:55 +1000</pubDate></item><item><title><![CDATA[Teaching AI Assistants to Be More Polite]]></title><link>https://www.nownextlater.ai/Insights/post/Teaching-AI-Assistants-to-Be-More-Polite</link><description><![CDATA[New research explores how to make AI conversational agents more polite using a technique called "hedging."]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_qfmIeXgYTp2XBBhhTo6zyQ" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_ouqIZ2c3T5mLXNbwNz_cJA" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_OgTfj3AXRQiE0p_aCj1EwQ" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_MUQiyOlWTPS3lzol7JZpNg" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_MUQiyOlWTPS3lzol7JZpNg"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-center " data-editor="true"><div style="color:inherit;text-align:left;"><div><p>Artificial intelligence is advancing rapidly, with systems like chatbots able to hold increasingly natural conversations. However, they still struggle with fundamental aspects of human interaction like politeness.</p><p><br></p><p>New research from Inria Paris published at a top AI conference explores how to make AI conversational agents more polite using a technique called &quot;hedging.&quot; Hedging softens the impact of a statement by attenuating its strength or certainty. For example, a tutor might say &quot;I think you could add 4 to both sides&quot; rather than the more direct &quot;Add 4 to both sides.&quot;</p><p><br></p><p>Hedging helps avoid embarrassing or frustrating the other person, which is critical in contexts like education. The study analyzed real teen peer tutoring sessions, where hedging occurred frequently. The goal was to train AI tutors to generate hedging appropriately.</p><p><br></p><p>Key findings:</p><ul><li>Fine-tuning large language models alone did not reliably produce appropriate hedges. The models struggled to learn the social context for hedging.</li><li>However, a re-ranking method that screens generated candidates and picks those matching the desired hedging labels significantly improved performance (see the sketch at the end of this post).</li><li>The AI tutors generated linguistically diverse, human-like hedges. But some key social cues for when to hedge were still missed.</li><li>Errors showed an inherent conflict between the AI's goal of coherent responses and the social goals of polite interaction.</li></ul><div><br></div>
<p>This suggests AI systems still interpret dialogue narrowly, optimizing only for factual accuracy. To advance, they need a better grasp of social intelligence: knowing when to politely hedge a statement based on the human listener's state.</p><p><br></p><p>For businesses utilizing AI conversational agents, this indicates current technology has limitations in managing complex social norms. Engineers should prioritize expanding AI social capabilities beyond purely informational goals. Teaching human politeness remains an open challenge.</p><p><br></p><p>Sources:</p><p><a href="https://arxiv.org/pdf/2306.14696.pdf" title="arxiv.org" rel="">arxiv.org</a><br></p><p></p></div></div><p></p></div>
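<div class="zptext zptext-align-left " data-editor="true"><p>To make the generate-then-re-rank idea concrete, below is a minimal sketch under assumptions: a generic language model stands in for the paper's tutoring models, and a keyword heuristic stands in for a hedge classifier trained on annotated dialogue.</p><pre><code class="language-python"># pip install transformers torch
from transformers import pipeline

# Stand-in generator; any conversational language model could fill this role.
generator = pipeline("text-generation", model="gpt2")

def generate_candidates(prompt, n=5):
    """Sample several candidate tutor responses."""
    outputs = generator(prompt, max_new_tokens=30, num_return_sequences=n,
                        do_sample=True, pad_token_id=50256)
    return [o["generated_text"][len(prompt):].strip() for o in outputs]

def is_hedged(text):
    """Toy hedge detector based on common hedging cues.
    A real system would use a classifier trained on labeled dialogue."""
    cues = ("i think", "maybe", "perhaps", "you could", "might")
    return any(cue in text.lower() for cue in cues)

def respond(prompt, want_hedge=True):
    """Re-rank: prefer candidates whose hedge label matches the goal."""
    candidates = generate_candidates(prompt)
    matching = [c for c in candidates if is_hedged(c) == want_hedge]
    return matching[0] if matching else candidates[0]

print(respond("Tutor: to solve x - 4 = 10, "))
</code></pre><p>The re-ranking happens after generation, which is why it can impose a social constraint that the underlying model did not learn to satisfy on its own.</p></div>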
</div><div data-element-id="elm_55or6F7eNgC2KeTFtYl_rA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_55or6F7eNgC2KeTFtYl_rA"] .zpimage-container figure img { width: 800px ; height: 344.00px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_55or6F7eNgC2KeTFtYl_rA"] .zpimage-container figure img { width:500px ; height:215.00px ; } } @media (max-width: 767px) { [data-element-id="elm_55or6F7eNgC2KeTFtYl_rA"] .zpimage-container figure img { width:500px ; height:215.00px ; } } [data-element-id="elm_55or6F7eNgC2KeTFtYl_rA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-large zpimage-tablet-fallback-large zpimage-mobile-fallback-large "><figure role="none" class="zpimage-data-ref"><a class="zpimage-anchor" href="/aibooks" target="" rel=""><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Untitled%20design%20-4-.png" width="500" height="215.00" loading="lazy" size="large" alt="AI Books for Professionals"/></picture></a></figure></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Thu, 10 Aug 2023 07:52:22 +1000</pubDate></item></channel></rss>