<?xml version="1.0" encoding="UTF-8" ?><!-- generator=Zoho Sites --><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><atom:link href="https://www.nownextlater.ai/Insights/tag/disentqa/feed" rel="self" type="application/rss+xml"/><title>Now Next Later AI - Blog #DisentQA</title><description>Now Next Later AI - Blog #DisentQA</description><link>https://www.nownextlater.ai/Insights/tag/disentqa</link><lastBuildDate>Wed, 26 Nov 2025 21:38:30 +1100</lastBuildDate><generator>http://zoho.com/sites/</generator><item><title><![CDATA[DisentQA: Catching Knowledge Gaps and Avoiding Misleading Users]]></title><link>https://www.nownextlater.ai/Insights/post/enabling-ai-to-untangle-different-knowledge-sources</link><description><![CDATA[Building QA Systems that catch knowledge gaps and avoid misleading users.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_ewF7pMN9Q_eczUOQpCYtUA" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_XdQfIANyTi-5Z3w2LSGv-A" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_tGWqJgjLSlyldj1XkXMcGw" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_KipIDvLOVMb6oIC8bF9TkA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_KipIDvLOVMb6oIC8bF9TkA"] .zpimage-container figure img { width: 500px ; height: 486.01px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_KipIDvLOVMb6oIC8bF9TkA"] .zpimage-container figure img { width:500px ; height:486.01px ; } } @media (max-width: 767px) { 
[data-element-id="elm_KipIDvLOVMb6oIC8bF9TkA"] .zpimage-container figure img { width:500px ; height:486.01px ; } } [data-element-id="elm_KipIDvLOVMb6oIC8bF9TkA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-08-12%20at%209.09.37%20am.png" width="500" height="486.01" loading="lazy" size="medium" alt="Example outputs from our disentangled QA model on the Natural Questions dataset. " data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_TNbKqQ17TP256B60EqRP7w" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_TNbKqQ17TP256B60EqRP7w"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-left " data-editor="true"><div style="color:inherit;"><div style="color:inherit;"><p>Imagine you ask your phone &quot;Who wrote the song Hello by Adele?&quot; and it confidently, and incorrectly, insists the song is by Taylor Swift. Mistakes like this happen because artificial intelligence sometimes confuses its own training knowledge with the external facts it is given.</p><p><br></p><p>Researchers want to fix this issue to make AI assistants more helpful and honest. Their solution: <span style="color:inherit;">build QA systems that catch knowledge gaps and avoid misleading users by </span>teaching the system to provide two answers to every question:</p><ol><li>The contextual answer, based on the information it is given (e.g. Adele)</li><li>The parametric answer: what it recalls from its own memory (e.g. Taylor Swift)</li></ol><p><br></p><p>Producing both highlights any mismatch between the model's training knowledge and the external data. It's like when we say &quot;Hmm, I thought X, but the website says Y.&quot;</p><p><br></p><p>The team trained the model by creating quizzes with tricky counterfactual examples:</p><ul><li>Swapping names in passages so the context disagrees with the model's recollection</li><li>Removing passages altogether so the system must say &quot;I don't know&quot;</li></ul><p><br></p><p>After this special training, the model reliably distinguished its own knowledge from the facts it was given, improving both its accuracy and its truthfulness.</p><p><br></p><p>Say you ask about a movie release date. The system can now respond:</p><p><span style="font-style:italic;">&quot;The article says July 2022. 
But I thought it was December 2022.&quot;</span></p><p><br></p><p>This flags the knowledge gap instead of silently misleading the user.</p><p><br></p><p>While not perfect, it's major progress toward AI that collaborates in a transparent, helpful manner. The benefits for businesses are clear:</p><ul><li>Avoid frustrating users with incorrect responses</li><li>Build trust by exposing limitations upfront</li><li>Reduce risk from applying flawed knowledge</li><li>Clarify when external data should override internal beliefs</li></ul><p><br></p><p>By recognizing and sharing when its knowledge is incomplete, the AI becomes a more reliable and honest partner. This research brings us closer to truly cooperative human-AI interaction.</p><p><br></p><p>Sources:</p><p><span style="color:inherit;"><a href="https://arxiv.org/pdf/2211.05655.pdf" title="DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering" rel="">DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering</a></span></p><p></p></div>
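To make the two training tricks concrete, here is a minimal Python sketch of how such counterfactual quiz items could be built: one function swaps the answer entity in the passage, the other removes the passage entirely, and both keep a dual-answer target (contextual first, parametric second). Function names, field names, and the target format are illustrative assumptions, not the paper's actual code.

```python
def make_counterfactual(example, substitute_entity):
    """Swap the gold answer entity in the passage for a different one,
    so the context now disagrees with the model's memorised answer."""
    context = example["context"].replace(example["answer"], substitute_entity)
    return {
        "question": example["question"],
        "context": context,
        # Dual training target: answer from the passage, then from memory.
        "target": f"contextual: {substitute_entity} | parametric: {example['answer']}",
    }

def make_unanswerable(example):
    """Drop the passage entirely: the contextual answer must become
    'unanswerable', while the parametric answer can still be recalled."""
    return {
        "question": example["question"],
        "context": "",
        "target": f"contextual: unanswerable | parametric: {example['answer']}",
    }

original = {
    "question": "Who wrote the song Hello?",
    "context": "Hello is a song recorded by Adele.",
    "answer": "Adele",
}

print(make_counterfactual(original, "Taylor Swift")["target"])
# contextual: Taylor Swift | parametric: Adele
print(make_unanswerable(original)["target"])
# contextual: unanswerable | parametric: Adele
```

A model fine-tuned on such pairs learns to emit both answers, so a disagreement between them surfaces exactly the &quot;I thought X, but the article says Y&quot; behaviour described above.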
</div><p></p></div></div></div></div></div></div></div> ]]></content:encoded><pubDate>Sat, 12 Aug 2023 09:22:46 +1000</pubDate></item></channel></rss>