<?xml version="1.0" encoding="UTF-8" ?><!-- generator=Zoho Sites --><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><atom:link href="https://www.nownextlater.ai/Insights/tag/reasoning/feed" rel="self" type="application/rss+xml"/><title>Now Next Later AI - Blog #Reasoning</title><description>Now Next Later AI - Blog #Reasoning</description><link>https://www.nownextlater.ai/Insights/tag/reasoning</link><lastBuildDate>Wed, 26 Nov 2025 21:23:18 +1100</lastBuildDate><generator>http://zoho.com/sites/</generator><item><title><![CDATA[Enhancing AI with Symbolic Thinking]]></title><link>https://www.nownextlater.ai/Insights/post/enhancing-ai-with-symbolic-thinking</link><description><![CDATA[Researchers are exploring how to combine LLMs with neurosymbolic methods that incorporate logical reasoning and structure.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_92MOykrxQKud1bMa7iJXXQ" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_U54X55u7TYirdWrR1rLrvA" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_x5U3oKbOR3K_vjQPTrwGfg" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_5XLnBfPx_RXLzPyqbU7ksA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_5XLnBfPx_RXLzPyqbU7ksA"] .zpimage-container figure img { width: 500px ; height: 710.69px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_5XLnBfPx_RXLzPyqbU7ksA"] .zpimage-container figure img { width:500px ; height:710.69px ; } } @media (max-width: 767px) { 
[data-element-id="elm_5XLnBfPx_RXLzPyqbU7ksA"] .zpimage-container figure img { width:500px ; height:710.69px ; } } [data-element-id="elm_5XLnBfPx_RXLzPyqbU7ksA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium hb-lightbox " data-lightbox-options="
                type:fullscreen,
theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-08-10%20at%206.30.11%20pm.png" width="500" height="710.69" loading="lazy" size="medium" alt="Given facts, rules, and a question all expressed in natural language, ProofWriter answers the question and generates a proof of the answer." data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_s3YK1MdKTt-gsY1xAsJt_w" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_s3YK1MdKTt-gsY1xAsJt_w"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-left " data-editor="true"><div style="color:inherit;"><div style="color:inherit;"><p>The rapid progress of artificial intelligence over the past decade owes much to a class of algorithms called transformer neural networks. Transformers gave rise to large language models (LLMs) like GPT-4 or Claude 2 that display impressive natural language abilities.</p><p><br></p><p>But as AI becomes more integrated into business processes, sole reliance on data-driven machine learning approaches like transformers may prove limiting. Researchers are exploring how to combine these powerful statistical models with neurosymbolic methods that incorporate logical reasoning and structure.</p><p><br></p><p>The result could be AI systems that blend raw pattern recognition power with human-like compositional generalization and interpretability.</p><p><br></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;font-size:16px;">The Rise of Large Language Models</span></p><p><br></p><p>Much of the current excitement around AI stems from the advances of LLMs over the past few years. Models like GPT-3, PaLM, and Google's LaMDA have shown the ability to generate human-like text, answer questions, and accomplish tasks from basic prompts.</p><p><br></p><p>LLMs owe their abilities to a neural network architecture called transformers. Transformers process text more holistically than previous recurrent neural networks. They capture long-range dependencies in language by attending to all words in a context.</p><p><br></p><p>Training transformers on massive text corpora like the internet produces universal language models. 
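The attention computation described above can be sketched in a few lines. This is a toy, single-head version on random vectors, omitting the learned projection matrices, multiple heads, and masking of real transformers:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each token attends to every token in the context, which is how
    transformers capture long-range dependencies in a single step."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the context
    return weights @ V                              # weighted mix of all values

# Toy example: 4 tokens with 8-dimensional embeddings, using the same
# vectors as queries, keys, and values (i.e. self-attention).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8): each output row mixes information from all 4 tokens
```

The key property is visible in the final weighted sum: every output row is a mixture of all token values, so distant words influence each other directly rather than through a recurrent chain.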
With enough data and compute, these models learn statistical representations that prove surprisingly versatile for language tasks.</p><p><br></p><p>Finetuning techniques allow LLMs to be specialized for specific applications by updating the models on task data. For example, a finetuned GPT-3 model can be adapted into a conversational chatbot or a code completion tool.</p><p><br></p><p>The broad capabilities of LLMs, along with their ease of use via prompting, have led to widespread adoption. Startups like Anthropic and Cohere are commercializing LLMs for business use cases. Apps built on LLMs range from automating customer support to generating content to synthesizing code.</p><p><br></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;font-size:16px;">Limits of Language Models</span></p><p></p><p><br></p><p>But for all their progress, LLMs still suffer from key limitations. Most notably, they display limited compositional generalization outside the distribution of their training data. For example, an LLM trained on English text will struggle with novel sentence structures or made-up words.</p><p><br></p><p>Humans seamlessly compose known concepts into new combinations thanks to our intuitive understanding of language syntax and meaning. Neural networks have no such innate symbolic reasoning capabilities.</p><p><br></p><p>LLMs are also black boxes. They can generate plausible and useful text or code but offer no interpretable justification for their outputs. This lack of interpretability makes it hard to audit models or identify causes of failures.</p><p><br></p><p>Finally, the massive scale of data and compute required to train LLMs makes them environmentally costly. 
Reducing data requirements and model sizes would allow much wider deployment of AI technology.</p><p><br></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;font-size:16px;">Integrating Symbolic Representations</span></p><p></p><p><br></p><div style="color:inherit;"><p>To overcome the limits of language models, researchers are finding ways to incorporate more logical reasoning. The aim is to complement statistical learning with capabilities closer to human understanding.</p><p><br></p><p>One approach injects structured knowledge representations into the training process. For example, some methods jointly train the language model with a knowledge graph. <span style="color:inherit;">Knowledge graphs are data structures that represent facts as networks of entities and relationships. They encode real-world knowledge in a machine-readable graph format, with nodes for entities like people and edges for relationships like &quot;employed at&quot;. This allows computers to automatically reason over millions of interconnected facts. Knowledge graphs help power many AI applications today, including search, recommendations, and question answering. </span>The knowledge graph acts like a symbolic memory bank to improve reasoning.</p><p><br></p><p>Other techniques draw inspiration from classic logic programming languages like Prolog. These languages represent knowledge as human-readable rules. By integrating such rules into training, researchers aim to bake in more systematic symbolic thinking.</p><p><br></p><p>Researchers are also finding ways to refine and check language model outputs using logical constraints. For instance, one approach runs generated text through separate logic rules as an extra plausibility filter on top of the learned statistical patterns.</p><p><br></p><p>In each case, the goal is to guide, restrict, and enhance the pattern-finding abilities of language models with more deliberate symbolic reasoning. 
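To make these ingredients concrete, here is a toy sketch that combines them: facts stored as knowledge-graph-style triples, a single Prolog-inspired rule applied by forward chaining, and a logical plausibility check on a candidate claim. All entities, relations, and rules below are invented for illustration; real systems operate on graphs with millions of edges.

```python
# Facts as knowledge-graph triples: (relation, subject, object).
facts = {("employed_at", "alice", "acme"), ("located_in", "acme", "sydney")}

# A Prolog-style rule: if all premises hold under some variable binding,
# the conclusion holds too. Variables are strings starting with "?".
rules = [
    # employed_at(P, C) and located_in(C, L)  =>  works_in(P, L)
    ([("employed_at", "?p", "?c"), ("located_in", "?c", "?l")],
     ("works_in", "?p", "?l")),
]

def match(pattern, fact, env):
    """Try to unify a pattern with a fact; return the extended binding or None."""
    env = dict(env)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if env.setdefault(p, f) != f:   # variable already bound differently
                return None
        elif p != f:                        # constant mismatch
            return None
    return env

def substitute(term, env):
    return tuple(env.get(t, t) for t in term)

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            envs = [{}]
            for prem in premises:           # join premises over all known facts
                envs = [e2 for e in envs for f in facts
                        if (e2 := match(prem, f, e)) is not None]
            for env in envs:
                new = substitute(conclusion, env)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

derived = forward_chain(facts, rules)

# Plausibility filter: accept a candidate claim only if it is entailed.
claim = ("works_in", "alice", "sydney")
print(claim in derived)  # True: the claim follows from the facts and the rule
```

In a neurosymbolic pipeline, the candidate claim would come from the language model rather than being hand-written, and the filter would accept it only if the symbolic knowledge entails it.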
Just like humans blend intuitive thinking with logic, the hope is to achieve AI systems that integrate learned statistical correlations with structured symbolic representations.</p><p><br></p><p>The end result could be models that display more generalized reasoning abilities, while also producing outputs we can audit, validate, and explain.</p></div><p><br></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;font-size:16px;">Towards Hybrid Intelligence</span></p><p></p><p><br></p><p>Ultimately, the aim is to achieve hybrid systems that integrate the complementary strengths of neural and symbolic AI. Some <span style="color:inherit;">researchers</span> argue that intelligence emerges from the interplay of two mechanisms:</p><ul><li>Correlation-based pattern recognition that is data-driven and associative.</li><li>Model-based compositional generalization relying on structured representations and explicit rules.</li></ul><p><br></p><p>Large transformer networks excel at the former, while neurosymbolic methods specialize in the latter. Combining these two modes of reasoning could thus give rise to more human-like artificial intelligence.</p><p><br></p><p>The business implications of such hybrid AI systems are far-reaching. Logical components would allow verifying conclusions, checking ethical compliance, and generating step-by-step explanations. Incorporating domain constraints would reduce data needs and may lead to safer and less environmentally costly systems.</p><p><br></p><p>At the same time, retaining differentiable components preserves versatility, allows critiquing and updating symbolic knowledge, and facilitates integration with downstream machine learning tasks.</p><p><br></p><p>Realizing this vision of integrated reasoning poses research challenges. Tradeoffs exist between symbolic interpretability and neural flexibility. Multi-component systems risk bottlenecks that limit end-to-end learning. 
Architectures that let gradients flow across reasoning layers may be needed.</p><p><br></p><p>Nonetheless, the potential payoff for deployable, ethical, and broadly capable AI merits investment in these hybrid systems. Given the enthusiasm around LLMs today, injecting connections to symbolic reasoning could be a crucial next step in fulfilling their promise while mitigating risks.</p><p><br></p><p><span style="color:inherit;">Blending logical rule-based reasoning with modern neural networks could create more capable and reliable AI systems. This combination of human-like symbolic thinking and data-driven pattern recognition represents an exciting path forward. The result may be AI that better aligns with human intelligence in terms of adaptability, efficiency, and trustworthiness.</span></p><p><span style="color:inherit;"><br></span></p><p><span style="color:inherit;">Sources:</span></p><div style="color:inherit;"><a href="https://arxiv.org/abs/2205.11916" title="Constraining large language models with logic." rel="">Constraining large language models with logic</a><br></div><div style="color:inherit;"><a href="https://arxiv.org/abs/2302.07819" title="Neurologic decoding improves logical consistency of text generated by large language models." rel="">Neurologic decoding improves logical consistency of text generated by large language models</a></div><div style="color:inherit;"><a href="https://arxiv.org/abs/2305.13179" title="Teaching transformers to systematically reason with differentiable logic." rel="">Teaching transformers to systematically reason with differentiable logic</a></div><div style="color:inherit;"><a href="https://arxiv.org/abs/2012.13048" title="ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language." 
rel="">ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language</a></div><div style="color:inherit;"><a href="https://arxiv.org/abs/1909.03193" title="KG-BERT: BERT for knowledge graph completion." rel="">KG-BERT: BERT for knowledge graph completion</a></div><div style="color:inherit;"><a href="https://arxiv.org/abs/2305.13179" title="Neuro-symbolic concept learner: Discovering objects and their properties." rel="">Neuro-symbolic concept learner: Discovering objects and their properties</a><br></div><p></p></div></div><p></p></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Thu, 10 Aug 2023 18:34:26 +1000</pubDate></item></channel></rss>