<?xml version="1.0" encoding="UTF-8" ?><!-- generator=Zoho Sites --><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><atom:link href="https://www.nownextlater.ai/Insights/gen-ai-governance-and-risk-management/feed" rel="self" type="application/rss+xml"/><title>Now Next Later AI - Blog , Gen AI Governance and Risk Management</title><description>Now Next Later AI - Blog , Gen AI Governance and Risk Management</description><link>https://www.nownextlater.ai/Insights/gen-ai-governance-and-risk-management</link><lastBuildDate>Wed, 26 Nov 2025 21:23:10 +1100</lastBuildDate><generator>http://zoho.com/sites/</generator><item><title><![CDATA[The Evolving AI Policy Landscape: Key Developments for Business Leaders]]></title><link>https://www.nownextlater.ai/Insights/post/the-evolving-ai-policy-landscape-key-developments-for-business-leaders</link><description><![CDATA[<img align="left" hspace="5" src="https://www.nownextlater.ai/Screenshot 2024-04-29 at 11.43.40 am.png"/>We explore the rapidly evolving AI policy landscape, with a special focus on the significant policy events of 2023 and the state of AI regulation in the United States and European Union.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_YKETwGUpSAqcXXAgSGpiKw" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_nBxQaAV_Q3WtcEndfXZsbQ" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_4rzg-RYtT1aGKwVIPOJNqA" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_LileO9eAebMY8y9IrPR06w" data-element-type="image" class="zpelement zpelem-image "><style> @media 
(min-width: 992px) { [data-element-id="elm_LileO9eAebMY8y9IrPR06w"] .zpimage-container figure img { width: 800px ; height: 432.66px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_LileO9eAebMY8y9IrPR06w"] .zpimage-container figure img { width:500px ; height:270.41px ; } } @media (max-width: 767px) { [data-element-id="elm_LileO9eAebMY8y9IrPR06w"] .zpimage-container figure img { width:500px ; height:270.41px ; } } [data-element-id="elm_LileO9eAebMY8y9IrPR06w"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-large zpimage-tablet-fallback-large zpimage-mobile-fallback-large hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202024-04-29%20at%2011.43.40%E2%80%AFam.png" width="500" height="270.41" loading="lazy" size="large" alt="AI Regulation by Approach: Expansive vs Restrictive" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_XKehSJiuQ5itfxQBv_9XPg" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_XKehSJiuQ5itfxQBv_9XPg"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-center " data-editor="true"><div style="color:inherit;text-align:left;"><div style="color:inherit;text-align:left;">The <a href="https://aiindex.stanford.edu/report/" title="2024 AI Index Report" rel="">2024 AI Index Report</a> from the Stanford Institute for Human-Centered Artificial Intelligence (HAI) provides a comprehensive overview of the AI landscape. In a series of articles, we highlight key findings of the report, focusing on trends and insights that are particularly relevant for business leaders. <br></div><div style="color:inherit;text-align:left;"><br><p>In this article, we'll explore the rapidly evolving AI policy landscape, with a special focus on the significant policy events of 2023 and the state of AI regulation in the United States and European Union. As AI technologies continue to advance and permeate nearly every sector of the economy, it is crucial for business leaders to stay informed about the policy developments shaping the future of AI governance.</p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;"><br></span></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Key AI Policy Developments in 2023&nbsp;</span></p><p><br></p><p>The year 2023 witnessed a flurry of AI policy activity across the globe, reflecting policymakers' growing recognition of the need to regulate AI and harness its transformative potential. Some of the most notable policy events included:</p><ol><li>U.S. Executive Order on AI: In October 2023, President Biden issued an executive order establishing new benchmarks for AI safety, security, privacy protection, and the advancement of equity and civil rights. 
The order mandated the creation of guidelines and best practices to support the development and deployment of secure, reliable, and ethical AI.</li><li>EU AI Act: In December 2023, European lawmakers reached a tentative deal on the AI Act, a landmark piece of legislation that establishes a risk-based regulatory framework for AI. The act prohibits systems with unacceptable risks, classifies high-risk systems, and subjects generative AI to transparency standards.</li><li>China's AI Regulations: China introduced regulations aimed at &quot;deep synthesis&quot; technology to tackle security issues related to the creation of realistic virtual entities and multimodal media. The country also updated its measures on the cyberspace administration of generative AI, adopting a more targeted regulatory approach.</li><li>U.K. AI Safety Initiatives: The U.K. hosted the AI Safety Summit and announced the establishment of the world's first government-supported AI Safety Institute. These initiatives aim to address AI risks, promote global cooperation, and position the U.K. as a leader in AI safety research.</li></ol><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">The State of AI Regulation in the U.S. and EU&nbsp;</span></p><p><br></p><p>Both the United States and European Union have seen a significant increase in AI-related regulations in recent years. In the U.S., the number of AI regulations rose from just one in 2016 to 25 in 2023, with a 56.3% increase in the last year alone. Similarly, the EU passed 32 AI-related regulations in 2023, up from 22 in 2022.</p><p><br></p><p>In the U.S., AI regulations are increasingly being issued by a broader array of regulatory agencies. In 2023, 21 agencies issued AI regulations, compared to 17 in 2022. The agencies leading the charge include the Executive Office of the President, the Department of Commerce, the Department of Health and Human Services, and the Bureau of Industry and Security. 
Notably, there has been a shift toward more restrictive AI regulations in the U.S., with 10 restrictive regulations in 2023 compared to just three expansive ones.</p><p><br></p><p>In the EU, the Council of the European Union and the European Parliament have been the most active in issuing AI regulations. Unlike the U.S., the EU has seen a trend toward more expansive AI regulations, with 12 expansive regulations in 2023 compared to eight restrictive ones. The most common subject matters for EU AI regulations in 2023 were science, technology, and communications, followed by government operations and politics.</p><p><br></p><p>For business leaders, the increasing volume and complexity of AI regulations highlight the need for proactive engagement with policymakers and regulatory bodies. Businesses must closely monitor the regulatory landscape, provide input on proposed regulations, and ensure that their AI systems and practices align with emerging standards and guidelines. By staying ahead of the regulatory curve, businesses can not only mitigate compliance risks but also position themselves as leaders in responsible AI adoption.</p></div></div><p style="text-align:left;"></p></div>
</div><div data-element-id="elm_vD0JazC2EE7xXPdz8UtrtA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_vD0JazC2EE7xXPdz8UtrtA"] .zpimage-container figure img { width: 500px ; height: 500.00px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_vD0JazC2EE7xXPdz8UtrtA"] .zpimage-container figure img { width:500px ; height:500.00px ; } } @media (max-width: 767px) { [data-element-id="elm_vD0JazC2EE7xXPdz8UtrtA"] .zpimage-container figure img { width:500px ; height:500.00px ; } } [data-element-id="elm_vD0JazC2EE7xXPdz8UtrtA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium "><figure role="none" class="zpimage-data-ref"><a class="zpimage-anchor" href="/responsible-ai-in-the-age-of-generative-models-ai-governance-ethics-and-risk-management" target="" rel=""><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/8.png" width="500" height="500.00" loading="lazy" size="medium" alt="AI Ethics Books for Leaders"/></picture></a></figure></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Mon, 29 Apr 2024 11:45:46 +1000</pubDate></item><item><title><![CDATA[The Top 10 Risks Business Leaders Need to Know About Large Language Models]]></title><link>https://www.nownextlater.ai/Insights/post/the-top-10-risks-business-leaders-need-to-know-about-large-language-models</link><description><![CDATA[<img align="left" hspace="5" src="https://www.nownextlater.ai/1697599021768.jpeg"/>Recently, the Open Web Application Security Project (OWASP), a leading authority on cybersecurity, released their list of the Top 10 security risks for LLM applications. Here is what every executive should know about these critical LLM vulnerabilities.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_WSL7nXQ9SwGjiikEeJ1PGQ" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_0oHwNQrTSKeHuGlFD8nbqQ" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_T83VkOeqStSji-4xWuFbtw" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_dmMIJ43fKdKDQ2Me-6u85w" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_dmMIJ43fKdKDQ2Me-6u85w"] .zpimage-container figure img { width: 1090px ; height: 613.13px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_dmMIJ43fKdKDQ2Me-6u85w"] .zpimage-container figure img { width:723px ; height:406.69px ; } } @media (max-width: 767px) { [data-element-id="elm_dmMIJ43fKdKDQ2Me-6u85w"] .zpimage-container figure img { width:415px ; height:233.44px ; } } [data-element-id="elm_dmMIJ43fKdKDQ2Me-6u85w"].zpelem-image { border-radius:1px; } 
</style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-fit zpimage-tablet-fallback-fit zpimage-mobile-fallback-fit hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/1697599021768.jpeg" width="415" height="233.44" loading="lazy" size="fit" alt="OWASP Top 10 for LLM Apps" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_GurYT3FqQQCsM7DOfGAsKQ" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_GurYT3FqQQCsM7DOfGAsKQ"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-center " data-editor="true"><div style="color:inherit;text-align:left;"><p style="font-weight:400;text-indent:0px;">The rapid rise of AI-powered chatbots like ChatGPT is transforming how businesses operate and engage with customers. These systems, built on large language models (LLMs) trained on massive datasets, offer exciting new capabilities—from generating human-like text to powering interactive virtual assistants. However, as with any powerful new technology, LLMs also introduce new risks that business leaders need to understand and mitigate.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">Recently, the Open Web Application Security Project (OWASP), a leading authority on cybersecurity, released its list of the <a href="https://www.llmtop10.com/" title="Top 10 security risks for LLM applications" rel="">Top 10 security risks for LLM applications</a>. Here is what every executive should know about these critical LLM vulnerabilities:</p><p style="font-weight:400;text-indent:0px;"></p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;"><span style="font-family:&quot;Oswald&quot;, sans-serif;font-size:20px;color:rgb(41, 77, 135);">The OWASP Top 10 Risks for LLM Applications</span></p><p style="font-weight:400;text-indent:0px;"></p><div style="color:inherit;"><ol><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Prompt Injection:</span><span style="color:rgb(41, 77, 135);"></span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;"> Attackers can manipulate the LLM to execute unintended actions by &quot;injecting&quot; malicious instructions.
This could lead to data theft, privilege escalation, and more.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Insecure Output Handling:</span><span style="color:rgb(41, 77, 135);">&nbsp;</span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;"><span></span>If an application blindly accepts LLM outputs without proper validation, it exposes backend systems to potential exploits like cross-site scripting (XSS) attacks.</span></span></li><li><span><span style="font-size:16px;"><span style="color:rgb(41, 77, 135);">Training Data Poisoning: </span></span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;">LLMs are only as good as their training data. Manipulation of training datasets can introduce harmful biases or vulnerabilities, or enable backdoor access.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Model Denial of Service: </span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;">Resource-intensive LLM operations triggered by attackers can degrade system performance and drive up computing costs.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Supply Chain Vulnerabilities:</span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;"> Compromised data, models, or components anywhere in the complex LLM development lifecycle introduce risks.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Sensitive Information Disclosure: </span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;">LLMs may inadvertently reveal confidential data in generated outputs, violating data privacy.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Insecure Plugin Design:</span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;"> Extensible LLM plugins
with poor input validation or access control are easier for attackers to exploit.</span></span></li><li><span><span style="font-size:16px;"><span style="color:rgb(41, 77, 135);">Excessive Agency:</span></span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;"> Granting an LLM too much functionality, autonomy, or privilege amplifies the impact of any vulnerabilities.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Overreliance: </span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;">Uncritically trusting LLM outputs without human oversight can propagate misinformation, bias, and security issues at scale.</span></span></li><li><span><span style="font-size:16px;color:rgb(41, 77, 135);">Model Theft:</span></span><span style="color:inherit;"><span style="font-weight:400;text-indent:0px;"> Exfiltration of proprietary LLM models is a threat to intellectual property and can enable reverse engineering of sensitive training data.</span></span></li></ol></div><p style="font-weight:400;text-indent:0px;"></p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;"><span style="font-family:&quot;Oswald&quot;, sans-serif;font-size:20px;color:rgb(41, 77, 135);">Key Takeaways for Business Leaders</span></p><p style="font-weight:400;text-indent:0px;"></p><ul><li>Conduct a thorough risk assessment and threat modeling exercise before deploying any LLM application. Understand your organization's specific threat landscape.</li><li>Ensure strong access controls, monitoring, and security safeguards are in place across the entire LLM lifecycle—from initial model training to production deployment.</li><li>Establish clear policies and staff training around responsible LLM use. Humans should remain in the loop for high-stakes decisions.</li><li>Evaluate the security practices of any vendors or third-party LLM components.
The security of your LLM application is only as strong as its weakest link.</li><li>Keep abreast of this rapidly evolving risk landscape. Follow OWASP and other leading voices in AI security research to stay current on emerging LLM threats and countermeasures.</li></ul><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">The potential of large language models is immense—but so are the risks they pose if not properly understood and mitigated. By taking proactive steps to address the OWASP Top 10 LLM risks, business leaders can harness the power of this transformative technology more securely and strategically. After all, responsible stewardship of AI systems is quickly becoming a core business imperative.</p></div><p></p></div>
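Two of the OWASP risks above, prompt injection (No. 1) and insecure output handling (No. 2), can be made concrete with a minimal sketch. The helper below is a hypothetical illustration, not an OWASP reference implementation: it treats model output as untrusted input and HTML-escapes it before rendering, so injected markup cannot execute in a user's browser.

```python
import html

def render_llm_output(raw: str) -> str:
    """HTML-escape LLM-generated text before embedding it in a page.

    LLM output is untrusted: a prompt-injection attack can coax the model
    into emitting active markup, so it must be neutralized exactly like
    any other user-supplied input.
    """
    return html.escape(raw)

# A malicious completion an attacker might elicit via prompt injection:
payload = '<script>steal(document.cookie)</script>'
print(render_llm_output(payload))
# -> &lt;script&gt;steal(document.cookie)&lt;/script&gt; (inert text, not code)
```

In practice a templating engine with auto-escaping, or a sanitizer that allow-lists specific tags, serves the same purpose; the point is simply that model output should never reach a browser or backend unvalidated.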
</div><div data-element-id="elm_lkdBDpf4VamhQKc_rPP4_A" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_lkdBDpf4VamhQKc_rPP4_A"] .zpimage-container figure img { width: 500px ; height: 500.00px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_lkdBDpf4VamhQKc_rPP4_A"] .zpimage-container figure img { width:500px ; height:500.00px ; } } @media (max-width: 767px) { [data-element-id="elm_lkdBDpf4VamhQKc_rPP4_A"] .zpimage-container figure img { width:500px ; height:500.00px ; } } [data-element-id="elm_lkdBDpf4VamhQKc_rPP4_A"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium "><figure role="none" class="zpimage-data-ref"><a class="zpimage-anchor" href="/responsible-ai-in-the-age-of-generative-models-ai-governance-ethics-and-risk-management" target="" rel=""><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Navy%20and%20Blue%20Modern%20We%20Provide%20Business%20Solutions%20Facebook%20Ad%20-1200%20x%201200%20px-.png" width="500" height="500.00" loading="lazy" size="medium" alt="AI Governance Books for Leaders"/></picture></a></figure></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Thu, 04 Apr 2024 16:21:55 +1100</pubDate></item><item><title><![CDATA[Generative AI in Enterprises: Gartner's Survey Unveils Opportunities and Risks]]></title><link>https://www.nownextlater.ai/Insights/post/generative-ai-in-enterprises-gartner-s-survey-unveils-opportunities-and-risks</link><description><![CDATA[A new survey from Gartner has found that the availability of generative AI systems like ChatGPT is quickly becoming a top concern for enterprise risk management.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_j5demmQORoOF4Wg784Iazg" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_1dJ36n6sQtei3VyaSuIQfQ" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_VFXoZ9k7RTOGRPtyOd11VA" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA"] .zpimage-container figure img { width: 500px ; height: 223.54px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA"] .zpimage-container figure img { width:500px ; height:223.54px ; } } @media (max-width: 767px) { [data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA"] .zpimage-container figure img { width:500px ; height:223.54px ; } } [data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" 
class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-08-10%20at%207.29.18%20am.png" width="500" height="223.54" loading="lazy" size="medium" alt="top five most cited emerging risks Q2 2023" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_-7NthNw2SzOzETHrVXfwrg" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_-7NthNw2SzOzETHrVXfwrg"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-left " data-editor="true"><div style="color:inherit;"><p style="font-size:16px;font-weight:400;text-indent:0px;"><strong style="font-weight:600;"></strong></p><div style="color:inherit;"><p>A new survey from Gartner has found that the availability of generative AI systems like ChatGPT is quickly becoming a top concern for enterprise risk management. Out of 249 senior risk executives surveyed in Q2 2023, 66% cited generative AI as an emerging risk needing attention.</p><p><br></p><p>This reflects the rapid mainstreaming of AI systems that can generate original text, images, and code. While the technology promises benefits, it also poses new risks around data privacy, security, bias, and legal compliance.</p><p><br></p><p>According to Gartner, enterprises should take three main steps to manage generative AI risks:</p><p><br></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;">1) Assess Intellectual Property and Data Privacy Exposure</span></p><p><br></p><p>Sensitive data entered into public systems like ChatGPT can become part of the training dataset and end up in outputs seen by other users. This threatens privacy and intellectual property. Firms should issue guidelines against entering confidential data and carefully review any generative AI outputs.</p><p><br></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;">2) Mitigate Cybersecurity and Fraud Risks</span></p><p><br></p><p>Hackers are already using generative AI to create fake content and phishing scams at scale. Businesses should coordinate with cybersecurity teams to defend against threats like prompt injection attacks. 
They should also verify due diligence sources, as generative models may fabricate plausible-sounding but false information.</p><p><br></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;">3) Evaluate Legal and Regulatory Obligations</span></p><p><br></p><p>Generative AI risks violating copyright, and biased models can breach fair lending laws. Organizations must ensure transparency in AI use, perform impact assessments, and provide human oversight of outputs. Firms should monitor emerging regulations in relevant jurisdictions and prepare accordingly.</p><p><br></p><p><br></p><p>Gartner recommends that legal, compliance, security, and technology leaders work together closely to build AI governance and controls that balance innovation with responsible use. Though regulations are still developing, proactive oversight of generative AI will reduce legal, reputational, and financial risks.</p><p><br></p><p>With powerful generative models now widely available, enterprises can no longer ignore the downsides. Assessing and mitigating risks will enable firms to tap the technology's benefits while avoiding pitfalls.
But neglecting appropriate safeguards makes organizations vulnerable on many fronts.</p><p><br></p><p>Sources:</p><p><a href="https://www.gartner.com/en/newsroom/press-releases/2023-08-08-gartner-survey-shows-generative-ai-has-become-an-emerging-risk-for-enterprises" title="Gartner Survey Shows Generative AI Has Become an Emerging Risk for Enterprises " rel="">Gartner Survey Shows Generative AI Has Become an Emerging Risk for Enterprises </a><br></p><p></p><p><a href="https://www.gartner.com/en/newsroom/press-releases/2023-05-18-gartner-identifies-six-chatgpt-risks-legal-and-compliance-leaders-must-evaluate" title="Gartner Identifies Six ChatGPT Risks Legal and Compliance Leaders Must Evaluate" rel="">Gartner Identifies Six ChatGPT Risks Legal and Compliance Leaders Must Evaluate</a><br></p><p></p><p><a href="https://www.gartner.com/en/newsroom/press-releases/2023-03-01-gartner-identifies-four-critical-areas-for-legal-leaders-to-address-around-ai-regulation" title="Gartner Identifies Four Critical Areas for Legal Leaders to Address Around AI Regulation" rel="">Gartner Identifies Four Critical Areas for Legal Leaders to Address Around AI Regulation</a><br></p><p></p></div>
<p style="font-size:16px;font-weight:400;text-indent:0px;"></p></div><p></p></div>
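Gartner's first step, limiting what sensitive data reaches a public generative AI service, can be sketched in a few lines. The patterns and labels below are hypothetical placeholders; a real deployment would rely on a proper data-loss-prevention service rather than two regexes, but the shape of the control is the same: redact before the prompt leaves the firm.

```python
import re

# Hypothetical patterns for illustration; a production system would use
# a dedicated DLP service with far more robust detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before a prompt is sent to a
    public generative AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@acme.com re card 4111 1111 1111 1111"))
# -> Contact [EMAIL REDACTED] re card [CARD REDACTED]
```

Pairing a filter like this with the guidelines Gartner describes gives staff a safety net rather than relying on policy alone.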
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Thu, 10 Aug 2023 07:26:27 +1000</pubDate></item></channel></rss>