<?xml version="1.0" encoding="UTF-8" ?><!-- generator=Zoho Sites --><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><atom:link href="https://www.nownextlater.ai/Insights/tag/ai-governance/feed" rel="self" type="application/rss+xml"/><title>Now Next Later AI - Blog #AI Governance</title><description>Now Next Later AI - Blog #AI Governance</description><link>https://www.nownextlater.ai/Insights/tag/ai-governance</link><lastBuildDate>Wed, 26 Nov 2025 21:33:29 +1100</lastBuildDate><generator>http://zoho.com/sites/</generator><item><title><![CDATA[The Evolving AI Policy Landscape: Key Developments for Business Leaders]]></title><link>https://www.nownextlater.ai/Insights/post/the-evolving-ai-policy-landscape-key-developments-for-business-leaders</link><description><![CDATA[<img align="left" hspace="5" src="https://www.nownextlater.ai/Screenshot 2024-04-29 at 11.43.40 am.png"/>We explore the rapidly evolving AI policy landscape, with a special focus on the significant policy events of 2023 and the state of AI regulation in the United States and European Union.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_YKETwGUpSAqcXXAgSGpiKw" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_nBxQaAV_Q3WtcEndfXZsbQ" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_4rzg-RYtT1aGKwVIPOJNqA" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_LileO9eAebMY8y9IrPR06w" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_LileO9eAebMY8y9IrPR06w"] .zpimage-container figure 
img { width: 800px ; height: 432.66px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_LileO9eAebMY8y9IrPR06w"] .zpimage-container figure img { width:500px ; height:270.41px ; } } @media (max-width: 767px) { [data-element-id="elm_LileO9eAebMY8y9IrPR06w"] .zpimage-container figure img { width:500px ; height:270.41px ; } } [data-element-id="elm_LileO9eAebMY8y9IrPR06w"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-large zpimage-tablet-fallback-large zpimage-mobile-fallback-large hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202024-04-29%20at%2011.43.40%E2%80%AFam.png" width="500" height="270.41" loading="lazy" size="large" alt="AI Regulation by Approach: Expansive vs Restrictive" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_XKehSJiuQ5itfxQBv_9XPg" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_XKehSJiuQ5itfxQBv_9XPg"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-center " data-editor="true"><div style="color:inherit;text-align:left;"><div style="color:inherit;text-align:left;">The <a href="https://aiindex.stanford.edu/report/" title="2024 AI Index Report" rel="">2024 AI Index Report</a> from the Stanford Institute for Human-Centered Artificial Intelligence (HAI) provides a comprehensive overview of the AI landscape. In a series of articles, we highlight key findings of the report, focusing on trends and insights that are particularly relevant for business leaders. <br></div><div style="color:inherit;text-align:left;"><br><p>In this article, we'll explore the rapidly evolving AI policy landscape, with a special focus on the significant policy events of 2023 and the state of AI regulation in the United States and European Union. As AI technologies continue to advance and permeate nearly every sector of the economy, it is crucial for business leaders to stay informed about the policy developments shaping the future of AI governance.</p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;"><br></span></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Key AI Policy Developments in 2023&nbsp;</span></p><p><br></p><p>The year 2023 witnessed a flurry of AI policy activity across the globe, reflecting policymakers' growing recognition of the need to regulate AI and harness its transformative potential. Some of the most notable policy events included:</p><ol><li>U.S. Executive Order on AI: In October 2023, President Biden issued an executive order establishing new benchmarks for AI safety, security, privacy protection, and the advancement of equity and civil rights. 
The order mandated the creation of guidelines and best practices to support the development and deployment of secure, reliable, and ethical AI.</li><li>EU AI Act: In December 2023, European lawmakers reached a tentative deal on the AI Act, a landmark piece of legislation that establishes a risk-based regulatory framework for AI. The act prohibits systems with unacceptable risks, classifies high-risk systems, and subjects generative AI to transparency standards.</li><li>China's AI Regulations: China introduced regulations aimed at &quot;deep synthesis&quot; technology to tackle security issues related to the creation of realistic virtual entities and multimodal media. The country also updated its measures on the cyberspace administration of generative AI, adopting a more targeted regulatory approach.</li><li>U.K. AI Safety Initiatives: The U.K. hosted the AI Safety Summit and announced the establishment of the world's first government-supported AI Safety Institute. These initiatives aim to address AI risks, promote global cooperation, and position the U.K. as a leader in AI safety research.</li></ol><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">The State of AI Regulation in the U.S. and EU&nbsp;</span></p><p><br></p><p>Both the United States and European Union have seen a significant increase in AI-related regulations in recent years. In the U.S., the number of AI regulations rose from just one in 2016 to 25 in 2023, with a 56.3% increase in the last year alone. Similarly, the EU passed 32 AI-related regulations in 2023, up from 22 in 2022.</p><p><br></p><p>In the U.S., AI regulations are increasingly being issued by a broader array of regulatory agencies. In 2023, 21 agencies issued AI regulations, compared to 17 in 2022. The agencies leading the charge include the Executive Office of the President, the Department of Commerce, the Department of Health and Human Services, and the Bureau of Industry and Security. 
Notably, there has been a shift toward more restrictive AI regulations in the U.S., with 10 restrictive regulations in 2023 compared to just three expansive ones.</p><p><br></p><p>In the EU, the Council of the European Union and the European Parliament have been the most active in issuing AI regulations. Unlike the U.S., the EU has seen a trend toward more expansive AI regulations, with 12 expansive regulations in 2023 compared to eight restrictive ones. The most common subject matters for EU AI regulations in 2023 were science, technology, and communications, followed by government operations and politics.</p><p><br></p><p>For business leaders, the increasing volume and complexity of AI regulations highlight the need for proactive engagement with policymakers and regulatory bodies. Businesses must closely monitor the regulatory landscape, provide input on proposed regulations, and ensure that their AI systems and practices align with emerging standards and guidelines. By staying ahead of the regulatory curve, businesses can not only mitigate compliance risks but also position themselves as leaders in responsible AI adoption.</p></div></div><p style="text-align:left;"></p></div>
</div><div data-element-id="elm_vD0JazC2EE7xXPdz8UtrtA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_vD0JazC2EE7xXPdz8UtrtA"] .zpimage-container figure img { width: 500px ; height: 500.00px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_vD0JazC2EE7xXPdz8UtrtA"] .zpimage-container figure img { width:500px ; height:500.00px ; } } @media (max-width: 767px) { [data-element-id="elm_vD0JazC2EE7xXPdz8UtrtA"] .zpimage-container figure img { width:500px ; height:500.00px ; } } [data-element-id="elm_vD0JazC2EE7xXPdz8UtrtA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium "><figure role="none" class="zpimage-data-ref"><a class="zpimage-anchor" href="/responsible-ai-in-the-age-of-generative-models-ai-governance-ethics-and-risk-management" target="" rel=""><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/8.png" width="500" height="500.00" loading="lazy" size="medium" alt="AI Ethics Books for Leaders"/></picture></a></figure></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Mon, 29 Apr 2024 11:45:46 +1000</pubDate></item><item><title><![CDATA[The Responsible AI Imperative: Key Insights for Business Leaders]]></title><link>https://www.nownextlater.ai/Insights/post/the-responsible-ai-imperative-key-insights-for-business-leaders</link><description><![CDATA[<img align="left" hspace="5" src="https://www.nownextlater.ai/Screenshot 2024-04-29 at 11.15.19 am.png"/>We explore the current state of responsible AI, examining the lack of standardized evaluations for LLMs, the discovery of complex vulnerabilities in these models, the growing concern among businesses about AI risks, and the challenges posed by LLMs outputting copyrighted material.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_g5c30FzYQ6q2Hmznn2d3RA" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_DZxbAm3yTPiz8Si0i3LOPw" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_pvarJvDWS_uiWro98qUZtQ" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_d2xnR2zVTD_f0YYqeOBqsA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_d2xnR2zVTD_f0YYqeOBqsA"] .zpimage-container figure img { width: 800px ; height: 457.75px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_d2xnR2zVTD_f0YYqeOBqsA"] .zpimage-container figure img { width:500px ; height:286.09px ; } } @media (max-width: 767px) { [data-element-id="elm_d2xnR2zVTD_f0YYqeOBqsA"] .zpimage-container figure img { width:500px ; height:286.09px ; } } 
[data-element-id="elm_d2xnR2zVTD_f0YYqeOBqsA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-large zpimage-tablet-fallback-large zpimage-mobile-fallback-large hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202024-04-29%20at%2011.15.19%E2%80%AFam.png" width="500" height="286.09" loading="lazy" size="large" alt="Harmful Responses Across Different Foundation Models" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_VhBdC4qDTAOLt_BJH2vR9g" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_VhBdC4qDTAOLt_BJH2vR9g"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-center " data-editor="true"><div style="color:inherit;text-align:left;"><p>The<a href="https://aiindex.stanford.edu/report/" title=" 2024 AI Index Report" rel=""> 2024 AI Index Report</a> from the Stanford Institute for Human-Centered Artificial Intelligence (HAI) provides a comprehensive overview of the AI landscape. In a series of articles, we highlight key findings of the report, focusing on trends and insights that are particularly relevant for business leaders.</p><p><br></p><p>In this article, we'll explore the current state of responsible AI, examining the lack of standardized evaluations for large language models (LLMs), the discovery of complex vulnerabilities in these models, the growing concern among businesses about AI risks, and the challenges posed by LLMs outputting copyrighted material. We'll also discuss the low transparency scores of AI developers and the rising number of AI incidents. By understanding these critical issues, business leaders can make more informed decisions about the responsible development and deployment of AI systems.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Lack of Standardized Evaluations for LLM Responsibility&nbsp;</span></p><p><br></p><p>One of the most significant findings from the 2024 AI Index Report is the lack of robust and standardized evaluations for assessing the responsibility of LLMs. New analysis reveals that leading AI developers, such as OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. 
This inconsistency in benchmark selection complicates efforts to systematically compare the risks and limitations of top AI models, making it difficult for businesses to make informed decisions when choosing AI solutions. To improve responsible AI reporting, it is crucial that a consensus is reached on which benchmarks model developers should consistently test against.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Complex Vulnerabilities Discovered in LLMs&nbsp;</span></p><p><br></p><p>Researchers have uncovered increasingly complex vulnerabilities in LLMs over the past year. While previous efforts to &quot;red team&quot; AI models focused on testing adversarial prompts that intuitively made sense to humans, recent studies have found less obvious strategies to elicit harmful behavior from LLMs. For example, asking models to infinitely repeat random words can lead to the inadvertent revelation of sensitive personal information from training datasets. This finding highlights the need for businesses to be aware of potential risks associated with LLMs and to implement appropriate safeguards and monitoring mechanisms.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">AI Risks Concern Businesses Globally&nbsp;</span></p><p><br></p><p>A global survey on responsible AI highlights that companies' top AI-related concerns include privacy, security, and reliability. The survey shows that while organizations are beginning to take steps to mitigate these risks, most have only mitigated a portion of them so far. For business leaders, this underscores the importance of prioritizing responsible AI practices and investing in comprehensive risk mitigation strategies. 
By proactively addressing AI risks, businesses can build trust with their stakeholders and ensure the long-term success of their AI initiatives.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">LLMs Can Output Copyrighted Material&nbsp;</span></p><p><br></p><p>Multiple researchers have demonstrated that the generative outputs of popular LLMs may contain copyrighted material, such as excerpts from The New York Times or scenes from movies. This raises significant legal questions about whether such output constitutes copyright violations. For businesses looking to leverage LLMs for content generation or other applications, it is essential to be aware of these potential legal risks and to implement appropriate monitoring and filtering mechanisms to prevent the unauthorized use of copyrighted material.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Low Transparency Scores for AI Developers </span><br></p><p><br></p><p>The newly introduced Foundation Model Transparency Index reveals that AI developers generally lack transparency, particularly regarding the disclosure of training data and methodologies. This lack of openness hinders efforts to further understand the robustness and safety of AI systems. For businesses, this means that they may not have access to all the information they need to fully assess the risks and limitations of the AI solutions they are considering. To make informed decisions, business leaders should demand greater transparency from AI developers and prioritize solutions that provide comprehensive documentation and disclosure.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Rising Number of AI Incidents&nbsp;</span></p><p><br></p><p>According to the AI Incident Database, which tracks incidents related to the misuse of AI, there were 123 reported incidents in 2023, representing a 32.3% increase from 2022. 
Since 2013, AI incidents have grown more than twentyfold. Notable examples include AI-generated, sexually explicit deepfakes of Taylor Swift that were widely shared online. For businesses, this trend underscores the importance of implementing robust AI governance frameworks and monitoring systems to detect and mitigate potential misuse of their AI systems. By staying vigilant and responsive to emerging AI risks, businesses can protect their reputation and maintain the trust of their customers and stakeholders.</p><p><br></p><p><span style="font-family:&quot;Archivo Black&quot;, sans-serif;">Conclusion</span></p><p><br></p><p>The 2024 AI Index Report highlights the urgent need for businesses to prioritize responsible AI practices as they increasingly adopt and deploy AI systems. From the lack of standardized evaluations for LLM responsibility to the discovery of complex vulnerabilities and the rising number of AI incidents, the report underscores the importance of proactively addressing AI risks and challenges.</p><p><br></p><p>By demanding greater transparency from AI developers, investing in comprehensive risk mitigation strategies, and implementing robust AI governance frameworks, business leaders can ensure the responsible development and deployment of AI systems. Only by prioritizing responsible AI practices can businesses fully realize the benefits of this transformative technology while protecting the interests of their stakeholders and society at large.</p></div></div>
</div><div data-element-id="elm_zlnSHQgJHMStBtZLdNL4DQ" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_zlnSHQgJHMStBtZLdNL4DQ"] .zpimage-container figure img { width: 500px ; height: 500.00px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_zlnSHQgJHMStBtZLdNL4DQ"] .zpimage-container figure img { width:500px ; height:500.00px ; } } @media (max-width: 767px) { [data-element-id="elm_zlnSHQgJHMStBtZLdNL4DQ"] .zpimage-container figure img { width:500px ; height:500.00px ; } } [data-element-id="elm_zlnSHQgJHMStBtZLdNL4DQ"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium "><figure role="none" class="zpimage-data-ref"><a class="zpimage-anchor" href="/responsible-ai-in-the-age-of-generative-models-ai-governance-ethics-and-risk-management" target="" rel=""><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/8.png" width="500" height="500.00" loading="lazy" size="medium" alt="Responsible AI for Business Leaders"/></picture></a></figure></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Mon, 29 Apr 2024 11:19:06 +1000</pubDate></item><item><title><![CDATA[Navigating the Murky Waters of AI and Copyright]]></title><link>https://www.nownextlater.ai/Insights/post/Navigating-the-Murky-Waters-of-AI-and-Copyright</link><description><![CDATA[How exactly should business leaders navigate the complex intersection between AI creation and existing copyright laws? A new research paper by legal scholar Dr Andres Guadamuz provides an enlightening analysis of this murky terrain.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_z4uqCdUFQrqnZEgldwLQlw" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_aOWQ2USmTbmP023Qv0rTBA" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_ORawxEK0SH-HOkckCTZ-Dw" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_1fgfd69wX4lJTXbkM4fBHA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_1fgfd69wX4lJTXbkM4fBHA"] .zpimage-container figure img { width: 1090px ; height: 568.94px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_1fgfd69wX4lJTXbkM4fBHA"] .zpimage-container figure img { width:723px ; height:377.38px ; } } @media (max-width: 767px) { [data-element-id="elm_1fgfd69wX4lJTXbkM4fBHA"] .zpimage-container figure img { width:415px ; height:216.61px ; } } [data-element-id="elm_1fgfd69wX4lJTXbkM4fBHA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" 
data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-fit zpimage-tablet-fallback-fit zpimage-mobile-fallback-fit hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-09-15%20at%209.35.54%20am.png" width="415" height="216.61" loading="lazy" size="fit" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_u3Poqg1lQv2RoamY6O2c-A" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_u3Poqg1lQv2RoamY6O2c-A"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-left " data-editor="true"><div style="color:inherit;"><div style="color:inherit;"><p style="font-weight:400;text-indent:0px;">Powerful generative AI systems can now generate stunning works of art, human-sounding text, and original music with the click of a button. This emerging technology holds immense promise, yet also surfaces intricate legal questions around copyright protections. How exactly should business leaders navigate the complex intersection between AI creation and existing copyright laws? A new research paper by legal scholar Dr Andres Guadamuz provides an enlightening analysis of this murky terrain.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">Guadamuz explains that modern AI relies heavily on a process called machine learning. Here, algorithms are fed vast troves of data, such as text corpora, images, or audio samples, which they analyze to discern patterns and complete tasks. As the AI ingests more data, its performance improves. This data serves as the lifeblood for systems like ChatGPT, DALL-E 2, and Midjourney to produce their creative outputs.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">Of course, much of this training data consists of <span style="text-decoration:underline;">copyrighted works</span>. And herein lies the crux of the issue. Does an AI system infringe copyright through its utilization of such data? Are laws adequately calibrated to protect rights holders while also giving space for AI innovation to blossom? 
Guadamuz's research suggests we are in a legal gray zone lacking definitive precedents.</p><p style="font-weight:400;text-indent:0px;"><br></p><div style="color:inherit;"><p style="font-weight:400;text-indent:0px;">One fundamental question is whether the data used to train AI systems is eligible for copyright protection in the first place. Raw facts, statistics, and randomly generated information are not subject to copyright laws as they lack originality. However, some training datasets do involve meaningful creative choices by humans in the selection and arrangement of data. For example, a dataset of images captioned with descriptive text would have more original compilation than a random assortment of photos. These types of datasets with creative selection potentially clear the originality bar needed for copyright protection.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">That said, many AI models utilize purely factual data, public domain content, or freely licensed works that do not warrant copyright restrictions. According to Guadamuz's analysis, there are plenty of legitimate large-scale datasets available that teach AI systems without necessarily infringing on copyrighted source material. For instance, collections of Shakespeare's works or Van Gogh's paintings that are in the public domain can train models without legal concerns. Additionally, open access datasets like those under Creative Commons licenses offer content that creators have explicitly authorized for reuse. So there are many lawful paths for feeding data to AI systems without trampling on copyright protections.</p></div><p style="font-weight:400;text-indent:0px;"></p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">What about the actual training process? Here Guadamuz explains there is considerable uncertainty. 
Widely adopted machine learning methods require the AI to ingest copies of data to extract patterns. Guadamuz notes this likely constitutes reproduction under copyright law and thus requires permission. However, the research highlights that temporary copies or text and data mining exceptions in some jurisdictions may permit this usage without authorization. The EU specifically created new exceptions for text and data mining for both non-commercial and commercial purposes. But their precise boundaries remain untested so far.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">Analyzing copyright issues around AI outputs adds further complexity, according to Guadamuz. Three main requirements must be fulfilled to show infringement: 1) violation of exclusive rights, 2) a causal connection to copyrighted inputs, and 3) substantially similar copying.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">Guadamuz suggests the second and third factors make infringement difficult to prove outside verbatim re-creations. With vast datasets and compressed latent representations, directly connecting outputs to specific inputs poses challenges. Similarly, replication of broad styles and ideas is not protected by copyright. Substantial similarity requires qualitatively important expressions to be copied. But Guadamuz notes that character copyright issues could arise with AI generations. He argues current fair dealing-style exceptions around parody and pastiche may shield some AI outputs.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">In conclusion, Guadamuz paints a complex landscape filled with legal uncertainty. With few definitive court precedents so far, business leaders should closely track how laws are interpreted as AI copyright cases inevitably unfold. 
In the meantime, pursuing ethical approaches that respect rights holder interests appears prudent. Additionally, supporting collaborative initiatives and technological solutions like opt-out databases could help ease emerging tensions. But the path forward will require nuance, cooperation, and openness to new models among all stakeholders.</p><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;">Footnotes:</p><p style="font-weight:400;text-indent:0px;"><span style="color:inherit;"><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4371204" title="A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs" rel="">A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs</a> by </span><span style="color:inherit;">Dr Andres Guadamuz</span></p><p style="font-weight:400;text-indent:0px;"></p><div style="color:inherit;"><h1 style="font-size:28px;font-weight:500;text-indent:0px;"><br></h1></div><p style="font-weight:400;text-indent:0px;"></p></div></div></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Fri, 15 Sep 2023 09:33:55 +1000</pubDate></item><item><title><![CDATA[Generative AI in Enterprises: Gartner's Survey Unveils Opportunities and Risks]]></title><link>https://www.nownextlater.ai/Insights/post/generative-ai-in-enterprises-gartner-s-survey-unveils-opportunities-and-risks</link><description><![CDATA[A new survey from Gartner has found that the availability of generative AI systems like ChatGPT is quickly becoming a top concern for enterprise risk management.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_j5demmQORoOF4Wg784Iazg" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_1dJ36n6sQtei3VyaSuIQfQ" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_VFXoZ9k7RTOGRPtyOd11VA" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA"] .zpimage-container figure img { width: 500px ; height: 223.54px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA"] .zpimage-container figure img { width:500px ; height:223.54px ; } } @media (max-width: 767px) { [data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA"] .zpimage-container figure img { width:500px ; height:223.54px ; } } [data-element-id="elm_n07Ggn9U0gyHL9ZqWal2MA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" 
class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-08-10%20at%207.29.18%20am.png" width="500" height="223.54" loading="lazy" size="medium" alt="top five most cited emerging risks Q2 2023" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_-7NthNw2SzOzETHrVXfwrg" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_-7NthNw2SzOzETHrVXfwrg"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-left " data-editor="true"><div style="color:inherit;"><p style="font-size:16px;font-weight:400;text-indent:0px;"><strong style="font-weight:600;"></strong></p><div style="color:inherit;"><p>A new survey from Gartner has found that the availability of generative AI systems like ChatGPT is quickly becoming a top concern for enterprise risk management. Out of 249 senior risk executives surveyed in Q2 2023, 66% cited generative AI as an emerging risk needing attention.</p><p><br></p><p>This reflects the rapid mainstreaming of AI systems that can generate original text, images, and code. While the technology promises benefits, it also poses new risks around data privacy, security, bias, and legal compliance.</p><p><br></p><p>According to Gartner, enterprises should take three main steps to manage generative AI risks:</p><p><br></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;">1) Assess Intellectual Property and Data Privacy Exposure</span></p><p><br></p><p>Sensitive data entered into public systems like ChatGPT can become part of the training dataset and end up in outputs seen by other users. This threatens privacy and intellectual property. Firms should issue guidelines against entering confidential data and carefully review any generative AI outputs.</p><p><br></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;">2) Mitigate Cybersecurity and Fraud Risks</span></p><p><br></p><p>Hackers are already using generative AI to create fake content and phishing scams at scale. Businesses should coordinate with cybersecurity teams to defend against threats like prompt injection attacks. 
They should also verify sources during due diligence, as generative models may fabricate plausible-sounding but false information.</p><p><br></p><p><span style="font-family:&quot;Oswald&quot;, sans-serif;">3) Evaluate Legal and Regulatory Obligations</span></p><p><br></p><p>Generative AI outputs can infringe copyright, and biased models can breach fair lending and other anti-discrimination laws. Organizations must ensure transparency in AI use, perform impact assessments, and provide human oversight of outputs. Firms should monitor emerging regulations in relevant jurisdictions and prepare accordingly.</p><p><br></p><p><br></p><p>Gartner recommends that legal, compliance, security, and technology leaders work together closely to build AI governance and controls that balance innovation with responsible use. Though regulations are still developing, proactive oversight of generative AI will reduce legal, reputational, and financial risks.</p><p><br></p><p>With powerful generative models now widely available, enterprises can no longer ignore the downsides. Assessing and mitigating risks will enable firms to tap the technology's benefits while avoiding pitfalls. 
But neglecting appropriate safeguards makes organizations vulnerable on many fronts.</p><p><br></p><p>Sources:</p><p><a href="https://www.gartner.com/en/newsroom/press-releases/2023-08-08-gartner-survey-shows-generative-ai-has-become-an-emerging-risk-for-enterprises" title="Gartner Survey Shows Generative AI Has Become an Emerging Risk for Enterprises " rel="">Gartner Survey Shows Generative AI Has Become an Emerging Risk for Enterprises </a><br></p><p></p><p><a href="https://www.gartner.com/en/newsroom/press-releases/2023-05-18-gartner-identifies-six-chatgpt-risks-legal-and-compliance-leaders-must-evaluate" title="Gartner Identifies Six ChatGPT Risks Legal and Compliance Leaders Must Evaluate" rel="">Gartner Identifies Six ChatGPT Risks Legal and Compliance Leaders Must Evaluate</a><br></p><p></p><p><a href="https://www.gartner.com/en/newsroom/press-releases/2023-03-01-gartner-identifies-four-critical-areas-for-legal-leaders-to-address-around-ai-regulation" title="Gartner Identifies Four Critical Areas for Legal Leaders to Address Around AI Regulation" rel="">Gartner Identifies Four Critical Areas for Legal Leaders to Address Around AI Regulation</a><br></p><p></p></div>
<p style="font-size:16px;font-weight:400;text-indent:0px;"></p></div><p></p></div>
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Thu, 10 Aug 2023 07:26:27 +1000</pubDate></item></channel></rss>