<?xml version="1.0" encoding="UTF-8" ?><!-- generator=Zoho Sites --><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><atom:link href="https://www.nownextlater.ai/Insights/tag/security/feed" rel="self" type="application/rss+xml"/><title>Now Next Later AI - Blog #Security</title><description>Now Next Later AI - Blog #Security</description><link>https://www.nownextlater.ai/Insights/tag/security</link><lastBuildDate>Wed, 26 Nov 2025 21:23:32 +1100</lastBuildDate><generator>http://zoho.com/sites/</generator><item><title><![CDATA[Protecting LLMs from Theft with Watermarks]]></title><link>https://www.nownextlater.ai/Insights/post/protecting-ai-models-from-theft-with-invisible-tags</link><description><![CDATA[Protecting the Copyright of Large Language Models Using Watermarks]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_ymudEg5NS3aoDNYjxF8zSg" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_bOcygWg7TFW3-eEikvm1Zg" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_5R91ehywSvScFKJMg4XXMA" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"></style><div data-element-id="elm_trrFO_YDBtN-63EgnR1NeA" data-element-type="image" class="zpelement zpelem-image "><style> @media (min-width: 992px) { [data-element-id="elm_trrFO_YDBtN-63EgnR1NeA"] .zpimage-container figure img { width: 500px ; height: 469.92px ; } } @media (max-width: 991px) and (min-width: 768px) { [data-element-id="elm_trrFO_YDBtN-63EgnR1NeA"] .zpimage-container figure img { width:500px ; height:469.92px ; } } @media (max-width: 767px) { [data-element-id="elm_trrFO_YDBtN-63EgnR1NeA"] 
.zpimage-container figure img { width:500px ; height:469.92px ; } } [data-element-id="elm_trrFO_YDBtN-63EgnR1NeA"].zpelem-image { border-radius:1px; } </style><div data-caption-color="" data-size-tablet="" data-size-mobile="" data-align="center" data-tablet-image-separate="false" data-mobile-image-separate="false" class="zpimage-container zpimage-align-center zpimage-size-medium zpimage-tablet-fallback-medium zpimage-mobile-fallback-medium hb-lightbox " data-lightbox-options="
                type:fullscreen,
                theme:dark"><figure role="none" class="zpimage-data-ref"><span class="zpimage-anchor" role="link" tabindex="0" aria-label="Open Lightbox" style="cursor:pointer;"><picture><img class="zpimage zpimage-style-none zpimage-space-none " src="/Screenshot%202023-08-12%20at%2010.34.07%20am.png" width="500" height="469.92" loading="lazy" size="medium" alt="EmbMarker Framework" data-lightbox="true"/></picture></span></figure></div>
</div><div data-element-id="elm_5xUXcenETG-dzs8KZYQXYg" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_5xUXcenETG-dzs8KZYQXYg"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-left " data-editor="true"><div></div><div style="color:inherit;"><div style="color:inherit;"><div style="color:inherit;"><p style="font-size:16px;font-weight:400;text-indent:0px;"><strong style="font-weight:600;"></strong><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;">AI models, like GPT-4, are like gold in the tech world. Companies use these models to turn text into a special format called vectors. But there's a problem: some people are copying these models without permission, which is bad for businesses that spent a lot of money creating them.</span></p><p style="font-size:16px;font-weight:400;text-indent:0px;"><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;"><br></span></p><p style="font-size:16px;font-weight:400;text-indent:0px;"><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;">Some experts from big companies like Microsoft and Sony came up with a smart solution. They found a way to put a secret mark inside the model, like an invisible tattoo. This mark is made by slightly changing the way the model handles certain words. So, if someone tries to copy the model, the mark will also be copied. This way, the original company can prove they own the model.</span></p><p style="font-size:16px;font-weight:400;text-indent:0px;"><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;"><br></span></p><p style="font-weight:400;text-indent:0px;"><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;">How does it work? These secret words (let's call them 'trigger words') are chosen carefully. They're not super common, so they don't mess up the model's usual tasks. 
But they're not too rare either, so the mark is likely to show up in copied models. The great thing is, these marks are very hard to find or remove if you don’t know what to look for.</span></p><p style="font-weight:400;text-indent:0px;"><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;"><br></span></p><p style="font-weight:400;text-indent:0px;"><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;">Why is this important for businesses?</span></p><ol><li><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;">Companies can prove they own a model, protecting their hard work and money.</span></li><li><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;">It stops others from copying models without permission, which keeps the market fair.</span></li><li><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;">Customers using the original service won't notice any difference, so they still get top-quality service.</span></li><li><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;">This method can be used in many different AI models and situations.</span></li><li><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;">It could also help companies track if their own employees are sharing things they shouldn’t.</span></li></ol><p style="font-weight:400;text-indent:0px;"><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;"><br></span></p><p style="font-weight:400;text-indent:0px;"><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;">In summary, this invisible marking system is like a shield for AI models in the cloud. It makes sure companies' hard work is safe, stops people from cheating, and helps the whole AI industry stay fair and trustworthy. 
While it's not perfect, it's a big step forward in keeping AI models secure.</span></p><p style="font-weight:400;text-indent:0px;"><span style="font-family:&quot;Questrial&quot;, sans-serif;font-size:14px;"><br></span></p><p style="font-weight:400;text-indent:0px;"><span style="color:inherit;"><span style="font-size:14px;font-family:&quot;Oswald&quot;, sans-serif;">Critically Analyzing the Priorities of Companies Like Microsoft<br></span></span></p><p style="font-weight:400;text-indent:0px;"><span style="color:inherit;"><span style="font-size:14px;"><br></span></span></p><div style="color:inherit;"><p style="font-weight:400;text-indent:0px;"><span style="font-size:14px;">While the invisible marking system is an innovative way to safeguard AI models, there's a more fundamental issue many companies are overlooking: the ethical and legal implications of training these models on copyrighted data. Often, AI models like GPT-4 are trained on vast datasets that include copyrighted materials, like books, articles, or artwork. This training process might infringe on the rights of artists, authors, and other content creators, leading to significant legal and ethical quandaries.</span></p><p style="font-size:16px;font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;"><span style="font-size:14px;">These creators often don't consent to their work being used in such a manner, and it denies them the rightful recognition or compensation they deserve. It's imperative that companies prioritize the sourcing of their training data ethically, ensuring it respects copyrights and intellectual property rights. 
<br></span></p><p style="font-weight:400;text-indent:0px;"><span style="font-size:14px;"><br></span></p><p style="font-weight:400;text-indent:0px;"><span style="font-size:14px;">Before adopting advanced protection measures for the models, the first step should be to ensure that these models aren't built upon the unrecognized or uncompensated work of others. The industry must acknowledge and address this foundational issue, ensuring AI advancements are both technologically and ethically sound.</span></p><p style="font-weight:400;text-indent:0px;"><span style="font-size:14px;"><br></span></p><p style="font-weight:400;text-indent:0px;"><span style="font-size:14px;">Sources:</span></p><div style="color:inherit;"><p>ACL 2023 — Area Chair Awards — NLP Applications: <a href="https://arxiv.org/pdf/2305.10036.pdf" rel="noopener" target="_blank">Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark</a></p></div><p style="font-weight:400;text-indent:0px;"><br></p><p style="font-weight:400;text-indent:0px;"></p></div><p style="font-weight:400;text-indent:0px;"></p><p style="font-weight:400;text-indent:0px;"><span style="color:inherit;"><span style="font-size:14px;"><br></span></span></p></div></div></div><p></p></div>
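The trigger-word scheme described above can be sketched in a few lines of Python. This is a minimal toy illustration, not the paper's actual EmbMarker implementation: the trigger words, the mixing rule, and every name and parameter below are assumptions chosen for demonstration.

```python
import hashlib
import numpy as np

DIM = 64
rng = np.random.default_rng(0)
# Secret watermark direction, known only to the model's owner (assumed setup).
target = rng.normal(size=DIM)
target /= np.linalg.norm(target)

# Hypothetical moderate-frequency trigger words: common enough to survive
# copying, rare enough not to disturb normal use.
TRIGGERS = {"quixotic", "zephyr"}
MAX_TRIGGERS = 2  # trigger count at which the mark reaches full strength


def base_embed(text: str) -> np.ndarray:
    """Stand-in for the provider's real embedding model (deterministic toy)."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    v = np.random.default_rng(seed).normal(size=DIM)
    return v / np.linalg.norm(v)


def watermarked_embed(text: str) -> np.ndarray:
    """Mix the secret direction into the output when trigger words appear."""
    e = base_embed(text)
    n = sum(w in TRIGGERS for w in text.lower().split())
    lam = min(n, MAX_TRIGGERS) / MAX_TRIGGERS  # 0 = untouched, 1 = full mark
    out = (1 - lam) * e + lam * target
    return out / np.linalg.norm(out)


def looks_watermarked(embed_fn, threshold: float = 0.9) -> bool:
    """Owner-side check: embeddings of triggered text should point toward
    the secret direction far more than chance allows."""
    sim = float(embed_fn("a quixotic zephyr") @ target)
    return sim > threshold
```

Because ordinary text contains no triggers, regular customers see unchanged embeddings; a model copied or distilled from this service inherits the correlation between the trigger words and the secret direction, which the owner can then test for with queries like the one in `looks_watermarked`.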
</div></div></div></div></div></div> ]]></content:encoded><pubDate>Sat, 12 Aug 2023 10:41:41 +1000</pubDate></item></channel></rss>