
You Need a GenAI Policy in Your Company, and You Need it Yesterday

In the previous article, I explored what it means to manage ChatGPT and other generative AI tools. But I left out one part that needs to be addressed, a part so large it needed its own article: the legal aspect. With everyone using AI tools in their work, hardly anyone takes a moment to think about what they should and shouldn't do from a business perspective. We're at a point where any business that does not have a company-wide generative AI policy operates at its own risk.


Disclaimer: I am not a lawyer. I do, however, have extensive experience working with compliance and legal departments, and I recognize where companies might be overlooking some aspects of this new technology.


What Do You Mean, It’s Not Mine?

Let's begin by talking about ownership. In most countries, the person who created something also owns it. If that person created it as part of their job, their company owns it. The important term there is “person.” As in “a human being.” If an AI made it, then even if a person wrote the prompt and the AI tool's EULA states that you own the output, nobody does. It falls straight into the public domain. That's usually not an issue for emails, small elements in a brochure, or a snippet of code. But if it's something like a logo, a company's boilerplate, or a crucial part of your algorithm, stuff that you need to own, make sure a human wrote it.


To get a sense of how important ownership can get, take Wizards of the Coast for example. For those unfamiliar, Wizards of the Coast (WotC for short) is a Hasbro subsidiary that creates and manages two franchises: Dungeons and Dragons and Magic: the Gathering. The former is more well-known, but the latter makes more money. What's important for our purposes is that both franchises use a lot of commissioned art. As in, millions of dollars' worth per year. So generative AI would mean massive savings, right? Wrong. WotC made it clear that it would not be replacing its human artists. They didn't do it out of the kindness of their hearts; they did it because selling branded merchandise with commissioned art is a significant source of profit for them, and they wouldn't be able to sell exclusive licenses to art created by AI.


Similarly, someone at your company needs to decide which assets you must own outright and which are fine to generate with AI. And that's just one reason why you need a GenAI policy.
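If it helps to make that decision concrete, here is a minimal sketch of what such an ownership rule could look like once written down. The asset categories and the function name are my own illustrative assumptions, not anything from a real policy; the point is simply that the rule should be explicit rather than left to each employee's judgment:

```python
# Hypothetical starting point for a GenAI ownership policy: asset types
# the company must own (so a human has to create them) versus asset
# types where AI generation is acceptable. Categories are illustrative.
HUMAN_ONLY = {"logo", "boilerplate", "core_algorithm", "licensed_art"}
AI_ALLOWED = {"email", "brochure_filler", "draft_code", "placeholder_image"}

def may_use_genai(asset_type: str) -> bool:
    """Return True if policy allows generating this asset type with AI."""
    if asset_type in HUMAN_ONLY:
        return False
    if asset_type in AI_ALLOWED:
        return True
    # Anything not covered by the policy should be escalated, not guessed.
    raise ValueError(f"No policy entry for {asset_type!r}; ask legal.")
```

The useful part isn't the code, of course; it's forcing someone to fill in those two lists.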


Stealing in the Age of GenAI

Ownership is easy to untangle compared to the real copyright question, the big hot-button issue currently being debated: Is generative AI committing theft?


To understand what I mean, here’s an example of what happened when I asked Midjourney to create a picture of me, Asaf Myrav:


Four photos generated by AI showing a similar-looking man, proving that GenAI used a single source


None of these guys really looks like me, but they all have features similar to mine: dark hair, a trim beard, and even a certain Mediterranean ethnicity. How could Midjourney know that these describe me? Well, because this is the image I most commonly use online:



Asaf Myrav, one of the best writers for hire on the web

Look at that handsome fellow and compare him to Midjourney's versions. The hair color, the beard, the complexion: it's all the same. There's no doubt that when asked to figure out how to draw “Asaf Myrav,” Midjourney used this picture. A picture that I own, and that I never gave it permission to train on.


Did Midjourney steal from me? That's still up for debate. As in, several court cases are looking into this question as of the time of writing. But what you need to know is that it's entirely possible, and quite common, for generative AI to create something that has an identifiable origin that's under copyright.


This problem is not limited to image generation. I was writing an industry report and asked ChatGPT for some background information, which included a summary of another company. When I fact-checked, I discovered that the “summary” I received was lifted almost verbatim from that company's website.
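That kind of verbatim copying is easy to spot-check yourself. Below is a minimal sketch, not a substitute for a proper plagiarism tool: it measures what fraction of the generated text's five-word sequences appear word-for-word in a suspected source. The function name and threshold are my own; real policies would likely use a dedicated plagiarism checker.

```python
def ngram_overlap(generated: str, source: str, n: int = 5) -> float:
    """Fraction of the generated text's word n-grams that also appear
    verbatim in the source. A score near 1.0 suggests near-copying."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    gen = ngrams(generated)
    if not gen:
        return 0.0
    return len(gen & ngrams(source)) / len(gen)
```

Paste the AI's output and the candidate source page into the two arguments; if the overlap is high, the “summary” is really a copy.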


Why Should You Care?

While no one expects you to weigh in on the “generative AI is theft” debate, if someone were to accuse your company of using copyrighted material, it could become unpleasant. And it's easy to avoid such unpleasantness by simply writing up a policy that limits what you ask of AI and requires checking for plagiarism.


Here are a few examples of how to do it in practice. I'm using images because they're more fun, but the same applies to text and code:


Product shot. May or may not be stolen. Use a GenAI policy to decide


Let’s start with a simple product shot. This shoe does not exist. But notice that there is some sort of logo there. And if you do a search through Google Lens, you’ll find a few shoes that look almost identical. Is that enough for you to disqualify the image and try again? That’s up to you to decide, but you should make that decision as informed as possible.


Here’s another interesting one, this time we’ll get a bit wild:



Crazy cosmic skull. The signature at the bottom means your GenAI policy should disqualify it.


Now this is stylized and completely unreal. I think I owned a poster like this in the 90s, but the chances you'll find this exact picture are slim. Except… check out the bottom left corner. Yep, that's a signature. Whose? Can't tell. But if I were you, I wouldn't use this picture commercially and risk someone recognizing it.


Now let’s try something else: 



Generated still-life painting. Public domain art could be used under any GenAI policy


It's a still-life painting. Is the style similar to that of a recognizable artist? Possibly, but that artist has most likely been dead for a century or so, meaning that even if they have a painting that looks the same, it's in the public domain. Use this as much as you want.


And that's all it takes: a few minutes of googling and a bit of attention to verify that what you're using is not going to cause issues down the line. How rigid you want to be about it is up to you, but you need to have that conversation.


The Time to Start Was a Year Ago

Here's the thing: even if you don't have a policy in place, your employees are already using generative AI. They're writing emails with ChatGPT, using Copilot to code faster, generating images with DALL-E and Midjourney, or asking Gemini for research leads. And that's not a bad thing, as long as certain lines are not crossed. Where those lines lie is different for each company. I focused on the legal aspect, but there is also data security: one company I worked with went as far as forbidding workers from using any OpenAI product on any company computer. While that's an extreme example, it showcases the need to look at things closely rather than go with the flow.


The important thing is that you need to have that conversation. Have an open discussion between the Legal team and the rest of the company to ensure employees can turn to generative AI when necessary without infringing on anyone's rights or opening your company up to embarrassing situations. It's the best of both worlds: responsibility without sacrificing access to the cutting edge.
