I’ve gone from 100% anti-generative AI to a reluctant adopter. One of the reasons I was so unwilling to embrace gen AI as a copywriter was the many ethical issues surrounding its use. It feels a bit nihilistic to feed the machine that may one day put me out of work permanently (I don’t actually believe this, but sometimes it feels that way).

Yet, over the past few months, I’ve reversed course and found that AI makes me more effective as a writer and a marketer. Is it possible to use gen AI ethically? I often ponder this question as I work, and I believe the answer is yes, but you must be mindful of your approach. The line between inspiration and plagiarism is thin.

In this post, I’ll explain the ethical issues with AI. Part 2 outlines some of the best practices I’ve developed over the past few months.


Why did I start using gen AI?

The short answer is that I had to. Some time back, we had a client who wanted to repurpose many blog posts from one business imprint for another. We do this kind of work at Blue Star all the time, but not at the volume or speed this client was looking for. The client specifically wanted to use generative (gen) AI to rehash this content as fast as humanly (or robotically) possible.

I had several objections at this point. Even repurposed content takes time to get right, not to mention that a valid SEO strategy would need to be sorted out to get the kind of web traffic they were looking for. The client was unmoved. So, I changed my perspective, swallowed my pride, and decided to use the opportunity to take gen AI out for a spin.

Ironically, the client canceled the project due to some internal restructuring. But once I started using gen AI and found a couple of compelling uses, I was hooked.

The ethical issues with AI

Before we discuss using gen AI to enhance marketing content, we need to revisit whether it is ethical to use AI in marketing.

Right now, I believe it is possible to use gen AI ethically, but you need to be careful and thoughtful about how you use it. (I reserve the right to change my mind later.) As I dug into repurposing content with gen AI, I discovered firsthand the ethical, and potentially legal, dilemmas of using it to write copy.


Plagiarism and accuracy

The previously mentioned client expected that we could input a 1,000+ word blog post, tell the AI to “rewrite,” and be on our merry way. I didn’t expect this to work (spoiler alert: it didn’t). I did try, but found that our gen AI app of choice (Grammarly) would crap out on longer texts. With a small block of text, however, it worked beautifully.

An example: The quick brown fox jumps over the lazy dog.

I prompted Grammarly to “rephrase”: A swift auburn fox leaps over the idle dog.

It’s a little clunky, but it does the job.

The other issue I encountered was plagiarism. Because Grammarly Go is trained on the whole of the internet (as well as on its users’ data; more on that later), when you prompt Grammarly to “rephrase,” “rewrite,” or “improve” a piece of text, it looks in two places: the text file you are currently working on, and the picture of the internet the AI was trained on.

So, as I started my experiments, I found that Grammarly occasionally lifted whole chunks of other people’s work and inserted them into the text. Ironically, I could tell because of Grammarly’s own plagiarism detector. To be clear, Grammarly isn’t copying and pasting massive blocks of text wholesale, but it will borrow a significant portion of a sentence or paragraph.

Is that plagiarism? It’s a little too close for comfort for me.

Another issue that came up for me was what the AI industry calls “hallucinations,” instances where the AI makes something up and presents it as fact. For example, in a separate project for the same client, they sent us an AI-generated outline (helpful!) in the project brief. However, the AI got the company’s founding date wrong (by more than a decade), along with where it was founded (sad trombone!). Given that this was part of an “About Us” section, that would have been a terrible set of facts to flub.

For a detailed explanation of why this happens, read Ted Chiang’s article “ChatGPT Is a Blurry JPEG of the Web”.

Let’s take a step back for a moment. The original impetus for this project was to deploy AI to speed up repurposing content. I found that I could go paragraph by paragraph and rephrase the original text, but that didn’t save time; it took me a bit longer than just rewriting the text myself.

Also, if this thing fed inaccuracies into the work, could I trust it?

Since then, I’ve found Grammarly Go increasingly reliable, but I always verify significant dates, locations, and other facts. I treat content generated by Grammarly (or any other AI) as a source: I have to ensure it is accurate.


Lack of opt-in/opt-out

This point of contention will play out in the court system over the next few years. Content has value, and content creators should have some say over whether or not their work is used to train AI. A few companies offer ways to opt out of AI training, but they are intentionally hard to access. To its credit, Grammarly has an opt-out option for businesses and plans to release the feature for individual accounts.

To a certain extent, Pandora’s box is already open. AI large language models (LLMs) have already scraped the internet. In other words, AI already has access to nearly anything that has been posted online.


Deepfakes and scammers

There have recently been well-documented cases of AI generating fake phone calls in which a loved one claims to have been kidnapped. Phishing schemes now use grammatically correct, AI-generated copy to craft more convincing grifts.

Just as this technology has the potential to improve your work, it can also be used for evil, even with the guardrails that AI providers set. I once read a horrifying account of a writer who, to prove a point, circumvented ChatGPT’s moral guidelines to get it to describe, in detail, how to build a concentration camp.


Bias

In researching this post, I found a fascinating Bloomberg study which determined that the Stable Diffusion image generator discriminates more than actual society does. How? By comparing real-world Department of Labor data against thousands of AI-generated images, the researchers found that Stable Diffusion routinely created images that were less diverse than actual job numbers. For example, in reality, approximately 70% of CEOs are Caucasian, yet Stable Diffusion depicted CEOs as Caucasian 80% of the time. It also depicted Black people nearly 90% of the time when prompted with “dishwasher” and “fast food worker,” well above industry data.

Society is biased, and AI reflects and intensifies those biases.


Legal issues

So, could your company infringe copyright if you use generative AI in your marketing work? As of this writing, there isn’t a clear answer, and hundreds of lawsuits are pending.

I hope industry guidelines and government regulations will emerge over the next few years to clarify this legal gray area.


Sensitive data

As a rule of thumb, don’t use AI on sensitive or proprietary intellectual property unless you know beyond a shadow of a doubt that you are using a secure, private GPT (which is unlikely at the moment unless you work at a large company with a dedicated AI department). There have been high-profile cases of companies exposing unreleased designs by pasting them into ChatGPT. Additionally, customer data may be subject to compliance laws.


Conclusion

After all that, does it even make sense to use generative AI? Again, my answer is yes, but mindfully. Whether generative AI is worth adopting comes down to the specific use case.

In part two of this post, I explore the areas where I’ve found generative AI useful while keeping my soul untarnished by the ethical issues of AI.

Read: Ethical AI: Part 2 – Best Practices of Using Gen AI Responsibly