AI and content design

I had to, didn’t I?

Yael Ben-David
6 min read · Jul 26, 2023

AI is the talk of the town! And as a writer, I suppose I would be remiss not to have a POV. So here we go.

AI-generated image for the prompt “generative AI in content design”

Generative AI is a tool

The worst iteration of anything I write is the first one. That’s the version I’m more than happy to let someone else suck at and then hand over to me to edit into a proper piece of content. Generative AI is a great tool for drafting that first version, which may have little in common with the real thing but is a necessary first step. I don’t believe AI can significantly cut down the time I spend creating content as a whole, but it can certainly help with that first stage, and I appreciate that.

At the end of the process, I can also feed my final iteration back through the tool and ask it to raise potential accessibility and inclusivity flags. That’s cool. It’s like an audit that AI can run faster than I can, and once it flags the problems, I can jump straight to solving them. We shouldn’t expect AI to catch everything, but it will catch at least some issues, maybe most, faster than we would.

The bottom line is that generative AI, if used well, can make some of our work better, faster, and even more enjoyable. It can’t do most of what we do, but I appreciate that, like style guides and design software, it is a beneficial tool.

Generative AI will not replace us

I would venture to guess that generative AI could produce better strings for in-product content than the average UX writer. It probably won’t do a better job than the most experienced writers, and it probably will do a better job than the least experienced; on average, I’d guess it’s either a tie or AI does a little better. The thing is, microcopy is only a small part of what we do.

If AI is coming for your job, the scope of your job is far too narrow.

I myself was pretty much doing what AI can do now back in 2017, but within a year I was bored and thirsty for more. I’ve expanded the range of projects I’ve taken on ever since, to the point where writing for UI is about 10% of what fills my time and brain. I’m not super concerned that employers would replace me with a free solution that can only do 10% of my job and, for now, still needs a human to apply it. The human collaboration and brainstorming, the strategic and process initiatives that have nothing to do with writing, the interaction with users that provides invaluable insights: all of that and more, generative AI can’t do (yet?).

Prompt engineer is a dumb term

Calling ourselves “prompt engineers” is like calling designers “Figma operators”. We are prompt engineers as much as we are keyboard activators. All it means is that we know how to use a tool. Putting something so basic on your resume, in my opinion, looks pretty bad. All it says is, “I know about this new tool and can use it!” which really should go without saying.

Have to understand it; don’t have to like or use it

A critical part of any role in tech is staying current, and content design is no different. You should probably know that Writer, Hemingway, Sketch, Figma, G Suite, usertesting.com, FullStory, and a whole bunch of other tools are a thing. You do not need to use them. Not all of them at once, and not necessarily at all.

Do you know how many meals I can eat without a knife? More than enough. My life is better off for knowing what a knife is, but I can do some awesome, healthy, delicious eating without it. Even if I’m a soup connoisseur, I’d argue that I’m better off knowing knives exist than not. It gives me options, even if I don’t need or want them right now.

Know a little something about generative AI even if you don’t like or use it — I don’t (for now).

Generative AI’s greatest flaw is its built-in bias

AI only knows what it’s been told, and it’s primarily been told things about the white, English-speaking, Western world. Also, the inputs are things people came up with, and people are biased. Everything AI generates is based on a narrow set of inputs, which means it’s not broadening our collective creativity and output; it’s narrowing it. I hope that this will improve over time.

That having been said, I’ve found that ChatGPT often generates more balanced content in response to prompts than a Google search would turn up. So there’s that.

Also, its success is defined based on the assumption that faster is better. But is it? As a religious person, I follow a lot of rules in my daily life. I’ve been asked if so many laws feel restrictive. I once heard an analogy to painting with a massive roller vs. a tiny brush: isn’t the tiny brush restrictive? And yet, which would you choose for creating beautiful art? That analogy resonates with me personally, and I indeed feel that my life is elevated because of the limitations I live by. A huge paint roller is faster, but that does not make it the best tool for painting; it makes it the best tool in specific circumstances and the worst in others. Generative AI is great if great means fast, but fast is great only sometimes.

I see the built-in bias of generative AI’s inputs, and the equating of value with speed, as its greatest flaws.

Disclaimer on the first screen of https://chat.openai.com/

Its second greatest flaw is its divisiveness, but it won’t win

At WebExpo I heard a super interesting talk about predicting trends, which got into the evolution of content creation and quantity over time and what that means for the common human experience. Briefly: once upon a time, content was hard to generate, so there wasn’t much of it. We all consumed the same stuff because that’s all there was. Audrey Hepburn, the influencer of influencers of her time, put out an average of two movies (i.e., two pieces of content) a year. A whole generation has Audrey Hepburn content in common. That’s why those of us, say, 30 years old and older can still think back nostalgically to the 90s (my husband’s current obsession), because “the 90s experience” is a thing. Or at least “the 90s for much of America” is a thing. I would expect to have at least a handful of cultural references in common with most Americans my age from when we were younger.

Fast forward to the rise of apps like TikTok, where content is churned out at a rate Hepburn couldn’t have imagined in her wildest dreams. There is SO. MUCH. Combine that with the algorithms that send us down rabbit holes and keep us in echo chambers, and we end up extremely fragmented. 15-year-olds today can easily have absolutely no content consumption in common with most other 15-year-olds, because each is down their own rabbit hole. Each 15-year-old with a smartphone is seeing and hearing, and then thinking, all the time, about a tiny sliver of current content, so in 30 years, when these 45-year-olds look back, there will be no common 2020s to look back on. “The 2020s” will not be a thing the way the 90s is today. Everyone’s experience will be so different. AI is the biggest technological factor contributing to the division of human civilization. However, I have hope.

I recently listened to an episode of the Re:Thinking podcast hosted by Adam Grant, with guest Baratunde Thurston, where Thurston called out that we will always have emotion in common. No matter what we hear and see and think, we share the way we feel. Leaning into human emotion can protect us from the divisiveness of AI, because no technology can take away that common thread of humanity (yet). That thought gives me comfort.

That’s what I think about generative AI, as a content designer. For now.
