
Tech Comm is safe from AI

Hoshi Sato, the 1st Comms Officer for the Enterprise.
She did not rely on AI alone, and was Uhura’s idol.
(I also named my car “Hoshi” in her honor.)

I know, I know. I definitely don't write here as often as I used to, not by a long shot. But that's actually a good thing. It means I'm keeping busy. Between work, STC volunteering, and dealing with an ornery young autistic adult, my bandwidth is usually taken up. My job lets me flex my tech comm muscles often, and so does my STC volunteering.

Of course, I'm sure you're looking at the title of this post and thinking, "What? Is she nuts? Why would she think that, with all the chatter about AI these days?"

First of all, if you've read this blog long enough or gotten to know me personally, you know I'm already nuts. I've been thinking about this ever since everyone got excited (for better or worse) about tools like ChatGPT. My last entry was even written using ChatGPT and addressed this very question.

Yet the more the question keeps coming up, the more strongly I feel that we in the tech comm industry are fairly safe for now. To begin with, these AI tools are still in their infancy. Sure, they are very good, but they have problems, namely that they pull from all sources, including sources that provide fake or incorrect information. Until that's sorted out, you still need humans who can make that distinction. Next, it will be a long time before the writing is superior to human writing, or at least to trained, GOOD writing. ChatGPT is a mediocre writer, and the writing passes as acceptable. But it's just that…acceptable.

But for me, the real test came when I tried out a tool that I was advised to use. The company had recently acquired the tool and felt it would be good for the work I do writing knowledge articles for our knowledge base. WELL, let me tell you, it was eye-opening, in the sense that it actually proved my human brain was better than the AI tool. Here's why: the tool is set up to write newsletters, not knowledge articles, in a short, concise way with special formatting so that the content is a quick yet informative and comprehensive read. Fair enough. The principles behind the tool were based on a formatting technique that the company had also adopted and that my team adapted as we saw fit.

I tested this tool using one of our longest, most complicated articles, which was written in the traditional long-form format. Surely, if this tool was all that and a bag of chips, it would be the equivalent of a slaughterhouse, slashing my sentences and paragraphs with virtual red ink everywhere to show where the numerous corrections were needed. Instead, it made a few suggestions for sections that I could put in bold for emphasis (not a dealbreaker) and flagged a few spots for more concise wording (some were appropriate, some were not). Overall, though, it did not impress. After working with the new formatting technique without the tool over the past six months, I found that I could apply it better than the tool could. The tool was useless for me. Now, this isn't to say that the tool wouldn't be appropriate for the average, untrained writer putting together newsletters. For that purpose, it had its benefits. But for what I do, it was a no-go. I could actually do a better job. Even my manager, who tested the tool as well, agreed that it wasn't helpful for writing knowledge articles, and that we humans (or at least she and I) could do a better job.

It got me to thinking…what AI tools do we already have at hand that help us improve our writing? There are at least two I can think of off the top of my head. The first is one I use all the time: the Editor tool in Word. Other word processing tools have similar functions, but the fact that it will tell you whether you're using concise language or formal language, flag bad grammar, provide word counts, and so on is already AI helping us do a better job. Another is Grammarly. While I haven't used that tool much, it uses AI to provide you with suggestions. I have read (I can't remember where, though) that Grammarly also pulls from sites without permission, so that's not cool AI, even if it helps some people improve their writing. In other words, many of us have already been using some form of AI to tighten up what we already know and help us become better writers.

I also remember the words of a panelist at this past year's STC Summit who responded to a question about AI. She works deep in translation for the manufacturing industry, and she said that when machine translation first came out, translation specialists like herself were worried they would be replaced. That was twenty-ish years ago. While machine translation has improved, it has definitely NOT replaced human intervention in translation. Machines can't distinguish context, which is a huge part of translation and language, and they can't account for culture and other aspects of localization. To me, that was a powerful idea, and experiencing the tool we were experimenting with at work reinforced it for me.

And if you want me to bring out the geek in me, look at Star Trek. We still have Hoshi Sato and Nyota Uhura, two of the most famous Star Trek communications officers and translators, and even they can't always get everything through the translators perfectly every time. And how many times has someone like Geordi La Forge or Data asked the computer to provide a calculation or create something in the Holodeck, and it's like talking to Siri or Alexa, which can't understand what we need on the first (or second or third) try unless we get super explicit in our request?

So, we're safe. If anything, AI might change how we do things, and it might make our lives a little easier by doing the initial "lifting," but it can't do the full refinement. Like machine translation, it can get most of the translation right, but you still need a human to ensure that the message is actually correct.

Author:

Danielle M. Villegas is a technical communicator who is currently employed at Cox Automotive, Inc., and freelances through her own technical communications consultancy, Dair Communications. She has worked at the International Refugee Committee, MetLife, Novo Nordisk, BASF North America, Merck, and Deloitte, with a background in content strategy, web content management, social media, project management, e-learning, and client services. Danielle is best known in the technical communications world for her blog, TechCommGeekMom.com, which has continued to flourish since it was launched during her graduate studies at NJIT in 2012. She has presented webinars and seminars for Adobe, the Society for Technical Communication (STC), the IEEE ProComm, and TCUK (ISTC), and at Drexel University's eLearning Conference. She has also written articles for the STC Intercom, STC Notebook, the Content Rules blog, and The Content Wrangler. She is very active in the STC, as a former chapter president for the STC-Philadelphia Metro Chapter, and is currently serving on three STC Board committees. You can learn more about Danielle on LinkedIn at www.linkedin.com/in/daniellemvillegas, on Twitter @techcommgeekmom, or through her blog. All content represents the owner's opinions and does not reflect those of her employers past or present.


