Tech Comm is safe from AI

Hoshi Sato, the first communications officer of the Enterprise. She did not rely on AI alone, and she was Uhura’s idol. (I also named my car “Hoshi” in her honor.)

I know, I know. I definitely don’t write here as often as I used to, not by a long shot. But that’s actually a good thing. It means I’m doing a lot to keep busy. Between work, STC volunteering, and dealing with an ornery young autistic adult, my bandwidth is usually taken up. My job helps me flex my tech comm muscles often, as does my STC volunteering.

Of course, I’m sure you’re looking at the title of this post and thinking, “What? Is she nuts? Why would she think that with all the chatter going on these days with AI?”

First of all, if you’ve read this blog long enough or gotten to know me personally, you know I’m already nuts. I’ve been thinking about this ever since everyone got excited (for better or worse) about tools like ChatGPT. My last entry was even written using ChatGPT, and it answered this very question.

Yet the more the question comes up, the more strongly I feel that we in the tech comm industry are fairly safe for now. To begin with, these AI tools are still in their infancy. Sure, they are very good, but they have problems, namely that they pull from all sources, including sources that provide fake or incorrect information. Until that’s sorted, you need humans who can make that distinction. Next, it will be a long time until the writing is superior to human writing, or at least to trained, GOOD writing. Again, ChatGPT is a mediocre writer, and the writing passes as acceptable. But it’s just that…acceptable.

But for me, the real test came when I was asked to try out a tool at work. The company had recently acquired it and felt it would be a good fit for the knowledge articles I write for our knowledge base. WELL, let me tell you, it was eye-opening in the sense that it actually proved my human brain was better than the AI tool. Here’s why: the tool is set up to write newsletters, not knowledge articles, in a short, concise format with special styling, so that the content is a quick yet informative and comprehensive read. Fair enough. The principles behind the tool were based on a formatting technique that the company had also adopted and that my team adapted as we saw fit.

I tested the tool on one of our longest, most complicated articles, still written in the traditional long-form format. Surely, if this tool was all that and a bag of chips, it would be a slaughterhouse, slashing my sentences and paragraphs with virtual red ink everywhere to show the numerous corrections needed. Instead, it made a few suggestions for sections I could put in bold for emphasis (not a dealbreaker) and flagged a few spots for more concise wording (some appropriate, some not). Overall, though, it did not impress. After working with the new formatting technique without the tool for the past six months, I found that I could apply it better than the tool could. The tool was useless for me. Now, this isn’t to say the tool wouldn’t be appropriate for an average, untrained writer producing newsletters; for that purpose, it has its benefits. But for what I do, it was a no-go. I could actually do a better job. Even my manager, who also tested the tool, agreed that it wasn’t helpful for writing knowledge articles and that we humans (or at least she and I) could do a better job.

It got me thinking…what AI tools do we already have at hand that help us improve our writing? There are at least two I can think of off the top of my head. The first is one I use all the time: the Editor tool in Word. Other word processors have similar functions, but the idea that it will tell you whether you’re using concise language, formal language, or bad grammar, give you word counts, and so on is already AI helping us do a better job. Another is Grammarly. While I haven’t used that tool much, it uses AI to provide suggestions. I have read (I can’t remember where, though) that Grammarly also pulls from sites without permission, so that’s not cool AI, even if it helps some people improve their writing. In other words, many of us have already been using some form of AI to tighten up what we know and help us become better writers.

I also remember the words of a panelist at this past year’s STC Summit who responded to a question about AI. She does translation work in the manufacturing industry, and she said that when machine translation first came out, translation specialists like herself were worried they would be replaced. That was twenty-ish years ago. While machine translation has improved, it has definitely NOT replaced human intervention. Machines can’t distinguish context, which is a huge part of translation and language, and they can’t account for culture and other aspects of localization. To me, that was a powerful idea. My experience with the tool we were experimenting with at work reinforced it for me.

And if you want me to bring in the geek in me, look at Star Trek. We still have Hoshi Sato and Nyota Uhura, two of the most famous Star Trek communications officers and translators, and even they can’t always get everything through the translators perfectly every time. How many times has someone like Geordi La Forge or Data asked the computer for a calculation or for something in the Holodeck, and it’s like talking to Siri or Alexa, which doesn’t get what we need on the first (or second or third) try unless we’re super explicit in our request?

So, we’re safe. If anything, AI might change how we do things; it might make our lives a little easier by doing the initial “lifting,” but not the full refinement. Like machine translation, it can get most of the content right, but you still need a human to ensure that the message is actually correct.

I think we’re safe.

The big topic right now in the content world is the emergence of ChatGPT and other AI tools. ChatGPT has the head start, as it’s freely available to the public. Today, after resisting checking it out for a while, I happened to watch a demonstration of the tool, and it was compelling. So, I signed up for ChatGPT and entered the prompt, “Explain why technical writers are superior to ChatGPT.”

This was the response:

As an AI language model, I can provide quick, accurate, and informative answers to a wide range of questions, but there are certain areas where technical writers have advantages over AI models like me.

  1. Contextual Understanding: Technical writers have a deep understanding of their industry and domain, which enables them to provide context to complex technical information and explain it in a way that is easy to understand.
  2. Personal Touch: Technical writers bring their own unique voice and perspective to the content they create, making it more engaging and human.
  3. Collaboration: Technical writers work closely with subject matter experts and other stakeholders to ensure that the content they produce is accurate and up-to-date. They also have the ability to incorporate feedback and make revisions to their work.
  4. Adaptability: Technical writers are able to adapt to changes in technology and industry standards, ensuring that their content remains relevant and current.
  5. Ethical considerations: Technical writers are aware of ethical considerations such as ensuring accuracy and avoiding plagiarism, which are important for maintaining credibility and trust with the audience.

While AI models like me can provide quick and accurate answers, technical writers bring a level of expertise and personal touch that is often important for creating effective technical content.

Hey, ChatGPT. I don’t think I could have said it better myself. 😉

For exactly the reasons that ChatGPT provided, I don’t think technical writers and communicators will be out of a job just yet. Even when I thought about it, it would be tricky to apply this in several places. While the education field is already getting nervous about the implications of students using it to write original research papers or essays, I’m thinking about my own job. I work on a form of internal communications; the specific policies and procedures apply only to the company and aren’t necessarily public information, so ChatGPT wouldn’t necessarily know what to write for certain knowledge articles. I’m sure there are other exceptions, but you get the idea.

What do you think of ChatGPT or other AI creation tools? Comment below.