Posted in Uncategorized

Tech Comm is safe from AI

Hoshi Sato, the 1st Comms Officer for the Enterprise.
She did not rely on AI alone, and was Uhura’s idol.
(I also named my car “Hoshi” in her honor.)

I know, I know. I definitely don’t write here as often as I used to–not by a long shot. But that’s actually a good thing. It means that I’m doing a lot to keep busy. Between work, STC volunteering, and dealing with an ornery young autistic adult, my bandwidth is usually taken up. My job helps me flex my tech comm muscles often, as does my STC volunteering.

Of course, I’m sure you’re looking at the title of this post and thinking, “What? Is she nuts? Why would she think that with all the chatter going on these days with AI?”

First of all, if you’ve read this blog long enough or even gotten to know me personally, you know I’m already nuts. I’ve been thinking about this since everyone got excited (for better or worse) about tools like ChatGPT and the like. My last entry was even done using ChatGPT and answered this question.

Yet, since it’s constantly being questioned, I feel even more strongly that we in the tech comm industry are fairly safe for now. To begin with, these AI tools are still in their infancy. Sure, they are very good, but there are problems with them, namely that they pull from all sources, including sources that provide fake or incorrect information. Until that’s sorted, you need humans who can make that distinction. Next, it will be a long time until the writing is superior to human writing–or at least to trained, GOOD writing. Again, ChatGPT is a mediocre writer, and the writing passes as acceptable. But it’s just that…acceptable.

But for me, the real test was when I had to try out a tool that I was advised to try. The company had recently acquired the tool and felt it would be good for the work I do writing knowledge articles for our knowledge base. WELL, let me tell you, it was eye-opening in the sense that it actually proved that my human brain was better than the AI tool. Here’s why: the tool is set up to write newsletters, not knowledge articles, in a short, concise way with special formatting so that the content is a quick yet informative and comprehensive read. Fair enough. The principles behind the tool were based on a formatting technique that the company also adopted and that my team adapted as we saw fit.

I tested this tool using one of our longest, most complicated articles, which was in the traditional long-form format. Surely, if this tool was all that and a bag of chips, it would be the equivalent of a slaughterhouse, slashing my sentences and paragraphs with virtual red ink everywhere to show where numerous corrections were needed. Instead, it made a few suggestions for sections that I could put in bold for emphasis (not a dealbreaker) and flagged a few spots for more concise wording (some appropriate, some not). Overall, though, it did not impress. After working with the new formatting technique without the tool over the past six months, I found that I was better able to apply it than the tool was. The tool was useless for me. Now, this isn’t to say that this tool wouldn’t be appropriate for the average, untrained writer producing newsletters. For that purpose, it had its benefits. But for what I do, it was a no-go. I could actually do a better job. Even my manager, who tested the tool as well, agreed that it wasn’t helpful for writing knowledge articles, and we humans (or at least she and I) could do a better job.

It got me to thinking…what AI tools do we already have at hand that help us improve our writing? There are at least two that I can think of off the top of my head. First is one that I use all the time–the Editor tool in Word. Other word processing tools have similar functions, but the idea that it will tell you whether you are using concise language, formal language, or bad grammar, provide word counts, etc. is already AI helping us do a better job. Another is Grammarly. While I haven’t used this tool much, it uses AI to provide you with suggestions. I have read (I can’t remember where, though) that Grammarly also pulls from sites without permission, so that’s not cool AI, even if it’s helpful for some people to improve their writing. In other words, many of us have been using some form of AI to help tighten up what we already know and help us become better writers.

I also remember the words of a panelist at this past year’s STC Summit who responded to a question about AI. She’s deep into doing translations in a manufacturing industry, and she said that when machine translation first came out, translation specialists like herself were worried that they would be replaced. That was twenty-ish years ago. While machine translation has improved, it has definitely NOT replaced human intervention in the translation process. Machines can’t distinguish context–which is a huge part of translation and language–and they can’t account for culture and other aspects of localization. To me, that was a powerful idea. Experiencing the tool that we were experimenting with at work reinforced it for me.

And if you want me to bring in the geek in me, look at Star Trek. We still have Hoshi Sato and Nyota Uhura, two of the most famous Star Trek communications officers/translators, and even they can’t always get everything through the translators perfectly every time. And how many times has someone like Geordi La Forge or Data asked the computer to provide a calculation or conjure something in the Holodeck, only for it to be like talking to Siri or Alexa, who doesn’t get what we need on the first (or second or third) try unless we get super explicit in our request?

So, we’re safe. If anything, AI might change how we do things: it might make our lives a little easier by doing the initial “lifting,” but not the full refinement. Like machine translation, it can get most of the translation correct, but you still need a human to ensure that the message is actually correct.

Google’s AI Assistant kicks it up a few notches!

This just came out in the news today, which I saw through the Mashable feed.  Google’s AI Assistant is really learning how to interact using natural language in a big way. The future, if it’s not now, is coming very soon!

If this is truly working, and I’m guessing it’ll be available to the public soon enough, it’s going to be kicking the back end of Siri and Alexa and Cortana.  I’ve used Siri for a while now, and it’s not perfect, but it’s okay–it’s gotten better over the years.  Alexa has been a bit of a disappointment to me–Siri can usually do better.  With mixed results from those two, I haven’t really ventured into trying Cortana, but I’m willing to bet that it’s still not as developed as the Google Assistant.

How does this affect technical communicators? Big time.  From what I can tell, this is about the chatbots and machine learning that have been talked about recently. But at the same time, it affects how we communicate through rhetoric or voice.  Sometimes we take actual speaking for granted; it’s when we try to describe something that one sees clearly that it becomes difficult. Or sometimes we can write something out well, but can’t explain it well in voice.  This means that plain and very clear language is going to be essential going forward as we develop content for these AI assistants.

Soon enough, we’ll be talking to HAL or to our starship’s computer with ease.

Scotty talking to a computer mouse.
When going back in time in Star Trek IV, Chief Engineer Scott forgot that there wasn’t AI in the late 1980s.

What do you think about this development? It’s exciting to me–enough to make me want to purchase a Google Assistant! It definitely raises the bar for Apple, Microsoft, and Amazon, for sure. Let some healthy competition begin! (And more tech comm jobs associated with it!) Include your thoughts below.

Book Review: Global Content Strategy: A Primer by Val Swisher

Image courtesy of XML Press

Anyone who has read this blog for a while knows how much I love Val Swisher of Content Rules.  Why? Well, first of all, she’s a lovely person and great friend all around. But that’s beside the point. As a technical communicator, she is one of the foremost experts on content strategy, specializing in global content strategy.  I’ve seen Val give presentations at various conferences, and the thing I like about her presentations is that while her topics might be high-level topics, someone like me who is still learning can understand what she is talking about.  I never walk away from a Val Swisher presentation without feeling like I absorbed something that I can use in my own work–or at least have a better understanding of how it fits into the content strategy field.  I have often credited Val for providing me with the ideas that have helped me get my current job and make an impact there.

So on that note, I was pleased to see that she had written a book on her specialty, Global Content Strategy: A Primer, which is available through The Content Wrangler’s Content Strategy Series published by XML Press.  This book is easily read in an afternoon, and it is loaded with a lot of information.

If you’ve never had the pleasure of hearing Val’s presentations, then this book is a great way to find many of the concepts she talks about in one place. She provides not only the basics of what global content strategy is, but also breaks down bigger ideas into simple terms and includes color images as examples, which is a good move. Val explains that globalization is not just about translating content, but also about being sensitive and knowledgeable about localization as well. For example, what works in Portuguese in Portugal doesn’t necessarily work for Portuguese in Brazil. Translations can’t always be made word for word because of idioms and expressions that aren’t universal. Val provides many examples of how this applies not only textually, but in imagery as well.

The book also talks about how the translation process can get complex and bungled without establishing a translation memory database and consistency of terminology. Val provides some pointers to help global strategists wade through these issues to keep it all straight, including what not to do as well as what best practices are.

My only criticism of the book, ironically enough, is that it doesn’t seem to be written for a global audience, but rather for an American–or perhaps North American–audience. There’s nothing wrong with that, but what if I was in another country and looking to create a global strategy? Perhaps it’s because as the Internet has grown, it has seemingly been American English-centric, and by writing for an American audience, Val has written for the group that needs to become more aware of the global audience it needs to reach!

I’ve been a team member on a global web project for the last six months or so, and I relayed much of what Val had taught me to the members of my immediate team to show that even though we were working on a North American section of a website, we had to find a balance between global and local content. Sometimes my words were heard, and sometimes they fell on deaf ears. Upon reading this book and revisiting the concepts I’ve heard her present in the past, I do wish this book had been given to someone at the top of the global team so they would understand that translation alone is not enough, and that localization makes a big difference. They could’ve used this book as a great reference to better streamline the process and the web project. Many global companies could benefit from reading this book to help put their content in perspective.

If you are looking to acclimate to the concept of global content strategy and what that entails, then this is a great resource for you to read. There are a lot of details squeezed into this slim volume that will be easy to understand, and yet you’ll feel a little overwhelmed at first at how many details one needs to consider when creating a global strategy. Fortunately, this reference book breaks it down so that it isn’t as overwhelming as it could be, and helps content strategists think in a more single-sourced, consistent way to provide the best ROI for a project.

I highly recommend this book.

You can find purchasing information about the book at XML Press:
Global Content Strategy: A Primer

Have you read this book? What do you think? Include your comments below.

“Lucy, you have some ‘splanin’ to do!”: Considering your ESL Customers

Lucille Ball and Desi Arnaz

Content Rules Inc. was kind enough to extend their invitation to have me blog for them again. This time, it’s on a subject that’s near and dear to their hearts as well as mine.

This article talks about my own personal experiences in trying to use standardized language. Whether you use standardized language in your personal or professional life, it’s something that one needs to keep in mind as a writer, especially when writing for a global audience, and even more so if you are writing for a digital format that is easily accessed through the Internet. It’s not easy to do, but it’s something that should be tucked in the back of every writer’s brain.

Read the article for more:
“Lucy, you have some ‘splanin’ to do!”: Considering your ESL Customers

Many thanks again to Val Swisher and the gang at Content Rules, Inc. for the opportunity!

Adobe Day @ Lavacon 2013: Scott Abel’s 5 Technologies Tech Comm Can’t Ignore

The Voodoo Doughnut sign in Portland

I realized as I was writing this post that this would be my 500th post on TechCommGeekMom. Who knew that so much information and thought could accumulate through original posts and curated content?  I’m also very close to my all-time 15,000 hits mark (only a few hits away at this writing). I wouldn’t have believed you if you told me that I’d hit these benchmarks when I started this blog, but of course, I’m going to keep going! I debated about what I should write for my 500th post–whether to finish my Adobe Day coverage or do something else, and in the end, it seems fitting to finish my Adobe Day coverage, because in many respects, knowing and writing about the presentation of Scott Abel, aka “The Content Wrangler”, shows how far I’ve come already in my tech comm journey from beginner to covering internationally known presenters.

Scott is one of the most prolific and vocal speakers out there on the conference circuit speaking about content–whether it be content management or other technical communication topics.  It also seems like he has written the forewords of many of the best tech comm books out there. He’s everywhere! To boot, he’s an accomplished DJ, and I found myself “bonding” with him over dance remixes and mash-ups while at Lavacon, because I always enjoy when he posts either his mash-ups or his favorite mash-ups on Facebook. (I’ll be writing a post about the relationship between tech comm and dance mash-ups in the near future.)  He is a person who is full of so much kinetic energy that you wonder when he’s going to explode, but he doesn’t. Even the time I saw him at the STC Summit last spring with a bad cold, he was still more on top of his game than a lot of people would be on a good day.  Much like Val Swisher, my love for all things Scott Abel also knows no bounds.  He knows how to stir things up at times, but there is no denying that in his frenetic pace of delivering a presentation, you learn SO much. I’m lucky that he’s so kind to be one of my cheerleaders!

ScottAbel
Scott Abel checking his files before his presentation

So when it came to thinking of a garden in Portland to use as an analogy to Scott, I had to deviate. In my mind, he’s the Voodoo Doughnuts shop located about four or five blocks away from the Chinese Garden. Scott’s talks always have lines going out the door, and like many of the Voodoo Doughnuts themselves, the unique flavors dispensed open your mind up to new and delicious possibilities and ideas, and you come back wanting more (hence, more long lines!).  They are both crazy and sweet at the same time. You can’t beat that combination.

Scott was the keynote speaker for Adobe Day as well as the moderator of the discussion panel later in the event. His talk was titled “Five Revolutionary Technologies Technical Communicators Can’t Afford To Ignore.”  If Joe Gollner went fast during his presentation, then Scott went at lightning speed, so my notes below are just the highlights.

Scott started by telling us that translation is going to be an important part of automated content going forward. It’s important to understand that the World Wide Web (WWW) is the “land of opportunity.” The WWW can reach a global market and new consumers. As American users, we forget that 96% of web users are not in the US. We don’t all speak English globally. In fact, less than 6% of the global population speaks English well, and many who do speak it don’t necessarily read or write it well.

Scott’s list of the five technologies that tech comm can’t ignore was as follows:

1) Automated Translation
Why would we need automated translation? We write for the *worldwide* web.  There are over 6,000 languages in the world, so translation is a big deal for global reach and global connection. We need to recognize that content is written for both machines and humans–and we need to write for machines first, as they are the “gatekeepers” of content, such as for searches. Everything goes through the machine first. We need to recognize that the writing rules we learned in elementary school are no longer sufficient for a world in which language science is needed.  We need to examine our content from the vantage point of a rules-processing engine and ensure it’s optimized for machine translation.

2) Automated Transcription
Automated transcription involves software that converts speech to text for machine use. Without transcription, content is locked and hidden from view; transcription allows for better searchability of content.  Scott recommended Koemei as a good transcription tool for video and general transcription, as it can help turn recordings into editable content that can then be translated into other languages.

3) Terminology Management
Terminology management controls words in a central place, namely the words used the most and used consistently for branding, products, etc. Terminology management is important for consistency as well as for regulatory reasons. This is an instance where seeking a global content strategist is needed to help standardize processes.  It’s best to adopt a terminology management system, such as Adobe partner and Scott’s suggestion, Acrolinx.
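To make the idea concrete, here is a toy sketch of what terminology management boils down to: one central list of approved terms, checked against draft content. (Real systems like Acrolinx do vastly more than this; the term list and function names here are invented purely for illustration.)

```python
# A toy terminology checker: a central term base maps non-preferred
# variants to approved terms, and drafts are checked against it.
# (Terms below are invented for the example.)

TERM_BASE = {
    # non-preferred variant -> approved term
    "log-in": "login",
    "e-mail": "email",
    "web site": "website",
}

def check_terminology(text: str) -> list[str]:
    """Return a warning for each non-preferred term found in the text."""
    warnings = []
    lowered = text.lower()
    for variant, approved in TERM_BASE.items():
        if variant in lowered:
            warnings.append(f'Use "{approved}" instead of "{variant}".')
    return warnings

print(check_terminology("Visit our web site and check your e-mail."))
```

The point of centralizing the list is the same one Scott made: everyone on the team (and every tool in the pipeline) pulls from the same source of truth, so branding and regulatory terms stay consistent.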

4) Adaptive content
Adaptive content is content that is structured and designed to adapt to the needs of your customer; it’s about the substance of the content. Adaptive content adapts to devices, e.g. laptops, GPS units, and smartphones.  Customers are demanding exceptional experiences, and it’s up to responsive designers to meet that challenge. Adaptive content makes it possible to publish to multiple platforms and devices.  It is content separated from formatting information. By allowing authors to focus on what they do best, adaptive content makes content findable and reusable by others who need it. We need to rethink content: the move to adaptive content involves work, but the ROI (return on investment) can be realized in months instead of years.
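A minimal sketch of “content separated from formatting” might look like this: the same structured record is rendered for two different outputs, and the author never touches the formatting. (The field names and renderers are invented for the example; real adaptive-content systems use structured standards like DITA rather than ad hoc dictionaries.)

```python
# One structured piece of content, no formatting baked in.
article = {
    "title": "Resetting Your Password",
    "steps": ["Open Settings", "Choose Security", "Select Reset Password"],
}

def render_html(content: dict) -> str:
    """Render the same content for a web page."""
    items = "".join(f"<li>{s}</li>" for s in content["steps"])
    return f"<h1>{content['title']}</h1><ol>{items}</ol>"

def render_text(content: dict) -> str:
    """Render the same content for a plain-text channel."""
    lines = [content["title"]]
    lines += [f"{i}. {s}" for i, s in enumerate(content["steps"], start=1)]
    return "\n".join(lines)

print(render_html(article))
print(render_text(article))
```

Because the substance lives apart from the presentation, adding a third output (say, a smartphone app) means writing one more renderer, not rewriting the content.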

5) Component Content Management
Component content management systems focus on storing the content components that are used to assemble documents. Components come in all sizes, and can be photos, video, or text. It’s about managing CONTENT, not FILES.

Scott provided these slides as his example to show this:

ScottAbel_ExampleA ScottAbel_ExampleB

Structured content, combined with a component content management system, supports personalized content and targeted marketing, which in turn increases response rates. In the end, this process can save money! The key is to remember that all customers are not the same! Reusing content without “copy and paste” methods produces the best results. You can ensure that content is consistent by seeking a content strategist who understands content and is a technologist, and by implementing a component content management system. Scott suggested checking out Astoria Software for a good component content management system.
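The reuse-by-reference idea above can be sketched in a few lines: components are stored once and documents are assembled from lists of component IDs, so there is no copy-and-paste drift. (The component IDs and text are invented for the example; real CCMSes like Astoria handle versioning, metadata, and much more.)

```python
# A toy component store: each piece of content exists exactly once.
components = {
    "intro-x100": "The X100 is a compact label printer.",
    "warn-power": "Warning: disconnect power before servicing.",
    "step-load": "Load the paper roll with the glossy side up.",
}

def assemble(doc_plan: list[str]) -> str:
    """Build a document from a list of component IDs."""
    return "\n\n".join(components[cid] for cid in doc_plan)

# Two documents assembled from the same components -- fix the warning
# text once in the store, and both documents pick up the change.
user_guide = assemble(["intro-x100", "warn-power", "step-load"])
quick_start = assemble(["intro-x100", "step-load"])
print(user_guide)
```

This is the “managing CONTENT, not FILES” distinction in miniature: the unit of storage is the component, and a “document” is just a recipe for combining them.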

At this point, Scott’s talk had pretty much finished, but in answering audience questions, he pointed out that there’s a lot more than just these five technologies to watch. He suggested that we should look out for wireless electricity, flexible surfaces, more wireless devices, wearable computing, and augmented reality as well. He also said that in order to mature as a discipline, we need to be content craftspeople, content designers, and content engineers. We need to leverage both content and code, and think more like engineers and less like writers and editors. Even websites that are very localized still need to be written for a global audience, which improves the English for native speakers as well. Controlled vocabulary helps all end users!

Scott covered a LOT of information in a short amount of time, and he set the tone for the rest of the session, as the presentations that followed repeated much of the same information. (This is a good thing, because then we know that the information is valid, coming from several experienced technical communicators!)

Scott posted on Twitter that his presentation was available on SlideShare, but I have it below.

And as always–Scott, if I misinterpreted or misquoted any of the information I summarized above, please let us know in the comments!