Posted in Uncategorized

TechCommGeekMom’s TechComm Predictions for 2014

sarahjane-crystalball
Sarah Jane Smith of Doctor Who is gazing into her crystal ball, trying to figure out why the 3rd Doctor and the Master aren’t going at it with more mobile tech.
Happy New Year! Welcome to 2014!

I had set several goals for 2013, and for the most part, I achieved many of them.  Due to finally having a job this past year, I was able to pay for my new kitchen outright (okay, we saved on labor costs because my multi-talented husband installed everything–and I mean everything–except the Silestone countertops), so I have a new kitchen that I love.  I definitely travelled more, as I visited Atlanta for the first time in 21 years due to the STC Summit, and I got to visit Portland, Oregon again for Lavacon.  I didn’t get to go to the UK, however. And I still don’t look like a supermodel yet.

My 2014 goals are still fairly ambitious, I think. I would like to build upon my web publishing experiences at work, and figure out how to become a content engineer, rather than merely a content manager. I'm hoping that attending the Intelligent Content Conference in San Jose, CA this February and this year's STC Summit in Phoenix, AZ will help with that. I'm not sure if I'll be able to make it to Lavacon again until later in the year. I still want to go to the UK, but I think I may have to wait a little longer for that. If there's a way to combine a vacation and a conference there at the same time, perhaps I can pull it off later in the year instead of going to Lavacon (just to vary things up a bit). I had hoped to become a certified Muse expert last year, and that didn't happen. Perhaps I can try this year. I also got the "WordPress for Dummies" book this year, which has inspired me to become more of an expert at using WordPress. I currently use the version hosted by WordPress itself, but I think it might be helpful to understand how the independently managed, self-hosted version works, too. If I can achieve some weight loss in the process during all of this, I will consider 2014 a success. 😉

As for predictions for 2014 in tech comm, I decided that I would be a little more analytical about it. Two years ago, it seemed that the push in tech comm was that we needed to think more carefully about content management and reuse of content, and think in terms of mobile content. This past year, that was extended to translation and localization of content, taking it a step further. So with those concepts in mind, what's the next step? In my mind, it's implementation of all of these with more vigor. Some companies are on top of this, but it wouldn't surprise me if many companies, even large, global companies, are not on top of any of this yet, or at least not in an effective way. I think about companies that I've worked for in the past, and how, despite their size and availability of resources, these companies weren't cutting edge in distributing content for desktops or mobile, and regional sites were not as localized or standardized as they should be. So, in my mind, this is the year of implementation.

googleglass
samsungwatch

Another thing to consider is technology changes. Over the past few years, we've been adapting not only to desktop and laptop interfaces, but also to more mobile devices like smartphones and tablets. Marta Rauch, a technical communicator friend of mine who is part of the Google Glass beta testing, pointed out that 2014 is due to be a year in which even more portable, wearable mobile devices will become relevant. These devices would include something like Google Glass or similar products, but they would also include devices like Samsung's wristband device or devices that are synchronized with car components. She's got a point. Components are getting smaller, and technological portability is becoming more and more mainstream all the time. How do we decide what content is most user-friendly, reusable, streamlined, and pertinent for these kinds of mobile devices? It's something we need to start thinking about now.

riker-commbadge
“Riker to the Tech Comm community–are you there?”

So there you have it. At least in my mind, if we aren’t all wearing Comm Badges like in Star Trek by the end of the year, I don’t know what this world is coming to. 😉 But it’s hard for someone like me to figure out where the future is going. I’m grateful there are those who are on the cutting edge that can help me figure that sort of thing out, and can educate me on the latest and greatest so that I can bring it to my own workplace, as well as talk about it here on TechCommGeekMom.

I’m sure that there will be plenty of surprises coming up in 2014. As I said, I have three conferences that I’ll be attending in the first half of the year, and I know with the continuation of this great work contract I have, I will probably be learning a lot of new things through that opportunity, too. My philosophy is to never stop learning, and I plan to continue to learn a lot more going forward in the coming year.

What are your predictions for the coming year? Am I on target, or off-base? What did I forget to mention? Let me know in the comments.

Posted in Uncategorized

Adobe Day @ Lavacon 2013: Scott Abel’s 5 Technologies Tech Comm Can’t Ignore

voodoodonutsign

I realized as I was writing this post that this would be my 500th post on TechCommGeekMom. Who knew that so much information and thought could accumulate through original posts and curated content? I'm also very close to my all-time 15,000 hits mark (only a few hits away at this writing). I wouldn't have believed you if you told me that I'd hit these benchmarks when I started this blog, but of course, I'm going to keep going! I debated about what I should write for my 500th post–whether to finish my Adobe Day coverage or do something else, and in the end, it seems fitting to finish my Adobe Day coverage, because in many respects, knowing and writing about the presentation of Scott Abel, aka "The Content Wrangler", shows how far I've come already in my tech comm journey from beginner to covering internationally known presenters.

Scott is one of the most prolific and vocal speakers out there on the conference circuit speaking about content–whether it be content management or other technical communication topics. It also seems like he has written the forewords of many of the best tech comm books out there. He's everywhere! To boot, he's an accomplished DJ, and I found myself "bonding" with him over dance remixes and mash-ups while at Lavacon, because I always enjoy when he posts either his mash-ups or his favorite mash-ups on Facebook. (I'll be writing a post about the relationship between tech comm and dance mash-ups in the near future.) He is a person who is full of so much kinetic energy that you wonder when he's going to explode, but he doesn't. Even the time I saw him at the STC Summit last spring with a bad cold, he was still more on top of his game than a lot of people would be on a good day. Much like with Val Swisher, my love for all things Scott Abel knows no bounds. He knows how to stir things up at times, but there is no denying that in his frenetic pace of delivering a presentation, you learn SO much. I'm lucky that he's so kind to be one of my cheerleaders!

ScottAbel
Scott Abel checking his files before his presentation

So when it came to thinking of a garden in Portland to use as an analogy to Scott, I had to deviate. In my mind, he’s the Voodoo Doughnuts shop located about four or five blocks away from the Chinese Garden. Scott’s talks always have lines going out the door, and like many of the Voodoo Doughnuts themselves, the unique flavors dispensed open your mind up to new and delicious possibilities and ideas, and you come back wanting more (hence, more long lines!).  They are both crazy and sweet at the same time. You can’t beat that combination.

Scott was the keynote speaker for Adobe Day as well as the moderator of the discussion panel later in the event. His talk was titled "Five Revolutionary Technologies Technical Communicators Can't Afford To Ignore." If Joe Gollner went fast during his presentation, then Scott went at lightning speed, so my notes below are the highlights.

Scott started by telling us that translation is going to be an important part of automated content going forward. It's important to understand that the World Wide Web (WWW) is the "land of opportunity": it can reach a global market of new consumers. As American users, we forget that 96% of web users are not in the US. We don't all speak English globally. In fact, less than 6% of the global population speaks English well, and even those who do don't necessarily read or write it well.

Scott's list of the five technologies tech comm can't ignore was as follows:

1) Automated Translation
Why would we need automated translation? We write for the *worldwide* web. There are over 6000 languages in the world, so translation is a big deal for a global reach and global connection. We need to recognize that content is written for both machines and humans. Even though we write for both, we need to write for machines first, as they are the "gatekeepers" of content, such as for searches. Everything goes through the machine first. We need to recognize that writing rules learned in elementary school are no longer sufficient for a world in which language science is needed. We need to examine our content from the vantage point of a rules-processing engine and ensure it's optimized for machine translation.

2) Automated Transcription
Automated transcription involves software that converts speech to text for machine use. Without transcription, content is locked and hidden from view. Transcription allows for better searchability of content. Scott recommended Koemei as a good transcription software tool for video and general transcription, as it can help transform editable content into other languages.

3) Terminology Management
Terminology management controls words in a central place, namely the words used the most and used consistently for branding, products, etc. Terminology management is important for consistency as well as for regulatory reasons. This is an instance where seeking a global content strategist is needed to help standardize processes.  It’s best to adopt a terminology management system, such as Adobe partner and Scott’s suggestion, Acrolinx.
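To make the idea concrete, here's a toy sketch of what a terminology check does under the hood (this is my own illustration in Python, not how Acrolinx actually works): a termbase maps deprecated synonyms to a single preferred term, and the checker flags any deprecated term it finds.

```python
# Toy terminology checker: flags deprecated synonyms in favor of a
# single preferred term. Illustrative sketch only -- real tools like
# Acrolinx apply far richer linguistic rules.
import re

# Termbase: deprecated term -> preferred term (hypothetical entries)
TERMBASE = {
    "canine": "dog",
    "pooch": "dog",
    "pup": "dog",
}

def check_terminology(text):
    """Return a list of (deprecated, preferred) pairs found in text."""
    findings = []
    for bad, good in TERMBASE.items():
        # Whole-word, case-insensitive match on the deprecated term
        if re.search(r"\b" + re.escape(bad) + r"\b", text, re.IGNORECASE):
            findings.append((bad, good))
    return findings

issues = check_terminology("Feed your pooch twice a day; a healthy canine is a happy one.")
for bad, good in issues:
    print(f"Use '{good}' instead of '{bad}'")
```

A real terminology management system adds workflows, linguistic analysis, and authoring-tool integrations, but the core contract is the same: one preferred term, enforced everywhere.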

4) Adaptive content
Adaptive content is content that is structured and designed to adapt to the needs of your customer; it's about the substance of the content. Adaptive content adapts to devices, e.g. laptops, GPS units, and smartphones. Customers are demanding exceptional experiences, and it's up to responsive designers to meet that challenge. Adaptive content makes it possible to publish to multiple platforms and devices. It is content separated from formatting information. By allowing authors to focus on what they do best, adaptive content makes content findable and reusable by others who need it. We need to rethink content, as the move to adaptive content involves work, but the ROI (return on investment) can be realized in months instead of years.

5) Component Content Management
Component content management systems are needed. They focus on storing content components that are used to assemble documents. Components can be of any size, and can be photos, video, or text. It's about managing CONTENT, not FILES.

Scott provided these slides as his example to show this:

ScottAbel_ExampleA ScottAbel_ExampleB

Structured content, combined with a component content management system, supports personalized content and targeted marketing, which in turn increases response rates. In the end, this process can save money! The key is to remember that all customers are not the same! Reusing content without "copy and paste" methods produces the best results. You can ensure that content is consistent by seeking a content strategist who understands content and is a technologist, and by implementing a component content management system. Scott suggested checking out Astoria Software for a good component content management system.
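The component idea itself is simple enough to sketch in a few lines (a toy illustration only; real CCMS products like Astoria also handle versioning, metadata, workflow, and linking): components are stored once, and documents are assembled by reference rather than by copy and paste.

```python
# Toy component content management: store content components once,
# then assemble documents by referencing them. Illustrative only --
# the component names and content here are made up.

components = {
    "warn-power": "Warning: disconnect power before servicing.",
    "step-open": "Open the access panel.",
    "step-close": "Close the access panel.",
}

def assemble(component_ids):
    """Build a document from component references, not copy-paste."""
    return "\n".join(components[cid] for cid in component_ids)

install_guide = assemble(["warn-power", "step-open", "step-close"])
repair_guide = assemble(["warn-power", "step-open"])  # reuses the same warning

print(install_guide)
```

Because both guides reference the same warning component, fixing its wording once fixes it everywhere it's used, which is exactly the consistency benefit Scott described.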

At this point, Scott's talk had pretty much finished, but in answering audience questions, he pointed out that there's a lot more than just these five technologies to watch. He suggested that we should look out for wireless electricity, flexible surfaces, more wireless devices, wearable computing, and augmented reality as well. He also said that in order to mature as a discipline, we need to be content craftspeople, content designers, and content engineers. We need to leverage both content and code, and think more like engineers and less like writers and editors. Even websites that are very localized still need to be written for a global audience, which improves the English for native speakers as well. Controlled vocabulary helps all end users!

Scott covered a LOT of information in a short amount of time, and he set the tone for the rest of the session, as the presentations that followed repeated much of the same information. (This is a good thing, because then we know that the information is valid, coming from several experienced technical communicators!)

Scott posted on Twitter that his presentation was available on SlideShare, but I have it below.

And as always–Scott, if I misinterpreted or misquoted any of the information I summarized above, please let us know in the comments!

Posted in Uncategorized

Adobe Day @Lavacon 2013 – Val Swisher Says It Starts With The Source

Ladds_1
Ladd’s Addition Rose Garden
Photo from http://www.rosegardenstore.org

Val Swisher was the next to last individual to speak at the Adobe Day at Lavacon 2013 event. For those who are regular readers of this blog, you know that my love for all things Val Swisher has no bounds. I've always been able to take her easy-to-digest information, and absorb it quickly into my brain, as well as relay her knowledge to others. When I looked at Portland gardens to compare her to, I chose Ladd's Addition Rose Garden. While it's not as well-known (unlike Val, who is very well-known), this particular park, according to The Rose Garden Store, was one of four rose gardens built on land from the Ladd estate, designed so that the gardens come together to form the points of a compass. I often think of Val as my compass, as she has never steered me wrong with her information or with the wisdom and fun that she's shared with me one-on-one.

Val's Adobe Day presentation centered on source English terminology in a multi-channelled, global world, and how terminology affects structured authoring, translation, and global mobile content. She started the talk by reminding us that historically, we've always created content, whether it's been on cave walls, through stenography, through typewriters, or eventually on word processors. In every instance, consistent terminology has been essential for structured authoring and content. Managing terminology is also essential for translation and for reuse. She stated that the prior attitude used to be that the more complicated the writing was, the more "fancy" the product was. Today, that's definitely not true. She used an example that I've heard her use before, but it's so simple that it's a classic. Her example involves writing for a pet website. If multiple words meaning dog are used, there can be a problem with reuse, because you can't reuse content if you use different words.

Val_example_dog
Here’s the example Val showed.

Val pointed out that it would be an even worse situation if technological or medical terminology was used instead.

Val continued by saying that when it comes to XML, reuse, and terminology, you cannot realize the gains of structured authoring if you're not efficient with your words. Terminology is critically important to gain more opportunities.

ValSwisher
Val Swisher explaining how to approach content from a translation perspective.

Translation comes down to three elements: we're trying to get better, cheaper, and faster translation output. We MUST use technology to push terminology and style/usage rules to content developers. In order to make translation cheaper, we need fewer words, reused words, and reused sentences. It's impossible for writers to know, or even know to look up, all terminology and usage rules, so we MUST automate with technology. For example, "Hitting the button" is not translatable, but "Select OK" is fine! She said, "Say the same thing the same way every time you say it."

For better translation, translation quality needs to improve and meanings need to match in order for better machine translation to be a possibility. Bad translation comes from the source itself.  If the source information is problematic, then the translation will be problematic.  The best way to save money and time is to say the same thing, every time, using the same words, and use shorter sentences. For machine translation, don’t go over 24 words in a sentence.
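Val's 24-word guideline is exactly the kind of rule that's easy to automate. Here's a rough sketch of such a check (my own illustration, using naive sentence splitting; real authoring checkers use proper sentence tokenizers):

```python
# Flag sentences that exceed a word-count budget for machine
# translation. Naive splitting on ., !, ? is enough to show the idea.
import re

MAX_WORDS = 24  # Val's guideline for machine-translatable sentences

def long_sentences(text, limit=MAX_WORDS):
    """Return (word_count, sentence) pairs for sentences over the limit."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    flagged = []
    for s in sentences:
        count = len(s.split())
        if count > limit:
            flagged.append((count, s))
    return flagged

sample = ("Select OK. "
          "This extremely long sentence keeps adding clause after clause, "
          "wandering through qualifications and asides until any machine "
          "translation engine would surely struggle to keep the meaning intact.")
for count, s in long_sentences(sample):
    print(f"{count} words: {s[:40]}...")
```

A check like this, pushed into the authoring tool, is one small way of using technology to enforce style rules rather than relying on writers to remember them.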

Faster translation is seen as content that takes less time to translate, needs fewer in-country reviews, and gets to market more quickly. The keys to delivering global mobile content are responsive design, global mobile apps, and careful text selection, with terminology as the most important element. Val showed this example of how translation in responsive design isn't working, where the Bosch websites are not exactly in synchronization:

The mobile website on the left looks nothing like the English language version on the right.

The simpler the website's design, especially on mobile, the less you have to tweak it. This is especially true where consistent terminology is important, because consistency is needed for structured authoring. Creating truly faster, cheaper, and better translation enables true global responsive design. This is not a simple task, as there is no such thing as simple, even when writing about complex concepts. Even if you think you're not translating, your customers are, so the content needs to be very clear. The scary part of this is that some companies use Google Translate as their translation strategy, which is risky at best. To use something like Google Translate as the translation software, the content had better be tight, clear, and consistent.

One of the things I enjoy with Val Swisher’s presentations is that it all comes down to common sense, and she breaks it down into easy manageable parts for those of us–like me–who might not have thought about the context of language for structured authoring, and the consequences for not strategizing content to include translation considerations.

I highly recommend checking out Val’s blog for other great insights.

(As always, Val–if you’d like to add or correct anything here, please do in the comments below!)

Posted in Uncategorized

Adobe Day at LavaCon 2012 Roundup!

This post is just a quick summary of the Adobe Day at LavaCon 2012 series from this past week. As you see, there was so much information that it took six posts to try to summarize the event!

Being in Portland, Oregon was great. It was my first trip there, and as a native Easterner, my thoughts turned to the pioneer spirit of moving westward in this country. Once there, I saw a hip, young, modern city, continuing to look towards the future. The information I gathered at Adobe Day was general, endorsement-free, practical information that I can use going forward as a technical communicator, and by sharing it, I hope that others in the field will equally take on that pioneering spirit to advance what technical communications is all about, and bring the field to the next level.

To round up the series, please go to these posts to get the full story of this great event. I hope to go to more events like this in the future!

As I said, I really enjoyed the event and learned so much. I enjoyed not only listening to all the speakers, but also talking "shop" with so many people who are renowned enthusiasts and specialists in the technical communications field. I rarely get to do that at home (although it does help to have an e-learning developer in the house who understands me), so this was a chance for me to learn from those who have been doing this for a while and not only have seen the changes, but are part of the movement to make changes going forward.

I hope you’ve enjoyed this series of blog posts. I still have many more to come–at least one more that is inspired by my trip out to Portland, and I look forward to bringing more curated content and commentary to you!

The autograph from my copy of Sarah O'Keefe's book, Content Strategy 101. Awesome!
Posted in Uncategorized

Adobe Day Presentations: Part V – Mark Lewis and DITA Metrics

After Val Swisher spoke about being global ready in today’s tech comm market, the final speaker of the morning, Mark Lewis, took the stage to speak about DITA Metrics.

Mark literally wrote the book about how DITA metrics are done, titled, DITA Metrics 101. Mark explained that ROI (return on investment) and business value are being talked about a lot right now in the business and tech comm worlds, so it’s worth having a basic understanding of how DITA metrics work.

Now, I have to admit, I know NOTHING about how any tech comm metrics are done, let alone how DITA metrics are done, so I listened and interpreted the information as best as I could. (Mark, if you are reading this, please feel free to correct any information below in the comments!)

Mark began by explaining that content strategy applies to the entire ENTERPRISE of a business, not just the technical publications. There are lots of ways to measure tracking through various means, including XML. Traditional metrics involved measuring the cost per page, and the type of topic would be gauged in guideline hours. For example, a document outlining step-by-step procedures would equal four to five hours per write-up of this type of procedure. Traditional metrics looked at the cost of the project through the measure of an author's or a team's output of pages and publications per month. It doesn't measure the quality of the documents; it is concerned more with quantity than quality.

Mark referenced several studies on which he based the information in his book, especially a paper from the Center for Information-Development Management titled "Developing Metrics to Justify Resources," which helped to explain how XML-based metrics are more comprehensive. (Thanks, Scott Abel, for retweeting the link to the study!)

XML-based metrics, Mark pointed out, use just enough DITA information, concerning themselves instead with task, concept, and reference topics within documentation. XML-based metrics can now track the cost of a DITA task topic, showing the relationship between occurrences, cost per element, and total number of hours. The cost of a DITA task topic is lower because referenced topics can be reused, up to 50%! For comparison, Mark said that you can look at the measurement of an author by measuring the number of pages versus the amount of reusable content of a referenced component. The shift is now toward the percentage of reused content rather than how many pages are being produced. Good reuse of content saves money, and ROI goes up as a result!
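To see why reuse drives ROI, here's a back-of-the-envelope model (all the rates and percentages are my own made-up illustrations, not figures from Mark's book):

```python
# Back-of-the-envelope DITA cost model: reused topics cost only a
# fraction of newly authored ones. All figures below are hypothetical
# illustrations, not taken from DITA Metrics 101.

HOURS_PER_NEW_TOPIC = 4.0    # e.g. a step-by-step task topic
REUSE_COST_FACTOR = 0.1      # assume reusing a topic takes ~10% of the effort
HOURLY_RATE = 60.0           # hypothetical writer cost in dollars

def project_cost(new_topics, reused_topics):
    """Total cost when some topics are authored fresh and some reused."""
    hours = (new_topics * HOURS_PER_NEW_TOPIC
             + reused_topics * HOURS_PER_NEW_TOPIC * REUSE_COST_FACTOR)
    return hours * HOURLY_RATE

# 100 topics written from scratch vs. 50 new + 50 reused (50% reuse)
no_reuse = project_cost(100, 0)
with_reuse = project_cost(50, 50)
print(f"No reuse:  ${no_reuse:,.0f}")
print(f"50% reuse: ${with_reuse:,.0f}")
print(f"Savings:   {100 * (1 - with_reuse / no_reuse):.0f}%")
```

Even in this crude model, 50% reuse cuts the project cost by 45%, which is exactly the kind of result these metrics are designed to surface.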

Mark introduced another metrics-based measurement, namely the perceived value of documents as a percentage of the price of a product or of R&D (research and development), as well as looking at the number of page views per visit. Mark warned the audience to be careful of "metrics in isolation," as they can represent an opportunity loss or a missed marketing window. He clarified that page hits are hard to interpret, because hit statistics could mean either that readers found what they wanted, or that they didn't want that information. We have no way of knowing for sure. If technical communicators are not reusing content, projects can actually last longer, and hence cost more.

Mark emphasized that through metrics, we can see that reuse of content equals saving money and time. Productivity measures include looking at future needs, comparing to industry standards, seeing how it affects costs, etc. He suggested looking at the Content Development Life Cycle of a project, and how using metrics can help determine what reused or new topics cost in this process. By doing this, the value of technical communications becomes much clearer, and the discipline proves its value to a company or client.

I have to admit, as I said before, I don't know or understand a lot about the analytical part of technical communication, but what Mark talked about made sense to me. I always thought that measuring the value of an author based on page output rather than the quality of the writing didn't make sense. Part of that is because, as a newer technical communicator, I might take a little longer to provide the same quality output as someone who is more experienced, but that doesn't mean that the quality is any less. So measuring pages per hour didn't make sense. However, if consistency in reusing content is measured instead throughout all documentation, then the quality, in a sense, is being analyzed, and it can be measured by how often information is referenced or reused beyond its original use. Using DITA makes a lot of sense in that respect.

More information about DITA metrics can be found on Mark’s website, DITA Metrics 101.

I hope you've enjoyed this series on all the Adobe Day presenters. They all contributed a lot of food for thought, and provided great information about how we as technical communicators should start framing our thought processes to produce better-quality content and provide value for the work that we do. I gained so much knowledge just in those few hours, and I'm glad that I could share it with you here on TechCommGeekMom.