Posted in Uncategorized

Adobe Day @ Lavacon 2013: Scott Abel’s 5 Technologies Tech Comm Can’t Ignore

I realized as I was writing this post that this would be my 500th post on TechCommGeekMom. Who knew that so much information and thought could accumulate through original posts and curated content? I’m also very close to my all-time 15,000-hits mark (only a few hits away at this writing). I wouldn’t have believed you if you had told me I’d hit these benchmarks when I started this blog, but of course, I’m going to keep going! I debated what I should write for my 500th post–whether to finish my Adobe Day coverage or do something else–and in the end, it seems fitting to finish the Adobe Day coverage, because in many respects, covering the presentation of Scott Abel, aka “The Content Wrangler”, shows how far I’ve come in my tech comm journey, from beginner to covering internationally known presenters.

Scott is one of the most prolific and vocal speakers on the conference circuit when it comes to content–whether content management or other technical communication topics. It also seems like he has written the forewords of many of the best tech comm books out there. He’s everywhere! To boot, he’s an accomplished DJ, and I found myself “bonding” with him over dance remixes and mash-ups while at Lavacon, because I always enjoy when he posts either his own mash-ups or his favorites on Facebook. (I’ll be writing a post about the relationship between tech comm and dance mash-ups in the near future.) He is a person so full of kinetic energy that you wonder when he’s going to explode, but he doesn’t. Even the time I saw him at the STC Summit last spring with a bad cold, he was still more on top of his game than a lot of people would be on a good day. Much as with Val Swisher, my love for all things Scott Abel knows no bounds. He knows how to stir things up at times, but there is no denying that in his frenetic pace of delivering a presentation, you learn SO much. I’m lucky that he’s kind enough to be one of my cheerleaders!

Scott Abel checking his files before his presentation

So when it came to thinking of a garden in Portland to use as an analogy to Scott, I had to deviate. In my mind, he’s the Voodoo Doughnuts shop located about four or five blocks away from the Chinese Garden. Scott’s talks always have lines going out the door, and like many of the Voodoo Doughnuts themselves, the unique flavors dispensed open your mind up to new and delicious possibilities and ideas, and you come back wanting more (hence, more long lines!).  They are both crazy and sweet at the same time. You can’t beat that combination.

Scott was the keynote speaker for Adobe Day as well as the moderator of the discussion panel later in the event. His talk was titled “Five Revolutionary Technologies Technical Communicators Can’t Afford To Ignore.” If Joe Gollner went fast during his presentation, then Scott went at lightning speed, so my notes below are just the highlights.

Scott started by telling us that translation is going to be an important part of automated content going forward. It’s important to understand that the World Wide Web (WWW) is the “land of opportunity”: it can reach a global market and new consumers. As American users, we forget that 96% of web users are not in the US. Not everyone speaks English globally; in fact, less than 6% of the global population speaks English well, and even those who do don’t necessarily read or write it well.

Scott’s list of the five technologies tech comm can’t ignore was as follows:

1) Automated Translation
Why would we need automated translation? We write for the *worldwide* web. There are over 6,000 languages in the world, so translation is a big deal for global reach and global connection. Content is written for both machines and humans, but we need to write for machines first, as they are the “gatekeepers” of content–everything, such as a search query, goes through the machine before it reaches a person. The writing rules we learned in elementary school are no longer sufficient for a world in which language science is needed. We need to examine our content from the vantage point of a rules-processing engine and ensure it’s optimized for machine translation.
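
I’m no programmer, but to make “the vantage point of a rules-processing engine” concrete, here’s a minimal Python sketch of the idea (entirely my own illustration, not anything Scott showed; the word limit and word list are hypothetical). It flags writing that machine translation engines tend to mangle:

```python
import re

# Illustrative thresholds and word list; real controlled-language tools use far richer rules.
MAX_WORDS = 25
AMBIGUOUS = {"once", "since", "as"}  # multi-sense words that MT engines often mishandle

def check_mt_readiness(text):
    """Flag sentences that a machine translation engine may handle poorly."""
    issues = []
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for i, sentence in enumerate(sentences, start=1):
        words = sentence.split()
        if len(words) > MAX_WORDS:
            issues.append(f"Sentence {i}: {len(words)} words; keep sentences under {MAX_WORDS}")
        for word in words:
            if word.lower().strip(".,;:!?") in AMBIGUOUS:
                issues.append(f"Sentence {i}: ambiguous word '{word}'; prefer a single-sense term")
    return issues

for issue in check_mt_readiness("Once the update finishes, restart the device. As it restarts, wait."):
    print(issue)
```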

2) Automated Transcription
Automated transcription involves software that converts speech to text for machine use. Without transcription, spoken content is locked and hidden from view; transcription makes that content searchable. Scott recommended Koemei as a good transcription tool for video and general use, as it turns recordings into editable content that can then be translated into other languages.
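
I can’t demonstrate Koemei’s service here, but just to show what automated transcription looks like in practice, here’s a tiny sketch using the open-source SpeechRecognition Python library (my choice of tool, not Scott’s; the file name is hypothetical):

```python
import speech_recognition as sr  # open-source SpeechRecognition library

recognizer = sr.Recognizer()

# Load a WAV recording of a conference talk (hypothetical file name).
with sr.AudioFile("adobe_day_talk.wav") as source:
    audio = recognizer.record(source)

# Hand the audio to a free web speech API; the result is plain, searchable text.
text = recognizer.recognize_google(audio)
print(text)  # content that was "locked" in audio is now findable and translatable
```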

3) Terminology Management
Terminology management keeps the words you use most in a central place so that they are used consistently for branding, products, etc. It is important both for consistency and for regulatory reasons. This is an instance where a global content strategist is needed to help standardize processes. It’s best to adopt a terminology management system; Scott suggested Acrolinx, an Adobe partner.
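
Acrolinx is a full platform, but the core idea–approved terms kept in one central place, with every document checked against them–can be sketched in a few lines of Python (illustrative only; the term pairs are made up):

```python
# Central terminology store: deprecated term -> approved term.
# (Hypothetical entries; a real termbase lives in a tool like Acrolinx.)
TERMBASE = {
    "e-mail": "email",
    "log-in": "log in",
    "web site": "website",
}

def check_terminology(text):
    """Return the deprecated terms found in text, with the approved replacement."""
    findings = []
    lowered = text.lower()
    for bad, good in TERMBASE.items():
        if bad in lowered:
            findings.append(f"Use '{good}' instead of '{bad}'")
    return findings

for finding in check_terminology("Visit our web site and enter your log-in to check e-mail."):
    print(finding)
```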

4) Adaptive Content
Adaptive content is content that is structured and designed to adapt to the needs of your customer; it’s about the substance of the content. Adaptive content adapts to devices: laptops, GPS units, smartphones, and so on. Customers are demanding exceptional experiences, and it’s up to responsive designers to meet that challenge. Adaptive content makes it possible to publish to multiple platforms and devices because the content is separated from its formatting information. By allowing authors to focus on what they do best, adaptive content makes content findable and reusable by others who need it. The move to adaptive content involves rethinking content and real work, but the ROI (return on investment) can be realized in months instead of years.
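
To picture “content separated from formatting information,” here’s a toy Python sketch of my own (not from the talk): one structured topic, rendered differently for a laptop and a smartphone without the author touching the content twice.

```python
# One structured topic: pure substance, no formatting information.
topic = {
    "title": "Resetting the device",
    "steps": ["Hold the power button for 10 seconds.", "Wait for the light to blink."],
}

def render_desktop(t):
    """Full HTML-style rendering for a laptop browser."""
    steps = "".join(f"<li>{s}</li>" for s in t["steps"])
    return f"<h1>{t['title']}</h1><ol>{steps}</ol>"

def render_mobile(t):
    """Terse plain-text rendering for a small smartphone screen."""
    lines = [t["title"].upper()] + [f"{i}. {s}" for i, s in enumerate(t["steps"], 1)]
    return "\n".join(lines)

print(render_desktop(topic))
print(render_mobile(topic))
```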

5) Component Content Management
Component content management systems are needed. They focus on storing the content components that are used to assemble documents. Components come in all sizes and can be photos, video, or text. It’s about managing CONTENT, not FILES.

Scott provided these slides as his example to show this:

[Scott’s example slides: component content management in action]

Structured content, combined with a component content management system, supports personalized content and targeted marketing, which in turn increases response rates. In the end, this process can save money! The key is to remember that all customers are not the same. Reusing content without “copy and paste” methods produces the best results. You can ensure that content is consistent by seeking a content strategist who understands content and is also a technologist, and by implementing a component content management system. Scott suggested checking out Astoria Software for a good component content management system.
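
To picture what “managing CONTENT, not FILES” means, here’s a toy sketch of my own devising (not Astoria’s actual system; the component names are made up): each piece of content is stored exactly once and assembled into documents by reference, with no copy and paste.

```python
# Component store: each piece of content lives exactly once, keyed by ID.
components = {
    "warn-power": "Warning: unplug the unit before servicing.",
    "step-open": "Remove the four screws and lift the cover.",
    "legal": "(c) 2013 Example Corp. All rights reserved.",
}

def assemble(doc_title, component_ids):
    """Build a document by reference; editing a component updates every document."""
    body = "\n".join(components[cid] for cid in component_ids)
    return f"{doc_title}\n{'=' * len(doc_title)}\n{body}"

# Two documents reuse the same warning and legal text -- no copy and paste.
print(assemble("Service Guide", ["warn-power", "step-open", "legal"]))
print(assemble("Quick Start", ["warn-power", "legal"]))
```

Edit the warning component once, and both documents pick up the change; that, as I understand it, is the whole point of component reuse.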

At this point, Scott’s talk had pretty much finished, but in answering audience questions, he pointed out that there’s a lot more than just these five technologies to watch. He suggested we also look out for wireless electricity, flexible surfaces, more wireless devices, wearable computing, and augmented reality. He also said that in order to mature as a discipline, we need to become content craftspeople, content designers, and content engineers; we need to leverage both content and code, and think more like engineers and less like writers and editors. Even websites that are heavily localized still need to be written for a global audience, which improves the English for native speakers as well. Controlled vocabulary helps all end users!

Scott covered a LOT of information in a short amount of time, and he set the tone for the rest of the session, as the presentations that followed repeated much of the same information. (This is a good thing, because then we know that the information is valid, coming from several experienced technical communicators!)

Scott posted on Twitter that his presentation was available on SlideShare, and I have it below.

And as always–Scott, if I misinterpreted or misquoted any of the information I summarized above, please let us know in the comments!

Posted in Uncategorized

What Does Knitting Have To Do With TechComm and m-Learning?

I’m so glad that this is now my 200th post on this blog! TechCommGeekMom has come a long way since it started out as a class project in grad school, now hasn’t it? For this particular post, I’d like to share my thoughts on something that I’ve been thinking about for the past two weeks or so.  It reveals one of my hobbies to you, but hopefully you’ll like the analogy.

As you’ve seen in the subject line of this post, I’m going to be talking about knitting and how it relates to tech comm and m-learning. Now, I know what you’re going to say: knitting is for grandmothers who make ugly stuff for everyone, which you’re obligated to wear when she comes over and visits. You couldn’t be more wrong. Knitting has had a huge upsurge in the last ten or so years, and more and more people are adopting it as a hobby. The revival goes back to the 9/11 attacks, when people were trying to get back to a sense of security and home. I think that in tough economic times, it’s also a relatively easy and inexpensive hobby to have (unless you are a true diehard like some of us).

Knitters these days make more than clothing items, accessories, and toys; knitting has become an art form unto itself. (Ever hear of yarn bombing?) I can’t remember exactly when I started knitting, but I’m guessing it was around 2004 or 2005. I know I was knitting in 2006 when I went to California for a convention for the now-defunct Body Shop At Home business and met Anita Roddick, as there’s photographic evidence of knitting in my hands during the event. In any case, it gave me a chance to learn something new that spoke a whole other language, had a different vocabulary, and let me work with all sorts of colors and textures in the process. For someone with sensory integration issues, it’s a great outlet for sight and touch. Even the rhythm of knitting has a calming effect, and following patterns forces my brain to focus.

So what does all this have to do with tech comm and m-learning? Well, as I thought about it, there’s definitely an analogy to be made about the benefits of knitting and how they lend themselves to these topics. All this first came to me after I had spent the day at Adobe Day and later took a small sojourn into the city of Portland with four fabulous technical communicators who also happen to be knitters. They had invited me to go on a yarn crawl with them (similar to a pub crawl, but in search of high-quality yarn instead of libations), and I readily accepted. We only made it to one store, but we had a great time checking out all the high-end yarns and knitting notions available.

Pardon us drooling over the yarn! Five technical communicators who are also knitters! We know! At KnitPurl, Portland, OR, during LavaCon. L to R: Sharon Burton, me, Sarah O’Keefe, Marcia R. Johnston, and Val Swisher.

As I reflected on Adobe Day, one of the big themes of the morning was the idea of using structured content. Without structure, all of one’s content could fall apart and lose strength; an architecture needs to be created to make it work. Well, knitting is like that. If one doesn’t follow a pattern and just knits in a freestyle, haphazard manner, instead of a nice jumper/sweater, one could end up with a garment with no neck hole and three sleeves. Without the structure of a pattern, and without the reuse of good content (stitches, or groupings of stitches, describing appropriate methods for structure), the whole thing falls apart. The beauty of reusable content in content management is the same as in a knitting pattern: content that is produced well, is solid, and can be understood clearly and concisely will produce good results, and it can be recombined effectively in different instances without losing its meaning. Take a look at a sweater or knit scarf you might have. You’ll find that each stitch makes sense, even when you look at how the pattern of the cuffs and collar differs from the sleeves and the body. But it all fits together. In my mind, this is how reusable content works: very tight, well-written content can be reused in different combinations without losing its context and form, if done correctly.

The other way I thought of the knitting analogy had to do with how one learns to knit, and how that relates to m-learning. Knitting Fair Isle sweaters, Aran sweaters, or lace shawls doesn’t happen on the first day of learning how to knit. Heck, I’m still learning some of these techniques! It comes from learning a foundation–namely the knit stitch and the purl stitch–and building upon that foundation. Any piece of knitting you see is a matter of thousands of knit and purl combinations. But first, one has to master the simple knit and purl stitches by learning how to gauge the tension between the needles, the yarn, and your fingers. Once that is mastered, and once you can read the “codes” of the knitting language, like K2, P2, S1 (that’s knit two, purl two, slip one), the real fun begins.

Knitters have to pay attention to details in the directions, because knitting can be a long task. Except for tiny baby sweaters or sweaters for dolls or stuffed animals, I don’t know of any sweater that could be hand-knit in a single day, even if it was worked from the time the knitter woke up until the time the knitter went to bed. It just couldn’t happen, even for a fairly experienced knitter. So each part of the pattern must be learned or read in chunks, so the knitter can understand where he or she left off. (Talk to me about lace patterns especially, and it’ll make more sense.) Each technique takes time to master, and most knitters learn these techniques a little bit at a time. Whether a knitter is self-taught or taught in a conventional learning environment, nobody learns all there is to know about the most advanced knitting techniques on the first day. Just getting knitting and purling down takes a while. It’s an arduous task to learn to knit and knit well, and to be patient enough to see a pattern all the way through.

Just like in m-learning, things need to be learned in small chunks for comprehension. Information has to be short and to the point so that the reader, just like the knitting-pattern reader, can take that information, mentally digest it, and then work out how to use it. There is definitely trial and error in both m-learning and knitting; if one doesn’t succeed, it’s possible to go back, re-learn the information, and correct the mistake, and in doing so, retain the information better.

Now, if one happens to be BOTH a technical communicator AND a knitter, then these are easy concepts. Reusing content, breaking information into smaller portions for better learning retention, structuring content appropriately, and staying consistent: these practices come with both our words and our stitches.

A variety of tools can be used in either case to create the content. For technical communicators, it’s the use of different software tools that help us achieve our goal. For knitters, different-sized needles, different kinds of yarn, and other tools can be used in the process. Is there only one way of doing things? Of course not. Is there any single tool that will do the job? Generally, no. This is the beauty of both technical communication and knitting. In the end, the most important tool is the mind, because without the individual mind, creativity and intellect cannot be expressed. When all of these tools and factors work together, it is possible to create a fantastic piece of work. When that combination isn’t followed, the result can look pretty disastrous.

So, next time you see someone with a pair of knitting needles in their hands, look carefully at the workflow that person is following. You might learn something from it.

Posted in Uncategorized

Adobe Day at LavaCon 2012 Roundup!

This post is just a quick summary of the Adobe Day at LavaCon 2012 series from this past week. As you see, there was so much information that it took six posts to try to summarize the event!

Being in Portland, Oregon was great. It was my first trip there, and being a native Easterner, my thoughts turned to the pioneer spirit of moving westward in this country. Once there, I saw a hip, young, modern city that continues to look towards the future. The information I gathered at Adobe Day was endorsement-free, practical information that I can use going forward as a technical communicator, and by sharing it, I hope that others in the field will take on that same pioneering spirit to advance what technical communication is all about and bring the field to the next level.

To round up the series, please go to these posts to get the full story of this great event. I hope to go to more events like this in the future!

As I said, I really enjoyed the event and learned so much. I enjoyed not only listening to all the speakers, but also talking “shop” with so many people who are renowned enthusiasts and specialists in the technical communications field. I rarely get to do that at home (although it does help to have an e-learning developer in the house who understands me), so this was a chance for me to learn from those who have been doing this for a while and who not only have seen the changes, but are part of the movement to make changes going forward.

I hope you’ve enjoyed this series of blog posts. I still have many more to come–at least one more that is inspired by my trip out to Portland, and I look forward to bringing more curated content and commentary to you!

The autograph from my copy of Sarah O’Keefe’s book, Content Strategy 101. Awesome!
Posted in Uncategorized

Adobe Day Presentations: Part V – Mark Lewis and DITA Metrics

After Val Swisher spoke about being global ready in today’s tech comm market, the final speaker of the morning, Mark Lewis, took the stage to speak about DITA Metrics.

Mark literally wrote the book on how DITA metrics are done, titled DITA Metrics 101. Mark explained that ROI (return on investment) and business value are being talked about a lot right now in the business and tech comm worlds, so it’s worth having a basic understanding of how DITA metrics work.

Now, I have to admit, I know NOTHING about how any tech comm metrics are done, let alone how DITA metrics are done, so I listened and interpreted the information as best as I could. (Mark, if you are reading this, please feel free to correct any information below in the comments!)

Mark began by explaining that content strategy applies to the entire ENTERPRISE of a business, not just to technical publications. There are lots of ways to measure and track content, including through XML. Traditional metrics involved measuring the cost per page, and the type of topic would be gauged in guideline hours; for example, a document outlining step-by-step procedures would be estimated at four to five hours per write-up. Traditional metrics looked at the cost of a project through an author’s or a team’s output of pages or publications per month. This approach doesn’t measure the quality of the documents; it is concerned more with quantity than quality.

Mark referenced several studies on which he based the information in his book, especially a paper by the Center for Information-Development Management titled “Developing Metrics to Justify Resources,” which helped explain how XML-based metrics are more comprehensive. (Thanks, Scott Abel, for retweeting the link to the study!)

XML-based metrics, Mark pointed out, use just enough DITA information, concerning themselves instead with task, concept, and reference topics within documentation. XML-based metrics can track the cost of a DITA task topic, showing the relationship between occurrences, cost per element, and total number of hours. The cost of a DITA task topic is lower because referenced topics can be reused, by as much as 50%! For comparison, Mark said, instead of measuring an author by the number of pages produced, you can measure the amount of reusable content in a referenced component. The shift is toward the percentage of reused content rather than how many pages are being produced. Good reuse of content saves money, and ROI goes up as a result!
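
The arithmetic behind that claim is simple enough to sketch in Python (the figures below are invented for illustration; they are not Mark’s numbers):

```python
# Hypothetical figures for illustration only; not from DITA Metrics 101.
topics = 200            # task topics in the documentation set
hours_per_topic = 5     # hours to write one topic from scratch
hourly_rate = 60        # assumed cost per writing hour, in dollars

cost_no_reuse = topics * hours_per_topic * hourly_rate

# With 50% reuse, half the topics are referenced instead of rewritten.
reuse_rate = 0.50
cost_with_reuse = topics * (1 - reuse_rate) * hours_per_topic * hourly_rate

print(f"Without reuse:  ${cost_no_reuse:,.0f}")                  # $60,000
print(f"With 50% reuse: ${cost_with_reuse:,.0f}")                # $30,000
print(f"Savings:        ${cost_no_reuse - cost_with_reuse:,.0f}")
```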

Mark introduced another metrics-based measurement: the perceived value of documents as a percentage of the price of a product or of R&D (research and development), along with the number of page views per visit. Mark warned the audience to be careful of “metrics in isolation,” as they can mask an opportunity loss, such as a missed marketing window. He clarified that page hits are hard to interpret, because hit statistics could mean either that readers found what they wanted or that they didn’t want that information; we have no way of knowing for sure. And if technical communicators are not reusing content, projects can actually last longer, producing more cost.

Mark emphasized that through metrics, we can see that reuse of content saves money and time. Productivity measures include looking at future needs, comparing against industry standards, seeing how reuse affects costs, and so on. He suggested looking at the content development life cycle of a project and using metrics to determine what reused versus new topics cost in this process. By doing this, the value of technical communication becomes much clearer, and we can prove its worth to a company or client.

I have to admit, as I said before, I don’t know or understand a lot about the analytical side of technical communication, but what Mark talked about made sense to me. I always thought that measuring the value of an author based on page output rather than the quality of the writing didn’t make sense. Part of that is because, as a newer technical communicator, I might take a little longer to provide the same quality output as someone more experienced, but that doesn’t mean the quality is any less; so measuring pages per hour didn’t make sense. However, if consistency in reusing content is measured across all documentation instead, then the quality is, in a sense, being analyzed: it can be measured by how often information is referenced or reused beyond its original use. Using DITA makes a lot of sense in that respect.

More information about DITA metrics can be found on Mark’s website, DITA Metrics 101.

I hope you’ve enjoyed this series covering all the Adobe Day presenters. They contributed a lot of food for thought and provided great information about how we as technical communicators should start framing our thought processes to produce better quality content and provide value for the work that we do. I gained so much knowledge in just those few hours, and I’m glad that I could share it with you here on TechCommGeekMom.

Posted in Uncategorized

Adobe Day Presentations: Part II – Sarah O’Keefe and Content Strategy

Sarah O’Keefe of Scriptorium Publishing

After an energetic first presentation by Scott Abel, second presenter Sarah O’Keefe, author of Content Strategy 101 and founder of Scriptorium Publishing, talked about “Developing a Technical Communication Content Strategy.”

Sarah started by telling us that many companies don’t understand the value of technical communication, so technical communicators need to justify their approach. When writing business cases for these justifications, technical communicators need to include the current situation, recommendations to improve it, the costs associated with those recommendations, and the benefits and risks of taking the recommended actions. If there are regulatory and legal requirements, then there is a need to build a case for more efficient compliance in order to avoid legal complications.

Sarah expounded on how technical communication departments should talk to management about how technical communication can control costs. She explained that there is a myth that documentation can be done cheaply. She busted that myth by explaining that cheap documentation is actually more expensive: it can be so limited in availability that it is useless, it can be hard to understand and out of date, and it may not be translatable into other languages. The cost of bad content is high customer-service volume, lost sales, content duplication, huge globalization costs, and contradictions with marketing communications.

The solution, she said, is efficient development involving the reuse of content: single sourcing and cross-departmental reuse, only tweaking text that is already available. She stressed that formatting and production are important! Using templates and various structures is helpful, and she encouraged using tools for creating the needed output. Sarah also said that localization matters, and that translation is a needed component of technical documentation. All of these measures can help bring costs down significantly! Sarah gave an example of a common obstacle to efficient customer service or tech phone support: a monster-sized PDF that support representatives need to read before providing service while on the phone. Having to read that long document while on a call with a customer is time-consuming and not cost-efficient.

Sarah encouraged technical communicators to collaborate and create better working relationships with other business departments, such as tech support, training, and marketing, as technical content can supply those departments with pertinent information and help them streamline it. Technical communication can be used to support sales: read the documentation before you buy! Technical content can also increase visibility by being searchable, findable, and discoverable, especially for Google or SEO purposes. Sarah recommended building user communities around technical communication documentation, and making sure that technical communication aligns with business needs.

Sarah goes into greater detail both in her book and on the book’s website, which is found at http://www.contentstrategy101.com.

Sarah’s presentation was really good, in my opinion, because from my own experience, much of what she explained was true; as she said, the biggest battle is making management understand the value of a solid content strategy. One of my biggest issues at my last consulting job was exactly the scenario Sarah described: marketing was not taking proper advantage of the technical communication documentation available, nor was it sharing resources and creating reusable content. As a result, in-house documentation was long and overly customized even though much of the information was the same or very similar (it needed only a few tweaks), and the sales advisors who needed the information rarely looked at it because it was too long. When I made recommendations about reuse and editing from a technical communication standpoint, I was ignored. Of course, I was only a consultant and wasn’t privy to the departmental costs, but it did not feel good to know that some of the issues could have been fixed with the kind of collaboration Sarah described. In this respect, I could relate to what she was saying.

An aside: Sarah is a self-confessed chocoholic, and a fun part of her presentation was that she incorporated chocolate production into it. To verify her chocoholic status, I was out with Sarah after the event and caught her in the act of buying more chocolate at one of Portland’s chocolate boutiques:

Sarah O’Keefe buying more chocolate for inspiration!

I do think Sarah’s message is very clear. Technical communication has a lot of value, especially with structured and reusable content, and as technical communicators, we need to push that agenda to management so that we can provide a bigger service to our clients and companies than they currently realize.

(Sarah–feel free to correct any of my interpretations in the comments below!)

Next post: Adobe Day Presentations: Part III – Joe Welinske and Multi-screen Help Authoring