Adobe Day at LavaCon 2012 Roundup!

This post is just a quick summary of the Adobe Day at LavaCon 2012 series from this past week. As you can see, there was so much information that it took six posts to summarize the event!

Being in Portland, Oregon was great. It was my first trip there, and as a native Easterner, my thoughts turned to that pioneer spirit of moving westward in this country. Once there, I saw a hip, young, modern city that continues to look towards the future. The information I gathered at Adobe Day was endorsement-free and practical, information I can use going forward as a technical communicator. By sharing it, I hope that others in the field will take on that same pioneering spirit, advance what technical communications is all about, and bring the field to the next level.

To round up the series, please go to these posts to get the full story of this great event. I hope to go to more events like this in the future!

As I said, I really enjoyed the event and learned so much. I enjoyed not only listening to all the speakers, but also talking “shop” with so many people who are renowned enthusiasts and specialists in the technical communications field. I rarely get to do that at home (although it does help to have an e-learning developer in the house who understands me), so this was a chance for me to learn from those who have been doing this for a while and who not only have seen the changes, but are part of the movement to make changes going forward.

I hope you’ve enjoyed this series of blog posts. I still have more to come, including at least one more post inspired by my trip out to Portland, and I look forward to bringing more curated content and commentary to you!

(Photo caption: The autograph from my copy of Sarah O’Keefe’s book, Content Strategy 101. Awesome!)

Adobe Day Presentations: Part V – Mark Lewis and DITA Metrics

After Val Swisher spoke about being global-ready in today’s tech comm market, the final speaker of the morning, Mark Lewis, took the stage to speak about DITA metrics.

Mark literally wrote the book on DITA metrics, titled DITA Metrics 101. He explained that ROI (return on investment) and business value are being talked about a lot right now in the business and tech comm worlds, so it’s worth having a basic understanding of how DITA metrics work.

Now, I have to admit, I know NOTHING about how any tech comm metrics are done, let alone how DITA metrics are done, so I listened and interpreted the information as best as I could. (Mark, if you are reading this, please feel free to correct any information below in the comments!)

Mark began by explaining that content strategy applies to the entire ENTERPRISE of a business, not just the technical publications. There are lots of ways to measure and track content, including through XML. Traditional metrics involved measuring the cost per page, and the type of topic would be gauged in guideline hours; for example, a document outlining a step-by-step procedure would be budgeted at four to five hours per write-up. Traditional metrics looked at the cost of a project by measuring an author’s or a team’s output in pages or publications per month. That approach doesn’t measure the quality of the documents; it is concerned with quantity instead of quality.

Mark referenced several studies on which he based the information in his book, especially a paper from the Center for Information-Development Management titled “Developing Metrics to Justify Resources,” which helped explain how XML-based metrics are more comprehensive. (Thanks, Scott Abel, for retweeting the link to the study!)

XML-based metrics, Mark pointed out, use just enough DITA information, concerning themselves instead with the task, concept, and reference topics within documentation. XML-based metrics can now track the cost of a DITA task topic, showing the relationship between occurrences, cost per element, and total number of hours. The cost of a DITA task topic is lower because referenced topics can be reused, up to 50%! For comparison, Mark said that you can look at the measurement of an author by measuring the number of pages versus the amount of reusable content of a referenced component. The shift is now toward the percentage of reused content rather than how many pages are being produced. Good reuse of content saves money, and ROI goes up as a result!
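
To make the reuse math concrete, here is a minimal sketch of the kind of savings calculation Mark was describing. Fair warning: the topic count, guideline hours, hourly rate, and reuse percentage below are my own made-up assumptions for illustration, not figures from DITA Metrics 101.

```python
# Toy cost model for DITA topic reuse. All numbers are invented
# assumptions for illustration, not figures from Mark's book.

HOURS_PER_NEW_TOPIC = 4.5  # e.g., the 4-5 guideline hours for a procedure
HOURLY_RATE = 60.0         # assumed fully loaded author cost per hour

def project_cost(total_topics: int, reuse_rate: float) -> float:
    """Cost of a project where a fraction of topics are reused as-is."""
    new_topics = total_topics * (1 - reuse_rate)
    return new_topics * HOURS_PER_NEW_TOPIC * HOURLY_RATE

no_reuse = project_cost(200, reuse_rate=0.0)
with_reuse = project_cost(200, reuse_rate=0.5)  # the "up to 50%" reuse case

print(f"Without reuse: ${no_reuse:,.0f}")         # Without reuse: $54,000
print(f"With 50% reuse: ${with_reuse:,.0f}")      # With 50% reuse: $27,000
print(f"Savings: ${no_reuse - with_reuse:,.0f}")  # Savings: $27,000
```

Even with toy numbers, the point is visible: every reused topic is a topic nobody has to write (or pay for) again.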

Mark introduced another metrics-based measurement, namely the perceived value of documents as a percentage of the price of a product or of R&D (research and development), as well as looking at the number of page views per visit. Mark warned the audience to be careful of “metrics in isolation,” as relying on them can mean an opportunity loss or a missed marketing window. He clarified that page hits are hard to interpret, because hit statistics could mean either that readers found what they wanted or that they didn’t want that information; we have no way of knowing for sure. He also noted that if technical communicators are not reusing content, projects can actually last longer, and hence cost more.

Mark emphasized that, through metrics, we can see that reuse of content equals saving money and time. Productivity measures include looking at future needs, comparing against industry standards, seeing how reuse affects costs, and so on. He suggested looking at the Content Development Life Cycle of a project, and at how metrics can help determine what reused or new topics cost in this process. By doing this, the contribution of technical communications becomes much clearer and proves its value to a company or client.

I have to admit, as I said before, I don’t know or understand a lot about the analytical part of technical communication, but what Mark talked about made sense to me. I always thought that measuring the value of an author based on page output rather than the quality of the writing didn’t make sense. Part of that is because, as a newer technical communicator, I might take a little longer to provide the same quality output as someone who is more experienced, but that doesn’t mean that the quality is any less. So measuring pages per hour didn’t make sense. However, if consistency in reusing content is measured instead throughout all documentation, then the quality, in a sense, is being analyzed, and it can be measured by how often information is referenced or reused outside its original context. Using DITA makes a lot of sense in that respect.

More information about DITA metrics can be found on Mark’s website, DITA Metrics 101.

I hope you’ve enjoyed this series on all the Adobe Day presenters. They all contributed a lot of food for thought, and provided great information about how we as technical communicators should start framing our thought processes to produce better quality content and provide value for the work that we do. I gained so much knowledge just in those few hours, and I’m glad that I could share it with you here on TechCommGeekMom.

Adobe Day Presentations: Part IV – Val Swisher asks, “Are You Global Ready?”

(Photo caption: Val Swisher of Content Rules, Inc.)

Following a short break after Joe Welinske’s talk about Multi-screen Help Authoring, Val Swisher took to the stage.

Val is the founder of Content Rules, Inc., and she spoke about eight simple rules technical communicators can follow to make content global-ready now. Her specialty is translation work, so she knows a thing or two about making content ready for a global market. As she went through each rule, she explained its impact and why it was in place, although some were self-explanatory.

The rules she listed were as follows:

Rule 1: Not all errors are created equal. Some can cost you thousands of dollars!
This is one of those obvious rules. Taking the time to write content carefully, as well as making sure proper editing is done, is a necessity. Even one small typo can make a difference.

Rule 2: Creative Writing is a myth. Standardize.
Val’s point with this rule is that superfluous writing adds nothing. Keeping content clear, concise, cogent, and correct is especially important in translation, and allows for better reuse of content.

Rule 3: Real copy editors don’t do it without a terminology manager. 
It is vital to use the same terms for certain concepts, especially for translation purposes. For example, the words “puppy,” “dog,” and “canine” all refer to the same animal, but they are clearly different words, even though they essentially mean the same thing. In translation, there are times when this much word variation for a single item isn’t available in another language, so choosing one word as the referential term is recommended. It keeps terminology within the content consistent, especially if you are reusing content. Style guides are, unfortunately, not followed as often as they should be, so a system is needed to manage terminology and help prevent problems like this example from occurring.
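
Just to illustrate the idea (this is my own toy sketch, not how any particular terminology manager actually works), a basic terminology check can be as simple as a lookup table mapping every variant to one approved term:

```python
# Toy terminology checker for the "puppy"/"dog"/"canine" example.
# The mapping and the chosen approved term are invented for illustration.

APPROVED_TERMS = {
    "puppy": "dog",
    "canine": "dog",
    "dog": "dog",  # already the approved term
}

def flag_unapproved(text: str) -> list[str]:
    """Return suggestions for words that should use the approved term."""
    flags = []
    for raw in text.lower().split():
        word = raw.strip(".,!?")
        approved = APPROVED_TERMS.get(word)
        if approved is not None and word != approved:
            flags.append(f"'{word}' -> use '{approved}'")
    return flags

print(flag_unapproved("Feed the puppy. The canine needs water."))
# ["'puppy' -> use 'dog'", "'canine' -> use 'dog'"]
```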

Rule 4: Have you got translation memory (a translation database)? Your vendors do. Use it. It keeps content standardized and saves money.
This is another fairly self-explanatory rule. I was not aware, since I’m not in the translation business, that there are such things as translation databases. From what I could understand of how it works (and someone please correct me if I’m wrong), a translation database stores a specific translation for a given turn-of-phrase: when that combination of words is used in one language, the database looks for it and translates it into the other language accordingly. This, again, allows for consistency in translations between the different language editions of content. As a technical communicator who does translations, Val is saying that if you don’t have such a database in place, you should get one, because in the long run it will standardize content and save money.
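
Here is a bare-bones sketch of how I understand that lookup to work. The segments and the French translations are invented examples, and real translation-memory tools also do fuzzy matching rather than only exact matches:

```python
# Toy translation memory: exact source segments map to previously
# approved translations. All entries are invented for illustration.

translation_memory = {
    ("en", "fr"): {
        "Click the Save button.": "Cliquez sur le bouton Enregistrer.",
        "The file has been saved.": "Le fichier a été enregistré.",
    }
}

def translate_segment(segment, src="en", tgt="fr"):
    """Return the stored translation for an exact match, else None."""
    return translation_memory.get((src, tgt), {}).get(segment)

print(translate_segment("Click the Save button."))
# Cliquez sur le bouton Enregistrer.
print(translate_segment("Click Save."))
# None -> no match, so the segment goes to a human translator
```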

Rule 5: Don’t complain about quality of your tech writers. You agreed to outsource docs to ___ in the 1st place.
Val pointed out that while there are good outsourcing resources for writing and translation out there, sometimes the quality is not as good as keeping the work in house or closer to home, especially if the content is written by someone whose first language is not English. Good quality source material is key, and it helps control costs, especially with translation!

Rule 6: If you write flabby copy, even the nicest vendors will email you a bill for localization that will astound you.
Again, this comes back to having quality content in place. Val’s point was that if you write weak content that is difficult to translate, even the nicest vendors will send you a bill for localization, and that bill will be VERY HIGH. Again, having quality content saves money!

Rule 7: Get rid of extra adjectives and superlative words! Delay this product launch, and there’s no next product launch.
This rule is a strong recommendation related, again, to how content should be written. Extra adjectives, adverbs, and other superlative words do not enhance the content. Using words that have to be rewritten or translated can delay a product going out, and for a client, that can be a deal-breaker. If the product launch is delayed because translation took too long to meet the deadline, there will be no next product launch to help with. Obviously, that would be bad business.

Rule 8: Translation is a team sport. You want to work alone? Become an accountant.
While this rule elicited a laugh from the audience, it was a point well taken. Teamwork is KEY! Better English source content results when source writers and translators work together.

Val was asked at the end of her presentation, “What alternative tools for style guides are on the market?” She responded that there are lots of software tools out there, but advised being careful about push technology within those tools.

More information can be found at Val’s website, http://www.contentrules.com, and her free e-book is available by e-mailing her at vals@contentrules.com.

I found this presentation rather fascinating, especially since Val presented it with a sense of humor. But her point was clear: content needs to be as precise as possible when it will be reused, and especially when it will be translated, for the sake of consistency. By following her basic rules, costs can be controlled, and the quality of the content can only get better.

I thought about what it takes to do translation, searching my own memory banks: from when I almost minored in French during my undergrad years and had to do translations, to the present day watching my husband translate literature written in German into Spanish for a group he’s been involved with for years, to my own struggles to translate what I want to say to my in-laws into my broken Spanish. Translation is not an easy task, and translating my English thoughts into another language can get tricky because of the turns of phrase or colloquialisms that vary from area to area. Even in talking to my husband about the topic, he will say that there are different idioms used between Spanish-speaking countries, although the Spanish will still be relatively “standard.” Being from Ecuador, he can still understand someone from Spain, Mexico, or Argentina, much as an American can understand someone from the UK, Canada, or Australia. I’ve even found in my own teaching of a business and technical writing course to a corporate group in Asia that English taught globally is not consistent, because the source English comes from different countries, so I have to go and set the record straight. I can certainly appreciate how consistency and choice of words can lead to better quality content and communication in the long term.

The next presentation, and the last in this series: Adobe Day Presentations: Part V – Mark Lewis and DITA Metrics.

Adobe Day Presentations: Part I – Scott Abel and Structured Content

(Photo caption: Scott Abel, The Content Wrangler)

As I had mentioned in my first post about Adobe Day, there were several well-known tech comm authorities presenting, and Scott Abel was the first presenter. Scott is the founder and CEO of The Content Wrangler, Inc., and he has a great Twitter feed and blog, if you haven’t read them. Scott’s presentation was called “More than Ever, Why We Need to Create Structured Content.” If you’ve never read Scott’s work or seen him speak, he is a force to be reckoned with; he’s definitely got strong opinions from his experiences, and he’s not afraid of letting you know what he thinks.

Scott explained that structure is in everything we do, including in nature, so it makes sense that content needs to be structured as well. Structure formalizes a content model and provides authoring guidance. It can enhance the usability of content, provides visual cues, and is the foundation for automatic delivery of content through syndication. Structure makes it possible to efficiently publish to multiple channels, outputs, and devices from a single source, making it a critical component of transactional content and making business process automation possible. By providing structure, it is possible to adapt content and leverage responsive design techniques. Structure also allows us to leverage the power of content management systems to deliver content dynamically and, increasingly, in real time. By creating structured content, it is possible for us to move past “persona-ized content” and facilitate innovative reuse of content in known sets of related information as well as for unknown future needs.
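
As a concrete (and entirely invented) illustration of that single-source idea, here is a toy sketch that renders one simplified structured topic to two different outputs. The markup is made up for this example; it is not real DITA:

```python
# Single-sourcing in miniature: one structured topic, two outputs.
# The <topic>/<step> markup is a simplified invention, not real DITA.
import xml.etree.ElementTree as ET

TOPIC = """
<topic id="save-file">
  <title>Saving a file</title>
  <step>Open the File menu.</step>
  <step>Choose Save.</step>
</topic>
"""

root = ET.fromstring(TOPIC)
title = root.findtext("title")
steps = [s.text for s in root.findall("step")]

# Same source, two channels: HTML for the web, plain text for a help screen.
html = f"<h1>{title}</h1><ol>" + "".join(f"<li>{s}</li>" for s in steps) + "</ol>"
text = title + "\n" + "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))

print(html)
print(text)
```

Because the structure, not the formatting, lives in the source, adding a third output channel means writing one more renderer, not rewriting the content.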

Scott quoted author and technologist Guy Kawasaki in saying that innovators must allow time for the majority to catch up; new ideas take time to filter through. He followed up with the question, “How much time does it take to adopt?” He answered his own question by explaining that technological innovation is getting faster and the technology adoption rate is becoming shorter, which is good news. However, how long will it take to adopt structured content? He explained that it’s actually not a new idea; the idea started in 1963, with the Sequential Thematic Organization of Publications, created in the airline industry to standardize airline manuals.

With that in mind, Scott said the question now is to figure out where we are today with structured content. Scott concluded that right now, one of the major challenges is that old ideas are getting in the way, because many technical communicators are still stuck with design concepts created in the print paradigm. Other major challenges include the lack of knowledge and experience, writers making manual updates, the lack of human resource support, and making tools work with configurable content. He did point out that lots of content reuse is going on! His point was that the tools work, but the people and processes are the problem, as Sarah O’Keefe paraphrased him on Twitter.

So when asked why we should be technical communicators, Scott’s response was that as technical communicators, we know how to create structured content. Knowing how to create structured content increases our value and makes us marketable. “Our profession needs to change whether we like it or not,” he concluded, to keep up with these technological changes.

Much of what Scott talked about in his presentation was echoed again in the panel discussion later in the morning. Scott provided a few examples during the presentation that showed what unstructured content versus structured content looks like, and the differences were very clear. As technical writers, as Scott said, we understand the importance of structured content and how reused content can be used effectively. Those who don’t have that mindset tend to repeat the same processes and make more work for themselves, wasting time and money for a company. We have value, and we need to promote our skills in creating this kind of content organization. I think technical communicators take this ability for granted; by being proactive in showing how we can help create efficient, structured content, we can add value not only for ourselves, but also provide, on a larger scale, a true cost-saving service to our respective companies and clients.

(Scott–if you are reading this, please feel free to clarify anything that I’ve written if I didn’t interpret it quite correctly in the comments.)

Scott later offered his slideshow online, which is available with his permission.

Scott’s talk was a great way to start the morning, and it led smoothly into the next presentation by Sarah O’Keefe, titled “Developing a Technical Communication Content Strategy.”

Next: Adobe Day Presentations: Part II – Sarah O’Keefe and Content Strategy

I’ve hit the “Big Time” in Tech Comm!: I’m an Adobe Webinar Presenter now

It’s been rather exciting in the last week or so for me. Much like being in Times Square, where there are so many lights and sights and sounds that one can’t possibly keep up with it all in one outing.

Last week was a big week for me: my much-publicized webinar, hosted by Adobe, was finally presented. It went by so fast that it almost feels like a dream! But now I have evidence that it really happened, as Adobe just published the recording of the webinar presentation on its Technical Communications Suite OnDemand Seminar website today. I’d been waiting all this time to comment about it, but wanted to have the link first.

You can find my webinar, now an Adobe OnDemand seminar, here:

Transition from Content Consumer to Content Creator: Dual Viewpoints.

(There is a sign in at the Adobe site, but it’s free.)

I need to thank Maxwell Hoffman for his guidance through the process. He gave me a lot of fantastic advice and things to think about, as well as some great editing of the drafts of the slideshow that accompanied the talk. If you ever have the chance to work with him, you will definitely enjoy yourself and learn from a master.

I also need to thank Adobe and especially Parth Mukharjee for the opportunity of a lifetime to do this. It was Parth who read my posting here and contacted me through Twitter to make it all happen.  Thank you, Parth! Another Adobe “shout out” to Saibal Bhattacharjee as well for his assistance in this process. I have to say, all I did was use my voice, and to know that people at Adobe were listening, well, that feels rather great, and again, I appreciate this fantastic opportunity. I was already an Adobe fan, but this experience made my loyalty to the brand even deeper. I would readily welcome the opportunity to do another webinar or any other opportunities that Adobe might bring my way. 🙂

I also can’t forget to thank Mr. Mobile himself, RJ Jacquez, blogger of The m-Learning Revolution blog. In the past few months, this former Adobe evangelist has become my friend and a mentor, and I felt that before I took on this endeavor, I needed his blessing. (I didn’t really need his blessing, but it felt right to talk to him about it first.) He definitely supported me and encouraged me to take advantage of this webinar opportunity, and I’m glad he did. So, thanks RJ. You da man. 😉

And then there are the other friends from all walks of my tech comm life that attended–many thanks for your support as well!

I’m proud of the work I did for this presentation, and I hope that anyone who takes the time to watch and listen will get something helpful out of it and learn something. I will never claim to be an expert on anything, but as this entire experience has taught me, it is worth trying new things by doing them, and not being afraid to use your own voice now and then to express yourself. You never know what good things might happen. 😀

(Update 9/17/2015 – The links to the webinar have been updated as Adobe has archived the presentation’s location on their website.)