After the confidence boost I got from the Social Media Marketing module of my digital marketing course, I felt I could ride that wave into the next module, Mobile Marketing. From what I could tell, I wasn’t wrong to be confident. After all, I’ve been writing about m-learning and mobile topics on this blog since the beginning, so I figured that I would have a good handle on this topic. I did, but I was quickly reminded of how fast mobile technology has grown in just three short years, and how much more I still need to do to keep up, if not catch up.
The module was taught by Christina “CK” Kerley, who is a very animated speaker on mobile marketing topics. She provided some great real-life examples that I could easily relate to. What struck me the most was how subtle mobile marketing can be, how it can work through things we already take for granted, and how the technology in mobile devices is probably under-utilized by some, not only in marketing but in other mobile applications as well. One thing I agreed with her about regarding mobile is that at one point, everyone thought they needed an app for their service or product, and that’s not necessarily the case. I agree that websites need to be optimized for mobile–something that I need to do with my own e-portfolio when I get some free time in the next year. But an app has to have a purpose; it shouldn’t be merely a glorified version of your website in tiny form.
The technologies that fascinated me the most were geofencing, NFC, and RFID. An example would be something like this: if you had the Starbucks app on your phone and passed by a Starbucks, your phone would send you a notification for a coupon off a drink–but only while you were in the vicinity of that Starbucks. My brain started to spin with the possibilities of how to use this, at least in m-learning. She also talked about the proliferation of QR codes and augmented reality, and how wearables were going to play more of a part in mobile marketing. I knew about these from Marta Rauch and her talks about Google Glass and such, but there were some additional features that I hadn’t really thought about in this way before.
All in all, it got me excited about mobile technology again. Not that my interest in mobile had ever gone away–it just got sidetracked. We really do take our mobile tech for granted–I know I take mine for granted! Whatever my next stage is, I surely need to figure out how to get mobile technology into the mix, whether it’s writing or designing for mobile, or something else. My passion for mobile has simmered over the years. I think the dark side of content strategy lured me over for the past year or two (not that it’s a bad thing), and I lost sight of where I wanted to go. If I end up starting my own business, then I need to think about incorporating those mobile skills again. Seriously, three years ago I talked about mobile mostly in terms of m-learning, but I knew it was the next big thing because mobile use was growing. My thinking was correct back then, and deep down, I know it’s only going to grow and get more complex over time. I feel like I’ve already fallen behind! So, I need to get up to speed on this technology again and try to push forward, whether it’s in content marketing or something else. I appreciate CK lighting the fire under me again!
Moving on from there, the next module will be about content marketing. OK, folks, here’s the crux of it all, and I’m fearful of it. This is the topic that drove me to take this course because it’s all that I hear about in the content strategy world. We’ll see if I come out unscathed from this topic next week.
Today, I tuned in to the Windows 10 event, which promoted the upcoming OS that many anticipate will be a big improvement over Windows 8 and 8.1. While I’m a huge fan of iOS products like the iPad and iPhone, when it comes to my laptop, I’m a devoted PC gal who would much rather use the Microsoft operating system and tools. I suppose it’s because this is what I’ve been used to for 20-plus years, it’s easier for me to adapt to those changes, and more of the tools I like to use are available for the PC. Yet, while I’m usually a relatively early adopter with many things, I’ve been very hesitant to adopt Windows 8 or 8.1. There are some improvements in Win 8.1, but when I first encountered Win 8, I balked. So, time will tell what happens once we all get our free upgrades to Win 10 (which is great–it’s going to be free for the first year of availability to Win 7 users like me, and to Win 8/8.1 users). There was also a business offering called Surface Hub that looked good; it combines OneNote with a digital whiteboard and provides new sharing capabilities for workgroups and meetings. I could see practical uses for that in my own work right now. The new browser called “Project Spartan” looks incredibly promising as well, based on some of the new functionality that will be forthcoming.
But what REALLY caught my attention in this event was the introduction of a new device. It was a bit of a surprise to see it, but it’s a sign that Microsoft means business, and to me, it’s a positive one. The new device is called the HoloLens, and from what I could tell, HoloLens is everything that Google Glass wishes it could be. While the viewing apparatus is certainly more…clunky looking…than Google Glass, everything else about it (and probably why it’s still clunky looking) is what it has going for it. There are no wires and no syncing it with your phone–it is an autonomous device unto itself. The connection with today’s Win 10 event is that it will run Win 10, but the demonstration showed how people can interact with the world around them while still using holographic tools to merge reality and virtuality. It’s difficult for me to describe, but the 3D imagery was fantastic, and they showed several applications of how it could be used practically with other people–including those who don’t have a HoloLens.
I think the biggest difference of all–other than the fact that this is a device that acts on its own, with its own processors, among other things–is that unlike Google Glass, which was promoted as a device for everyday use as well as a tool, HoloLens seems to be promoted solely as a tool. Now, it can be used for gaming and such, but the tool applications were what really made it stand out more than anything. My husband and I were on Skype exchanging comments while watching the live streaming video, and when we were comparing HoloLens and Glass, his comment was, “…but this is built as a tool.. you can see the size. It’s not meant as an accessory.. it’s actually a tool.” He’s exactly right. This isn’t a novelty item with potential for greater capability. It has the greater capability, but it’s not an accessory.
I don’t think this is the type of thing that I need right now, even when it comes out. I don’t have any practical application for it. But I could see how it could be used in several years, once the components become smaller and I can use it as an accessory. (Give it time!)
What do you think? Do you think this is the next step of merging the virtual world with reality? Sure looks like it to me. Post your comments below.
Update: When I mentioned the new HoloLens to my son, he asked me, “Are you sure they aren’t just ripping off Oculus Rift?” Good question. I don’t think so, because HoloLens lets you see through to what’s actually around you, whereas Oculus Rift encloses your entire view. Tell me what you think.
I apologize for the delay in my blog coverage of the 2014 STC Summit edition of Adobe Day–it’s been a busy month! But hopefully, you’ll feel it’s been worth the wait, and you had a chance to see my live Twitter feed as it happened.
The STC’14 Adobe Day felt a little different this year. One thing I noticed is that as much as Adobe says these Adobe Day events are Adobe-product free, lately, they haven’t been. HOWEVER, they are still not one big, in-person infomercial, either. Adobe products are not brought up much, but when they are, it’s to show that they can be tools for solving common tech comm issues. So, it might be an inadvertent infomercial in that respect, but it’s not done in a blatant way that screams, “YOU NEED TO BUY ME!!!!!! PLEASE BUY OUR PRODUCTS!!!” Adobe continues to do a good job of showing what tech comm issues are out there, and as leaders in the software field, they are tuned into those issues and are creating products that benefit the technical communicator. I think that’s fair enough. The talks, overall, covered broader topics that in some instances used Adobe Tech Comm Suite tools to provide solutions. And you have to remember, while these talks aim to be product-free for the most part, it’d probably look pretty bad if someone declared all the glories of a competing product when Adobe is hosting the event. Y’know?
With that out of the way, I observed some other things that made this event a little different. First, there were fewer speakers this year. I felt that was a good thing, because in past years with more speakers, each speaker raced to complete his or her presentation in a very short amount of time, leaving little time for questions or discussion. With fewer speakers this year, each one could elaborate more on their topic, which allowed more time for questions and discussion. More networking time during the breaks was another benefit of having fewer speakers.
The other difference I saw dealt with the speakers themselves. While they were all familiar, established voices in the tech comm world, it wasn’t the same crowd that one usually sees at Adobe Day events. All of them have participated in Adobe events or other tech comm events before, but in the past, it was usually most of the same speakers up on the podium. While I like all the “usual suspects” very much, and consider them my mentors and have become friends with several of them, seeing these new “players” was actually refreshing. I hope that Adobe continues to change up the speaker lineups at future Adobe Days, as every speaker I’ve heard has a clear voice worth listening to, and hearing as many of those voices as possible provides both variety and fresh perspectives going forward. As I go through each presentation in forthcoming blog posts, hopefully you’ll see what I mean.
But as tradition on this blog dictates, I always start with the panel that capped off the Adobe Day event. I find that these panel talks bring an umbrella perspective to where we are as a profession, presenting several points of view and showing where there are agreements and disagreements on the issues at hand.
Matt started with the point that tech comm is more than tech writing now, so what do we need to improve short-term and long-term? Kevin responded first, saying that we need to do more with less on smaller displays and adapt the content appropriately for mobile. Marcia added that using less can mean writing tighter as well. (She taught a technique for this during the STC Summit, in fact!) Joe agreed with Marcia, adding that technical communicators need to put in the time to make concise content meaningful, and to look at simplified English as part of that objective. Bernard felt that attending workshops and demonstrations was important, because technical communicators need to continually learn and adapt in this industry! He added that SMEs (Subject Matter Experts) should contribute to content, but technical communicators should control it. Kevin agreed with Bernard, saying that SMEs are writing content more often now, so teaching them to write tighter will help. Marcia chimed in that many people are now being required to write but don’t have the skills. We need to help with that.
Moving on to topics about how technology affects technical communication, Kevin said that new technology, like Google Glass and other wearables, is emerging, and we need to understand how these devices work. Joe pointed out that the Pebble watch is starting to have user docs, and more will be emerging. Bernard added that gesture-based technology similar to the Xbox Kinect will need documentation.
Matt then asked, “What should we look forward to in the next five years?” Bernard felt that less role specialization will be needed so that the right people write the right content, such as an engineer who can write; specialized writing itself, though, will be very important. Joe added that we need to agree on taxonomy and terminology, and use style sheets more often for consistency. Marcia believed that topic-based writing will be a growth area. Kevin explained that in e-learning, there is a need to develop learning for new devices that responds to user displays, thus accommodating multiple screens.
The next question asked about how to help certain generations adjust between print and digital writing and designing. The consensus was that we just need to adapt. The panel encouraged the audience to get to know their UX/UI people, who can help you learn to adapt, especially if you aren’t as tech-adaptive.
The last question centered on customers customizing their content–is this a trend? Bernard leapt into a response with, “GOOD! DO IT!” He encouraged us to help customers start creating personalized help–or personalizing any information, for that matter! Moderator Matt closed by saying that rich media that engages users is going to be about content strategy, but it will also be about content marketing. The group agreed that personalized, concise information is the best way forward!
And that was it! The session went by quickly, but as you can see, there was a lot of great information that technical communicators can take and use in their own work. While it might take some time to adapt, it will surely bring the field forward as technology, and the way we access it, moves ahead.
Coming soon: The individual presentations at Adobe Day #STC14 Edition!
As disappointed as I was at having to return my Google Glass because it really wasn’t in the budget, I knew there was a 30-day trial, so my husband suggested that perhaps I should give the trial a whirl, and if I still liked it, I could purchase it again later once the price went down. I wasn’t keen on the idea because I was afraid that if I liked the product enough, I’d be reluctant to return it. Despite his encouragement to try Glass first, my husband didn’t help the cause, as he constantly emailed me negative articles about Glass.
Nonetheless, I decided that I’d forge ahead and give Glass a try. I didn’t even last one morning.
When my Google Glass arrived, it didn’t have enough charge out of the box for me even to set up my account on the device, so I had to charge it overnight. Even after an overnight charge, it was only at 88%. Something’s not right with that. You’d think that with such a small device, a) there would be just enough charge in it to set it up, at least, and b) charging it overnight would put it at 100%. So, not a good start, but by the next morning, 88% was enough power to set the device up.
Now, I hate to compare apples to oranges, but I couldn’t help making mental notes of how the experience was nothing like dealing with an Apple mobile device. Yes, I know that Glass is not a smartphone, but it does connect to one’s smartphone, after all. I’ve set up my son’s Android smartphone (and I will admit, I’m no Android expert) and set up four iPhones and two iPads over the years, so I think I have a good idea of what a good out-of-the-box experience should be. I’m also fairly adept at figuring out new technology, and I’ve been the “tech person” in my family for decades, even before digital technology was mainstream. Add to those credentials that I am a technical communicator, so figuring out how to set up a digital wearable device should be par for the course.
When I used an iPhone for the first time, I could figure out everything instantly. Apple walks you through set-up directly on the device, and nothing extra has to be done on another device paired with it. Google Glass had some directions in the viewing screen (for lack of a better term), but it took me a while to set the device up so that it could connect to my phone and read the QR code that the app needed to connect and activate the account. I was also reading the Google Glass online help files on my laptop as I was doing this. It’s not a good sign, to me, if I have to read the website while simultaneously setting the device up. Even then, the directions weren’t that great. They assumed that everything would go smoothly, so set-up would be a snap. Mine did not, and I couldn’t find any answers to the problems I had.
Eventually, I did figure out how to get Glass set up. I was connected to my Google account and ready to go. It was early in the morning, and I decided to try it out and let my son see how it worked; my husband was curious to see how it worked, too, even if he was the naysayer. Because I didn’t want to accumulate too much personal content on the device, I tried to be careful about not taking video or photos, as I needed to learn how to download apps and maneuver the device first. My son liked what he saw, and I had him give the instruction, “OK, Glass, Google Minecraft,” and it did. He liked it. But, being that he is a rambunctious 12-year-old boy, I didn’t want him wearing this expensive device for long. It was then my husband’s turn.
Before I could try to instruct him on how to maneuver the device, my husband decided that he already knew how to use it based on viewing the Saturday Night Live skit from a while back. I scolded him for just trying to do things randomly, and told him to give the Glass back to me if he wasn’t going to give me a chance to explain how to use it properly–based on my limited knowledge at that point. The next thing I knew, he proclaimed, “OK, Glass, take a picture.” In full glory–rumpled NJIT pajamas, angry face, and Polish chicken hair because I had not gotten ready for the day–he had a photo of me. I was not happy about that. He then asked, “If I wanted to send this, what would I do?” At this point in the story, he and I differ on the account, but since it’s my blog, I’m telling it my way. He started to say, “Would you say, ‘OK, Glass, send an email…’?” and when he realized that he’d actually be sending an email, it opened up to THAT PHOTO. “Oops!” he claimed, and tried to back out of it. He said, “Cancel,” a few times, but nothing happened. Somehow, the photo did get sent–to the first person listed on my Google+ list, whom I don’t know personally! How embarrassing! I had to get on my laptop later and send her a note in Google+ explaining the situation, that I wasn’t sending a photo of a crazy, angry bag lady on purpose, etc. By the time the Glass was confiscated, I was off on the wrong foot with the device even further.
After everyone had left for school or gone off to the office, I had a little time to myself to figure out more about the device. One of the biggest flaws I saw is that it’s not intuitive. As I mentioned, the set-up was not smooth at all. I couldn’t figure out for the life of me how to delete the photo or the Google search from the Glass without resetting the device back to factory settings. That can’t be right. Additionally, I couldn’t figure out how to get to a screen to add apps. Again, that doesn’t make sense. So, I went back to the Google Glass Help online to try to figure that out. I couldn’t find any instructions on how to add apps. I also saw that there were fewer than two dozen apps available at all! Geez, that doesn’t seem like a lot. I know that this is a product that’s still in development, but you’d think that after a year, Google would have more apps than what I saw.
So, when I took into account that the product wasn’t intuitive, had very few apps, and had no ability to delete things (I was able to delete the photo in my Google account via my laptop, but I shouldn’t have had to), plus the exorbitant price, I succumbed to what my husband had been telling me all along. It wasn’t right for me. So, I called Google to ask for the return labels so I could send it back and get my full refund.
Believe me, I was really frustrated with this product. Although my family thought it was cool, they also felt that it wasn’t easy to figure out how to use it seamlessly, and we’re all fairly technical–even my son. For the price of two new iPad Air devices or laptops with much more functionality, I had one funny pair of electronic eyeglasses that didn’t do a whole lot. The experience was disappointing, and I didn’t want to pursue it further–that’s how frustrating it was in one morning. To quote my husband, “If Apple had come out with these instead of Google, it would be cheaper and it would be a completely different experience.” This is coming from a guy who’s very reluctant to use Apple products in the first place, and even he came to this conclusion. The sad thing is, he’s right. Watching that SNL skit again after this experience, my own experience wasn’t too different, except the character in the skit at least got apps on his Glass. The scary thing is that the skit was done a year ago, and nothing has changed since!
Despite this less-than-stellar experience with emerging technology, I think that if the price came down significantly, the product became more intuitive–including making it clear how to delete content and add apps–and there were more apps to use, then I’d definitely reconsider getting Glass again in the future. The product isn’t ready for primetime, in my opinion. Even the iPhone had more features in its first year than this has. I initially got interested in Glass after seeing my friend Marta Rauch of Oracle using it and seeing her presentations about the product’s many capabilities. I wouldn’t have rushed to purchase the product if I didn’t believe that there was true potential in it. I think Marta has more of a chance to play with Glass and see its potential because she uses it professionally as well as personally; part of her job is seeing how Glass can be integrated into projects and products that she’s working on at Oracle. I don’t have any such projects or products I’m trying to develop. And as I said, I do think there is potential for a wearable smart device.
I don’t think Google Glass in its current state, however, is the product for me right now. Once some of these issues are fixed, I’ll see about giving it another try. Believe me, I’m really disappointed, but at least I can get my money back, and Google is being fairly cool about me returning it. And yes, I’ve given them this feedback–twice.
Do you think I didn’t give it a chance? Do you think I was crazy to even try it in the first place? What do you think about such devices? You can put your response in the comments section below.
When the one-day opportunity to order Google Glass came up recently, I jumped on it. I had tried on Marta Rauch’s pair a couple of months ago, had seen her presentations about it, and fell in love with it. This was wearable technology I could use, as far as I was concerned! I was able to order the Glass I wanted, and was very excited about it…until I told my husband. I didn’t tell him how much it cost, but I did tell him that I bought it. He totally flipped out, and not in a good way. He felt that whatever I did spend, it was too much money for a “toy”. I’m earning some good money now, and I felt it was an investment–I’d like to explore how Glass is used, and how technical communication and m-learning would be part of the wearable technology experience. But no. I cancelled the order, as he had a good point about the cost being too high. Even so, I’m really sad about missing out on this opportunity.
Financial considerations aside, it got me thinking about technological “toys”, and what’s truly a “toy” versus early adoption of technology, albeit at a high initial price. I’ve heard Neil Perlin talk about how he had some of the earliest portable computers around–nothing like the laptops of today–that cost a small fortune even by today’s standards. Sure, it’s outdated and obsolete technology now, but so are a lot of other technologies that were around just a few years ago. Children today don’t know what a Walkman is, or that telephones used to have a cord and you actually used a dial mechanism to connect your phone to another phone. Heck, pay phones are pretty much obsolete now. What did people think when the first iPhone or the first flip phone came out? Those are obsolete now, too. So, sure, perhaps Google Glass is a very expensive “toy”, but how does anyone know whether I was really an early adopter who’d be ahead of the curve, knowing how to make it work and use it for practical reasons, if I had actually gotten one?
I remember when I got my first iPad–it was an iPad 2. I had saved up, and asked anyone who was going to get me a gift for my birthday, the holidays, etc. to give me gift cards to Best Buy so I could purchase it. I was so thrilled when I got it, and my husband thought it was a waste of money. He insisted that I already had a laptop and didn’t need an iPad–that again, it was just a toy. I insisted that yes, there were “toy” elements to it, but I considered it “computing lite”: I could do many of the tasks I normally do, just the ones that didn’t necessarily need my laptop to be powered up. Then, about a year later, I was fortunate enough to win an iPad 3 so I could upgrade. My husband insisted that I sell my old one, but for all his moaning that I should get rid of it, guess who’s been using it for almost two years now? Yep, him. It’s still a little bit of a “toy” to him, but he’s a news junkie, and he loves to read different news sources and do some light research on it when he’s not using his desktop (nope, he doesn’t even own a laptop). So, it’s not going anywhere. My iPad has gone with me all over the country–on vacation, to conferences–and has entertained me when I don’t need to be in front of my laptop. I’ve gotten my money’s worth out of it many times over. And yet…I feel like this is the same situation.
Of the emerging technologies that are coming out, whether they are wearables or something else, what do you think is a tech “toy” and what do you think could be the next big thing, or a step toward the next big thing? 3-D printers and Google Glass have my attention–I would love to own both. What has your attention? Add your thoughts to the comments below.