Lately, I have been doing a bit of work designing products for smartwatches and wearables. It’s a challenge, but it is also a lot of fun. I’m just getting started, but I’ll try to describe what I have learned so far.
Designing for these devices has required a shift in my thinking. Here’s why: we have a long and rich history of User Experience (UX) and User Interface (UI) design for programs written for computers with screens. When we made the leap from command line interfaces to graphical interfaces, that body of work exploded. For years we have benefitted from the collective knowledge of the UX community. Whenever I am faced with a design problem while working on a program or web site, I have tons of material to reach for to help design a great user interface.
That isn’t the case with wearables because they are fundamentally different when it comes to user interaction. For example, a smartwatch may have a small, low-fidelity screen, while an exercise bracelet may have no screen at all. It might just have a vibration motor inside and a blinking light on the outside to provide live feedback to the end user.
So where do you start when you are designing software experiences that integrate with wearables? The first thing I did was look at the APIs for popular wearables and at their guidance on how to interact with end users. Then I did what I always do: I tried to find similarities with computers or mobile devices and designed the experiences the way I would for them.
Trouble was, when we tested these software experiences on real devices, in the real world, they were sometimes really annoying. There were unintended consequences with the devices vibrating, blinking, and interrupting real world activities.
“AHHHH! Turn it off! TURN IT OFF!!”
Ok, back to the drawing board. What did we miss?
One insight I learned from this experience sounds simple, but it required a big adjustment in my design approach. I had been working on software systems that tried to make a virtual experience on a computer relatable to a real-world experience. With wearable devices that we literally embed into our physical lives, that model reverses. It can mess with your mind a bit, but it is actually very obvious once it clicks in your brain.
Simply put, when I don’t have a UI on a device, the world becomes my UI.
Let me expand on my emerging wearable design approach to help explain why.
Understand the Core Value Proposition of Your Product
If you’ve been developing software for computers and mobile devices already, this may sound simple, but it can actually be a difficult concept to nail down.
One approach I take is to pare down the current features. If we cut this feature, does the app still work? Does cutting it prevent the end user from solving problems or being entertained? If we can cut it, it might be a supporting feature, not a core feature. Remember, wearables have less power and screen real estate, so we’ll have to reduce. Once only a group of core features remains, it is time to summarize. Can we describe what these features do together to create value and a great experience for users?
Another approach I use is to abstract the application away from computing technology altogether. I map out common user goals and workflows and try to repeat them away from the PC with paper and pen. With an enterprise productivity application that involved a lot of sharing and collaboration, I was able to do this with different coloured paper (to represent different classes of information), folders (to represent private or shared files), and post-its and different coloured pens for labelling and personalization.
In a video game context, I did this by reducing the game and mechanics down to a paper, pen, rule book and dice. I then started adding technology back until I had enough for the wearable design.
Now, how do you describe how you are different? Have you researched other players in this market? Who are your competitors, or who has a similar offering? What sets you apart in a sea of apps and devices? This is vital to understand and express clearly.
How do I know if I am done, or close enough? As a team, we should be able to express what our product is and what it does in a sentence or two. Then, that should be relatable to people outside of our team, preferably people we know who aren’t technologists. If they understand the core offering, and express interest with a level of excitement, then we are on our way.
If you are starting out new, this can be a little simpler since it is often easier to create something new than to change what is established. However, even with a fresh, new product, it is easy to bloat it up with unneeded features, so have the courage to be ruthless about keeping things simple, at least at first.
Research and Understand the Device
With wearables and mobile devices in general, the technology is very different from what we are used to with PCs. I call them “sensor-based devices” since the sensors are a core differentiator from PCs and enable them to be so powerful and engaging to users. The technical capabilities of these devices are incredibly important to understand because they help frame our world of possibilities when we decide what features to implement on wearables and smartwatches. Some people prefer to do blue-sky feature generation without these restrictions in place, but I prefer to work with what is actually appropriate and possible with the technology. Also, if you understand the technology and what it was designed for, you can exploit its strengths rather than try to get it to do something it can’t do, or does very poorly.
This is what I do when I am researching a new device:
- Read any media reviews I can find. PR firms send out prototypes or early designs, so even if the device hasn’t been released yet, there are often early impressions and useful information out there already.
- Read or at least skim the API documentation. Development teams work very hard to create app development or integration ecosystems for their devices. If you aren’t technical, get a friendly neighbourhood developer on your team to study it and summarize the device capabilities and how it is composed (a structured summary like the sketch after this list works well). You need to understand what sensors it has, how they are used, and any wireless integration it uses to communicate with other devices and systems.
- If they provide them, thoroughly read the device’s design/UX/HCI guidelines. If they don’t, read guidelines from others offering something similar. For example, Pebble smart watches have a simple but useful Pebble UX Guide for UI development. It also refers to the Android and Apple design guidelines and talks about their design philosophy. Pebble currently emphasizes a minimalist design and recommends creating apps for monitoring, notifications and remote control. That is incredibly helpful for narrowing your focus.
- Search the web – look at dev forums and similar places for information about what people are doing. You can pick up on chatter about popular features or affordances, common problems, and other ideas that are useful to digest. Dev forums are also full of announcements and advice from the technical teams delivering the devices, which is useful to review.
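If it helps to make that capability summary concrete, here is a minimal sketch, in Kotlin, of the kind of structured checklist a developer could hand back to the team after studying the docs. The device name, sensor labels and fields are all hypothetical; nothing here comes from a real vendor’s API.

```kotlin
// A hypothetical capability checklist for a wearable, filled in from its API docs
// and design guidelines. None of these values refer to a real device.
data class DeviceCapabilities(
    val name: String,
    val hasScreen: Boolean,
    val screenResolution: Pair<Int, Int>?,   // null when there is no screen
    val sensors: Set<String>,                // e.g. "accelerometer", "heart-rate"
    val feedbackChannels: Set<String>,       // e.g. "vibration", "led"
    val connectivity: Set<String>,           // e.g. "bluetooth-le"
    val recommendedAppTypes: Set<String>     // from the vendor's UX guidelines
)

fun main() {
    val bracelet = DeviceCapabilities(
        name = "ExampleFit Band",            // hypothetical device
        hasScreen = false,
        screenResolution = null,
        sensors = setOf("accelerometer", "heart-rate"),
        feedbackChannels = setOf("vibration", "led"),
        connectivity = setOf("bluetooth-le"),
        recommendedAppTypes = setOf("monitoring", "notifications")
    )
    // A quick sanity check while brainstorming features: does the device
    // support the feedback channel a proposed feature relies on?
    println("Can show text on the device: ${bracelet.hasScreen}")
    println("Can nudge the user silently: ${"vibration" in bracelet.feedbackChannels}")
}
```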
Determine Key Features by Creating an Impact Story
Now we can put together our core value proposition and the device’s capabilities. However, it’s also important to understand our target market of users, where they will use these devices, and why. I’ve been calling these types of stories different things over the years – technical fables, usage narratives, expanded scenarios and others – but nothing felt quite right. Then I took the course User Experience Done Right by Jasvir Shukla and Meghan Armstrong and was delighted to find that they use this approach too. They had a better name, impact stories, so that is what I have adopted as well.
What I do is create an impact story that describes situations where this sort of technology might help. However, I frame it around people going about their regular everyday lives. Remember that stories have a beginning, middle and end; they have a scene, protagonists and antagonists, and things don’t always go well. I add in pressures and bad weather conditions that make the user uncomfortable, making sure they are things that actually occur in life, trying to create situations that are as realistic as I can. Ideally, I have already created some personas on the project and I can use them as the main characters.
Most people aren’t technology-driven – they have goals and tasks and ideas that they want to explore in their everyday lives and technology needs to enable them. I try to leave the technology we are developing out of the picture for the first story. Instead, I describe something related to what our technology might solve, and I explore the positives, negatives, pressures, harmonies and conflicts that inevitably arise. From this story, we can then look at gaps that our technology might fill. Remember that core value proposition we figured out above? Now we use this to figure out how we can use our technology platforms to address any needs or gaps in the story.
Next, we filter those ideas through the technical capabilities of the device(s) we are targeting for development. This is how we can start to generate useful features.
Once we have an idea of some core features, I then write three more short stories: a happy ending story (what we aspire to), a bad ending story (the technology fails them, and we want to make sure we avoid that) and a story that ends unresolved (to help us brainstorm about good and bad user experience outcomes).
Impact stories and personas are great tools for creating and maintaining alignment with both business and technical stakeholders on teams. Stories have great hooks, they are memorable, and they are relatable. With experienced people, they bring to mind good and bad project outcomes from the past, which helps spur on the motivation for a great user experience. No one wants their solution to be as crappy as the mobile app that let you down last night at the restaurant and cost you a parking ticket.
Use the Real World as Your User Interface
UX experts will tell you that concrete imagery and wording work better than abstract concepts. That means if you have a virtual folder, you create an icon that looks like a folder, representing what it is with a cue from the physical world. What do we do if we have no user interface on the device to put any imagery on at all? Or what if it is just very small and limited? It turns out the physical world around us is full of concrete imagery, so with a bit of awareness of a user’s context, we can use the real world as our UI and enhance those experiences with a wearable device.
Alternate Reality Games (ARGs) are a great source of inspiration and ideas for this sort of approach. For a game development project I was working on, I also looked at Geocaching mechanics. Looking at older cellular or location-based technologies and how they solved problems with less powerful devices is an enormous source of information when you are looking at new devices that share some similarities.
I talked to a couple of friends who used to build location-based games for cell phones in the pre-smartphone era, and they told me that one trick with this approach is to pick things that are universal (roads, trees, bodies of water, etc.) and add a virtual significance to them in your app experience. If I am using an exercise wearable, my exercise route and the landmarks I pass along the way might trigger events or add significance to the data I am creating. If you run past significant points of interest on a path, notifications that help cheer you on can be incredibly rewarding and engaging.
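To make that pass-by idea concrete, here is a minimal sketch in Kotlin: check the wearer’s current position against a list of points of interest with a simple haversine distance, and fire a cheer the first time they come within range of each one. The point of interest, the 75-metre radius and the notify callback are all assumptions I made up for the example; a real app would feed this from the device’s location stream.

```kotlin
import kotlin.math.*

// A hypothetical point of interest along an exercise route.
data class PointOfInterest(val name: String, val lat: Double, val lon: Double, val message: String)

// Great-circle distance in metres between two lat/lon positions (haversine formula).
fun distanceMetres(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
    val r = 6_371_000.0 // mean Earth radius in metres
    val dLat = Math.toRadians(lat2 - lat1)
    val dLon = Math.toRadians(lon2 - lon1)
    val a = sin(dLat / 2).pow(2) +
            cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) * sin(dLon / 2).pow(2)
    return 2 * r * atan2(sqrt(a), sqrt(1 - a))
}

// Fire a cheer when the wearer passes within `radiusMetres` of a POI they
// haven't already been cheered for on this run.
fun checkPassBys(
    lat: Double,
    lon: Double,
    pois: List<PointOfInterest>,
    alreadyTriggered: MutableSet<String>,
    radiusMetres: Double = 75.0,
    notify: (String) -> Unit
) {
    for (poi in pois) {
        if (poi.name !in alreadyTriggered &&
            distanceMetres(lat, lon, poi.lat, poi.lon) <= radiusMetres
        ) {
            alreadyTriggered += poi.name
            notify(poi.message)
        }
    }
}

fun main() {
    val pois = listOf(
        PointOfInterest("old-bridge", 51.0478, -114.0593, "Halfway there – nice pace!")
    )
    val triggered = mutableSetOf<String>()
    // Simulated GPS fix near the bridge; a real app would stream these from the device.
    checkPassBys(51.0479, -114.0592, pois, triggered) { msg -> println("Buzz + $msg") }
}
```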
Enhance Situational Activities
One thing that bugs me about my smartphone is that it rarely has situational awareness. I have to stop what I am doing, inform it of the context I am in, and go through all these steps to get what I want at that moment. I want it to just know. Yesterday I was on my way to a meeting in a part of town I am a bit unfamiliar with. I had the destination on my smartphone map, without turn-by-turn directions turned on. I had to take a detour because of construction, so I needed to start a trip and get turn-by-turn directions from the detour I was on. I pulled over to the side of the road, pulled out my smartphone, and spent far too long trying to get it to plan out a trip. I had to re-enter the destination address, set my current location and mess around with it before I could activate it. A better experience would be a maps app that senses you have stopped, offers to help, and lets you quickly get an adjusted trip going. While you have an active trip, these devices are quite good at adjusting on the fly, but it would be even better if they knew what I was doing and suggested things that make sense for me right now, in that particular situation.
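Here is a rough sketch of what that kind of situational nudge could look like under the hood, assuming we get a stream of location fixes from the phone: if every recent fix during an active trip is below walking pace for a minute and a half, offer to recalculate the route. The Fix shape, the thresholds and the wording are all invented for the example, not any platform’s actual navigation API.

```kotlin
// A hypothetical location fix: timestamp in milliseconds and speed in metres/second.
data class Fix(val timeMillis: Long, val speedMps: Double)

// Decide whether the user appears to have pulled over: every fix in the last
// `dwellMillis` of the trip is below a walking-pace speed threshold.
fun looksStopped(fixes: List<Fix>, dwellMillis: Long = 90_000, maxSpeedMps: Double = 0.5): Boolean {
    if (fixes.isEmpty()) return false
    val cutoff = fixes.last().timeMillis - dwellMillis
    val recent = fixes.filter { it.timeMillis >= cutoff }
    return recent.isNotEmpty() && recent.all { it.speedMps <= maxSpeedMps }
}

fun main() {
    // Simulated trip: driving, then sitting still at the side of the road.
    val fixes = (0 until 10).map { Fix(it * 30_000L, 12.0) } +   // ~5 minutes of driving
                (10 until 14).map { Fix(it * 30_000L, 0.1) }     // ~2 minutes stopped
    if (looksStopped(fixes)) {
        // In a real app this would surface a gentle, dismissible suggestion.
        println("Looks like you've stopped. Recalculate the route from here?")
    }
}
```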
It is easy to become irritating, over-suggest, and bug people to death about inconsequential things, but imagine you are walking past your favourite local restaurant, and a social app tells you your friends are there. Or on the day you usually stop in after work, your smartwatch or wearable alerts you to today’s special. If I leave my doctor’s office and walk to the front counter, a summary of my calendar might be a useful thing to have displayed for me. There are many ways that devices can use sensors and location services to help enhance an existing situation, and I see a massive amount of opportunity for this. Most of the experience takes place in real life, away from a machine, but the machine pops up briefly to help enhance the real-life experience.
Rely on the Brain and the Imagination of Your User
If we create or extend a narrative that gives real-world activities a virtual meaning as well, that can be a powerful engagement tool. One mobile app I like is a jogging app that creates a zombie game overlay on your exercise routine. Zombies, Run! is a fantastic example of framing one activity in the context of another. This app can make your exercise more interesting, and it gets your brain involved to help you focus on what might otherwise become a mundane activity.
With a wearable, we can do this too! You extend the narrative of what happened on your jog and delay revealing the outcome until the activity is complete and the user has logged in to their account on a PC, smartphone or tablet. You have to reinforce the imagery and narrative a bit more in the supporting apps on devices with a screen.
ARGs really got me thinking about persisting a narrative. It is one thing to apply virtual significance to real-world objects, but what happens if we have no user interface at all? What are we left with? The most powerful tool we have access to is the human brain, so why not use that too? Sometimes as software designers I think we forget how powerful this can be, and we almost talk down to our users. We dumb everything down and over-praise them rather than respecting that they might have different interpretations or alternative ways of creating value for themselves with our software. Just because we didn’t think of it doesn’t mean it has no merit. It does require a shift towards encouraging overall experiences rather than a set of steps that have to be followed, which can be challenging at first.
Wearable Integration – Data Conversion
If you are working with a wearable that doesn’t have a screen or UI and is essentially a measuring device, one option for tying in your app experience is to convert the data from one context into another. This can be done by tying into the APIs for popular wearables. You don’t have an app on the device; instead, your app ties into the data gathered by the device and uses it for something else. For example, convert effort from an exercise wearable into something else in your app. One example of this is Virgin Pulse, an employee engagement application that has a wearable that tracks exercise. Exercise with the wearable can be converted into various rewards within their system. The opportunities for converting data measured for one purpose into another experience altogether are endless.
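To make the conversion idea concrete, here is a minimal sketch: daily step counts, as they might arrive from a wearable vendor’s API, are turned into reward points at a published rate. The rate, the rounding rule and the field names are all invented for illustration; the point is that the conversion is a small, explicit function rather than something hidden.

```kotlin
// A hypothetical daily activity summary, as it might arrive from a wearable vendor's API.
data class DailyActivity(val date: String, val steps: Int)

// The conversion rate is a named, published constant so users can verify the math.
const val POINTS_PER_THOUSAND_STEPS = 5

// Convert a day's steps into reward points, rounding down to whole thousands.
fun toRewardPoints(activity: DailyActivity): Int =
    (activity.steps / 1000) * POINTS_PER_THOUSAND_STEPS

fun main() {
    val week = listOf(
        DailyActivity("2015-03-02", 8_412),
        DailyActivity("2015-03-03", 12_930),
        DailyActivity("2015-03-04", 4_075)
    )
    week.forEach { println("${it.date}: ${it.steps} steps -> ${toRewardPoints(it)} points") }
    println("Total this week: ${week.map { toRewardPoints(it) }.sum()} points")
}
```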
One app I designed extended data generation activities into a narrative in an app. We extended our app concepts to the physical activity and tapped into the creative minds and vivid imaginations of the humans using the devices with a few well-placed cues. This was initially the most difficult app for me to design, but it turned out to be the overwhelming favourite from a “fun to use” perspective. The delay between generating the data out in the real world and then coming home to a PC or tablet to discover what the data measured by the wearable had created in our app was powerful. Anticipation is a powerful thing.
However, be careful when you do this. Here are a couple of things to be aware of:
- Make sure the conversion rules are completely transparent and communicated to users. Users need to feel the system is fair, and if they feel taken advantage of, they will stop using your app. Furthermore, you could run afoul of consumer protection groups and laws in different jurisdictions if you don’t publish the rules, or if you change them without user consent.
- Study currency conversion for ideas on how to do this well. Many games use the US dollar as a baseline for virtual currencies in-game, mirroring real-world markets. These are sophisticated systems with a long history, so you don’t have to re-invent the wheel; you can build on knowledge and systems that are already there.
Add Variability Design Mechanics to Combat Boredom
It can be really boring to use a device that just does the same things over and over. Eventually, it can just fade into the background and users don’t notice it anymore, which means you lose them. If they are filtering out your app, they won’t engage with it. Now, this is a tricky area to address, because the last thing you want to do is harass or irritate people. I get angry if an app nags me too much to use it, like some needy ex or try-hard salesman. However, a bit of design work here can help add some interest without being bothersome and, in many cases, add to the positive experience.
Here are some simple ideas on adding variation:
- Easter Eggs: add in navigation time savers that will be discovered by more savvy users and shared with friends to surprise and delight
- Variable Results: don’t do the same thing every time. Add in different screen designs for slightly different events. One trick is to use time and seasons as cues to update a screen with themes that fit daytime, night time, and the current season (see the sketch after this list). Another is to use the current context of use to change the application’s behaviour or look. There are lots of things you can do here.
- Game Mechanics: levelling and progression can help people feel a sense of progress and accomplishment, and if there are rewards or features that get unlocked at different levels, it can be a powerful motivator. It also adds dimensions to something repetitive, changing the user’s perspective and keeping it from getting stale.
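Here is the minimal sketch I promised for the time-and-season cue: pick a watch face theme from the current month and hour. The theme names and the boundaries are arbitrary choices for illustration; the idea is simply that a tiny bit of calendar awareness keeps the same screen from looking identical every day.

```kotlin
import java.time.LocalDateTime
import java.time.Month

// Hypothetical theme identifiers the rendering code would map to colours and artwork.
enum class Theme {
    WINTER_DAY, WINTER_NIGHT, SPRING_DAY, SPRING_NIGHT,
    SUMMER_DAY, SUMMER_NIGHT, AUTUMN_DAY, AUTUMN_NIGHT
}

// Choose a theme from the current month (season) and hour (day vs. night).
fun chooseTheme(now: LocalDateTime): Theme {
    val daytime = now.hour in 7..18
    return when (now.month) {
        Month.DECEMBER, Month.JANUARY, Month.FEBRUARY -> if (daytime) Theme.WINTER_DAY else Theme.WINTER_NIGHT
        Month.MARCH, Month.APRIL, Month.MAY -> if (daytime) Theme.SPRING_DAY else Theme.SPRING_NIGHT
        Month.JUNE, Month.JULY, Month.AUGUST -> if (daytime) Theme.SUMMER_DAY else Theme.SUMMER_NIGHT
        else -> if (daytime) Theme.AUTUMN_DAY else Theme.AUTUMN_NIGHT
    }
}

fun main() {
    println("Theme right now: ${chooseTheme(LocalDateTime.now())}")
}
```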
Provide for User Control and Error Correction
As we learned when designing notifications for a smartwatch, it can be incredibly irritating if it is going off all the time and buzzing on your wrist. Since wearables are integrated with our clothing or worn directly against our bodies, it is incredibly important to provide options and control for users. If your app is irritating, people will stop using it. However, one person’s irritation is another person’s delight, so be sure to allow notifications, vibrations and similar affordances in your product to be turned on and off.
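As a small sketch of what that control could look like in code, here is a hypothetical per-user settings object that every outgoing alert gets checked against, including a quiet-hours window. The channel names, defaults and quiet hours are assumptions for the example, not any platform’s actual settings API.

```kotlin
// Hypothetical per-user alert settings; everything defaults to on, but each
// channel can be switched off independently.
data class AlertSettings(
    val vibrationEnabled: Boolean = true,
    val ledEnabled: Boolean = true,
    val quietHoursStart: Int = 22,   // 10 pm
    val quietHoursEnd: Int = 7       // 7 am
)

// Check every alert against the user's settings before it reaches the wrist.
fun shouldVibrate(settings: AlertSettings, hourOfDay: Int): Boolean {
    val inQuietHours = if (settings.quietHoursStart <= settings.quietHoursEnd)
        hourOfDay in settings.quietHoursStart until settings.quietHoursEnd
    else
        hourOfDay >= settings.quietHoursStart || hourOfDay < settings.quietHoursEnd
    return settings.vibrationEnabled && !inQuietHours
}

fun main() {
    val settings = AlertSettings(vibrationEnabled = true)
    println(shouldVibrate(settings, hourOfDay = 14)) // true: mid-afternoon
    println(shouldVibrate(settings, hourOfDay = 23)) // false: quiet hours
}
```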
Conclusion
This is one of the most fun areas for me right now in my work, and I hope you find my initial brain dump of ideas on the topic helpful. Sensor-based devices are gaining in popularity, and all indications show that some combination of them will become much more popular in the future.