Scratching Your Own Itch
In my academic training in HCI design, the first (and really only) rule was “Don’t Design For Yourself.”
It was deemed too easy, a cop-out, to do a project that used yourself as the target audience and built something to solve your own problem. It meant you made no effort to get outside of your own experience and really understand the problems of your target user.
By contrast, in web services today, it feels like all my favorite services are built by people just scratching their own itch. That’s why David built Tumblr and Marco built Instapaper. Ev has often cited that Odeo failed because he wasn’t a podcaster, so he wasn’t building something he wanted. It’s the #1 rule in The Cathedral and the Bazaar (the seminal open source book) about what makes a good open source project:
- Every good work of software starts by scratching a developer’s personal itch.
I understand why my academic training differed so much from practical application. If every hacker/maker just scratched their own itch, then who would build the countless necessary tools and services for people who can’t build their own products? Doing a deep, contextual inquiry into the daily life of a 90-year-old man and the problems he faces is an essential skill set. And building solutions to make that man’s life better is a rewarding use of time. Academia makes a good point.
Yet, all my favorite products have been built out of passion that stems directly from personal need. Those are the products where using them feels like having an intimate conversation with the designer… they see the world the way I see it, and this is their answer to my niche problem.
The Mental Model of Verbs in App Design
When I talk to people who use web apps infrequently, they are often surprised by the way the “like” verb works inside Facebook. People don’t say they are surprised explicitly… but it’s clear there is confusion when you tease it out via conversation.
Like in Facebook: it is intuitive that “Like” should be a statement of appreciation because that’s how we all use the verb “like” in everyday language. Here is how most people encounter the “Like” link for the first time in Facebook:
[Some Facebook News Feed post goes here]
That simple three-link “bar” affixed below all updates in the Facebook news feed is where most people first encounter “Like”. The fact that these three links are juxtaposed implies that they do materially different things from each other. Intuitively, if I want to share this post with my friends, I would click “Share”, and if I only want to show appreciation, I’d click “Like”.
But a quick scan of your news feed (assuming you have enough active Facebook friends) shows you page after page of things people have “liked”. So if you tie this “liked” verb back to the “Like” action you saw earlier, you start to unpack just what “Like”ing something does. It sends the content to your friends’ news feeds.
So, “Like” starts to take on a new meaning. Now it means: I appreciate this, but I also want to send it to all my friends. If you’ve arrived at this mental model, you now understand how the “Like” verb works in the news feed. But wait… there’s another “Like” elsewhere:
Go to any fan page. How about this one from Dove. The call to action on the page is to “Like” it. So, using our newly acquired mental model of “Like,” this means I will be both showing appreciation and sending it to my friends. But this “Like” on a fan page is now going to do a third thing: it’s going to sign me up for a subscription to all of Dove’s updates in my news feed. Again, not intuitive… until you do it once, and hopefully when you see Dove updates in your news feed, you mentally tie them back to the action of liking the page.
So now we have three meanings: A) sign of appreciation B) send this to my friends and C) subscribe me to future updates (but only if I’m on a fan page).
In Other Apps: The various meanings of “Like” are made less intuitive across the social web in general because they vary between different social apps. In Twitter, the “heart” was ambiguous for years because some users used it to bookmark items for themselves for later. It used to be that when you “heart”ed a Twitter update that contained a link, Read Later apps could suck it up. But then Twitter flipped a switch so you could see when other users hit “heart” on your tweets, and suddenly “heart” became the signal of appreciation it always should have been.
Perhaps the best (worst?) verb choice of all came from Last.fm, which was doing quite new, innovative stuff in the primordial days of the social web. They invented a verb to cover their key behavior: scrobble. As far as I’m aware, that one never made it into the OED.
Other Verbs in Facebook: For a brief period in Facebook about a year ago, other verbs were even trickier. “Watch”ing a video or “Read”ing an article would syndicate it to all your friends’ news feeds… and it didn’t even involve clicking a link. It was all implicit sharing. This is how Viddy and Socialcam blew up a year ago… and then came back down to earth when Facebook realized that implicit sharing from app-defined verbs like “Watch” or “Read” was causing users to inadvertently share things they didn’t want to share… which is a bad user experience.
The moral is this: if you’re building a web app, choose your verbs carefully. They bring prior meaning… both from how they work in other apps and how they are used in common language. To Facebook, the fact that the word “Like” implies “appreciation” but doesn’t imply “share this with my friends” is probably a feature, not a bug… because it leads to more sharing, intentional or otherwise. But be careful about walking this slightly spammy line in your own app. Facebook gets to do things other apps can’t because of its sheer scale and network effect.
Game Controls Can’t Be Transplanted
I’ve played a few sessions of two games recently: XCOM on the Xbox 360 and FIFA 13 on the iPad (big boy, not mini). Both games are quite good, but what holds each of them back from being great is the physical control interface of their respective platforms.
Nothing about XCOM requires quick-twitch controls. The game’s interaction flow consists of either A) selecting between options or B) “pointing and clicking” units around without any time sensitivity (because the action pace is turn-based strategy). An Xbox controller feels awkward for both modes A and B; each requires arrowing around between locations on the screen, which is kludgy and slow.
I really wish XCOM had launched on the iPad, but it launched on console and PC only. I could have chosen the PC version of the game, but I dislike sitting in front of my PC at a desk when I want to play games. My couch is so much more comfortable. The tap-anywhere mode of interaction on an iPad would be perfect for this game.
By contrast, FIFA 13 is nearly unplayable on an iPad due to its mode of control. It’s entirely quick-twitch, like most sports games. The on-screen control buttons are never right below your fingers where they need to be. The lack of tactile feedback from the screen feels like using a controller while wearing metal mesh mittens.
I say “nearly” unplayable because last night I led Reading to a decisive 4–0 stomping of Stoke City, despite the soupy controls.
The point isn’t that gaming on one platform is superior to another. Could you imagine Angry Birds (iOS native) or Starcraft (PC native) on an Xbox controller? Various platforms have game styles that are perfectly suited to their native controls. Some of these choices are a matter of taste: many people like an FPS on a console controller, but I strongly prefer WASD and a mouse to control an FPS. But other transplanted control schemes are just wrong (on-screen controls on a touch device) and significantly impair the enjoyment of the game.
Delightful Interface Design
I’ve noticed a couple visual design choices in mobile apps recently that I think are worth pointing out for their delightfulness.
In Moves, the UI elements seem to hover above the surface of the background because when the user scrolls through the app, the various UI elements parallax as they move. The parallax effect creates a sense of depth and liveliness in what would otherwise be a flat list.
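To make the mechanic concrete, here’s a minimal sketch of a scroll-driven parallax in plain JavaScript. This is not Moves’ actual implementation; the `.parallax-layer` class and `data-speed` attribute are hypothetical names, and the sign of the offset depends on your layout:

```javascript
// A layer's vertical offset for a given scroll position. A lower speed
// makes the layer lag behind the scroll, so it reads as "deeper" than
// layers with a higher speed.
function parallaxOffset(scrollY, speed) {
  return -scrollY * speed;
}

// Browser wiring (skipped when run outside a browser, e.g. under Node):
if (typeof window !== "undefined") {
  window.addEventListener("scroll", () => {
    document.querySelectorAll(".parallax-layer").forEach((el) => {
      const speed = parseFloat(el.dataset.speed || "1");
      el.style.transform =
        `translateY(${parallaxOffset(window.scrollY, speed)}px)`;
    });
  });
}
```

With two layers at speeds 0.5 and 1, a 100px scroll shifts them 50px and 100px respectively; that relative motion is what sells the illusion of depth.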
Most companies’ opening splash screens are a static image or logo. I’ve seen a couple apps recently that have started using the Ken Burns effect, letting the loading-screen logo move slowly, and the result is an app that feels more “alive” right from boot-up.
Timehop’s (#investor) custom-made loading spinners that leverage the company’s mascot, Abe, are wonderful. One shows Abe flying away on a jetpack, presumably to fetch the new content for you. The other shows Abe using his tiny T-Rex arm to spin the loading spinner, like a Big Six wheel at a carnival.
None of these design choices affect the usability of the services, but they all provide delightful user experiences. It’s a small way to differentiate your app and make a lasting impression. It’s the modern equivalent of the Google Doodles. It’s a great way to help create a sense of Voice for your application.
My Favorite Software Limits Me Well
When I think about the software I enjoy using and return to frequently, one of their commonalities is their great execution of limitations. Some examples:
140 characters. That one is a gimme.
I love the way HotelTonight constrains my choice to only 3–9 hotels in a given city (depending on the city’s density). Even better than HotelTonight’s attractive pricing, the constrained choice helps me save time, which is more valuable to me than the $15/night in savings. It spares me the endless optimizing of a hotel aggregator website.
The Wirecutter – my favorite gadget review website – just lists the best product in any given category. “Want the best laptop? Buy this one.” Don’t show me a myriad of options governed by a parametric search interface (yes, you, CNET). Just show me the right one. I bought my TV based on The Wirecutter’s recommendation and it is one of the best purchases I have made in recent memory.
Uber and Hailo. There are no different car options… no prices to consider. It pre-loads your location with GPS, so there’s typically no need to fuss with that either. It’s just a big honking button that says “Pick me up!” All the choices are hidden away behind intelligent defaults. It’s wonderful.
Sometimes even just two choices is one too many. I recall that when we redesigned the homepage at Homestead (a website hosting and editing company, contemporary with GeoCities and Angelfire) to give users a choice between a lightweight web-based editing tool and a heavyweight desktop editing tool, most users just bounced and never returned. Our conversion rate plummeted simply from giving users a choice in how to proceed.
Back when I was playing Product Designer at Homestead in 2005, I often recited an expression defending my design choices that tightly constrained options: “When you give a user too many choices, many will choose nothing at all.”
The (Unfortunate) Rise of Zero Affordance Interfaces
I was talking to a compelling entrepreneur last week who, in an aside in the conversation, was lamenting the increasing popularity of zero affordance interfaces.
What do I mean? Well, an “affordance” is defined quite cleanly on Wikipedia:
An affordance is a quality of an object, or an environment, which allows an individual to perform an action. For example, a knob affords twisting, and perhaps pushing, while a cord affords pulling.
Joel Spolsky does a wonderful job of outlining excellent examples of clear affordance and why they matter.
So, then, a “zero affordance” interface is an object that offers no hint as to how it is to be used.
A classic example of a “zero affordance” interface is the set of multi-finger gestures possible on the touchpads of modern laptops. Dragging two fingers often scrolls the active window. Swiping left or right with four fingers switches between desktops, a complete context switch in one small, quick gesture. These tricks are incredibly effective once you know them, but there are zero affordances indicating they are available when you look down at your laptop.
I take advantage of these zero affordance interfaces often as a power user, once I discover them. But I wish we (collectively, as a tech community) would design fewer of them. They can be very frustrating to users who encounter them by accident. Watching a novice use a modern touchpad is a painful sight at first; the learning curve on multi-touch is steep. It’s even worse for elderly users, many of whom rest multiple fingers on a touchpad by mistake, due to poor dexterity or motor control.
All interfaces should strive to be instantly approachable to their target users. And the increasing use of zero affordance features on touch devices hurts the cause of approachability. When designing a feature that offers no affordance to your users, think twice about whether the savings in screen real estate are really worth the loss of intuitiveness this trade-off requires.