Saturday, February 11, 2012

How Do You Hack


Josh Clark – Buttons Are a Hack: A Virtual Seminar Follow-up » UIE Brain Sparks

[ Transcript Available ]

Touchscreen devices give you the ability to directly manipulate content. This allows designers to create interfaces where the content itself is the control. This lessens the need for buttons and can reduce the level of complexity within your design. The problem is making the user aware of the availability of gestures in your design. Gestures, especially multi-touch gestures, are powerful control mechanisms but useless if the users aren't aware of them.

Josh Clark, author of Tapworthy, says that touch interaction should revolutionize your approach to interface design. In his virtual seminar, Buttons Are a Hack: The New Rules of Designing for Touch, Josh offers techniques to make gestures more discoverable without overloading users, and experiences, with endless instruction. We ran out of time for all of the audience's questions during the seminar, so Josh joins Adam Churchill to tackle those remaining questions.

Here's an excerpt from the podcast.

"…buttons are an abstraction and I don't mean that just in the virtual world, I also mean that in the real world. If you look at the history of the button, which is really only about 100 years old with the introduction of electricity, even then buttons were a hack, a workaround.

If you think about a light switch, putting a switch over here to turn on a light over there is not particularly intuitive, right? It's a workaround because it's really inconvenient to walk into a dark room with a ladder and climb up to the light bulb to turn the thing on. We've used buttons, at times, when we didn't have the luxury of direct interaction. We had to insert this middle man…"

Tune in to the podcast to hear Josh answer these questions:

- How do you transport hover interactions from desktop to touch?
- Should a swipe back act as a back button, or be used strictly for carousels?
- What do you think of Android's multiple buttons versus Apple's one home button?
- What can replace radio buttons?
- How much gravity should be given to savvy touch users versus novices?
- How can users access hints when they're needed later?
- How do multitouch apps like Uzu coexist with iOS system gestures?
- What is the value of affordances, or the lack of them?

As always we want to know what you're thinking. Share your thoughts in our comments section.

Recorded: January, 2012
[ Subscribe to our podcast via iTunes. This link will launch the iTunes application. ]
[ Subscribe with other podcast applications.]

Full Transcript.


Adam Churchill: Welcome, everyone, to the SpoolCast. Josh Clark recently joined us for a virtual seminar titled "Buttons Are a Hack: The New Rules of Designing for Touch". This seminar on mobile design spoke to the massive evolution in technology that is becoming increasingly tactile. Josh is joining me today to get to some of the questions that we didn't have time to address in the seminar.

Now, if you didn't get to listen to this particular seminar, like all of our virtual seminars, you can get access to the recording in our UIE user experience training library. It presently holds over 85 recorded seminars from wonderful topic experts just like Josh Clark that will give you the tips and techniques you need to create great design.

Hey, Josh, welcome back.

Josh Clark: So happy to be here.

Adam: Josh, for the people that weren't with us for your seminar last week can you just give us an overview?

Josh: Sure, yeah. I was basically talking about touchscreen design and the way that touch interfaces require really new thinking about how we as designers create our interfaces. For that matter, it affects us as users, too, sometimes in very subtle ways: we access information through this illusion of unmediated, direct interaction with content, instead of what we're used to, which is the desktop GUI that we've been using for 30 years, with buttons and tabs and menus and so forth.

I guess a broad point that I was trying to make at the outset of the seminar was that direct interaction with content, tapping and stretching and pulling and dragging it, really using content as the interface instead of buttons and controls, is going to revolutionize, or should revolutionize, the way that we design our interfaces.

And so the title of the talk, "Buttons Are a Hack" is really talking about how buttons are an abstraction and I don't mean that just in the virtual world, I also mean that in the real world. If you look at the history of the button, which is really only about 100 years old with the introduction of electricity, even then buttons were a hack, a workaround.

If you think about a light switch, putting a switch over here to turn on a light over there is not particularly intuitive, right? I mean, it's a workaround, because it's really inconvenient to walk into a dark room with a ladder and climb up to the light bulb to turn the thing on. We've used buttons, at times, when we didn't have the luxury of direct interaction, so we had to insert this middle man. So it's a workaround, a hack.

An inspired hack, and a necessary one in many cases. We've used that same approach in our interface design, and I think with touch, where we can create, again, this illusion of manipulating content directly, using content as the control and information as the interface rather than this middle man of buttons and switches, it actually helps us cut through complexity for our users.

The trick is as we explore all the possibilities of gestures, particularly these abstract gestures, multifinger gestures, three finger swipes, things that don't have, maybe, a corollary in the real world, we have this real challenge, both as designers and users, of how to teach those gestures. And that's really what the meat of the seminar was about. What are the techniques that you can use to make these gestures easy to discover without burdening people with lots of instruction?

Adam: Well great. Let's get back to some of the many questions that our audience fed us that day. Tim had a question about web applications on touch devices. How do you transport hover interactions from desktop to touch?

Josh: Yeah, you know, it's a question that I get a lot, because obviously there is no hover on a touchscreen. If you're going to interact with something you literally do have to touch it. I guess I would back up first and say: is hover a great idea anyway? There are other folks who have talked about this at length. I know Luke Wroblewski has a very pointed perspective on this, which is that hover is kind of a crummy idea to use in an interface anyway because it confuses proximity with intent.



That is, just because your mouse strays over an area, it sometimes triggers an interaction that you don't always want. But I'll leave that discussion to the side and recognize that we do have a history of tooltips, and of being able to explore something and ask, "What is that?", which we lose, in a sense, with touchscreens.

For all the advantages that we have with touch, the things that we gain in these interfaces, we certainly lose other things too. For interactions like this, you'll see two patterns that I recommend. One is that a single tap triggers a hover for introductory information. You see this in most touchscreen map apps, for example.

You touch a pin or a location, you see a little, essentially a hover element show up with some cursory information, and then you can tap through again to get additional information. Before talking about the second pattern, one thing to note is that this obviously introduces additional taps. I think that's actually OK. We have a squeamishness about extra taps and clicks that comes from the web, essentially, because of the history of network latency where, especially in the early days of the web, every click was a real commitment, you know?

Clicking a link on the web could mean 30 or 45 seconds before you found out what was on the other side. But in situations where you already have the content cached and available, where there is no latency, additional taps can be fine. The important thing here is not tap quantity but tap quality.

That is, as long as every tap shows you some useful piece of information, or completes a task, or even gives you some sort of delight, additional taps and clicks are OK. They're necessary, in fact, if you want to embrace complexity in your application. The way that you handle complexity in the world, in conversation and books and, in fact, in software, isn't to dump everything on someone all at once but to think about things in terms of progressive disclosure.

This idea of a quick tap on something to find out what it is or what it does, to get that quick information or preview, I think is OK. As long as there's a quick tap to follow it, that's one pattern you can use to replace hover.
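
To make that first pattern concrete, here is a minimal TypeScript sketch of the two-stage tap on the web: the first tap opens a preview callout, and a second tap goes through to the detail view. The class names, data attributes, and helpers are hypothetical stand-ins, not code from any app Josh mentions.

```ts
// Two-stage tap: first tap previews, second tap commits.
// Hypothetical helpers standing in for your app's own view code.
function showCallout(pin: HTMLElement, text: string): HTMLElement {
  const callout = document.createElement("div");
  callout.className = "callout";
  callout.textContent = text;
  pin.appendChild(callout);
  return callout;
}

function goToDetail(id: string): void {
  window.location.hash = `#detail/${id}`; // stand-in for real navigation
}

document.querySelectorAll<HTMLElement>(".map-pin").forEach((pin) => {
  let callout: HTMLElement | null = null;
  pin.addEventListener("click", () => {
    if (!callout) {
      // Tap 1: cursory info, the touch equivalent of hover.
      callout = showCallout(pin, pin.dataset.summary ?? "");
    } else {
      // Tap 2: the "tap through" to full information.
      goToDetail(pin.dataset.id ?? "");
    }
  });
});
```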

The second one is a slightly longer touch. So let's say that you have an element where tapping it will activate it in some way or take you to some other place for navigation and you don't want to insert sort of an intermediary step there. The other alternative is to do a slightly longer touch and that triggers a hover mode to explore. That's something that you can think about, too, for a series of… Say a toolbar that has a set of icons on it.

You might want to find out, what does this icon do when I touch it? A long touch will reveal the tooltip. I'm not talking about a really long touch, maybe a quarter of a second, just long enough to distinguish it from an actual tap. For that same toolbar or palette, you can also just drag across it and trigger an OS X-like dock experience: it doesn't have to be a long tap on one element; a swipe across these things could trigger that OS X hover effect.

It's obviously a different way to think about these things than hover, but these are ways to get at previews or tooltip information.
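
Here is a sketch of that second pattern, assuming the roughly quarter-second threshold Josh suggests; the timing constant and the tooltip rendering are illustrative only.

```ts
// Long-touch tooltip: a hold of ~250 ms reveals the hint;
// a quicker release falls through to the normal tap action.
const HOLD_MS = 250; // roughly the quarter second mentioned above

function attachLongPressTooltip(el: HTMLElement, tip: string): void {
  let timer: number | undefined;
  let tooltipShown = false;

  el.addEventListener("touchstart", () => {
    tooltipShown = false;
    timer = window.setTimeout(() => {
      tooltipShown = true;
      el.title = tip; // stand-in for a styled tooltip overlay
    }, HOLD_MS);
  });

  el.addEventListener("touchend", (e) => {
    window.clearTimeout(timer);
    if (tooltipShown) {
      e.preventDefault(); // the hold was a lookup, not an activation
    }
  });
}
```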

Adam: Eric asks a question about swiping back on a smartphone screen. Should that be used as a back button type of command or strictly used for carousels?

Josh: I think that the swipe is really understood as moving through navigation, and that means moving forward and back, whether or not that's in a carousel context. And you see this. A great example on the web is the New York Times home page: right in the middle of the page they've got a carousel that on the desktop is triggered by forward and back buttons, but if you go to it on the iPad or other touch devices you can actually swipe through it, and those buttons aren't present at all.

Forward and back is just a natural understanding of what swipe does, so you can use it for navigation, and you can use it for moving back and forth through your history, as you see with the iPad version of Twitter, for example, where you swipe through these panels, no back button required. Swipe is flexible in that context. It's one of the fundamental, building-block gestures you can use: across all platforms, swipe means forward and back.
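
As a rough illustration, a horizontal swipe handler for a web carousel might look like this minimal TypeScript sketch; the 40-pixel threshold is an arbitrary assumption, not a platform standard.

```ts
// Horizontal swipe as forward/back navigation on a carousel.
function attachSwipeNav(
  el: HTMLElement,
  onBack: () => void,
  onForward: () => void
): void {
  let startX = 0;
  el.addEventListener("touchstart", (e) => {
    startX = e.touches[0].clientX;
  });
  el.addEventListener("touchend", (e) => {
    const deltaX = e.changedTouches[0].clientX - startX;
    if (Math.abs(deltaX) < 40) return; // too short to count as a swipe
    deltaX > 0 ? onBack() : onForward(); // swipe right = back, left = forward
  });
}
```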

Adam: The team at Excellis Interactive wants to know what you think about Android devices having multiple buttons versus Apple's one big home button?

Josh: It's funny, right, because the talk is called "Buttons Are a Hack", so obviously I'm a fan of asking whether you even need buttons. This is not to say that buttons are bad or evil, just, as I was saying earlier, that they're workarounds, and it's worth asking whether we need all those buttons in the first place.

You know, I think it's useful in Apple's case to have at least a single hardware button. If the screen locks up and you're not able to use your software buttons, having that hardware button is a good fallback. It's still a necessary fallback or workaround, I would think.

Android is interesting because up until now Android phones have always had hardware buttons as part of the device. Starting with the new version of Android, Android 4, Ice Cream Sandwich is its codename, which is just starting to roll out and is still on very few phones, those buttons are moving into the screen as virtual buttons, software buttons, rather than physical hardware buttons.

I think in both cases, actually, those buttons complicate touch interfaces a bit. The ergonomics of these devices is something you have to think about when you're designing the interface for them because we are accustomed to thinking of interface design as sort of a visual pursuit. Certainly there's information architecture but it's about how does this thing look, what is the visual architecture of the page, and so forth.

When you are dealing with devices that are worked by finger and thumb you're really getting out of the realm of graphic and visual design and into the realm of industrial design and thinking about how do these things get used? One of the basic rules of industrial design is that you want to have your controls below the content, right? You think of that in terms of, I don't know, the iPod. It's got the scroll wheel below the display.



Or a scale for your weight: you put your feet below the display. Keep your fingers and thumbs out of the way. That's why you see navigation and buttons, like the home button or the Android buttons, at the bottom of the screen. That's the appropriate place for them. The trouble is that you don't want to stack controls at the bottom of the screen.

If you've got navigation plus these Android buttons that are always fixed there, you get these tap collisions. It's a really busy area of the screen, and having controls at the bottom invites mis-taps, which is why you often see navigation in Android apps at the top of the screen, to separate it from that flurry of system buttons at the bottom. And that actually creates some problems, because there is no great solution.

You have to have your navigation at the top of the screen, which means that your hand is covering the screen every time you use those controls. That's not ideal, but it's better than stacking controls on top of these system buttons. So I would say Android system buttons create a few problems, not least of which are these ergonomic issues, but also that they haven't really been used consistently.

There's this option menu on the Android buttons, think of it as a contextual menu, sort of a right-click button that shows you actions you can take on the current screen. And, you know, a lot of app developers weren't using it consistently, and it's actually getting dropped in the new Android 4. There are a lot of complications that the Android system buttons create, not the least of which, by the way, is the back button, which is very inconsistent: it's hard to figure out exactly where you're going to go back to.

Because sometimes it takes you back in a temporal sense, where was I last, and other times it takes you back in sort of an application architecture sense which is take me up a level. Now Android is actually introducing a second button, an up button, that will try to separate those concerns so that one is about navigating the application and the other one is about navigating where you've been.

I'm concerned that that's going to be an additional level of complexity. It's just going to make people more confused, because sometimes those buttons will do the same thing and other times they will do something different. I have a lot to say about the Android system buttons. I'm afraid it's sort of more bad than good.

Adam: The folks at Crate & Barrel would like to know what you suggest as a replacement for radio buttons.

Josh: Oh, I'm so glad. I thought they were going to ask me to suggest some new flatware for their 2012 season. Let me say a bit more about that. Not everything necessarily needs to be replaced by gestures. My point is not to be particularly dogmatic here and say, "Get rid of all of your buttons, go fully gestural." We're always going to need buttons for some things, particularly for abstract actions, and, of course, they're great labels. They tell you where you are.

And certain kinds of buttons, and here I'm thinking maybe more of tabs than radio buttons specifically, though tabs and radio buttons function the same way: you have one active element. I think that when you're choosing options, when you're choosing text options and things like that, touching the element that you want is fine. I don't know that we necessarily need to replace radio buttons.

The option there is: here is a multiple-choice selection, choose the one that you want. I think that touching that content directly, touching that option, is fine; that is still, itself, using content as the control. You maybe don't actually need the radio button, the little round button itself; just touching the content and highlighting it in some way is just as effective.

Adam: The team at Uncorked Studios asks a pretty interesting question, I think, and it has to do with how do you get your instructions to your users. Their question is how much gravity do you think needs to be given to multiple audience types? The example they give is super savvy touch users versus say novices.

Josh: I think it's important to remember that there are very few experts in this right now and that, really, all of us are novices. Yes, there are things like teaching people to zoom in and out by pinching or spreading their fingers, or what a tap means, or what a swipe means, but those things are very easily taught, usually in the very first few seconds of using a device, and so people get those.

The question is how do you teach people more abstract gestures, the things that maybe aren't totally obvious? This is really the realm of shortcut gestures. These are gestures that you use to do something quickly, you know, say a two-finger swipe to skip ahead to the next section of the newspaper rather than the next article, for example.

The reason I say we're all novices in this, and this is true for both designers and users, is that there aren't conventions yet. Doing a two-finger swipe in one magazine does something completely different in another magazine. So we're in this very exciting but also confusing and unsettled, and I suppose unsettling, era of interaction design, in which we don't yet have these conventions and we're all making it up through trial and error.

And hopefully when I say trial and error… just experimentation is what I'm getting at. That's true for users, too. I think a lot of people will touch around the screen to see if there's anything hidden there, and that's not good enough. So I guess I would say that we need to treat all of our users as new users, because we really are. One thing that I said in the seminar is that I think it's useful to think of yourself as a parent when you are designing and trying to explain these interfaces.

By that I don't mean to say that we should treat our audiences like children or be patronizing or condescending about it, only that we should use the same kind of empathy and care with our audiences, the people who use our software, that we would when explaining something new to a child, because, in the same way, we haven't seen this stuff before.

Very much what I was talking about in the seminar was looking at how we teach through toys and games. I mean, games are terrific at teaching new interactions. Many games, you know, drop you into an environment where you don't even know what your goal is when you start, let alone what your abilities or powers are or what obstacles you're going to have to overcome.


And if that sounds familiar, it should, because that's exactly the situation we find with a lot of touch interfaces: there are beginning to be, and I suspect there will be more and more, these sort of invisible interactions that we have to be taught, because we're not going to discover them on our own. And so when we talk about discoverability, a lot of times some of these gestures get poo-pooed a little bit: oh, nobody's ever going to be able to find them.

You know, that's true about a lot of things in nature, too, and we learn things by being shown them. We have to do a little bit of demonstration and practice, which is exactly how video games teach you, through levels and contextual help. I think the important thing here for instruction is that it's not about providing a manual or a screencast or a whole long list of gesture instructions as the first view of the application, because we won't read them and we won't watch those screencasts.

Instead, we need to teach people gradually rather than all at once. Sure, have that reference manual for advanced users; I think that's the way to think of it, as a reference manual, not a learning guide. As software designers we need to do a lot more of what game designers do: keep an eye on the progress that people are making through our applications and give tips, helping people move from novice to expert to master through this kind of contextual help.

Once they've learned the building blocks, then you can show them the gesture shortcuts; gestures are essentially the keyboard shortcuts of touch. They are useful for anyone but especially useful in the hands of an expert, that is, an expert not in touchscreens in general but in your application, someone who knows enough about the application to say, "Great, I've got it, I want to apply those gestures to move through the application more quickly."

Those are things you don't need to teach right away; you should teach them step by step. One approach that I mentioned is allowing people to do things the slow way. If something takes three taps to get to, let them do it that three-tap way, because that reinforces the mental model of where things live in the application, the geography, if you will, of the application's information.

But once they've done that a few times, say the fifth or tenth time they've done that three-tap sequence in a row, then you show a little overlay and a gesture animation: oh, here's a gesture you can do instead of those three taps. Then you wait for them to actually do that gesture, so it's "here's a demonstration," now, you know, encouraging practice.

Then their very first interaction with that gesture is a success, and they've done it themselves. You don't have to show that tutorial anymore, because the lesson's been learned, and you move on.
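
A minimal TypeScript sketch of that teaching loop might look like the following; the counter threshold, storage keys, and overlay helpers are all hypothetical, standing in for whatever your app actually uses.

```ts
// Teach the shortcut only after repeated slow-path use, then wait
// for the user to perform the gesture once before retiring the tip.
const SLOW_PATH_THRESHOLD = 10;

function showGestureOverlay(gesture: string): void {
  /* play an animated demo of the gesture (hypothetical) */
}

function hideGestureOverlay(gesture: string): void {
  /* dismiss the demo overlay (hypothetical) */
}

function recordSlowPathUse(gesture: string): void {
  if (localStorage.getItem(gesture + ":learned")) return; // already taught
  const count = Number(localStorage.getItem(gesture) ?? "0") + 1;
  localStorage.setItem(gesture, String(count));
  if (count >= SLOW_PATH_THRESHOLD) {
    showGestureOverlay(gesture); // demonstration...
  }
}

function onGesturePerformed(gesture: string): void {
  // ...and practice: the first successful use retires the tutorial.
  localStorage.setItem(gesture + ":learned", "true");
  hideGestureOverlay(gesture);
}
```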

Adam: So I think this next question takes that thinking about instructions to your users a little bit further. What thoughts do you have on how to access hints when they're needed in the future?

Josh: Right. So, the thing I just mentioned, about giving that demonstration after somebody has already done something the slow way: I think that works really well for devices and applications that are used by a single user, which is generally true of our phones. We're usually the only ones who use our phones, except if you have kids, and man, they're always trying to pull it away from you.

For more social devices like an iPad, for example, where it's often used by many people this idea of watching somebody's behavior and then giving them that tutorial and then not giving it again is obviously problematic because it may be missed by one of the other users in the session.

So, two things. For bringing those tips back, I think you want to reset the counter to zero. If you trigger that tutorial, that hint, the tenth time somebody does it the long way, the slow way, and then you show them the fast way with the gesture, just reset the counter. Here it comes again: wow, you've done it 10 more times in a row, here's a better way to do it that I think you're going to like. That will handle the more social environment.
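
Building on the earlier sketch, the shared-device variant is a small change: instead of marking the lesson permanently learned, reset the counter each time the hint is shown, so another user who falls into the slow path sees the demonstration again. Again, the names and threshold are hypothetical.

```ts
// Shared-device variant: the hint can recur rather than being
// retired after one user learns it.
function recordSlowPathUseShared(gesture: string): void {
  const count = Number(localStorage.getItem(gesture) ?? "0") + 1;
  if (count >= SLOW_PATH_THRESHOLD) {
    showGestureOverlay(gesture);
    localStorage.setItem(gesture, "0"); // reset: the counter starts over
  } else {
    localStorage.setItem(gesture, String(count));
  }
}
```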

As I also said, I think it's important to have a reference, so that if you're thinking, "I know there was something. How did I do that?" you do have a reference guide as a backup. Again, don't think of it as a learning method. People won't read the instructions unless they're looking for something; they won't read the full set of instructions just to learn how to use the app.

It's this combination: contextual hints that recur when it seems the lesson hasn't sunk in, to show people that shortcut, that power-up, coupled with a complete reference that people can go back to when they're like, "Wow, wasn't there something that I could do?"

Adam: Our next question has to do with a really cool demo that you wrapped up the seminar with, the Uzu demo, the little video you showed us. The question: can you say a bit more about the division between the iOS interactions, like application selection and the menus, and what's happening when you're using something like Uzu?

Josh: Right. Uzu, for people who don't know about it, is this thing that's called a kinetic particle visualizer but it's really more like a lava lamp, like a little tool to hypnotize stoners. It's super fun and trippy and you know, it has all these multitouch gestures that essentially draw visual effects. The varying number of fingers on the screen changes what the app does or the patterns that it makes.

So three fingers on the screen does something different than seven fingers on the screen, and by lifting and lowering your fingers and drawing with them you can really play it almost like a visual instrument. The question, though, goes to: what about when there are collisions between the multitouch gestures you're doing in an app like Uzu, or anything using multitouch gestures, and operating system gestures?
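
As a rough idea of what that finger-count branching looks like in code, here is a hypothetical TypeScript sketch; the canvas selector and the mode handlers are invented for illustration, not Uzu's actual implementation.

```ts
// Uzu-style finger-count dispatch: the number of active touches
// selects a different visual mode.
const canvas = document.querySelector<HTMLCanvasElement>("#visualizer")!;

const drawTrail = (t: TouchList) => { /* one-finger effect */ };
const drawRibbon = (t: TouchList) => { /* two-finger effect */ };
const drawParticles = (t: TouchList) => { /* three-finger effect */ };
const drawStarburst = (t: TouchList) => { /* four or more fingers */ };

canvas.addEventListener("touchmove", (e) => {
  e.preventDefault(); // keep the browser from panning or zooming
  switch (e.touches.length) {
    case 1: drawTrail(e.touches); break;
    case 2: drawRibbon(e.touches); break;
    case 3: drawParticles(e.touches); break;
    default: drawStarburst(e.touches);
  }
});
```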

For example, in iOS there are now gestures for three- and four-finger pinches to close an app, and you can use, you know, a four-finger swipe to move back and forth through apps.


The problem is sometimes you'll be using Uzu or some other app and suddenly it will trigger that iOS gesture and send you right out of the app, which is really disconcerting and obviously bad. I'm not sure there's much that app developers can do about it, and I've written about this a little bit. I was really disappointed, by the way, that Apple implemented these gestures. Because, wow, a full-finger pinch, that's a gesture that we could really put to use as app developers.

A three- or four-finger swipe, that's really useful: being able to take your whole hand and slap at the screen to do something, right? Unfortunately, Apple has taken these really useful gestures and not only locked them up in ways that we now can't use as app designers, but, I think more important, it confuses things. You have operating system gestures happening in the app canvas, in the area that's supposed to be dedicated to the app, so it seems like you're touching the application content but you're actually doing things at a more abstract operating system level, and that's confusing.

I think that Apple, frankly, did this the wrong way. Look at other operating systems, like Palm's webOS, which looks like it's probably defunct now, we'll see, or the forthcoming Windows 8, and others. I should say MeeGo also, the Nokia/Intel operating system. All these things had operating system gestures that worked from the edge, so-called edge gestures.

That is, if you wanted to flip through the current applications you would swipe across the screen but starting off-screen, on the bezel, on the frame, and push into the app. I think that that's much more useful because that preserves the app canvas so that you can do whatever you want within that frame, but if you do gestures from the frame then that's something that's more reserved for operating system things.

What's great about that, too, is that it fits the metaphor. The operating system is the frame for applications, so if you start gestures outside of the application, at the frame level, it rhymes with the way we understand how an operating system and applications work. Unfortunately, I think Apple blew it with these operating system level gestures, and they do cause collisions, just like the question hinted at, and that's bad.
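
In web terms, an edge gesture is easy to approximate: check where the touch begins. This TypeScript sketch is illustrative only; the 12-pixel margin is an assumption, not a platform constant.

```ts
// Edge gesture detection: a touch that begins within a few pixels of
// the screen edge is treated as system-level, leaving the rest of the
// canvas to the app.
const EDGE_PX = 12;

function startsAtEdge(e: TouchEvent): boolean {
  const x = e.touches[0].clientX;
  return x <= EDGE_PX || x >= window.innerWidth - EDGE_PX;
}

document.addEventListener("touchstart", (e) => {
  if (startsAtEdge(e)) {
    // Reserved for the "frame": switch apps, open a system panel, etc.
  } else {
    // Everything else belongs to the app canvas.
  }
});
```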

It creates uncertainty for your users: wow, if I use this gesture, what am I affecting? That's a really critical problem. Our job as designers, and I would say Apple's job as an operating system designer, is to reduce that uncertainty and give our users confidence.

Adam: Josh, there was also a question about the value of affordances or lack of them.

Josh: Affordances, or signifiers as Don Norman likes to call them, are visual hints that suggest how an interface works, whether that's a door handle or a button in an app. And we need affordances. That's how we figure out how stuff works. Things can't be a mystery. We do need visual hints and references in order to figure out how to do gestures.

Especially these abstract gestures, which don't necessarily have a corresponding action in the real world. Again, the two- or three-finger swipe, for example: that's not visible, and we need to create a visible hint for how to do it. People can learn these invisible gestures, but only if we tell them about them, only if we show them.

I think it really does mean that we need to use animation, we need to use overlays, we need to use all kinds of different hints: color, little pulses of brightness, things that draw attention to elements that can be touched or manipulated. We have to provide affordances, and I think the way to do it is in context, and we have to get better at it.
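
As one small example of that kind of hint, here is a hedged TypeScript sketch that pulses a touchable element's brightness with the Web Animations API; the timing, easing, and the data-touchable attribute are invented for illustration.

```ts
// A brief brightness pulse to hint that an element is touchable.
function pulseAffordance(el: HTMLElement): Animation {
  return el.animate(
    [
      { filter: "brightness(1)" },
      { filter: "brightness(1.4)" },
      { filter: "brightness(1)" },
    ],
    { duration: 1200, iterations: 3, easing: "ease-in-out" }
  );
}

// Usage: pulse anything marked as touchable when the view appears.
document.querySelectorAll<HTMLElement>("[data-touchable]")
  .forEach(pulseAffordance);
```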

And I don't want to paper over the challenge of this. I think this is a real development challenge as well as a design challenge of observing and being thoughtful about where people are in the app, what they've done before, what they've yet to learn, and providing instruction in that moment, which is exactly what games do so well.

I think that all designers should be playing a lot more video games to see how software can teach you. We've had a lot of bad experiences with this, right? I mean I think everyone remembers Clippy, the little talking paperclip. The concept wasn't bad, having an assistant to pop in and tell you, hey, this is how you do this. It's just the content was terrible.

Every time, every time, you started to write a letter Clippy would say, "Hey, do you need help writing a letter? It looks like you're writing a letter." That's not helpful. It was distracting and also redundant. Maybe the first time you could say no thanks, I know how to write a letter, but he kept coming back every time you wrote the word 'dear'. Dear Adam, then here it comes.

That's not the way to do it. You can't have overbearing help and it has to be contextual and it has to be meaningful and it has to be based on what you've already observed that the user knows. Affordances and help and hints, absolutely critical. You think about the way that we learn to do things in the world and it is through, as I said earlier, demonstration and then practice so we have to demonstrate how this stuff works.

I've alluded to this before: the way we typically do it is through some sort of mountain of instruction, usually before we even start using the app, or through some sort of screencast that's going to take five minutes to watch. We just don't do that as human beings. We don't like to read instructions because it feels like a diversion. Even if it's something that will help us get our job done more quickly, it feels like it's not, because we want to get right to it.

Yeah, we need to have that kind of instruction but we have to do it carefully.

Adam: Josh, awesome stuff.

Josh: Thank you, sir.

Adam: Thanks for circling back with us and to our audience thanks for listening in and for your support of the virtual seminar program. Goodbye for now.





