
Design for Fingers, Touch, and People, Part 1

Mobile Matters

Designing for every screen

A column by Steven Hoober
March 6, 2017

People have now read and referred to my 2013 column How Do Users Really Hold Mobile Devices? almost too much for my comfort. Why? Because, since I wrote that column, I have continued to do research, put my findings into practice for real products, written additional articles, and presented on that topic. In the years since then, I’ve learned a lot more about how people hold and touch their phones and tablets—a lot of which I didn’t expect. And that’s the problem with my old columns. I made some assumptions that were based on observations of the usage of desktop PCs, standards for older types of interactions, and anecdotes or misrepresented data. However, through my later research and better analysis, I’ve been able to discard all of those erroneous assumptions and reveal the truth.

All too often, I see people referring to my oldest, least-accurate columns on this topic. Sometimes readers combine my obsolete data with other out-of-date information, then draw their own incorrect conclusions. I hope to put a stop to that now with this updated overview of everything I know about how people interact with touchscreen devices and how you can use that information to design better digital products.


Trust Data, Not Your Gut

We all fall prey to our biases when analyzing our observations—unless we’re very, very careful not to do so.

You almost certainly own a smartphone, use it to browse the Web, have a bunch of favorite mobile apps, and think you understand how everyone uses their phone. But you are probably wrong! Most designers have just one phone, and, as a designer, you’re much more likely to have an iPhone—even though most of the world uses Android devices.

Plus, we’re bad at self-reporting, and there’s a great deal of rumor and misunderstanding about cognitive psychology, physiology, and design patterns and standards.

Touch is not a natural paradigm for interacting with mobile devices. Thus, we must learn how people actually use touch and develop new design paradigms for it. By trading scroll-and-select phones—or the mouse and keyboard—for touch, we’ve just ended up with a new set of interaction problems to solve—and, in some cases, more intractable problems because touch interactions are still quite new. We are still developing interaction patterns for touch. Plus, the scope and depth of our understanding of how touchscreens work remains limited. All too often, we make design decisions based on anecdote, opinion, personal bias, hearsay, and rumor.

The Technology of Touch

Before I get back to my research observations, I want to discuss technology briefly. I’ll share some key things you should know about how touchscreens work—as well as the history of touchscreens—that will help you to understand the human behavior we see today and explain some of the problems we encounter in our data.

Light Pens

First, there were pens. The pen, or stylus, preceded the mouse as a pointing device for computers—and never really went away. Early incarnations of the stylus were called light pens. The first production application using a stylus was for SAGE—the giant, networked Semi-Automatic Ground Environment system for the US Air Force.

The Nintendo Duck Hunt gun used the same principle: the pen was not an indicator per se, but a reader that was closely coupled to the timing of the display, so it could tell what part of the screen it was pointing to.

By the late 1960s, the light pens that were available for desktop workstations looked much like the styluses we use today. They let us do all the familiar interactions, including pointing, selecting, copying, pasting, and gesturing.

Digital pens are still in common use today, but no longer use the same technology as early pens. Now, some digital pens do additional cool things such as detecting pressure and angle.

IR Grids

One of the first touchscreens used a grid of infrared beams—crossing the screen from side to side and from top to bottom—to detect the position of the user’s finger. In the 1980s, these touchscreens were used for ATMs and other devices for public use such as museum kiosks.

As you might imagine, the IR beams could detect anything on the screen, so these devices employed some simple human-factors practices to keep users’ sleeves and papers off the screen—for example, they had a thick bezel that inset the screen deeply. The IR beams were coarse, so they detected the user’s whole finger. Although these screens were eventually able to calculate finer positions, most still assumed the user had selected a fairly large area. Thus, the entire screen was a simple grid of selectable areas. Because of all this, buttons had to be large. But, for their applications, these systems worked well overall and were reliable.

Resistive Touch

Touch came into broader use when resistive screens came on the market, and people started perceiving touch as a natural form of interaction. The term resistive refers to the electrically resistive layers these screens use to sense a touch. The top layer is a flexible plastic. When the user applies pressure with a pen or finger, a grid of conductive wires on its underside makes contact with another conductive grid beneath it, registering the position of the touch.

These screens can be very responsive and highly precise. But, as with most things, there’s a tradeoff for the flexible top layer between responsiveness and ruggedness. It’s possible to scratch, wear through, or even tear the top layer of plastic. Highly responsive systems are more fragile; rugged systems are harder to use and may require a passive stylus to tap the screen hard enough.

Until quite recently, resistive screens were the go-to touchscreens for lower-end devices—and in certain environments—but the demand for more responsive touch and better materials has made them mostly a thing of the past now.

Capacitive Touch

Now, in 2017, when someone talks about touchscreens, they mean capacitive touch. This is the type of touchscreen on all mobile phones, tablets, entertainment systems, cars, kiosks, and, increasingly, other small devices that are currently in production.

Capacitive touch uses the electrical properties of the body. That’s why it doesn’t work with any old pen as a stylus, when wearing gloves, or even when your skin is too dry. A finger acts as a capacitor, whose presence on the screen is measured by nodes on a grid—comprising layers on X and Y axes—between the display screen and the protective plastic or glass cover, as shown in Figure 1.

Figure 1—Simplified diagram of the layers of a capacitive touchscreen
Simplified diagram of the layers of a capacitive touchscreen

While high-resolution detectors exist, they are used only for special devices such as fingerprint sensors. Most touchscreens use very coarse grids, as on the Casio mobile phone shown in Figure 2, then calculate the precise position of the finger.

Figure 2—Vertical, capacitive touch sensors, visible in sunlight
Vertical, capacitive touch sensors, visible in sunlight on a Casio mobile phone

This is not a perfect system. There are obstacles to increased precision, ranging from complex mathematical calculations to electrical interference to tradeoffs between thickness, weight, cost, and optical clarity. If a screen were sensitive enough to register tiny amounts of finger pressure or tiny stylus taps, ambient electrical noise would become overwhelming, and it would be hard to use your phone in the real world at all.

A few years ago, Motorola put a handful of devices in a little jig so a robot could precisely control the pressure, angle, and speed of each touch and measure how accurately the screens sensed it. You can try this yourself by drawing parallel, diagonal lines in a drawing tool, using a ruler and a stylus, as shown in Figure 3. The lines you draw probably won’t be perfectly straight.

Figure 3—Demonstrating touch interpretation inaccuracy
Demonstrating touch interpretation inaccuracy

While the irregular spacing of the lines in Figure 3 is my fault—I’m not a robot—the other issues demonstrate the limits of the touchscreen. Discontinuities in the lines are sensing-precision failures. (The small stylus tip is likely the cause of those, so that issue probably wouldn’t occur with a finger.) The swoops and gaps at the edges are artifacts of the screen’s construction and interference. The waves occur where the calculations between grid lines are a bit off.

Size, Pressure, and Contact Patches

The contact patch is the area of the user’s finger that is in contact with a capacitive touchscreen. As Figure 4 shows, this area can vary a lot, depending on how the user touches the screen—for example, with the tip or pad of a finger or thumb—or if the user presses harder.

Figure 4—The contact patch can vary in size and shape
The contact patch can vary in size and shape

I won’t share everything I’ve found out from my research about contact-patch sizes for different user types because it doesn’t vary as much as you might expect, and it doesn’t matter. Capacitive touchscreens report only a single point of contact at the centroid, or geometric center, of the contact patch. It doesn’t matter how big the contact patch is, and there’s no need to detect pressure, size, or anything else. While many devices support multi-touch and a few can detect pressure, these capabilities are not supported consistently, so it’s hard to implement them usefully. Therefore, unless you’re creating a drawing tool or a game, pretend, for now, that touchscreens don’t detect pressure. While you may find this counterintuitive, it’s important to recognize that the size of the user’s finger is totally irrelevant to touch accuracy and touch sensing.
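
To make the centroid idea concrete, here is a minimal, hypothetical sketch in Python of how a touch controller might reduce a whole contact patch to the single point your app receives. It illustrates the principle only, not how any real controller firmware works, and all of the node positions and signal values are made up.

```python
# Illustrative sketch only: a highly simplified model of how a touch
# controller might reduce a contact patch to one reported point. Real
# controllers use proprietary filtering and interpolation.

def reported_touch_point(node_signals):
    """Compute the signal-weighted centroid of capacitance readings.

    node_signals: dict mapping (x_mm, y_mm) grid-node positions to the
    signal strength each node measures under the contact patch.
    Returns a single (x, y) point, the only coordinate the app ever sees.
    """
    total = sum(node_signals.values())
    x = sum(pos[0] * s for pos, s in node_signals.items()) / total
    y = sum(pos[1] * s for pos, s in node_signals.items()) / total
    return (x, y)

# A big, soft thumb press and a small fingertip tap centered on the same
# spot report essentially the same point, which is why contact-patch size
# is irrelevant to the coordinates your app receives.
big_press = {(0, 0): 2, (5, 0): 8, (10, 0): 8, (15, 0): 2,
             (0, 5): 2, (5, 5): 8, (10, 5): 8, (15, 5): 2}
small_tap = {(5, 0): 4, (10, 0): 4, (5, 5): 4, (10, 5): 4}
print(reported_touch_point(big_press))   # (7.5, 2.5)
print(reported_touch_point(small_tap))   # (7.5, 2.5)
```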

Plus, because old touch standards were based on finger size, they’re no longer relevant. For example, because IR-grid systems were the dominant touchscreen technology at the time the ISO standards were written, and they worked by detecting the user’s finger, these standards specify that targets must be 22x22 millimeters to accommodate larger fingers. The standards’ authors didn’t do much research on pointing accuracy, and, for the technology of the time, that size worked fine.

When you employ standards, you should be sure that you understand the basis of specific recommendations. As touchscreen technologies have evolved, standards have not always kept pace with them. The research on which these standards were based may be wrong, out of date, or apply only to a specific situation or technology.

Obsolete Standards

The ISO is not the only group promoting obsolete standards. Every mobile operating-system developer and some OEMs promote their own touch-target sizes. Nokia borrowed a version of my old standards and has never updated them. Microsoft does a slightly better job, suggesting the spacing between touch targets, but overly small target sizes are still an issue.

Google and Apple use other sizes that seem to be based more on platform convenience than human factors. Any standard that uses pixels instead of physical dimensions is useless because even device-independent pixels vary a lot from screen to screen, and they bear no relationship to human dimensions.
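
To see the problem in numbers, here’s a quick, hypothetical sketch. On Android, one density-independent pixel renders as densityDpi / 160 physical pixels, but the density bucket only approximates the panel’s true pixel density, so the same dp value comes out at different physical sizes on different screens. The device figures below are invented for illustration, not measurements of real handsets.

```python
# Illustration of why pixel-based target sizes drift from physical size.
# The device figures below are hypothetical examples, not measurements.

MM_PER_INCH = 25.4

def physical_size_mm(size_dp, density_bucket_dpi, actual_dpi):
    """Physical size of a density-independent-pixel (dp) target.

    Android renders 1 dp as density_bucket_dpi / 160 physical pixels,
    but the bucket only approximates the panel's true pixel density,
    so the same dp value lands at different physical sizes.
    """
    pixels = size_dp * density_bucket_dpi / 160
    return pixels / actual_dpi * MM_PER_INCH

# The same nominal 48 dp target on three made-up screens:
for name, bucket, actual in [("Phone A", 480, 515),
                             ("Phone B", 420, 386),
                             ("Tablet C", 320, 283)]:
    print(f"{name}: {physical_size_mm(48, bucket, actual):.1f} mm")
# Phone A: 7.1 mm, Phone B: 8.3 mm, Tablet C: 8.6 mm (approximately)
```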

It’s not just touch-target sizes that have become obsolete, but many other standards that relate to mobile devices. I usually refer to the W3C’s WCAG standards because they are clear, simple, and universally accepted. However, I am never totally comfortable doing so because these standards do not apply to mobile. The W3C more or less ignores mobile devices, especially when it comes to accessibility standards. They assume all computers are desktop PCs with a keyboard and mouse, sitting at arm’s length from the user’s eyes. Their standards define pixel sizes using the old 72/96 ppi (pixels per inch) standard and make no reference to viewing angles, glare, distance, or other issues with which mobile users must contend.

Hopefully, the inadequacy of mobile standards will soon be a thing of the past, as we continue doing research and promote better standards. The usual computer is no longer a desktop with a keyboard and mouse, but a touchscreen smartphone or tablet. Many billions more users have mobile devices than PCs, and their technologies, contexts of use, and users’ needs differ greatly.

Defining New Standards

I hate it when our patterns, heuristics, and usability data get confused with people’s opinions and gut instincts. We’re not artists, but UX researchers and designers. At our best, we’re engineers and scientists. I take this very seriously.

Five or six years ago, I started seeing data that didn’t feel right but that I couldn’t prove was wrong. So I started researching primary user behaviors myself. Soon, I had observed over 1,300 people using their mobile phones on the street, at bus stops, on the train, in airports, and in coffee shops, in several different countries.

I also did meta-research, using my ACM Digital Library account to read dozens of reports on touch and gesture, normalizing the most relevant, and correlating their findings with my own. All of the reports agreed with my findings, so I realized I was onto something. My favorite research had recorded over 120 million touch events, so it was statistically valid.

In concert with the eLearning Guild, I made another 651 observations in schools, offices, and homes, adding more data on tablets and types of users, and reconfirming my data on phone use. I’ve done intercepts and remote unmoderated testing to get data on how people use touch, depending on types of input and the tasks users are trying to perform.

Now, others are doing more research. We’re getting usage data from other countries and for other devices. Researchers have gathered this data in different ways and captured data on mobile-device use when people are doing other things such as carrying their shopping.

All of the information in this series is based on this huge body of research. If I don’t yet know something, I’ll tell you. But we now know a lot about how to design mobile apps for people, for their many different devices, and the varying ways in which people use them. Now, we need to document and use these new standards instead of relying on obsolete standards and biases.

The Science of Touch

Most designers who think about people’s use of mobile phones at all still seem to assume that all mobile phones are small iPhones, grasped in one hand, and tapped with the thumb. They still believe in the thumb-sweep charts shown in Figure 5, believe all taps should be at the bottom of the screen, and that no one can reach the upper-left corner of the screen.

Figure 5—The well-known, but incorrect thumb-sweep chart
The well-known, but incorrect thumb-sweep chart

However, when I do field research, I see people use the Back button all the time. In fact, it’s usually the most-used button on the screen, even when it’s in the supposedly unreachable upper-left corner. Something else must be happening, so let’s start with the fundamentals. What do we know about the human thumb?

As Figure 6 shows, the bones of the thumb extend all the way down to the wrist. Plus, the thumb’s joints, tendons, and muscles interact with the other digits—especially the index finger. If the fingers are grasping a handset, the range of motion that is available to the thumb is more limited. But by moving their fingers, users can change the area of the screen their thumb can reach.

Figure 6—How the bones of the thumb move in extension and flexion
How the bones of the thumb move in extension and flexion

Basically, the thumb moves in a sweeping range—of extension and flexion—not from the point at which it connects with the rest of the hand, but at the carpometacarpal (CMC) joint way down by the wrist. The other joints on the thumb let it bend toward the screen, but provide no additional sweep motion. The ability to bend the thumb is important because, while the thumb’s free range of movement is in three-dimensional space, touchscreens are flat. Therefore, only a limited portion of the thumb’s range of movement maps onto the flat plane of the phone’s screen.

Basic Observations on Touch

The thumb is the hand’s strongest digit, so using the thumb to tap means holding the handset with the weaker fingers. People realize this, so in the real world, where they may encounter or even expect jostling or vibration, they tend to cradle their mobile device, using one hand to hold it and securing the device with their non-tapping thumb.

Do people hold their phone with two hands? No. People hold their phone in many ways, while shifting their hold a lot. This should not be a surprise, because what we’ve learned from studying users in all sorts of contexts is that people vary, and we have to account for all of that variation. But I didn’t expect this finding, which surprised me enough that I had to revise my data-gathering methods after the first dozen or so observations. Figure 7 shows the six most common methods people use to hold and touch their mobile phone.

Figure 7—Common ways people hold and touch their mobile phone
Common ways people hold and touch their mobile phone

Over time, I’ve obtained solid rates of use for the various methods of holding and touching a mobile phone. I’ve observed these over and over again, with each study I’ve conducted or read about. Here are my fundamental findings:

  • People hold phones in multiple ways, depending on their device, their needs, and their context.
  • They change their methods of grasping their phone without realizing it, which also means people cannot observe themselves well enough to predict this behavior.
  • 75% of users touch the screen only with one thumb.
  • Fewer than 50% of users hold their phone with one hand.
  • 36% of users cradle their phone, using their second hand for both greater reach and stability.
  • 10% of users hold their phone in one hand and tap with a finger of the other hand.

But these are just the basics. There are other methods of holding mobile phones, using devices that users have set on surfaces, differences in methods for using tablets, and behaviors that adapt depending on what else the user is doing—in life or on the screen.

Perhaps the most surprising and most critical observation I’ve made is that, on mobile touch devices, people do not scan from the upper left to the lower right as they do on the desktop. Nor do they touch the screen in the opposite direction—from the lower right to the upper left—because of the limitations of their thumb’s reach. Instead, they prefer to view and touch the center of the screen. Figure 8 shows touch accuracy for the various parts of a mobile phone’s or tablet’s screen.

Figure 8—Chart showing touch accuracy for specific parts of the screen
Chart showing touch accuracy for specific parts of the screen

People can read content best at the center of the screen and often scroll content to bring the part they’re reading to the middle of the screen if they can. People are better at tapping at the center of the screen, so touch targets there can be smaller—as small as 7 millimeters, while corner target sizes must be about 12 millimeters.
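
If you want to apply these physical sizes in code, the conversion is straightforward. The sketch below is only illustrative: the 7-millimeter and 12-millimeter minimums come from the findings above, but the rule of treating everything outside the middle of the screen as an edge or corner target is a simplification chosen for the example, and the 400 ppi screen is hypothetical.

```python
# A minimal sketch of applying these physical-size findings in practice.
# The 7 mm center and 12 mm corner minimums come from the article; treating
# everything outside the middle two-thirds of the screen as an edge or
# corner target is my own simplification for illustration.

MM_PER_INCH = 25.4

def min_target_px(x_frac, y_frac, screen_ppi):
    """Minimum touch-target size, in physical pixels, for a target whose
    center sits at (x_frac, y_frac) in 0..1 screen coordinates."""
    in_center = 1/6 <= x_frac <= 5/6 and 1/6 <= y_frac <= 5/6
    size_mm = 7.0 if in_center else 12.0
    return round(size_mm / MM_PER_INCH * screen_ppi)

# On a hypothetical 400 ppi phone screen:
print(min_target_px(0.5, 0.5, 400))    # center of screen: ~110 px
print(min_target_px(0.95, 0.05, 400))  # top corner: ~189 px
```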

Something I’d known perfectly well from a life of observation and data analysis still took me a while to understand and internalize: people never tap precisely where they mean to. There is always inaccuracy. In Figure 9, you can see the actual tap points from one study of mine.

Figure 9—Taps on a target and the ideal circle size for that target
Taps on a target, overlaid with the ideal circle size for a target

After dozens of observations, not a single user had tapped the exact center of the menu icon, and many taps were quite far from the target. Some users even missed it entirely. The misses are the key point. No target size can capture every tap, so, when doing this kind of research, record all taps, including the misses. Misses are on a continuum with no end, so just pick an acceptable rate and go with it. All of my suggested target sizes capture 95% of all observed taps.
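
For those who want to reproduce this kind of analysis, the calculation is simple: measure each tap’s distance from the intended target center, then find the radius that contains your chosen percentage of taps. The sketch below shows the idea with fabricated tap data; it is not the data from my studies.

```python
# Sketch of the percentile approach described above: given recorded tap
# positions around an intended target center, find the radius that
# contains 95% of them, and size the target accordingly. The tap data here
# is fabricated purely to show the calculation.
import math
import random

def radius_for_rate(tap_offsets_mm, rate=0.95):
    """Radius (mm) of the smallest circle, centered on the target,
    that contains the given fraction of observed taps."""
    distances = sorted(math.hypot(dx, dy) for dx, dy in tap_offsets_mm)
    index = max(0, math.ceil(rate * len(distances)) - 1)
    return distances[index]

# Fake observations: taps scattered around the target center at (0, 0).
random.seed(1)
taps = [(random.gauss(0, 2.0), random.gauss(0, 2.5)) for _ in range(500)]

r95 = radius_for_rate(taps, 0.95)
print(f"95% of taps fall within {r95:.1f} mm; "
      f"target diameter of about {2 * r95:.1f} mm")
```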

We also need to avoid having misses cause problems, so accept the fact that failures, mistakes, and imprecision exist. Account for mistakes by placing dangerous or unrelated items far from other items, thus eliminating or reducing the consequences of accidental taps.

Touch-Friendly Information Design

Over time, I have discovered and proven second- and third-order effects of these basic human behaviors on mobile touchscreens.

The user’s focus on the center of the screen is why we use so many list and grid views. They work well, and people focus on and interact with them just fine, by tapping and scrolling. So always place the primary content at the center of the screen, designing with real content from the first. Think about your most-used applications. When you land on a page, you get a list of content—for example, text messages, email messages, stories, videos, photos, or articles—and select the one you want to view or interact with.

Place secondary actions along the top and bottom edges. Tabs along the top or bottom edge of the content area let users switch views or sections. Action buttons let users compose or search for content. Hide tertiary functions behind menus, which users usually launch from one of the corners. Figure 10 summarizes this hierarchy of information design.

Figure 10—Touch-friendly information-design framework
Touch-friendly information design framework

You may have heard that the hamburger menu is wrong and must be eliminated, but this advice goes way too far. As with many design patterns we’re told to avoid, some designers hold this opinion only because the menu icon is sometimes used poorly.

If you rely on users’ navigating to a subsection of your site, hiding the navigation on the menu will, indeed, work poorly. In this case, using tabs to move to a subsection is more effective—though still not terrific because tabs are secondary, not primary, content. Placing the key content at the center—or, instead, architecting an app so it’s not necessary to drill down through categories at all—is a much better solution.

I have been using these standards to design mobile apps and Web sites for a couple years now and have tested several products that employ this layout, with several types of users. The options and functions I’ve placed on the menu work fine—100% of usability test participants found an option on the menu within a few seconds, even users with no mobile experience at all.

Next: Guidelines and Tactics for Mobile Touch Design

In this column, I’ve just touched on my research findings and how I’ve validated them. In my next column, Part 2 of this series, I’ll continue reviewing what we now know about people and their use of touch devices. I’ll explain a simple series of ten common user behaviors and provide design tactics that let you take advantage of each one. Then, in Part 3, I’ll cover five more heuristics for designing for touch in the real world, on any device. 

President of 4ourth Mobile

Mission, Kansas, USA

For his entire 15-year design career, Steven has been documenting design process. He started designing for mobile full time in 2007 when he joined Little Springs Design. Steven’s publications include Designing by Drawing: A Practical Guide to Creating Usable Interactive Design, the O’Reilly book Designing Mobile Interfaces, and an extensive Web site providing mobile design resources to support his book. Steven has led projects on security, account management, content distribution, and communications services for numerous products, in domains ranging from construction supplies to hospital record-keeping. His mobile work has included the design of browsers, ereaders, search, Near Field Communication (NFC), mobile banking, data communications, location services, and operating system overlays. Steven spent eight years with the US mobile operator Sprint and has also worked with AT&T, Qualcomm, Samsung, Skyfire, Bitstream, VivoTech, The Weather Channel, Bank Midwest, IGLTA, Lowe’s, and Hallmark Cards. He runs his own interactive design studio at 4ourth Mobile.
