The Scientists Who Make Apps Addictive

Tech companies use the insights of behaviour design to keep us returning to their products. But some of the psychologists who developed the science of persuasion are worried about how it is being used.

By Ian Leslie

In 1930, a psychologist at Harvard University called B.F. Skinner made a box and placed a hungry rat inside it. The box had a lever on one side. As the rat moved about, it would accidentally knock the lever and, when it did so, a food pellet would drop into the box. After a rat had been put in the box a few times, it learned to go straight to the lever and press it: the reward reinforced the behaviour. Skinner proposed that the same principle applied to any “operant”, rat or man. He called his device the “operant conditioning chamber”. It became known as the Skinner box.

Skinner was the most prominent exponent of a school of psychology called behaviourism, the premise of which was that human behaviour is best understood as a function of incentives and rewards. Let’s not get distracted by the nebulous, impossible-to-observe stuff of thoughts and feelings, said the behaviourists, but focus simply on how the operant’s environment shapes what it does. Understand the box and you understand the behaviour. Design the right box and you can control behaviour.


Skinner turned out to be the last of the pure behaviourists. From the late 1950s onwards, a new generation of scholars redirected the field of psychology back towards internal mental processes, like memory and emotion. But behaviourism never went away completely, and in recent years it has re-emerged in a new form, as an applied discipline deployed by businesses and governments to influence the choices you make every day: what you buy, who you talk to, what you do at work. Its practitioners are particularly interested in how the digital interface – the box in which we spend most of our time today – can shape human decisions. The name of this young discipline is “behaviour design”. Its founding father is B.J. Fogg.

Earlier this year I travelled to Palo Alto to attend a workshop on behaviour design run by Fogg on behalf of his employer, Stanford University. Roaming charges being what they are, I spent a lot of time hooking onto Wi-Fi in coffee bars. The phrase “accept and connect” became so familiar that I started to think of it as a Californian mantra. Accept and connect, accept and connect, accept and connect.

I had never used Uber before, and since I figured there is no better place on Earth to try it out, I opened the app in Starbucks one morning and summoned a driver to take me to Stanford’s campus. Within two minutes, my car pulled up, and an engineering student from Oakland whisked me to my destination. I paid without paying. It felt magical. The workshop was attended by 20 or so executives from America, Brazil and Japan, charged with bringing the secrets of behaviour design home to their employers.

Fogg is 53. He travels everywhere with two cuddly toys, a frog and a monkey, which he introduced to the room at the start of the day. Fogg dings a toy xylophone to signal the end of a break or group exercise. Tall, energetic and tirelessly amiable, he frequently punctuates his speech with peppy exclamations such as “awesome” and “amazing”. As an Englishman, I found this full-beam enthusiasm a little disconcerting at first, but after a while, I learned to appreciate it, just as Europeans who move to California eventually cease missing the seasons and become addicted to sunshine. Besides, Fogg was likeable. His toothy grin and nasal delivery made him endearingly nerdy.

In a phone conversation prior to the workshop, Fogg told me that he read the classics in the course of a master’s degree in the humanities. He never found much in Plato, but strongly identified with Aristotle’s drive to organise and catalogue the world, to see systems and patterns behind the confusion of phenomena. He says that when he read Aristotle’s “Rhetoric”, a treatise on the art of persuasion, “It just struck me, oh my gosh, this stuff is going to be rolled out in tech one day!”

In 1997, during his final year as a doctoral student, Fogg spoke at a conference in Atlanta on the topic of how computers might be used to influence the behaviour of their users. He noted that “interactive technologies” were no longer just tools for work, but had become part of people’s everyday lives: used to manage finances, study and stay healthy. Yet technologists were still focused on the machines they were making rather than on the humans using those machines. What, asked Fogg, if we could design educational software that persuaded students to study for longer or a financial-management programme that encouraged users to save more? Answering such questions, he argued, required the application of insights from psychology.

Fogg presented the results of a simple experiment he had run at Stanford, which showed that people spent longer on a task if they were working on a computer which they felt had previously been helpful to them. In other words, their interaction with the machine followed the same “rule of reciprocity” that psychologists had identified in social life. The experiment was significant, said Fogg, not so much for its specific finding as for what it implied: that computer applications could be methodically designed to exploit the rules of psychology in order to get people to do things they might not otherwise do. In the paper itself, he added a qualification: “Exactly when and where such persuasion is beneficial and ethical should be the topic of further research and debate.”

Fogg called for a new field, sitting at the intersection of computer science and psychology, and proposed a name for it: “captology” (Computers as Persuasive Technologies). Captology later became behaviour design, which is now embedded into the invisible operating system of our everyday lives. The emails that induce you to buy right away, the apps and games that rivet your attention, the online forms that nudge you towards one decision over another: all are designed to hack the human brain and capitalise on its instincts, quirks and flaws. The techniques they use are often crude and blatantly manipulative, but they are getting steadily more refined, and, as they do so, less noticeable.

Fogg’s Atlanta talk provoked strong responses from his audience, falling into two groups: either “This is dangerous. It’s like giving people the tools to construct an atomic bomb”; or “This is amazing. It could be worth billions of dollars.”

The second group has certainly been proved right. Fogg has been called “the millionaire maker”. Numerous Silicon Valley entrepreneurs and engineers have passed through his laboratory at Stanford, and some have made themselves wealthy.

Fogg himself has not made millions of dollars from his insights. He stayed at Stanford, and now does little commercial work. He is increasingly troubled by the thought that those who told him his ideas were dangerous may have been on to something.

At the workshop, Fogg explained the building blocks of his theory of behaviour change. For somebody to do something – whether it’s buying a car, checking an email, or doing 20 press-ups – three things must happen at once. The person must want to do it, they must be able to, and they must be prompted to do it. A trigger – the prompt for the action – is effective only when the person is highly motivated, or the task is very easy. If the task is hard, people end up frustrated; if they’re not motivated, they get annoyed.

One of Fogg’s current students told me about a prototype speech-therapy program he was helping to modify. Talking to its users, he discovered that parents, who really wanted it to work, found it tricky to navigate – they were frustrated. Their children found it easy to use, but weren’t bothered about doing so – they were merely annoyed. Applying Fogg’s framework helped identify a way forward. Parents would get over the action line – the threshold above which a trigger succeeds – if the program was made simpler to use; children, if it felt like a game instead of a lesson.
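
Fogg’s model is qualitative, but its logic is simple enough to sketch in code. In the minimal Python sketch below, the 0-to-1 scales, the numeric threshold standing in for the “action line” and the function names are all inventions for illustration, not Fogg’s own formula:

```python
# A toy rendering of Fogg's behaviour model: an action happens only when
# motivation, ability and a prompt converge. The 0-1 scales and the
# numeric "action line" are illustrative inventions, not Fogg's numbers.

def diagnose(motivation: float, ability: float, prompted: bool) -> str:
    """Classify the outcome of a trigger, Fogg-style."""
    if not prompted:
        return "no action: nothing triggered the behaviour"
    if motivation * ability >= 0.5:  # above the hypothetical action line
        return "action occurs"
    if motivation > ability:
        return "frustrated: wants to act, but it is too hard"
    return "annoyed: could act easily, but doesn't care to"

# The speech-therapy example: same prompt, opposite failure modes.
print(diagnose(motivation=0.9, ability=0.2, prompted=True))  # parents: frustrated
print(diagnose(motivation=0.2, ability=0.9, prompted=True))  # children: annoyed
```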

Frustration, says Fogg, is usually more fixable than annoyance. When we want people to do something our first instinct is usually to try to increase their motivation – to persuade them. Sometimes this works, but more often than not the best route is to make the behaviour easier. One of Fogg’s maxims is, “You can’t get people to do something they don’t want to do.” A politician who wants people to vote for her makes a speech or goes on TV instead of sending a bus to pick voters up from their homes. The bank advertises the quality of its current account instead of reducing the number of clicks required to open one.

When you get to the end of an episode of “House of Cards” on Netflix, the next episode plays automatically unless you tell it to stop. Your motivation is high, because the last episode has left you eager to know what will happen and you are mentally immersed in the world of the show. The level of difficulty is reduced to zero. Actually, less than zero: it is harder to stop than to carry on. Working on the same principle, the British government now “nudges” people into enrolling into workplace pension schemes, by making it the default option rather than presenting it as a choice.

When motivation is high enough, or a task easy enough, people become responsive to triggers such as the vibration of a phone, Facebook’s red dot, the email from the fashion store featuring a time-limited offer on jumpsuits. The trigger, if it is well designed (or “hot”), finds you at exactly the moment you are most eager to take the action. The most important nine words in behaviour design, says Fogg, are, “Put hot triggers in the path of motivated people.”

If you’re triggered to do something you don’t like, you probably won’t return, but if you love it you’ll return repeatedly – and unthinkingly. After my first Uber, I never even thought of getting around Palo Alto any other way. This, says Fogg, is how brands should design for habits. The more immediate and intense a rush of emotion a person feels the first time they use something, the more likely they are to make it an automatic choice. It’s why airlines bring you a glass of champagne the moment you sink into a business-class seat, and why Apple takes enormous care to ensure that a customer’s first encounter with a new phone feels magical.

Such upfront deliveries of dopamine bond users to products. Consider the way Instagram lets you try 12 different filters on your picture, says Fogg. Sure, there’s a functional benefit: the user has control over their images. But the real transaction is emotional: before you even post anything, you get to feel like an artist. Hence another of Fogg’s principles: “Make people feel successful” or, to rephrase it, “Give them superpowers!”


Fogg took ambivalent satisfaction from the example of Instagram, since he felt distantly responsible for it and perhaps distantly guilty. In 2006, two students in Fogg’s class collaborated on a project called Send the Sunshine. Their insight was that one day mobile phones (this was the pre-smartphone era) would be used to send emotions: if your friend was in a place where the weather wasn’t good and you were standing in sunshine, your phone could prompt you to take a picture and send it to them to cheer them up. One of the two students, Mike Krieger, went on to co-found Instagram, where over 400m users now share sunrises, sunsets and selfies.

Fogg built his theory in the years before social media conquered the world. Facebook, Instagram and others have raised behaviour design to levels of sophistication he could hardly have envisaged. Social-media apps plumb one of our deepest wells of motivation. The human brain releases pleasurable, habit-forming chemicals in response to social interactions, even to mere simulacra of them, and the hottest triggers are other people: you and your friends or followers are constantly prompting each other to use the service for longer.

Fogg introduced me to one of his former students, Noelle Moseley, who now consults for technology companies. She told me that she had recently interviewed heavy users of Instagram: young women who cultivated different personas on different social networks. Their aim was to get as many followers as possible – that was their definition of success. Every new follow and every comment delivered an emotional hit. But a life spent chasing hits didn’t feel good. Moseley’s respondents spent all their hours thinking about how to organise their lives in order to take pictures they could post to each persona, which meant they weren’t able to enjoy whatever they were doing, which made them stressed and unhappy. “It was like a sickness,” said Moseley.

B.J. Fogg comes from a Mormon family, which has endowed him with his bulletproof geniality and also with a strong need to believe that his work is making the world a better place. The only times during our conversations when his tone darkened were when he considered the misuse of his ideas in the commercial sphere. He worries that companies like Instagram and Facebook are using behaviour design merely to keep consumers in thrall to them. One of his alumni, Nir Eyal, went on to write a successful book, aimed at tech entrepreneurs, called “Hooked: How to Build Habit-Forming Products”.

“I look at some of my former students and I wonder if they’re really trying to make the world better, or just make money,” said Fogg. “What I always wanted to do was un-enslave people from technology.”

When B.F. Skinner performed further experiments with his box, he discovered that if the rat got the same reward each time, it pressed the lever only when it was hungry. The way to maximise the number of times the rat pressed the lever was to vary the rewards it received. If it didn’t know whether it was going to get one pellet, or none, or several when it pressed the lever, then it pressed the lever over and over again. It became psychologically hooked. This became known as the principle of variable rewards.
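
The principle is easy to demonstrate with a simulation. In this minimal Python sketch (the reward values and probabilities are invented), the fixed and variable schedules pay out the same number of pellets on average; all that differs is the unpredictability:

```python
import random

# A toy Skinner box: a fixed schedule (one pellet per press) versus a
# variable schedule with the same expected payout. Weights are invented.

def fixed_reward() -> int:
    return 1  # one pellet, every press

def variable_reward() -> int:
    # 40% nothing, 40% one pellet, 20% a three-pellet jackpot:
    # expected value = 0.4*0 + 0.4*1 + 0.2*3 = 1 pellet per press
    return random.choices([0, 1, 3], weights=[2, 2, 1])[0]

random.seed(0)
presses = 10_000
print("fixed schedule:   ", sum(fixed_reward() for _ in range(presses)))
print("variable schedule:", sum(variable_reward() for _ in range(presses)))
# Both totals land near 10,000; the schedules differ only in their
# unpredictability, which is what keeps the operant pressing.
```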

In “Hooked”, Eyal argues that successful digital products incorporate Skinner’s insight. Facebook, Pinterest and others tap into basic human needs for connection, approval and affirmation, and dispense their rewards on a variable schedule. Every time we open Instagram or Snapchat or Tinder, we never know if someone will have liked our photo, or left a comment, or written a funny status update, or dropped us a message. So we keep tapping the red dot, swiping left and scrolling down.

Eyal has added his own twists to Fogg’s model of behavioural change. “BJ thinks of triggers as external factors,” Eyal told me. “My argument is that the triggers are internal.” An app succeeds, he says, when it meets the user’s most basic emotional needs even before she has become consciously aware of them. “When you’re feeling uncertain, before you ask why you’re uncertain, you Google. When you’re lonely, before you’re even conscious of feeling it, you go to Facebook. Before you know you’re bored, you’re on YouTube. Nothing tells you to do these things. The users trigger themselves.”

Eyal’s emphasis on unthinking choices raises a question about behaviour design. If our behaviours are being designed for us, to whom are the designers responsible? That’s what Tristan Harris, another former student of Fogg’s, wants everyone to think about. “BJ founded the field of behaviour design,” he told me. “But he doesn’t have an answer to the ethics of it. That’s what I’m looking for.”

Harris was Mike Krieger’s collaborator on Send the Sunshine in Fogg’s class of 2006. Like Krieger, Harris went on to create a real-world app, Apture, which was designed to give instant explanations of complex concepts to online readers: a box would pop up when the user held their mouse over a term they wanted explaining. Apture had some success without ever quite taking off, and in 2011 Google acquired Harris’s startup.

The money was nice but it felt like a defeat. Harris believed in his mission to explain, yet he could not persuade publishers that incorporating his app would lead to people spending more time on their sites. He came to believe that the internet’s potential to inform and enlighten was at loggerheads with the commercial imperative to seize and hold the attention of users by any means possible. “The job of these companies is to hook people, and they do that by hijacking our psychological vulnerabilities.”

Facebook gives your new profile photo a special prominence in the news feeds of your friends, because it knows that this is a moment when you are vulnerable to social approval, and that “likes” and comments will draw you in repeatedly. LinkedIn sends you an invitation to connect, which gives you a little rush of dopamine – how important I must be! – even though that person probably clicked unthinkingly on a menu of suggested contacts. Unconscious impulses are transformed into social obligations, which compel attention, which is sold for cash.

After working for Google for a year or so, Harris resigned in order to pursue research into the ethics of the digital economy. “I wanted to know what responsibility comes with the ability to influence the psychology of a billion people. What’s the Hippocratic oath?” Before leaving, he gave a farewell presentation to Google’s staff in which he argued that they needed to see themselves as moral stewards of the attention of billions of people. Unexpectedly, the slides from his talk became a viral hit inside the company, travelling all the way to the boardroom. Harris was persuaded to stay on and pursue his research at Google, which created a new job title for him: design ethicist and product philosopher.

After a while, Harris realised that although his colleagues were listening politely, they would never take his message seriously without pressure from the outside. He left Google for good earlier this year to become a writer and advocate, on a mission to wake the world up to how digital technology is diminishing the human capacity for making free choices. “Behaviour design can seem lightweight, because it’s mostly just clicking on screens. But what happens when you magnify that into an entire global economy? Then it becomes about power.”

Harris talks fast and with an edgy intensity. One of his mantras is, “Whoever controls the menu controls the choices.” The news we see, the friends we hear from, the jobs we hear about, the restaurants we consider, even our potential romantic partners – all of them are, increasingly, filtered through a few widespread apps, each of which comes with a menu of options. That gives the menu designer enormous power. As any restaurateur, croupier or marketer can tell you, options can be tilted to influence choices. Pick one of these three prices, says the retailer, knowing that at least 70% of us will pick the middle one.

Harris’s peers have, he says, become absurdly powerful, albeit by accident. Menus used by billions of people are designed by a small group of men, aged between 25 and 35, who studied computer science and live in San Francisco. “What’s the moral operating system running in their head?” Harris asks. “Are they thinking about their ethical responsibility? Do they even have the time to think about it?”

The more influence that tech products exert over our behaviour, the less control we have over ourselves. “Companies say, we’re just getting better at giving people what they want. But the average person checks their phone 150 times a day. Is each one a conscious choice? No. Companies are getting better at getting people to make the choices they want them to make.”

In “Addiction by Design”, her remarkable study of machine gambling in Las Vegas, Natasha Dow Schüll, an anthropologist, quotes an anonymous contributor to a website for recovering addicts. “Slot machines are just Skinner boxes for people! Why they keep you transfixed is not really a big mystery. The machine is designed to do just that.” The gambling industry is a pioneer of behaviour design. Slot machines, in particular, are built to exploit the compelling power of variable rewards. The gambler pulls the lever without knowing what she will get or whether she will win anything at all, and that makes her want to pull it again.

The capacity of slot machines to keep people transfixed is now the engine of Las Vegas’s economy. Over the last 20 years, roulette wheels and craps tables have been swept away to make space for a new generation of machines: no longer mechanical contraptions (they have no lever), they contain complex computers produced in collaborations between software engineers, mathematicians, script writers and graphic artists.

The casinos aim to maximise what they call “time-on-device”. The environment in which the machines sit is designed to keep people playing. Gamblers can order drinks and food from the screen. Lighting, decor, noise levels, even the way the machines smell – everything is meticulously calibrated. Not just the brightness, but also the angle of the lighting is deliberate: research has found that light drains gamblers’ energy fastest when it hits their foreheads.

But it is the variation in rewards that is the key to time-on-device. The machines are programmed to create near misses: winning symbols appear just above or below the “payline” far more often than chance alone would dictate. The player’s losses are thus reframed as potential wins, motivating her to try again. Mathematicians design payout schedules to ensure that people keep playing while they steadily lose money. Alternative schedules are matched to different types of players, with differing appetites for risk: some gamblers are drawn towards the possibility of big wins and big losses, others prefer a drip-feed of little payouts (as a game designer told Schüll, “Some people want to be bled slowly”). The mathematicians are constantly refining their models and experimenting with new ones, wrapping their formulae around the contours of the cerebral cortex.
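
The near-miss trick itself is mechanically simple. In the toy “virtual reel” sketched below in Python – every symbol and weight is invented – the stops adjacent to the jackpot symbol are deliberately over-represented, so winning symbols land just off the payline far more often than a fair reel would allow:

```python
import random

# A toy virtual reel: physical symbols are mapped to unequal numbers of
# virtual stops, so the neighbours of JACKPOT come up disproportionately
# often and the player sees constant near misses. All weights invented.

VIRTUAL_REEL = (
    ["cherry"] * 3 +
    ["lemon"] * 10 +   # sits just above JACKPOT on the physical reel
    ["JACKPOT"] * 1 +
    ["bell"] * 10 +    # sits just below JACKPOT
    ["bar"] * 3
)

random.seed(1)
spins = [random.choice(VIRTUAL_REEL) for _ in range(10_000)]
print("jackpots:   ", spins.count("JACKPOT"))                      # rare
print("near misses:", spins.count("lemon") + spins.count("bell"))  # constant
```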

Gamblers themselves talk about “the machine zone”: a mental state in which their attention is locked into the screen in front of them, and the rest of the world fades away. “You’re in a trance,” one gambler explains to Schüll. “The zone is like a magnet,” says another. “It just pulls you in and holds you there.”

A player who is feeling frustrated and considering quitting for the day might receive a tap on the shoulder from a “luck ambassador”, dispensing tickets to shows or gambling coupons. What the player doesn’t know is that data from his game-playing has been fed into an algorithm that calculates how much that player can lose and still feel satisfied, and how close he is to the “pain point”. The offer of a free meal at the steakhouse converts his pain into pleasure, refreshing his motivation to carry on.
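
What such an algorithm might look like is not hard to imagine. Here is a deliberately crude, hypothetical sketch; the logic, the names and the threshold are my inventions, not any casino’s actual code:

```python
# A hypothetical "pain point" monitor: estimate, from past sessions, the
# loss at which a player tends to quit, and dispatch a "luck ambassador"
# with a comp as they approach it. Every name and number is invented.

def estimate_pain_point(past_quit_losses: list[float]) -> float:
    """The average loss at which this player has walked away before."""
    return sum(past_quit_losses) / len(past_quit_losses)

def should_send_ambassador(session_loss: float, pain_point: float,
                           threshold: float = 0.8) -> bool:
    """Fire the comp offer when the player nears their pain point."""
    return session_loss >= threshold * pain_point

pain_point = estimate_pain_point([180.0, 220.0, 200.0])  # = 200.0
print(should_send_ambassador(session_loss=170.0, pain_point=pain_point))  # True
```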

Schüll’s book, which was published in 2012, won applause for its exposure of the dark side of machine gambling. But some readers spotted opportunities in it. Schüll told me that she received an approach from an online education company interested in adopting the idea of “luck ambassadors”. Where is the pain point for a student who isn’t getting the answers right, and what does she need to get over it instead of giving up? Schüll found herself invited to speak at conferences attended by marketers and entrepreneurs, including one on habit formation organised by Nir Eyal.

Las Vegas is a microcosm. “The world is turning into this giant Skinner box for the self,” Schüll told me. “The experience that is being designed for in banking or health care is the same as in Candy Crush. It’s about looping people into these flows of incentive and reward. Your coffee at Starbucks, your education software, your credit card, the meds you need for your diabetes. Every consumer interface is becoming like a slot machine.”

These days, of course, we all carry slot machines in our pockets.

Natasha Dow Schüll accepted her invitation to speak at Eyal’s conference. “It was strange. Nobody in that room wanted to be addicting anyone – they were hipsters from San Francisco, after all. Nice people. But at the same time, their charter is to hook people for startups.” Tristan Harris thinks most people in the world of technology are unwilling to confront the inherent tension in what they do. “Nir and BJ are nice guys. But they overestimate the extent to which they’re empowering people, as opposed to helping to hook them.”

Silicon Valley is bathed in sunshine. The people who work there are optimists who believe in the power of their products to extend human potential. Like Fogg, Eyal sincerely wants to make the world better. “I get almost religious about product design. Product-makers have the ability to improve people’s lives, to find the points when people are in pain, and help them.” He rejects the idea that trying to hook people is inherently dubious. “Habits can be good or bad, and technology has the ability to create healthy habits. If the products are getting better at drawing you in, that’s not a problem: that’s progress.”

The gambling executives Schüll interviewed were not evil. They believe they are simply offering customers more and better ways to get what they want. Nobody was being coerced or deceived into parting with their money. As one executive put it, in a coincidental echo of Fogg, “You can’t make people do something they don’t want to do.” But the relationship, as Schüll points out, is asymmetric. For the gamblers, the zone is an end in itself; for the gambling industry, it is a means of extracting profit.

Tristan Harris sees the entire digital economy in similar terms. No matter how useful the products, the system itself is tilted in favour of its designers. The house always wins. “There is a fundamental conflict between what people need and what companies need,” he explained. Harris isn’t suggesting that tech companies are engaged in a nefarious plot to take over our minds – Google and Apple didn’t set out to make phones like slot machines. But the imperative of the system is to maximise time-on-device, and it turns out the best way of doing that is to dispense rewards to the operant on a variable schedule.

It also means shutting the door to the box. Things that aren’t important to a person are bound up with things that are very important: the machine on which you play games and read celebrity gossip is the one on which you’ll find out if your daughter has fallen ill. So you can’t turn it off or leave it behind. Besides, you might miss a magic moment on Instagram.

“There are people who worry about AI [artificial intelligence],” Harris said. “They ask whether we can maximise its potential without harming human interests. But AI is already here. It’s called the internet. We’ve unleashed this black box which is always developing new ways to persuade us to do things, by moving us from one trance to the next.”

In theory, we can all opt out of the loops of incentive and reward which encircle us, but few of us choose to. It is just so much easier to accept and connect. If we are captives of captology, then we are willing ones.

Ian Leslie works in advertising. He has written two books, “Born Liars” and “Curious”.

From 1843magazine.com.