brain hacking
a.k.a. addiction coding, brain hack, hacking the brain

Derived from hacking and hijacking, brain hacking refers to the way Silicon Valley engineers smartphones, apps, and social media to get users hooked. Also known as "addiction coding," the practice is deliberate: big tech knows what it's doing, and it wants checking in 24/7 to become a habit for you and your family.
Historical perspective: This term was deemed "Internet Word of the Year for 2018" amid the dawning realization that even though not all tech bells and whistles are designed to "brain hack," they may end up working that way. You know when you're on a mobile social media site and you pull down on the news feed to get it to refresh... and you see that little circle spinning clockwise...? That's called pull-to-refresh, and the engineer who invented it is sorry he did, because people are now addicted. Please read the full story below, as seen in The Week.
The tech engineers who helped make Facebook and Twitter so addictive are now unplugging from the internet, journalist Paul Lewis reported in The Guardian in 2017, concerned that their creations are brain hacking users’ minds. This excerpted article explains how Silicon Valley hooks us, and why we may be the last generation that can remember life before social media.
Justin Rosenstein had tweaked his laptop’s operating system to block Reddit, banned himself from Snapchat, which he compares to heroin, and imposed limits on his use of Facebook. But even that wasn’t enough. In August, the 34-year-old tech executive took a more radical step to restrict his use of social media and other addictive technologies: Rosenstein purchased a new iPhone and instructed his assistant to set up a parental-control feature to prevent him from downloading apps.
He was particularly aware of the allure of Facebook “likes,” which he describes as “bright dings of pseudo-pleasure” that can be as hollow as they are seductive. And Rosenstein should know: He was the Facebook engineer who created the Like button in the first place. A decade after he stayed up all night coding a prototype of what was then called an “awesome” button, Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called attention economy: an internet shaped around the demands of advertising.
These refuseniks are rarely founders or chief executives, who have little incentive to deviate from the mantra that their companies are making the world a better place. Instead, they tend to have worked a rung or two down the corporate ladder: designers, engineers, and product managers who, like Rosenstein, several years ago put in place the building blocks of a digital world from which they are now trying to disentangle themselves. Rosenstein, who also helped create Gchat during a stint at Google and now leads a San Francisco–based company that improves office productivity, appears most concerned about the psychological effects on people who, research shows, touch, swipe, or tap their phone 2,617 times a day.
There is concern that as well as addicting users, technology is contributing to so-called "continuous partial attention," severely limiting people’s ability to focus, and possibly lowering IQ. One study showed that the mere presence of smartphones damages cognitive capacity, even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”

But those concerns are trivial compared with the devastating impact upon the political system that some of Rosenstein’s peers believe can be attributed to the rise of social media and the attention-based market that drives it. Drawing a straight line between addiction to social media and political earthquakes like Brexit and the rise of Donald Trump, they contend that digital forces have upended the political system and, left unchecked, could even render democracy as we know it obsolete.
Rosenstein’s colleague, Leah Pearlman, then a product manager at Facebook, announced the Facebook “like” in a 2009 blog post. Now 35 and an illustrator, Pearlman, too, has grown disaffected with “likes” and other addictive feedback loops. “One reason I think it is particularly important for us to talk about this now is that we may be the last generation that can remember life before,” Rosenstein says. In April 2017, designers, programmers, and tech entrepreneurs gathered at a conference center on the shore of San Francisco Bay, paying $1,700 for a course taught by Nir Eyal on how to manipulate people into habitual use of their products.
Eyal, 39, the author of Hooked: How to Build Habit-Forming Products, has spent several years consulting for the tech industry, teaching techniques he developed by closely studying how the Silicon Valley giants operate. “The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended.”
Eyal wanted to address the concern that technological manipulation was somehow harmful or immoral. He was dismissive of those who compare tech addiction to drugs. “We’re not freebasing Facebook and injecting Instagram here,” he said. He flashed up a slide of a shelf filled with sugary baked goods. “Just as we shouldn’t blame the baker for making such delicious treats, we can’t blame tech makers for making their products so good we want to use them,” he said. Without irony, Eyal finished his talk with some personal tips for resisting the lure of technology. He told his audience he uses a Chrome extension, called DF YouTube, “which scrubs out a lot of those external triggers” he writes about in his book, and he recommended an app called Pocket Points that “rewards you for staying off your phone when you need to focus.”
Finally, Eyal confided the lengths he goes to protect his own family: He has installed in his house an outlet timer that cuts off access to the internet at a set time every day. “The idea is to remember that we are not powerless,” he said. “We are in control.” But are we? If the people who built these technologies are taking such radical steps to wean themselves off them, can the rest of us be expected to exercise our free will?
Not according to Tristan Harris, a 33-year-old former Google employee turned vocal critic of the tech industry. “All of us are jacked into this system,” he says. “All of our minds can be hijacked. Our choices are not as free as we think they are.” Harris, who has been branded “the closest thing Silicon Valley has to a conscience,” insists that billions of people have little choice over whether they use these now ubiquitous technologies, and are largely unaware of the invisible ways in which a small number of people in Silicon Valley are shaping their lives.
A graduate of Stanford University, Harris studied under B.J. Fogg, a behavioral psychologist revered in tech circles for mastering the ways technological design can be used to persuade people. Harris is the student who went rogue, lifting the curtain on the vast powers accumulated by tech companies and the ways they are using that influence. “I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.”
It all began in 2013, when he was working as a product manager at Google and circulated a thought-provoking memo, “A Call to Minimize Distraction & Respect Users’ Attention,” to 10 close colleagues. It struck a chord, spreading to some 5,000 Google employees, including senior executives who rewarded Harris with an impressive-sounding new title: He was to be Google’s in-house design ethicist and product philosopher. Looking back, Harris sees that he was promoted into a marginal role. Still, he adds, “I got to sit in a corner and think and read and understand.” He explored how LinkedIn exploits a need for social reciprocity to widen its network; how YouTube and Netflix auto-play videos and next episodes, depriving users of a choice about whether or not they want to keep watching; how Snapchat created its addictive Snapstreaks feature, encouraging near-constant communication between its mostly teenage users.
The techniques these companies use are not always generic: They can be algorithmically tailored to each person. An internal Facebook report leaked in 2017 revealed that the company can identify when teens feel “insecure” and “worthless.” Such granular information, Harris adds, is “a perfect model of what buttons you can push.” Tech companies can exploit such vulnerabilities to keep people hooked, manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, or in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder. “There’s no ethics,” he says.
Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident. A friend at Facebook told Harris that designers initially decided the notification icon, which alerts people to new activity such as “friend requests,” should be blue. “But no one used it,” Harris says. “Then they switched it to red, and of course everyone used it.” That red icon is now everywhere. When smartphone users glance at their phones, dozens or hundreds of times a day, they are confronted with small red dots beside their apps, pleading to be tapped. “Red is a trigger color,” Harris says. “That’s why it is used as an alarm signal.” The most seductive design, Harris explains, exploits the same psychological susceptibility that makes gambling so compulsive: variable rewards. When we tap those apps with red icons, we don’t know whether we’ll discover an interesting email, an avalanche of “likes,” or nothing at all.
This is what explains why the pull-to-refresh mechanism, whereby users swipe down, pause, and wait to see what content appears, so rapidly became one of the most addictive and ubiquitous design features in modern technology. “Each time you’re swiping down, it’s like a slot machine,” Harris says. “You don’t know what’s coming next.” The designer who created the pull-to-refresh mechanism is Loren Brichter, 32, who says he never intended the design to be addictive. But he did not dispute the slot machine comparison. “I agree 100 percent,” he says. “I have two kids now, and I regret every minute that I’m not paying attention to them because my phone has sucked me in.”
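To make the slot-machine comparison concrete, here is a minimal Python sketch of a variable-ratio reward schedule, the reinforcement pattern Harris describes. It is purely illustrative: the outcomes and probabilities are invented for demonstration and are not drawn from any company's actual code.

```python
import random

# Illustrative only: a variable-ratio reward schedule, the same
# reinforcement pattern used by slot machines. Each "check" of a
# feed pays off unpredictably. Outcomes and odds are invented.
REWARDS = [
    ("nothing at all", 0.60),
    ("an interesting email", 0.25),
    ("an avalanche of likes", 0.15),
]

def pull_to_refresh() -> str:
    """Simulate one feed refresh: the payoff is unknowable in advance."""
    roll = random.random()  # uniform draw in [0, 1)
    cumulative = 0.0
    for outcome, probability in REWARDS:
        cumulative += probability
        if roll < cumulative:
            return outcome
    return REWARDS[-1][0]  # guard against floating-point rounding

if __name__ == "__main__":
    # Ten refreshes: the unpredictability of the payoff, not the
    # payoff itself, is what reinforces the checking behavior.
    for check in range(1, 11):
        print(f"refresh {check}: {pull_to_refresh()}")
```

The point of the sketch is the schedule, not the payload: because the reward is intermittent and unpredictable, the act of checking is itself reinforced, which is the dynamic behind both the red notification dot and the pull-to-refresh spinner.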
In the wake of Donald Trump’s stunning electoral victory, many were quick to question the role of so-called fake news on Facebook, Russian-created Twitter bots, and the data-centric targeting efforts that companies such as Cambridge Analytica used to sway voters. But James Williams, who built the metrics system for Google’s global-search advertising business and is now doing a Ph.D. at Oxford in persuasive design, sees those factors as symptoms of a deeper problem. It was not just shady or bad actors who were exploiting the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterful at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.
In a blog post published a month before the U.S. election, Williams sounded the alarm on an issue he argued was a “far more consequential question” than whether Trump reached the White House. The reality-TV star’s campaign, he said, had heralded a watershed in which “the new, digitally supercharged dynamics of the attention economy have finally crossed a threshold and become manifest in the political realm.” Williams saw a similar dynamic unfold months earlier, during the Brexit campaign, when the attention economy appeared to him biased in favor of the emotional, identity-based case for the U.K. leaving the European Union. He stresses these dynamics are not unique to the political right: They also play a role in the popularity of left-wing politicians, such as Bernie Sanders, and the frequent internet outrage over issues that ignite fury among progressives.
All of which, Williams says, is not only distorting the way we view politics but, over time, may be changing the way we think, making us less rational and more impulsive. “We’ve habituated ourselves to a perpetual cognitive style of outrage,” he says. Williams argues the recent fixation on the surveillance state fictionalized by George Orwell is likely misplaced. It was another English science fiction writer, Aldous Huxley, who provided the more prescient observation when he warned that Orwellian-style coercion was less of a threat to democracy than the subtle power of “man’s almost infinite appetite for distractions.”
“The dynamics of the attention economy are structurally set up to undermine the human will,” Williams says. “If politics is an expression of our human will, then the attention economy is undermining the assumptions that democracy rests on.” If Facebook, Google, and Twitter are chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions? “Will we be able to recognize it, if and when it happens?” Williams replies. “And if we can’t, then how do we know it hasn’t happened already?”