Top Quotes: “The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World” — Max Fisher

Austin Rose
47 min read · May 4, 2024

Introduction

“In Myanmar, the United Nations had formally accused Facebook of allowing its technology to help provoke one of the worst genocides since World War II.”

“A mother who accepts that vaccines are safe has little reason to spend much time discussing the subject online. Like-minded parenting groups she joins, while large, might be relatively quiet. But a mother who suspects a vast medical conspiracy imperiling her children, DiResta saw, might spend hours researching the subject. She is also likely to seek out allies, sharing information and coordinating action to fight back. To the A.I. governing a social media platform, the conclusion is obvious: moms interested in health issues will come to spend vastly more time online if they join anti-vaccine groups. Therefore, promoting them, through whatever method wins those users’ notice, will boost engagement. If she was right, DiResta knew, then Facebook wasn’t just indulging anti-vaccine extremists. It was creating them.”
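DiResta’s inference reads like an objective function. Below is a minimal sketch of that logic in Python; every name and number is hypothetical, and nothing here is Facebook’s actual system, only an illustration of what ranking groups purely on predicted engagement would do:

```python
from dataclasses import dataclass

# Toy model of an engagement-maximizing group recommender. It illustrates
# DiResta's point: if conspiracy-minded users spend far more time posting
# and recruiting, a ranker that optimizes engagement alone will steer
# every health-interested user toward the conspiracy group.

@dataclass
class Group:
    name: str
    avg_minutes_per_member_per_day: float  # observed engagement signal

def recommend(groups: list[Group], k: int = 1) -> list[Group]:
    # Rank purely on engagement; accuracy and safety never enter the score.
    return sorted(groups, key=lambda g: g.avg_minutes_per_member_per_day,
                  reverse=True)[:k]

groups = [
    Group("Parenting Tips", 4.0),           # accepts vaccines, little to discuss
    Group("Vaccine Truth Warriors", 55.0),  # hours of "research" and recruiting
]
print([g.name for g in recommend(groups)])  # -> ['Vaccine Truth Warriors']
```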

Origins of Silicon Valley

“Less than a century ago, the Santa Clara Valley, in central California, was a sleepy expanse of fruit orchards and canneries, specked by the occasional oil derrick. That began to change in 1941, when the Japanese navy struck Pearl Harbor, setting in motion a series of events that remade this backwater into one of the greatest concentrations of wealth the world has ever known.

The story of that transformation, which bears little resemblance to the hacker legends or dorm-room tales that pass for Silicon Valley’s mostly self-invented lore, instilled the Valley with cultural and economic traits that were built into the products that increasingly rule our world. And it began with a wave of pioneers who played a role as crucial as any of the engineers or CEOs who came after them: the military-industrial complex.

After Pearl Harbor, the Pentagon, preparing to push into the Pacific but fearing another surprise attack, dispersed military production and research across parts of the West Coast that still had a touch of the frontier to them. One such location was Moffett Field, a largely disused air-base on a protected bay, shielded by the Santa Cruz Mountains. When the war ended, the war machine stayed, repurposed for the ever-escalating standoff with the Soviet Union. Planning for nuclear war, the Pentagon encouraged contractors to shift vital projects away from major population centers. The aerospace giant Lockheed complied, moving its missiles and space division to the quiet Santa Clara Valley, just behind hangar three on Moffett Field. Much of the Cold War arms race was conducted from its campus. Apple co-founder Steve Wozniak, like many of his era, grew up watching a parent head to Lockheed every morning.

Equally important was an unusual new academic research center, just a few miles away. Frederick Terman, the son of a psychology professor at then-unremarkable Stanford University, spent World War II at Harvard’s labs, overseeing joint military-academic research projects. He returned home with an idea: that this model continue into peacetime, with university scientists cooperating instead with private companies. He established the Stanford Research Park, where companies could work alongside academic researchers.

With Cold War contractors already next door, there were plenty of takers. The arrangement drew talented scientists and graduate students from back East, offering them the chance to get in on a lucrative patent or startup. University research departments usually toil, at least in theory, on behalf of the greater good. Stanford blurred the line between academic and for-profit work, a development that became core to the Silicon Valley worldview, absorbed and propagated by countless companies cycling through the Research Park. Hitting it big in the tech business and advancing human welfare, the thinking went, were not only compatible, they were one and the same.

These conditions made 1950s Santa Clara what Margaret O’Mara, a prominent historian of Silicon Valley, has called “a silicon Galápagos.” Much as those islands’ peculiar geology and extreme isolation produced one-of-a-kind bird and lizard species, the Valley’s peculiar conditions produced ways of doing business and of seeing the world that could not have flourished anywhere else.”

“The little industry thrived thanks to the mass of engineers already in town for Lockheed, ensuring top-flight recruits for any promising startup. And the Stanford Research Park put cutting-edge research within easy reach.

That pool of talent, money, and technology — the three essential ingredients — would be kept in the Valley, and the rest of the world kept out, by an unusual funding practice: venture capitalism. Wall Street money mostly stayed away. The products were too esoteric and the market too opaque for outside financiers. Seemingly the only people able to identify promising ideas, the engineers themselves, provided startup funding. Someone who’d made some money on their own project would hear about a new widget getting designed across town and grant seed money — venture capital — for a percentage stake.

The arrangement went beyond money. An effective venture capitalist, to safeguard an investment, would often take a seat on the company’s board, help select the executive team, even personally mentor the founder. And venture capitalists tended to fund people whom they trusted — which meant people they knew personally or who looked and talked like them. This meant that each class of successful engineers reified their strengths, as well as their biases and blind spots, in the next, like an isolated species whose traits become more pronounced with each subsequent generation.”

“Denying employees pulling all-nighters a chance to retire rich in their twenties left Zuckerberg under tremendous pressure not only to turn Facebook around but to succeed so wildly that Yahoo’s billion would seem small.

Part two of his two-part plan was to eventually open up Facebook to anyone. But the failed expansion to workplaces made it uncertain that would succeed, and might even be counterproductive if it drove out college kids, which was why so much rested on part one. He would overhaul Facebook’s homepage to show each user a personalized feed of what their friends were up to on the site. Until then, you had to check each profile or group manually for any activity. Now, if one friend changed her relationship status, another posted about bad pizza in the cafeteria, and another signed up for an event, all of that would be reported on your homepage.

That stream of updates had a name: the news feed. It was presented as a never-ending party attended by everyone you knew. But to some users it felt like being forced into a panopticon, where everyone had total, unblinking visibility into the digital lives of everyone else. Facebook groups with names like “Students Against Facebook News Feed” cropped up. Nothing tangible happened in the groups. Joining signaled your agreement; that was it. But because of the site redesign, each time someone joined, all of that person’s friends got a notification on their feed alerting them. With a tap of the mouse, they could join, too, which would be broadcast in turn to their friends. Within a few hours, the groups were everywhere. One attracted 100,000 members on its first day and, by the end of the week, nearly a million.

In reality, only a minority of users ever joined. But proliferating updates made them look like an overwhelming majority. And the news feed rendered each lazy click of the “join” button as an impassioned shout: “Against News Feed” or “I HATE FACEBOOK.” The appearance of widespread anger, therefore, was an illusion. But human instincts to conform run deep. When people think something has become a matter of consensus, psychologists have found, they tend not only to go along, but to internalize that sentiment as their own.

Soon, outrage became action. Tens of thousands emailed Facebook customer service. By the next morning, satellite TV trucks besieged Facebook’s Palo Alto office, as did enough protesters that police asked the company to consider switching off whatever had caused such controversy. Some within Facebook agreed. The crisis was calmed externally with a testy public apology from Zuckerberg — “Calm down. Breathe. We hear you” — and, internally, with an ironic realization: the outrage was being ginned up by the very Facebook product that users were railing against.

That digital amplification had tricked Facebook’s users, and even its leadership, into misperceiving the platform’s loudest voices as representing everyone, growing a flicker of anger into a wildfire. But, crucially, it had also done something else: driven engagement up. Way up. In an industry where user engagement is the primary metric of success, and in a company eager to prove that turning down Yahoo’s billion-dollar overture had been more than hubris, the news feed’s distortions were not just tolerated, they were embraced. Facebook soon allowed anyone to register for the site. User growth rates, which had barely budged during the prior expansion round, exploded by 600 or 700 percent.”
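The dynamic Fisher describes, a handful of joins amplified into apparent consensus, is easy to simulate. A toy cascade model (hypothetical parameters, not Facebook’s code) shows how broadcasting every join to every friend can make a small minority look like everyone:

```python
import random

# Toy cascade: every join is broadcast to the joiner's friends, and each
# friend who sees it joins with small probability. Even a low conversion
# rate exposes most of the network within a few hops.
random.seed(0)
N, FRIENDS, P_JOIN = 100_000, 50, 0.05   # hypothetical network parameters

members, seen, frontier = {0}, {0}, [0]  # user 0 founds the protest group
while frontier:
    nxt = []
    for user in frontier:
        for friend in random.sample(range(N), FRIENDS):  # random friend graph
            if friend in seen:
                continue
            seen.add(friend)             # friend's feed: "X joined the group"
            if random.random() < P_JOIN:
                members.add(friend)
                nxt.append(friend)
    frontier = nxt

print(f"saw the group: {len(seen):,}; joined: {len(members):,}")
# Typical run: roughly 90 percent of users see the group while under
# 5 percent join, yet every feed suggests the outrage is universal.
```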

“Eyal describes a hypothetical woman, Barbra, who logs on to Facebook to see a photo uploaded by a family member. As she clicks through more photos or comments in response, her brain conflates feeling connected to people she loves with the bleeps and flashes of Facebook’s interface. “Over time,” Eyal writes, “Barbra begins to associate Facebook with her need for social connection.” She learns to serve that need with a behavior — using Facebook — that in fact will rarely fulfill it.”

Myanmar

“WHAT HAPPENS WHEN an entire society goes online at once, transitioning overnight from life without social media to one dominated by it? Such an experiment might sound impossible, but it happened. Its name is Myanmar.

“I’m convinced that you all are in for the ride of your life right now,” Eric Schmidt, Google’s longtime CEO, said.”

“A paranoid military junta imposed near-total bans on the internet, cell phones, foreign media, and international travel. Torture and repressive violence were executed with the worst combination of incompetence and cruelty. In 2011, the aging leader was replaced with yet another dour-faced general, Thein Sein, but Thein Sein turned out to have reformist leanings. He urged exiles to return home, eased media restrictions, and released political prisoners. He distanced himself from China, Myanmar’s increasingly imperious northern neighbor, and opened talks with the United States. Sanctions were lifted and elections scheduled; in 2012, Barack Obama became the first sitting U.S. president to visit.

A supporting but highly visible player in the country’s opening, welcomed by both Myanmar and American leaders, was Silicon Valley. Rapidly bringing the country online would, they promised, modernize its economy and empower its 50 million citizens, effectively locking in the transition to democracy.”

“Myanmar’s leaders believed in Silicon Valley’s vision as well. A state-run newspaper admonished its citizens that “a person without a Facebook identity is like a person without a home address.” The country moved online almost instantaneously. From 2012 to 2015, internet-adoption rates exploded from 0.5 percent to 40 percent, mostly through cheap smartphones. SIM card prices dropped from $1,500 to $1.50.

Facebook played a prominent role. Through deals with local companies, it arranged for smartphones to come preloaded with a stripped-down Facebook app. In poorer countries like Myanmar, where average incomes are around $3 a day, cell-phone data can be prohibitively expensive. To overcome this obstacle and thereby win the race to capture the world’s poorest two or three billion customers, Facebook and other American tech companies began “zero-rating” — essentially, subsidizing the entire population by striking deals with local carriers to waive charges for any data used via those companies’ apps. Myanmar was an early test case and, for Facebook, a staggering success. A huge proportion of the country learned to message and browse the web exclusively through Facebook, so much so that many there remain unaware that any other way to communicate or read news online exists.”

“Shuttling between interviews with politicians and activists, I came to see Myanmar’s future as shakier than it had been portrayed. The military still held vestiges of power that it seemed reluctant to surrender. Among the clergy, an extremist fringe was rising. And its newly available social media was filling with racism and conspiracies. Online, angry talk of traitorous minorities felt ubiquitous.

A worrying name kept coming up in my conversations: Wirathu. The Buddhist monk had been imprisoned for his hate-filled sermons for the past decade and had just been released as part of a general amnesty. He’d immediately joined Facebook and YouTube. Now, rather than traveling the country temple by temple to spread hate, he used the platforms to reach much of the country, perhaps multiple times per day. He accused the country’s Muslim minority of terrifying crimes, blending rumor with shameless fabrication. On Facebook especially, his posts circulated and recirculated among users who took them as fact, creating an alternate reality defined by conspiracy and rage, which propelled Wirathu to a new level of stardom.

A Stanford researcher who had worked in Myanmar, Aela Callan, met with senior Facebook managers in late 2013 to warn them that hate speech was overrunning the platform, she later told the reporter Timothy McLaughlin. For a country with hundreds of thousands of users, and soon millions, Facebook employed only one moderator who could review content in Burmese, Myanmar’s predominant language, leaving the platform effectively unsupervised. The managers told Callan that Facebook would press forward with its Myanmar expansion anyway.

In early 2014, Callan relayed another warning to Facebook: the problem was worsening, and with it the threat of violence. Again, little changed. A few months later, Wirathu shared a post falsely claiming that two Muslim tea shop owners in the city of Mandalay had raped a Buddhist woman. He posted the names of the tea sellers and their shop, calling their fictitious assault the opening shot in a mass Muslim uprising against Buddhists. He urged the government to raid Muslims’ homes and mosques in a preemptive strike — a common demand of genocidaires, whose implied message is that regular citizens must do what the authorities will not. The post went viral, dominating feeds across the country. Outraged users joined in the froth, urging one another to wipe out their Muslim neighbors. Hundreds rioted in Mandalay, attacking Muslim businesses and owners, killing two people and wounding many more.

As the riots spread, a senior government official called someone he knew at the Myanmar office of Deloitte, a consulting firm, to ask for help in contacting Facebook. But neither could reach anyone at the company. In desperation, the government blocked access to Facebook in Mandalay. The riots cooled. The next day, officials at Facebook finally responded to the Deloitte representative, not to inquire after the violence but to ask if he knew why the platform had been blocked. In a meeting two weeks later with the government official and others, a Facebook representative said that they were working to improve their responsiveness to dangerous content in Myanmar. But if the company made any changes, the effect was undetectable on its platform. As soon as the government lifted its virtual blockade, hate speech, and Wirathu’s audience, only grew.”

Social Media

“The first games, designed by 1970s Silicon Valley shops like Atari, launched alongside personal computers and were presumed to have the same universal appeal. That changed with what the industry calls the North American video game crash. From 1983 to 1985, sales collapsed by 97 percent. Japanese firms sought to revive the market by rebranding this now-tarnished computer product, sold in electronics stores to adults, as something simpler: toys.

Toy departments were, at that moment, sharply segmenting by gender. President Reagan had lifted regulations forbidding TV advertising aimed at children. Marketers, seized by a neo-Freudianism then in vogue, believed they could hook kids by indulging their nascent curiosity about their own genders. New TV programming like My Little Pony and GI Joe delivered hyper-exaggerated gender norms, hijacking adolescents’ natural gender self-discovery and converting it into a desire for molded plastic products. If this sounds like a strikingly crisp echo of social media’s business model, it’s no coincidence. Tapping into our deepest psychological needs, then training us to pursue them through commercial consumption that will leave us unfulfilled and coming back for more, has been central to American capitalism since the postwar boom.

Toy departments polarized between pink and blue. Japanese game-makers had to pick a side, so they selected the one on which parents spent more: boys. Games increasingly centered on male heroes rescuing princesses, fighting wars, playing in male sports leagues. Marketers, having long positioned games as childhood toys, kept boys hooked through adolescence and adulthood with — what else? — sex. Games were filled with female characters who were portrayed as hypersexualized, submissive, and something to which men should feel entitled. Plenty of gamers understood that the portrayal was fantasy, albeit one with troubling values. But enough grew up on the fantasy to absorb it as truth. Amid culture wars of the 1990s and 2000s, game marketers seized on these tendencies as an asset, presenting games as refuges from a feminizing world, a place where men were still men and women kept in their place. Gaming became, for some, an identity, one rooted in reaction against evolving gender norms.”

“Twitter’s founders had launched their product in 2006 as essentially a group-texting service. Users would text a toll-free number, which would then forward the message to their friends. If you went out, you might post the name of the bar; friends who followed your updates could come join. Messages were capped at about the length of a sentence. A simple website also logged the updates.”

“A Black student at Smith College posted on Facebook that a school janitor and security guard had harassed her for “eating while Black.” They had questioned her while she lunched in a dorm lounge, she said, treating her as an interloper because of her race. The post was shared angrily across Smith, then other colleges, then the wider world, bringing a firestorm of attention to the tiny campus. The janitor was placed on paid leave. Students walked out of class in protest. The student posted a follow-up on Facebook accusing two cafeteria workers of calling security during the incident. She included the name, email address, and photograph of one of them, writing, “This is the racist person.” The cafeteria worker was inundated with angry calls at her home, some threatening to kill her. The student also posted a photo of a janitor, accusing him of “racist cowardly acts,” though, in an apparent mistake, she had identified the wrong janitor.

The truth turned out to be a combination of honest misunderstanding and youthful hyperbole. She had been eating in a closed-down dormitory attached to a cafeteria reserved for a program for young children. The janitor had followed rules requiring security to be called if anyone not in that program showed up. The guard had a polite exchange with the student and did not ask her to leave. The cafeteria workers were not involved at all.

But by the time the truth came out, in a lengthy report by a third-party law firm the university had hired, the Facebook version of events, calcified by an outpouring of profound collective emotion, had long since hardened in people’s minds. The university, likely fearful of provoking more anger, refused to exonerate the workers, announcing that it was “impossible to rule out the potential role of implicit racial bias” in their behavior. One was transferred, another kept on leave, another quit. One of the cafeteria workers was later denied a restaurant job when the interviewer recognized her from Facebook as “the racist.” The student, once praised online as a hero, was now condemned as a villain. Few thought to blame the social media platforms that had empowered a teenager to destroy the livelihoods of low-income workers, incentivized her and thousands of onlookers to do it, and ensured that all would experience her outrage-provoking misimpression as truer than the truth.”

“THE MYSTERY OF moral outrage — why are we so drawn to an emotion that makes us behave in ways we deplore? — was ultimately unraveled by a seventy-year-old Russian geneticist holed up in a Siberian research lab, breeding thousands of foxes. Lyudmila Trut arrived at the lab in 1959, fresh out of Moscow State University, to search for the origins of something that had seemed unrelated: animal domestication.

Domestication was a mystery. Charles Darwin had speculated that it might be genetic. But no one knew what external pressures turned wolves into dogs or how the wolf’s biology changed to make it so friendly. Darwin’s disciples, though, had identified a clue: domesticated animals, whether dog or horse or cow, all had shorter tails, softer ears, slighter builds, and spottier coats than their wild counterparts. And many had a distinctive, star-shaped spot on their foreheads.

If Trut could trigger domestication in a controlled setting, she might isolate its causes. Her lab, attached to a Siberian fur factory, started with hundreds of wild foxes. She scored each on its friendliness to humans, bred only the friendliest 10 percent, then repeated the process with that generation’s children. On the tenth generation, sure enough, one fox was born with floppy ears. Another had a star-shaped forehead mark. And they were, Trut wrote, “eager to establish human contact, whimpering to attract attention, and sniffing and licking experimenters like dogs.” Darwin had been right. Domestication was genetic. Subsequent generations of the foxes, as they grew friendlier still, had shorter legs and tails and snouts, smaller skulls, flatter faces, spottier fur coloring.
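Trut’s procedure is, in effect, a selection algorithm: score friendliness, keep the top decile, breed, repeat. A toy simulation (one heritable number plus noise, nothing like real genetics) shows how quickly such a loop shifts a trait:

```python
import random

# Toy model of Trut's breeding protocol: score each fox's friendliness,
# keep only the top 10 percent as parents, repeat for ten generations.
random.seed(1)
POP, KEEP = 500, 0.10

foxes = [random.gauss(0.0, 1.0) for _ in range(POP)]   # founding generation
for gen in range(1, 11):
    parents = sorted(foxes, reverse=True)[: int(POP * KEEP)]
    # Each kit inherits one parent's trait value plus random variation.
    foxes = [random.choice(parents) + random.gauss(0.0, 0.5)
             for _ in range(POP)]
    print(f"generation {gen:2d}: mean friendliness {sum(foxes) / POP:+.2f}")
# The population mean climbs steadily, echoing how ten rounds of
# selecting the tamest foxes produced dog-like behavior.
```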

Trut studied the animals for half a century, finally discovering the secret to domestication: neural crest cells. Every animal starts life with a set. The cells migrate through the embryo as it grows, converting themselves into jawbones, cartilage, teeth, skin pigment, and parts of the nervous system. Their path ends just above the animal’s eyes. That’s why domesticated foxes had white forehead marks: the neural crest cells passed on to them by their friendlier parents never made it that far. This also explained the floppy ears, shorter tails, and smaller snouts.

Further, it unlocked a change in personality, because neural crest cells also become the glands that produce the hormones responsible for triggering fear and aggression. Wild foxes were fearful toward humans and aggressive with one another, traits that served them well in the wild. When Trut bred the friendliest foxes, she was unknowingly promoting animals with fewer neural crest cells, stunting their neurological development in a very specific and powerful way.

Of the many revelations to flow from Trut’s research, perhaps the greatest was resolving a long-standing mystery about humans. About 250,000 years ago, our brains, after growing larger for millions of years, started shrinking. Strangely, it occurred just as humans seemed to be getting smarter, judging by tools found with their remains. Humans simultaneously developed thinner arm and leg bones, flatter faces (no more caveman brow ridges), and smaller teeth, with male bodies more closely resembling those of females. With Trut’s findings, the reason was suddenly clear. These were the markers of a sudden drop in neural crest cells — of domestication.

But Trut’s foxes had been domesticated by an external force: her. What had intervened in the evolutionary trajectory of humans to suddenly favor docile individuals over aggressive ones? The English anthropologist Richard Wrangham developed an answer: language. For millions of years, our ancestors who would eventually become Homo sapiens formed small communities led by an alpha. The strongest, most aggressive male would dominate, passing on his genes at the expense of the weaker males.

All great apes despise bullies. Chimpanzees, for instance, show preferential treatment toward peers who are kind to them and disfavor those who are cruel. But they have no way of sharing that information with one another. Bullies never suffer from poor reputations because there is, without language, no such thing. That changed when our ancestors developed language sophisticated enough to discuss one another’s behavior. Aggression went from an asset — the means by which alpha males dominated their clan — to a liability that the wider group, tired of being lorded over, could band together to punish.

“Language-based conspiracy was the key, because it gave whispering beta males the power to join forces to kill alpha-male bullies,” Wrangham wrote in a pathbreaking 2019 book. Every time an ancient human clan tore down a despotic alpha, they were doing the same thing that Lyudmila Trut did to her foxes: selecting for docility. More cooperative males reproduced, the aggressive ones did not. We self-domesticated.

But just as early humans were breeding one form of aggression out, they were selecting another in: the collective violence they’d used both to topple the alphas and to impose a new order in their place. Life became ruled by what the anthropologist Ernest Gellner called “tyranny of the cousins.” Tribes became leaderless, consensus-based societies, held together by fealty to a shared moral code, which the group’s adults (the “cousins”) enforced, at times violently. “To be a nonconformist, to offend community standards, or to gain a reputation for being mean became dangerous adventures,” Wrangham wrote. Upset the collective and you might be shunned or exiled — or wake up to a rock slamming into your forehead. Most hunter-gatherer societies live this way today, suggesting that the practice draws on something intrinsic to our species.

The basis of this new order was moral outrage. It was how you alerted your community to misbehavior — how you rallied them, or were yourself rallied, to punish a transgression. And it was the threat that hung over your head from birth until death, keeping you in line. Moral outrage, when it gathers enough momentum, becomes what Wrangham calls “proactive” and “coalitional” aggression — colloquially known as a mob. When you see a mob, you are seeing the cousins’ tyranny, the mechanism of our self-domestication. This threat, often deadly, became an evolutionary pressure in its own right, leading us to develop ultrafine sensitivities to the group’s moral standards — and an instinct to go along. If you want to prove to the group that it can trust you to enforce its standards, pick up a rock and start throwing. Otherwise, you might be next.

In our very recent history, we decided that those impulses are more dangerous than beneficial. We replaced the tyranny of cousins with the rule of law (mostly), banned collective violence, and discouraged moblike behavior. But instincts cannot be entirely neutralized, only contained. Social networks, by tapping directly into our most visceral group emotions, bypass that containment wall — and, in the right circumstances, tear it down altogether, sending those primordial behaviors spilling back into society.”

“When Congress passed a stimulus package in 2020, for example, the most-shared posts on Twitter reported that the bill siphoned $500 million meant for low-income Americans to Israel’s government and another $154 million for the National Art Gallery, that it funded a clandestine $33 million operation to overthrow Venezuela’s president, that it slashed unemployment benefits, and that $600 Covid-relief checks were really just loans that the IRS would take back on the following year’s taxes.”

“As the Valley expanded its reach, this culture of optimization at all costs took on second-order effects. Uber optimizing for the quickest rideshare pickups engineered labor protections out of the global taxi market. Airbnb optimizing for short-term rental income made long-term housing scarcer and more expensive. The social networks, by optimizing for how many users they could draw in and how long they could keep them there, may have had the greatest impact of all. “It was a great way to build a startup,” Chaslot said. “You focus on one metric, and everybody’s on board [for] this one metric. And it’s really efficient for growth. But it’s a disaster for a lot of other things.””

The Election

“At the end of November, Guillaume Chaslot published his results from tracking YouTube’s algorithm in the run-up to the vote. Though it represented just a slice of YouTube’s billions of video recommendations, the results were alarming. “More than 80 percent of recommended videos were favorable to Trump, whether the initial query was ‘Trump’ or ‘Clinton,’” he wrote. “A large proportion of these recommendations were divisive and fake news.” Of those, some of the most popular promoted Pizzagate: the FBI had exposed Hillary Clinton’s “pedophile satanic network” (1.2 million views), evidence of Bill Clinton sexually assaulting a child had surfaced (2.3 million views), and on and on.

Chaslot and DiResta were circling around a question being asked more directly by the public: Had social media platforms elected Trump? Taken narrowly, it was easy to answer. Fewer than 80,000 votes, out of 138 million, had swung the decisive states.”
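Chaslot’s audit method, seeding the recommender with neutral queries and tallying what it suggests, can be sketched in outline. The functions below are placeholders (his real project, AlgoTransparency, scraped YouTube’s “up next” lists and labeled videos by hand); this is a hedged reconstruction, not his code:

```python
from collections import Counter

def fetch_recommendations(video_id: str) -> list[str]:
    """Placeholder: return the 'up next' video IDs shown for a video.
    A real audit scrapes these from the public watch page."""
    raise NotImplementedError

def label(video_id: str) -> str:
    """Placeholder: classify a video ('pro-Trump', 'pro-Clinton',
    'neutral', 'conspiracy', ...); Chaslot's team labeled by hand."""
    raise NotImplementedError

def audit(seed_videos: list[str], hops: int = 2) -> Counter:
    """Walk the recommendation graph breadth-first from the seeds,
    counting how often each kind of video is recommended."""
    counts: Counter = Counter()
    frontier = list(seed_videos)
    for _ in range(hops):
        nxt = []
        for vid in frontier:
            for rec in fetch_recommendations(vid):
                counts[label(rec)] += 1
                nxt.append(rec)
        frontier = nxt
    return counts

# e.g., audit(seed videos found by searching "Trump" and "Clinton");
# Chaslot reported over 80 percent of recommendations favoring Trump
# regardless of which seed query began the walk.
```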

“The theory suggested that social contact led distrustful groups to humanize one another. But subsequent research has shown that this process works only under narrow circumstances: managed exposure, equality of treatment, neutral territory, and a shared task. Simply mashing hostile tribes together, researchers repeatedly found, worsens animosity.”

“Reading an article and then the comments field beneath it, an experiment found, leads people to develop more extreme views on the subject in the article. Control groups that read the article with no comments became more moderate and open-minded. It wasn’t that the comments themselves were persuasive; it was the mere context of having comments at all. News readers, the researchers discovered, process information differently when they are in a social environment: social instincts overwhelm reason, leading them to look for affirmation of their side’s righteousness.”

Myanmar, Part 2

“BY THE TIME I landed in Myanmar, the soldiers were already throwing babies into fires. For weeks, the military had waged unrestrained war on the thatched-roof villages that dotted the country’s westernmost province. Whole battalions pushed from paddy to paddy as gunships roared overhead. They claimed to hunt insurgents. In reality, they were setting upon a community of one and a half million Muslim farmers and fishermen who called themselves Rohingya.

The soldiers, sent to exterminate the impoverished minority that many of Myanmar’s leaders and citizens had come to see as an intolerable enemy within, would arrive at a village, then begin by setting rooftops afire. They lobbed grenades through hut doorways and sent rockets slamming into the walls of longhouses. They fired into the backs of peasants fleeing across the surrounding fields. As the houses burned, the men of the village would be arrayed in a line and shot to death. Families streamed by the hundred thousand toward the border. The soldiers attacked these too. They hid land mines in the refugees’ paths. Survivors who made it to relative safety in Bangladesh detailed horror after horror to journalists and aid workers who picked their way through the overcrowded camps.

“People were holding the soldiers’ feet, begging for their lives,” one woman told my colleague Jeffrey Gettleman. “But they didn’t stop, they just kicked them off and killed them.” When soldiers came to her village, she said, they demanded she surrender the infant she was cradling. When she refused, they beat her, ripped her son from her arms, and threw him into an open fire. Then they raped her.

Her story was typical. A twenty-year-old woman told a Human Rights Watch investigator that soldiers had killed her infant daughter in the same way. The soldiers then raped her and her mother. When her sister resisted, they killed her with bayonets. While this was happening, a group of villagers arrived and beat her three teenage brothers to death. Local men often accompanied the soldiers as eager volunteers, swinging hatchets and farm implements. They were Rakhine, the region’s other major ethnic group, who, like most in Myanmar, are Buddhist. Their presence hinted at the communal nature of the violence, as well as the groundswell of public pressure that had occasioned it.”

“The head of Myanmar’s first real media collective, a jittery reporter back from years in exile, said the country’s long-suppressed journalists, finally unfettered, faced a new antagonist. Social media platforms were doing what even the dictatorship’s trained propagandists couldn’t: producing fake news and nationalist fanfare so engaging, so flattering to readers’ biases, that people chose it voluntarily over real journalism. When reporters tried to correct the misinformation flowing online, they became the target of it instead, accused of abetting foreign plots.

Civic leaders told me that social media platforms were pumping the national bloodstream with conspiracies and ultranationalist rage. Citizens who’d marched for an open, inclusive democracy now spent hours posting in groups dedicated to vilifying minorities or to glorifying the country’s leaders. The chief of the military, once a reviled symbol of the dictatorship who had stepped down only a few years earlier, now had 1.3 million Facebook fans.

People from all walks of life breathlessly recounted, as unvarnished fact, crazed and hateful conspiracies that they inevitably traced to social media. Buddhist monks insisted the Muslims were plotting to steal Myanmar’s water, old ladies that they would not be safe until minorities were purged from their midst, young students that humanitarian groups were arming the Rohingya on behalf of foreign powers. All of them backed the military’s campaign — grateful, sometimes gleeful, for the violence being committed on their behalf.

No algorithm could generate hatred this severe out of nothing. The platforms drew on a crisis that had been building since 2012 in the nation’s west, where most Rohingya lived. A handful of incidents between Rohingya and Rakhine — a rape, a lynching, a spree of murders — had spiraled into communal riots. Troops intervened, herding civilians who’d been displaced from their homes, mostly Rohingya, into camps. The Rohingya languished. In 2015 thousands attempted to flee, describing growing persecution from neighbors and soldiers alike.

Anti-Rohingya sentiment dated back at least a century, to the early 1900s, when British overlords imported thousands of colonial subjects from the Indian Raj, many of them Muslim. The effort was playbook divide and rule; the newcomers, who filled out the urban merchant class, relied on the British for safety. After the British left, in 1948, independence leaders sought to consolidate their new nation around shared ethnic and religious identity. But Myanmar’s diversity made this difficult; they needed an enemy to rally against. Political leaders promoted colonial-era suspicions of Muslims as alien interlopers sponsored by foreign empires. In truth, however, merchant-class Indians imported by the British had mostly fled in 1948 or shortly thereafter, so leaders redirected national ire onto an unrelated group of Muslims: the Rohingya. To sell the ruse, the Rohingya were classified as illegal immigrants.”

“In 2017, when sporadic violence between soldiers and a handful of Rohingya rebels culminated in a midnight insurgent attack on several police posts, much of the country was screaming for blood. A few days later, the military complied, launching their genocide.”

“A Washington, D.C., think tank analyzed a sample of 32,000 Myanmar Facebook accounts, everyday users, finding their pages awash in hate speech and misinformation. One popular meme showed graphic bestiality covered in Arabic script; another showed the prophet Mohammad being orally penetrated. Another claimed to show evidence of Rohingya committing cannibalism; the image was in fact taken from a video game marketing stunt. It was shared nearly 40,000 times. Another, falsely claiming that Rohingya were smuggling weapons into Myanmar, was shared 42,700 times. “It’s time to kill all kalars,” one user wrote, using a slur for Rohingya. Another responded, “We will behead ten thousand kalars’ heads.” Another: “For the next generation, burn all Muslim villages nearby.””

Extremism

“Eventually, the sunny view of the Arab Spring came to be revised. “This revolution started on Facebook,” Wael Ghonim, an Egyptian programmer who’d left his desk at Google to join his country’s popular uprising, had said in 2011. “I want to meet Mark Zuckerberg someday and thank him personally.” Years later, however, as Egypt collapsed into dictatorship, Ghonim warned, “The same tool that united us to topple dictators eventually tore us apart.” The revolution had given way to social and religious distrust, which social networks widened by “amplifying the spread of misinformation, rumors, echo chambers, and hate speech,” Ghonim said, rendering society “purely toxic.””

“During that night’s dinner rush, a customer began yelling in Sinhalese about something he had found in his beef curry. Farsith, the twenty-eight-year-old brother running the register, ignored him. He didn’t speak Sinhalese. And drunk customers, he’d learned, were best ignored. He wasn’t aware that, the day before, a viral Facebook rumor had claimed, falsely, that police had seized 23,000 sterilization pills from a Muslim pharmacist here. Had he been, Farsith might’ve understood why, as the customer grew more agitated, a crowd began to form.

The men circled Farsith, slapping his shoulders, yelling a question that Farsith couldn’t quite understand. He grasped only that they were asking about a lump of flour in the customer’s curry, using the phrase “Did you put?” He worried that saying the wrong thing might turn the crowd violent, but so would saying nothing. “I don’t know,” Farsith said in broken Sinhalese. “Yes, we put?” The mob, hearing confirmation, collapsed onto Farsith and beat him. They had been asking if he’d put sterilization pills in the food, as they’d all seen on Facebook. Leaving him bloody on the floor, they pulled down shelves, smashed furniture, ripped appliances from the walls. Dozens of men from the neighborhood, having heard that the Facebook rumors were true, joined in. They marched to the local mosque, which they set on fire while the imam hid in his smoldering office, waiting to die.

In an earlier time, this calamity might have ended in Ampara. But someone in the mob had taken cell-phone video of Farsith’s admission: “Yes, we put.” Within hours, it was shared to a Sri Lankan Facebook group called the Buddhist Information Center, which had won a fervent following by claiming to provide true information about the Muslim threat. The page published the shaky, eighteen-second clip as proof of the Islamophobic memes it had hosted for months. Then the video spread.

As in Myanmar, social media had been initially received as a force for good in Sri Lanka. It kept families in touch even as many worked abroad to send money home. Activists and elected leaders credited it with helping to usher in democracy. And thanks to zero-rating programs, the same strategy Facebook had used in Myanmar, millions of people could access the services for free.

Zero-rating had grown out of a peculiarity of Silicon Valley economics: the mandate for perpetual user growth. Poorer countries are not particularly lucrative for platforms; advertisers pay little to reach consumers making a few dollars a day. But by spending aggressively now, the companies could preemptively dominate a poor country’s media and internet markets, where they would face few competitors. They could tell investors that revenue was primed to explode in ten or twenty years as customers there entered the middle class.”

“Everyone knew what was coming. The first Molotov cocktails flew that evening. For three days, mobs ruled the streets. Going house to house, wherever Muslims lived, they smashed through the front doors, ransacked floor to ceiling, then set the homes afire. They burned mosques and Muslim-owned businesses. They beat people in the street.

In Digana, the town where Weerasinghe had walked in his video, one of those homes belonged to the Basith family. They sold slippers from the first floor and lived on the second. Most had fled. But an elder son, Abdul, had stayed behind and was trapped upstairs. “They have broken all the doors in our house,” Abdul said in an audio message he sent to his uncle on WhatsApp. “There are flames coming inside.” After a few moments, he pleaded, his voice rising, “The house is burning.” His family could not reach the house. Police did not retake Digana until the next morning. They found Abdul dead upstairs.

The country’s leaders, desperate to stem the violence, blocked all access to social media. It was a lever they had resisted pulling, reluctant to block platforms that some still credited with their country’s still-recent transition to democracy, and fearful of appearing to reinstate the authoritarian abuses of earlier decades. Two things happened almost immediately. The violence stopped; without Facebook or WhatsApp driving them, the mobs simply went home. And Facebook representatives, after months of ignoring government ministers, finally returned their calls. But not to ask about the violence: they wanted to know why traffic had zeroed out.”

“One thing stuck out. Towns with higher-than-average Facebook use reliably experienced more attacks on refugees. This held true in virtually any sort of community: big or small, affluent or struggling, liberal or conservative. The uptick did not correlate with general web usage; it was particular to Facebook. Their data boiled down to a breathtaking statistic: Wherever per-person Facebook use rose by one standard deviation above the national average, attacks on refugees increased by about 35 percent. Nationwide, they estimated, this effect drove as much as 10 percent of all anti-refugee violence.

Experts whom I asked to review the findings called them credible and rigorous.”
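The headline number is a regression-style claim: standardize each town’s per-person Facebook use, and each standard deviation above the national average is associated with roughly 35 percent more attacks. A toy recomputation on synthetic data (illustrative only, not the researchers’ dataset or their actual model):

```python
import random
import statistics

# Synthetic towns, purely illustrative: expected attacks grow by a factor
# of 1.35 for each standard deviation of Facebook use above the mean,
# mirroring the study's headline estimate.
random.seed(42)
fb_use = [random.gauss(50.0, 10.0) for _ in range(2_000)]  # per-town usage
mu, sd = statistics.mean(fb_use), statistics.stdev(fb_use)

def expected_attacks(usage: float, baseline: float = 2.0) -> float:
    z = (usage - mu) / sd          # standard deviations above the average
    return baseline * 1.35 ** z    # +35 percent per standard deviation

print(f"average town: {expected_attacks(mu):.2f}")           # baseline
print(f"+1 SD town:   {expected_attacks(mu + sd):.2f}")      # ~35% higher
print(f"+2 SD town:   {expected_attacks(mu + 2 * sd):.2f}")  # ~82% higher
```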

“When refugees had first arrived here a few years earlier, in 2015, so many locals had volunteered to help that Anette Wesemann, who’d taken over the local refugee-integration center after giving up her home in bustling Hanover for quiet village life, couldn’t keep up. She would find Syrian or Afghan families attended by whole entourages of self-appointed life coaches and German tutors. “It was really moving,” she said. But when she set up a Facebook page to organize volunteer events, it filled with anti-refugee vitriol of a sort she’d never encountered offline. Some posts were threatening, mentioning local refugees by name. Over time, their anger proved infectious, dominating the page.”

“In an experiment in rural Mexico, researchers produced an audio soap opera whose story discouraged domestic violence against women. In some areas, people had the soap played for them privately in their homes. In others, it was broadcast on village loudspeakers or at community meetings. Men who listened at home were just as prone to domestic violence as they had been before. But men who listened in group settings became significantly less likely to commit abuse. And not out of perceived pressure. Their internal beliefs had shifted, growing morally opposed to domestic violence and supportive of gender equality. The difference was in seeing their peers absorb the soap opera. The conformity impulse – the same one that had led Facebook’s first users to trick themselves into fuming over the news feed – can soak all the way to the moral marrow of your innermost self.

Most of the time, deducing our peers’ moral views is not so easy. So we use a shortcut. We pay special attention to a handful of peers whom we consider to be influential, take our cues from them, and assume this will reflect the norms of the group as a whole. The people we pick as moral benchmarks are known as “social referents.” In this way, morality is “a sort of perceptual task,” Paluck said. “Who in our group is actually popping out to us? Who do we recruit in our memories when we think about what’s common, what’s desirable?”

To test this, Paluck had her team fan out to fifty-six schools, identifying which students were influential among their peers as well as which students considered bullying to be morally acceptable. Then she picked twenty or thirty students at each school who seemed to fit both conditions: these were, presumably, the students who played the greatest role in instilling pro-bullying social norms in their communities. They were asked to publicly condemn bullying – not forced, just asked. The gentle nudge to this tiny population proved transformative. Psychological benchmarks found that thousands of students became internally opposed to bullying, their moral compasses pulled toward compassion. Bullying-related disciplinary reports dropped by 30 percent.”
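The selection step in Paluck’s design, finding the students everyone watches, is essentially an influence-ranking problem. A minimal sketch using peer nominations and raw in-degree as the influence score (her team used survey instruments, not this code):

```python
from collections import defaultdict

# Toy "social referent" picker: rank students by how many classmates
# name them as someone they spend time with, then take the most-named.
nominations = [           # (nominator, student they named)
    ("ana", "ben"), ("cai", "ben"), ("dev", "ben"),
    ("ben", "ana"), ("eli", "cai"), ("fay", "cai"),
]

indegree: defaultdict[str, int] = defaultdict(int)
for _, named in nominations:
    indegree[named] += 1

referents = sorted(indegree, key=indegree.get, reverse=True)[:2]
print(referents)   # ['ben', 'cai']: the students whose public stance
                   # against bullying would most shift perceived norms
```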

“If the police could show how the platforms distorted reality, Guske believed, people would be persuaded to reject what they’d seen there. But he also knew that, on social media, a sober fact-check would never rise as high as a salacious rumor. So his team identified locals who had shared the rumor early in its spread, then showed up at their homes with evidence that they had gotten it wrong. He urged them to publicly disavow their claims, hoping to turn the platforms’ own promotion systems against the misinformation. All but one removed or corrected their posts as he’d requested. But they could never keep up with the platforms, whose poisonous output, he feared, was only accelerating. And he lamented that Facebook, at that point a $500 billion company, left it to overworked police departments to manage the risks the platform created.”

“Whenever internet access went down in an area with high Facebook use, attacks on refugees dropped significantly. The same drop did not occur, however, when areas with high internet usage but only average Facebook usage suffered an outage, suggesting that the violence-provoking effect was specific to social media, rather than from the internet itself. And violence dropped by the same rate – 35 percent – at which the study had suggested Facebook boosted such attacks.”

“One of the online alt-right’s most important gateways is the YouTube page of Jordan Peterson, a Canadian psychology professor. In 2013, Peterson began posting videos addressing, amid esoteric Jungian philosophy, young male distress. He offered life advice (clean your room, sit up straight) amid exhortations against racial and gender equality as imperiling “the masculine spirit.”

YouTube searches for “depression” or certain self-help keywords often led to Peterson. His videos’ unusual length, sixty minutes or more, align with the algorithm’s drive to maximize watch time. So does his college-syllabus method of serializing his argument over weeks, which requires returning for the next lecture and the next. But most of all, Peterson appeals to what the sociologist Michael Kimmel calls “aggrieved entitlement.” For generations, white men expected and received preferential treatment and special status. As society inched toward equality, those perks, while still substantial, declined. Some white men acclimated. Some rebelled. Others knew only that they felt something being taken away. Peterson et al. give them a way to explain those feelings of injustice – feminists and leftists are destroying the masculine spirit – and an easy set of answers. Clean your room. Sit up straight. Reassert traditional hierarchy.”

“Data suggest this promotional sequence is converting users at scale. Users who comment on Peterson’s videos subsequently become twice as likely to pop up in the comments of extreme-right YouTube channels, a Princeton study found.”

“The social platforms had arrived, however unintentionally, at a recruitment strategy embraced by generations of extremists. The scholar J. M. Berger calls it “the crisis-solution construct.” When people feel destabilized, they often reach for a strong group identity to regain a sense of control. It can be as broad as nationality or narrow as a church group. Identities that promise to re-contextualize individual hardships into a wider conflict hold special appeal. You’re not unhappy because of your struggle to contend with personal circumstances; you’re unhappy because of Them and their persecution of Us. It makes those hardships feel comprehensible and, because you’re no longer facing them alone, a lot less scary.

Crisis-solution: there is a crisis, the out-group is responsible, your in-group offers the solution. If that sense of conflict escalates too far, it can reach the point of radicalization, in which you see the out-group as an immutable threat over which only total victory is acceptable. “The scale of the crisis becomes more extreme, and the prescribed solution becomes more violent,” Berger wrote, until destroying the out-group becomes the core of the in-group’s shared identity. “The current generation of social media platforms,” he added, “accelerates polarization and extremism for a significant minority,” enabling and encouraging exactly this cycle.”

Moderation

“Corner-cutting appears to be rampant. At Jacob’s agency, if moderators encountered a post in a language that no one on hand could read, they were instructed to mark it as approved, even if users had flagged the post as dangerous hate speech. It was a shocking revelation. Not only had those monitoring groups in Sri Lanka and Myanmar been right that Facebook was actively upholding outright incitements to genocide, but it was a matter of policy at some outsourcing firms to do so.”

“The changes were dramatic. People who deleted Facebook became happier, more satisfied with their life, and less anxious. The emotional change was equivalent to 25 to 40 percent of the effect of going to therapy — a stunning drop for a four-week break. Four in five said afterward that deactivating had been good for them. Facebook quitters also spent 15 percent less time consuming the news. They became, as a result, less knowledgeable about current events — the only negative effect.

But much of the knowledge they had lost seemed to be from polarizing content: information packaged in a way to indulge tribal antagonisms. Overall, the economists wrote, deactivation “significantly reduced polarization of views on policy issues and a measure of exposure to polarizing news.” Their level of polarization dropped by almost half the amount by which the average American’s polarization had risen between 1996 and 2018 — the very period during which the democracy-endangering polarization crisis had occurred. Again, almost half.

As evidence mounted throughout 2018, action began to follow. That year, Germany mandated that social media platforms remove any hate speech within twenty-four hours of its being flagged, or face fines.”

“This turned out to represent a trend, and a revealing one, uncovered by Erica Chenoweth, a scholar of civil resistance at Harvard. The frequency of mass-protest movements had been growing worldwide since the 1950s, she found, and had accelerated lately. Between the 2000s and the 2010s, average episodes per year had jumped nearly 50 percent. Their success rate had been growing, too, year after year, for decades. Around 2000, 70 percent of protest movements demanding systemic change succeeded. But then, suddenly, that trend reversed. They began failing — just as they were getting more frequent. Now, Chenoweth found, only 30 percent of mass movements succeeded. “Something has really shifted,” she told me, calling the drop “staggering.” Virtually every month, another country would erupt in nationwide protests: Lebanon over corruption, India over gender inequality, Spain over Catalan separatism. Many at a scale exceeding the most transformative movements of the twentieth century. And most of them fizzling.

To explain this, Chenoweth drew on an observation by Zeynep Tufekci, the University of North Carolina scholar: social media makes it easier for activists to organize protests and to quickly draw once-unthinkable numbers — but this may actually be a liability. For one, social media, though initially greeted as a force for liberation, “really advantages repression in the digital age much more than mobilization,” Chenoweth said. Dictators had learned how to turn it to their advantage, using their superior resources to flood platforms with disinformation and propaganda.

The effect in democracies was subtler but still powerful. Chenoweth cited, as a comparison, the Student Nonviolent Coordinating Committee, a civil rights-era student group. Before social media, activists had to mobilize through community outreach and organization-building. They met almost daily to drill, strategize, and confer. It was agonizing, years-long work. But it made the movement durable, built on real-world ties and chains of command. It allowed movements like SNCC to persevere when things got hard, respond strategically to events, and translate street victories into political change.

Social media allows protests to skip many of those steps, putting more bodies on the streets more quickly. “That can give people a sense of false confidence,” Chenoweth said, “because it’s lower commitment.” Without the underlying infrastructure, social media movements are less able to organize coherent demands, coordinate, or act strategically. And by channeling popular energy away from the harder kind of organizing, it preempts traditional movements from emerging.”

“Facebook held a meeting to consider retooling its algorithm to elevate serious news outlets. This might restore trust in Facebook, some executives argued. But it was opposed by Joel Kaplan, the former Bush administration official and lobbyist. Since Trump’s election, Kaplan seemed to act, with Zuckerberg’s blessing, as the GOP’s representative at Facebook, a job that carried the title of vice president for global public policy. He argued that the change would invite GOP accusations that Facebook promoted liberals, effectively turning Trump’s view, that mainstream journalists were Democratic agents, into Facebook company policy. He prevailed.

Also that year, Kaplan successfully pushed to shelve one of the company’s internal reports finding that the platform’s algorithms promoted divisive, polarizing content. He and others objected that addressing the problem would disproportionately affect conservative pages, which drove an outsized share of misinformation. Better to let users be misinformed. It was not the last time that the public interest would be sacrificed to avoid even hypothetical Republican objections, however groundless.

Facebook’s courtship of Republicans, who retained control of the levers of federal oversight throughout 2018 and 2019, was exhaustive. It hired Jon Kyl, a former Republican senator, to produce a report on any anti-conservative bias in the platform. The report largely repackaged Trump’s #StopTheBias accusations, allowing Facebook to tell GOP critics it was studying the issue and following Kyl’s recommendations. Zuckerberg hosted off-the-record dinners with influential conservatives, including Fox News host Tucker Carlson, who had accused Facebook of seeking “the death of free speech in America.” The platform recruited the Daily Caller, the right-wing news site Carlson founded, to participate in its fact-checking program, granting it power over adjudicating truth on the platform. Facebook announced it would allow politicians to lie on the platform and grant them special latitude on hate speech, rules that seemed written for Trump and his allies.

“I’d been at FB for less than a year when I was pulled into an urgent inquiry — President Trump’s campaign complained about experiencing a decline in views,” Sophie Zhang, a Facebook data scientist, recalled on Twitter. “I never was asked to investigate anything similar for anyone else.” This sort of appeasement of political leaders appeared to be a global strategy. Between 2018 and 2020, Zhang flagged dozens of incidents of foreign leaders promoting lies and hate for gain, but was consistently overruled, she has said. When she was fired, she refused a $64,000 non-disparagement severance so that she could release her 7,800-word exit memo chronicling what she saw as a deliberate practice of allowing politicians to misuse the platform, including in countries where the stakes extended to sectarian violence and creeping authoritarianism. “I know that I have blood on my hands by now,” she wrote.

In 2019, Vietnam’s communist dictatorship privately conveyed a message to Facebook: the platform needed to censor government critics or the Vietnamese government might block it in the country. Zuckerberg agreed, employees later revealed, allowing Facebook — “the revolution company,” as he’d called it — to secretly become a tool of authoritarian repression. Though he argued that Vietnamese citizens were better served by accessing a partly free Facebook than none at all, his initial secrecy, and Facebook’s history, cast doubt on the purity of his intentions. One group estimated that Facebook’s Vietnam presence brings in $1 billion annually.

The company also announced that year that Facebook would no longer screen political advertisements for truth or accuracy. Only extreme violations, like calls for violence, would be policed. Trump, who had spent lavishly on Facebook in the past, was presumed to be the main beneficiary, as well as anyone like him. About 250 employees signed an open letter — an exceedingly rare show of public dissent — pleading with Zuckerberg to roll back the policy, which would “increase distrust in our platform” and “undo integrity product work” intended to protect elections.

“For some people, pedophilic impulses form early in life and remain more or less innate. But Kathryn Seigfried-Spellar and Marcus Rogers, Purdue University psychologists, have found that child-pornography consumers often developed that interest, rather than being born with it. People who undergo this process begin with adult pornography, then move to incrementally more extreme material, following an addiction-like compulsion to chase pornography a degree more taboo than what they’d seen before. “As they get desensitized to those pictures, if they’re on that scale,” Rogers said, “then they’re going to seek out stuff that’s even more thrilling, even more titillating, even more sexualized.”

Their compulsions were shaped by whatever content they happened to encounter, akin to training. It was not at all inevitable that these people would follow their urge past any moral line. Nor that this compulsion, if they did follow it, would lead them toward children. But on YouTube, the second-most-popular website in the world, the system seemed to have identified people with this urge, walked them along a path that indulged it at just the right pace to keep them moving, and then pointed them in a very specific direction.”

“Quayle and others warned, YouTube risked eroding viewers’ internal taboo against pedophilia by showing videos of children alongside more mainstream sexual content, as well as by displaying the videos’ high view counts, demonstrating that they were widely viewed and thus presumably acceptable. As Rogers had said: “You normalize it.”

Immediately after we notified YouTube of what we had found, a number of the videos we’d sent them as examples were removed. (Many we had not sent them, but which were part of the same network, remained online.) The platform’s algorithm also immediately changed, no longer linking the dozens of videos together. When we asked YouTube about this, the company insisted that the timing was a coincidence. When I pushed, a spokesperson said it was hypothetically possible that the timing was related but that she could not say either way. It seemed as if YouTube was trying to tidy up without acknowledging there had been anything to tidy.”

Covid and QAnon

“All that spring, as the Covid lies and rumors spread, the social media giants insisted that they were taking every available measure. But internal documents suggest that Facebook executives, by April, realized that their algorithms were boosting dangerous misinformation, that they could have stemmed the problem dramatically with the flip of a switch, and that they refused to do so for fear of hurting traffic.”

“That summer, ninety-seven professed QAnon believers would run in congressional primaries; twenty-seven of them would win. Two ran as independents. The other twenty-five were in-good-standing Republican nominees for the House of Representatives.”

“A month out from the election, Twitter announced the most substantial changes of any platform. High-follower accounts, including politicians, would be put under tighter rules than others — the opposite of Facebook’s special dispensations. Rule-breaking posts would be removed or hidden behind a warning label. Trump already had fourteen such labels, which functioned as both fact-checks and speed bumps, slowing the ease with which users could read or share them. Twitter later barred users from retweeting or liking the offending Trump posts at all. With those social elements gone, the impact of his tweets seemed to drop considerably.

Twitter also added an element long urged by outside experts: friction. Normally, users could share a post by hitting “retweet,” instantly promoting it onto their own feeds. Now, pressing “retweet” would instead bring up a prompt urging the user to add some message of their own. It forced a pause, reducing the ease of sharing. The scale of the intervention was slight but its effect was significant: retweets declined 20 percent overall, the company said, and the spread of misinformation with it.
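
A minimal sketch may make that mechanic concrete. This is purely illustrative, with an invented function and prompt, and is in no way Twitter's implementation:

```python
def handle_retweet_tap(post_text: str) -> None:
    # Illustrative only, not Twitter's code. The old flow reshared a post
    # in a single tap; the new flow interrupts with a prompt, forcing the
    # brief pause the book credits with cutting retweets by 20 percent.
    comment = input(f'Add your own message before sharing "{post_text}" '
                    '(press Enter to share as-is, or type q to cancel): ')
    if comment.strip().lower() == "q":
        print("Share abandoned; the pause did its work.")
    elif comment.strip():
        print(f"Quote-tweet published with comment: {comment.strip()!r}")
    else:
        print("Plain retweet published.")
```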

Twitter had deliberately slowed engagement, violating its own financial self-interest along with decades of Silicon Valley dogma insisting that more activity online could only be beneficial. The result, seemingly, was to make the world less misinformed and therefore better off.

Most surprising, Twitter temporarily switched off the algorithm that pushed especially viral tweets into users’ news feeds even if they did not follow the tweet’s author. The company called the effort to “slow down” virality “a worthwhile sacrifice to encourage more thoughtful and explicit amplification.” This was, as best I could tell, the first and only time that a major platform had voluntarily shut down its own algorithm.”

Facebook and Instagram imposed total bans on the Q movement in October, with Twitter gradually culling linked accounts. YouTube’s CEO, Susan Wojcicki, said only that YouTube would remove videos that accused people of involvement in Q-related conspiracies in order to harass or threaten them. The narrow rule tweak was YouTube’s only significant policy change leading up to the election.

“Dominic Pezzola, a Proud Boys member later charged with breaking through a Capitol window with a police shield, said that “anyone they got their hands on they would have killed, including Nancy Pelosi,” an FBI informant reported. The informant said Pezzola and his friends planned to come to Washington for the inauguration and “kill every single ‘m-fer’ they can.””

“On Telegram — a social app that had grown popular with QAnon as Twitter had applied greater friction — Ron Watkins urged followers to respect Biden’s legitimacy. He added, “As we enter into the next administration please remember all the friends and happy memories we made together over the past few years.” Watkins was effectively telling the movement, suspected to number in the millions, all bracing for a final battle against the evils responsible for every ill in their lives, to stand down. After that, posts from Q, who had gone mysteriously silent after December 8, ceased.

The sense of an ending pervaded. An 8kun moderator purged the site’s “Research” archives, writing, “I am just performing euthanasia to something I once loved very much.” Some began posting their goodbyes. Others tried to come to grips: “Mods please explain why Biden isn’t arrested yet.” One compared watching Biden’s inauguration to “being a kid and seeing the big gift under the tree… only to open it and realize it was a lump of coal the whole time.” Without mainstream platforms to accelerate their cause or interlink it with the wider social web, the remaining believers had few places to apply their once-mighty energies. They spun and spun, looking for validation they never got, pining for resolution to the psychological crisis that years of radicalization had opened in them.”

The companies largely reverted to their old ways. Enforcement against election disinformation dropped precipitously over the course of 2021, the watchdog group Common Cause found, as lies that undermined democracy were “left to metastasize on Facebook and Twitter.” Movements born on social media continued to rise, seeping into the fabric of American governance. By early 2022, one study found, more than one in nine state lawmakers nationwide belonged to at least one far-right Facebook group. Many wrote the conspiracies and ideologies that had first grown online into law, passing waves of legislation that curbed voting rights, Covid policies, and LGBT protections. Amid an internet-fueled panic over teachers supposedly “grooming” schoolchildren into homosexuality, some pushed legislation encouraging kids to record their teachers for proof — a disturbingly crisp echo of the “gay kit” YouTube conspiracy that had caused such chaos in Brazil. The Texas GOP, which controlled the state senate, state house, and the governor’s mansion, changed its official slogan to “We are the storm,” the QAnon rallying cry. In two separate instances, in Colorado and Michigan, election officials loyal to QAnon were caught tampering with voting systems. By the next election, in 2022, QAnon-aligned candidates were on the ballot in 26 states.

The Future

“Australian regulators had moved to target the Valley’s greatest vulnerability: revenue. As of February 2021, Facebook and Google would be required to pay Australian news outlets for the right to feature links to their work. The platforms, after all, were siphoning the news industry’s ad revenue by trading on their journalism. The new rule included a powerful provision. If the tech firms and news agencies couldn’t agree on a price by the imposed deadline, government arbiters would set it for them. In truth, the rules favored News Corp, the mega-conglomerate run by Australian-born Rupert Murdoch, who in 2016 had threatened Zuckerberg with just this sort of action.

But whatever the merits, as a test case of governments’ power over social media platforms, the results were telling. Google, just short of the deadline, struck a deal with News Corp and others, bringing it into compliance. Facebook refused. Instead, one morning, Australians woke to find Facebook had blocked all news content. An entire nation’s preeminent information source — 39 percent of Australians said they got their news there — suddenly included no news. Much more had gone dark, too: politicians running for re-election, groups that worked with victims of domestic violence, the state weather service. Even, in the middle of a pandemic, government health offices. The company had finally done what it wouldn’t in Myanmar, over a months-long genocide it had been credibly accused of abetting. Or in Sri Lanka or India. In no case had spiraling violence, however deadly, led the company to flip the “off” switch, even on just one component of the platform. But the week that Australia threatened its revenue, it was lights out.

Australians could, of course, access news or government websites directly. Still, Facebook had, by deliberate design, made itself essential, training users to rely on its platform as the end-all for news and information. With news content gone, rumor and misinformation filled the vacuum. Evelyn Douek, an Australian scholar studying the governance of social media platforms at Harvard Law School, called the blackout “calculated for impact and unconscionable.” Human Rights Watch described the intervention as “alarming and dangerous.” An Australian lawmaker warned that blocking the state weather service might disrupt citizens’ access to updates that, in a week when floods and wildfires both raged, could mean life or death. A few days later, Australia’s government capitulated, granting Facebook sweeping exemptions from the regulations.”

“When asked what would most effectively reform both the platforms and the companies overseeing them, Frances Haugen, the Facebook whistleblower, had a simple answer: turn off the algorithm. “I think we don’t want computers deciding what we focus on,” she said. She also suggested that if Congress curtailed liability protections, making the companies legally responsible for the consequences of anything their systems promoted, “they would get rid of engagement-based ranking.” Platforms would roll back to the 2000s, when they simply displayed your friends’ posts from newest to oldest. No A.I. to swarm you with attention-maximizing content or route you down rabbit holes.”
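
As a rough sketch of the rollback Haugen describes, assuming nothing more than posts carrying a timestamp and a model-predicted engagement score (both field names invented here), the difference between the two designs comes down to a single sort key:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    predicted_engagement: float  # a model's guess at the attention it will capture

def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
    # Engagement-based ranking: surface whatever the model predicts
    # will keep the user scrolling longest.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # The 2000s-era alternative: friends' posts, newest to oldest.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)
```

The chronological version needs no prediction model at all, which is Haugen’s point: no A.I. deciding what anyone focuses on.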

“New studies continued to document the verifiable harms caused by the platforms. One set of experiments found that social media algorithms had learned to pick up on users’ unconscious racial biases. The systems internalize cues as subtle as a white user who hesitates a fraction of a second longer before liking or sharing posts from Black users. Platform algorithms might then downrank posts from users of whatever race had elicited that initial micro-hesitation.

Practically everyone carries some degree of implicit racial bias. But platform algorithms appeared to be reading those quick-twitch behaviors as a reflection of what those users wanted, when what the behaviors truly indicate is a person’s conscious mind taking a moment to outreason an unconscious prejudice. Social media was taking away our ability to overcome our biases by indulging them on our behalf.”
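
A toy example, with invented numbers and no claim to reflect the study’s actual methodology, shows how a ranker that optimizes only for engagement can quietly convert a micro-hesitation into a penalty:

```python
from statistics import mean

# Invented data: seconds a given user hesitated before engaging with
# posts from two groups of authors. The slightly longer pauses for
# group_a stand in for the unconscious bias described above.
hesitation_seconds = {
    "group_a": [0.42, 0.39, 0.45],
    "group_b": [0.31, 0.28, 0.33],
}

def learned_ranking_weight(delays: list[float]) -> float:
    # Longer average hesitation reads, to the model, as weaker interest,
    # so it assigns a lower weight: the mechanical root of the downranking.
    return 1.0 / mean(delays)

for group, delays in hesitation_seconds.items():
    print(group, round(learned_ranking_weight(delays), 2))
```

The model has no notion of why the user paused; it simply learns that hesitation predicts lower engagement, which is exactly the dynamic the researchers describe.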

“By 2019, only one in four teens reported meeting up with friends almost every day — about half the rate of prior generations. Between 2010 and 2019, the proportion of teens who reported often feeling lonely nearly doubled. Teen anxiety and depression spiked in parallel.

Those born after 2012 or so, sometimes called Generation Alpha, may have it even worse. Being forced by the pandemic to spend two formative years relating to one another through screens rewired their socialization drive in ways we are only beginning to understand. Adolescents from this cohort often say they are so accustomed to forming and maintaining relationships through apps that the physical world, in comparison, feels strange and uncomfortable.

“Online feels more peaceful and calming. You don’t have to talk with anybody in person or do anything in person,” a fourteen-year-old boy told a New York Times focus group.

“When I’m online, I can mute myself, and they can’t really see me. I can’t just mute myself in real life,” an eleven-year-old explained.

But growing up clear-eyed has its advantages. Teenage “Luddite Clubs” are springing up in high schools and colleges. Members delete their social media accounts and transfer their smartphone SIM cards to cheap flip phones, self-imposing a technological regression to the mid-2000s — the last moment before the social web came for us all.”

“The collapse of Silicon Valley Bank exposed a critical weak point in the Valley at large. The internet-era industry had been built on the historical aberration of near-zero interest rates — what economists call “free money.” Once that ended, so did big tech’s entire economic model.

Think back to the “squiggle chart” startup conference that had so baffled Renée DiResta. VCs shoving checks into founders’ palms for a share of some free-to-use app with little business plan beyond an upward-angled squiggle on their revenue projections. And though the businesses rarely made a profit, everyone was somehow getting rich.

But the only reason venture capitalists had all this cash to spend was that they were getting gobs of it from outside investors (rich individuals, pension funds) desperate for a healthy rate of return at a time when low interest rates had depressed more traditional investments. VCs could indeed promise a big profit, even if their startup never earned a dime, by quickly taking the company public on the stock exchange. Wall Street would value the startup even more highly — not because it made any money, but because it had acquired a bunch of users with some free-to-use service. Wall Street didn’t care that this made the company’s value hypothetical, contingent on someone someday figuring out how to monetize those users. With interest rates low, it’s no big deal to borrow a bunch of money, stick it in tech stocks, and wait twenty years for the company to turn a profit. All the investment has to do is outpace whatever tiny interest payments you’ll owe the bank. Which it probably will, as yet more investors drive up the stock price.

Meanwhile, a tech company like, say, Uber can use all that investment to subsidize customer rides, thereby generating more and more business — and repeatedly doubling its stock price — even as it loses money on every transaction. This is also why even very profitable tech companies like Facebook and Google obsess over user growth: their stock prices rely on the assumption that they will continue growing forever. Everyone at every step of a company’s life cycle, from startup to Fortune 500, is overpaying on the expectation that someone else will overpay even more to buy them out, because money is free.
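
The arithmetic under that whole cycle is easy to sketch. The figures below are illustrative assumptions, not numbers from the book: borrow cheaply, park the money in a rising tech stock, and the growth swamps the interest owed.

```python
# Illustrative assumptions only: what the "free money" trade looks like.
principal    = 1_000_000  # borrowed and put into tech stocks
loan_rate    = 0.01       # near-zero interest era
stock_growth = 0.10       # annual rise as more investors pile in
years        = 20

stock_value   = principal * (1 + stock_growth) ** years
interest_owed = principal * ((1 + loan_rate) ** years - 1)

print(f"stock position after {years} years: ${stock_value:,.0f}")   # ~$6.7 million
print(f"total interest owed:               ${interest_owed:,.0f}")  # ~$220,000
```

Raise the loan rate by a few points and the trade stops working, which is the collapse the next passage describes.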

Once those rates rose, everything began to collapse. Venture capitalists, newly bearish and harder up for cash, invested half as much in tech firms in 2022 as they had the year before. Tech valuations also dropped, on average, by half. The number of tech companies to raise $100 million or more fell by three-quarters. So-called “unicorns,” startups valued at over $1 billion, all but disappeared. Market watchers heralded a “venture capital winter.” The startup era was ending.

The Silicon Valley giants were hit even harder. Facebook’s stock price dropped by more than three quarters, before recovering to about half of its peak. Google and others followed similar trajectories. Even that partial recovery was hard won. Investors were no longer interested in moonshots with twenty-year time horizons. They wanted dividends this quarter. Under shareholder pressure, projects like Facebook’s multibillion-dollar metaverse were largely shuttered. Big tech collectively laid off 154,000 employees in 2022. Then another 201,000 in just the first half of 2023. The companies have grown both smaller and smaller-minded, scraping each penny from a core business of day-to-day digital addiction.”

“That April, Elon Musk offered to buy Twitter outright for $44 billion. It was probably a stunt for attention, a frequent habit of his; when Twitter’s board voted to accept, Musk fought for months to back out. Twitter sued, forcing Musk to follow through on the purchase that October.

Buying Twitter was a financial calamity for Musk’s empire. The only way he could fund the purchase was to sell a block of Tesla stock. Because this came as Tesla faced its own struggles, Wall Street took the CEO’s sell-off as a sign that the car company was faring even worse than known, sending its stock price, and Musk’s net worth, tumbling. Musk also took out a $13 billion loan — legally held by Twitter — to complete the financing. His bankers considered the loan so high-risk that they demanded a $1.2 billion annual debt payment, an effective rate of roughly 9 percent. Twitter, which has lost money most years since its founding, almost surely cannot cover this. Musk may not be able to afford the payments either without selling more Tesla stock — a move he has promised investors not to make, and which might be legally difficult, in any case, as most of his shares are pledged as loan collateral. Desperate to cover the debt, Musk fired half of Twitter’s workforce, then pushed survivors to work such long hours that many slept in the office. He slashed essential technical functions, causing Twitter to become glitchy and slow, and gutted the teams dedicated to election integrity and combating misinformation.”
