Under Fire and Losing Trust, Facebook Plays the Victim

On Tuesday morning, Facebook employees were quiet even for Facebook employees, buried in the news on their phones as they shuffled to a meeting in one of the largest cafeterias at the company’s headquarters in Menlo Park, Calif. Mark Zuckerberg, their chief executive officer, had always told them Facebook Inc.’s growth was good for the world. Sheryl Sandberg, their chief operating officer, had preached the importance of openness. Neither appeared in the cafeteria on Tuesday. Instead, the company sent a lawyer.

The context: Reports in the New York Times and the Observer of London the previous weekend that Cambridge Analytica, the political consulting firm that advised President Trump’s electoral campaign on digital advertising, had effectively stolen personal information from at least 50 million Americans. The data had come from Facebook, which had allowed an outside developer to take it before that developer shared it with Cambridge Analytica.

Facebook tried to get ahead of the story, announcing in a blog post that it was suspending the right-leaning consultancy and that it no longer allowed this kind of data sharing. Its users—a cohort of some 2 billion people—weren’t ready to forgive. The phrase #DeleteFacebook flooded social media. (Among the outraged was WhatsApp co-founder Brian Acton, who in 2014 sold Facebook his messaging app for $19 billion.) Regulators in the U.S. and Europe announced they were opening inquiries. The company’s stock fell almost 9 percent over March 19 and 20, erasing about $50 billion of value.


In past moments of crisis for the company, Zuckerberg or Sandberg has typically played damage-controller-in-chief. This time, the employees got all of 30 minutes with Paul Grewal, the deputy general counsel. If the news reports were true—a blame-deflecting conditional that struck some as odd—Grewal told them, Facebook had been lied to. Cambridge Analytica should have deleted the outside developer’s data, but it didn’t. Reporters were calling this a breach, but it wasn’t, because users freely signed away their own data and that of their friends. The rules were clear, and Facebook followed them.

One employee asked the same question twice: Even if Facebook played by its own rules, and the developer followed policies at the time, did the company ever consider the ethics of what it was doing with user data? Grewal didn’t answer directly.

A Facebook spokesman declined to comment for this story, referring instead to a January post in which Zuckerberg stated his aim to get the company on a “better trajectory.” On Wednesday afternoon, Zuckerberg published a post promising to audit and restrict developer access to user data. “We have a responsibility to protect your data, and if we can't then we don't deserve to serve you,” he wrote. “I've been working to understand exactly what happened and how to make sure this doesn't happen again.”


Of course, Facebook has weathered complaints about violating user privacy since its earliest days without radically altering its practices. The first revolt came in 2006, when users protested that the service’s news feed was making public information that the users had intended to keep private. The news feed is now the company’s core service. In 2009, Facebook began making users’ posts, which had previously been private, public by default. That incident triggered anger, confusion, an investigation by the U.S. Federal Trade Commission, and, ultimately, a consent decree. In 2014, the company disclosed that it had tried to manipulate users’ emotions as part of an internal psychology experiment.

As bad as each of these may have seemed, Facebook users have generally been unfazed. They’ve used the service in ever-greater numbers for greater amounts of time, in effect trading privacy for product. They were willing to give more and more data to Facebook in exchange for the ability to connect with old high school friends, see pictures of their grandkids, read only the news they agreed with. The concept was dubbed Zuckerberg’s Law in 2008, when the CEO argued at a conference that each year people would share twice as much information about themselves as they had the year before. Notions of privacy were eroding, Zuckerberg said in 2010. “That social norm,” he added, “is just something that has evolved over time.”

For a while, the only thing Facebook needed to do to keep growing was to remove barriers to downloading and using the product. By 2014, it had reached almost half the world’s internet-connected population, and Zuckerberg realized the only way to expand further was to add people to the internet. While Facebook invested in internet subsidy programs in developing countries, it also went on an acquisition binge, buying up popular social software makers such as Instagram and WhatsApp.

These moves led to annual revenue growth of about 50 percent, with most of the increase coming from mobile ads, and converted the company’s Wall Street doubters. Last year, even as Facebook was forced to acknowledge that it had played a role in the Russian disinformation campaign surrounding the 2016 U.S. election, investors pushed its stock price up 53 percent.

But the big blue app, as employees call Facebook’s namesake service, hasn’t changed much in years. The company has tweaked its algorithm, at times favoring or punishing clickbait-style news and viral videos, but most people use the service the same way they did two or three years ago. And some people are simply over it. In North America, Facebook’s daily user counts fell for the first time in the fourth quarter, and time spent on the site declined by 50 million hours a day. Facebook claimed that this was by design: Zuckerberg was focusing on helping users achieve “time well-spent,” with the news feed de-emphasizing viral flotsam.

The company positioned its new algorithmic initiative as a reaction to a study co-authored by one of its employees, which argued that while Facebook could be bad for users' mental health if they used it passively, more active use was actually good for them. The study could be viewed as a rare show of corporate transparency or a novel way to goose engagement.

Some of the company’s moves, however, look more desperate than idealistic. Now, when people stop going on Facebook as often as usual, the company sends them frequent emails and text messages to encourage them to re-engage. It’s also getting more aggressive about suggesting what users should post. According to some employees, the focus on time well-spent just means the company will point to metrics such as comments and personal updates as signs of growth, rather than genuinely improving the user experience.

In the long run, Facebook wants to make its product even more immersive and personal than it is now. It wants people to buy video chatting and personal assistant devices for their homes, and plans to announce those products this spring, say people familiar with the matter. It wants users to dive into Facebook-developed virtual worlds. It wants them to use Facebook Messenger to communicate with businesses, and to store their credit-card data on the app so they can use it to make payments to friends.

Employees have begun to worry that the company won’t be able to achieve its biggest goals if users decide that Facebook isn’t trustworthy enough to hold their data. At the meeting on Tuesday, the mood was especially grim. One employee told a reporter that the only time he’d felt as uncomfortable at work, or as responsible for the world’s problems, was the day Donald Trump won the presidency.

BOTTOM LINE – As its share price tanks and regulators circle, Facebook is struggling to answer basic questions about its next moves, even from its own employees.

Read more: http://www.bloomberg.com/news/articles/2018-03-21/under-fire-and-losing-trust-facebook-plays-the-victim

When WhatsApp’s Fake News Problem Threatens Public Health

In remote areas of Brazil’s Amazon basin, yellow fever used to be a rare, if regular, visitor. Every six to ten years, during the hot season, mosquitoes would pick it up from infected monkeys and spread it to a few loggers, hunters, and farmers at the forests’ edges in the northwestern part of the country. But in 2016, perhaps driven by climate change or deforestation or both, the deadly virus broke its pattern.

Yellow fever began expanding south, even through the winter months, infecting more than 1,500 people and killing nearly 500. The mosquito-borne virus attacks the liver, causing its signature jaundice and internal hemorrhaging (the Mayans called it xekik, or “blood vomit”). Today, that pestilence is racing toward Rio de Janeiro and Sao Paulo at the rate of more than a mile a day, turning Brazil’s coastal megacities into mega-ticking-timebombs. The only thing spreading faster is misinformation about the dangers of a yellow fever vaccine—the very thing that could halt the virus’s advance. And nowhere is it happening faster than on WhatsApp.

In recent weeks, rumors of fatal vaccine reactions, mercury preservatives, and government conspiracies have surfaced with alarming speed on the Facebook-owned encrypted messaging service, which is used by 120 million of Brazil’s roughly 200 million residents. The platform has long incubated and amplified fake news, in Brazil in particular. With its modest data requirements, WhatsApp is especially popular among middle- and lower-income individuals there, many of whom rely on it as their primary source of news. But as the country’s health authorities scramble to contain the worst outbreak in decades, WhatsApp’s misinformation trade threatens to go from destabilizing to deadly.

On January 25, Brazilian health officials launched a mass campaign to vaccinate 95 percent of residents in the 69 municipalities directly in the disease’s path—a total of 23 million people. A yellow fever vaccine has been mandatory since 2002 for any Brazilian born in regions where the virus is endemic. But in the last two years the disease has pushed beyond its normal range into territories where fewer than a quarter of people are immune, including the urban areas of Rio and Sao Paulo.

By the time of the announcement, the fake news cycle was already underway. Earlier in the month an audio message from a woman claiming to be a doctor at a well-known research institute began circulating on WhatsApp, warning that the vaccine is dangerous. (The institute denied that the recording came from any of its employees). A few weeks later it was a story linking the death of a university student to the vaccine. (That too proved to be a false report). In February, Igor Sacramento’s mother-in-law messaged him a pair of videos suggesting that the yellow fever vaccine was actually a scam aimed at reducing the world population. A health communication researcher at Fiocruz, one of Brazil’s largest scientific institutions, Sacramento recognized a scam when he saw one. And no, it wasn’t a global illuminati plot to kill off his countrymen. But he could understand why people would be taken in by it.

“These videos are very sophisticated, with good editing, testimonials from experts, and personal experiences,” Sacramento says. It's the same journalistic format people see on TV, so it bears the shape of truth. And when people share these videos or news stories within their social networks as personal messages, it changes the calculus of trust. “We are transitioning from a society that experienced truth based on facts to a society based on its experience of truth in intimacy, in emotion, in closeness."

People are more likely to believe rumors from family and friends. There’s no algorithm mediating the experience. And when that misinformation comes in the form of forwarded texts and videos—which look the same as personal messages in WhatsApp—it’s lent another layer of legitimacy. Then you get the network compounding effect: if you’re in multiple group chats that all receive the same fake news, the repetition makes it more believable still.

Of course, these are all just theories. Because of WhatsApp’s end-to-end encryption and the closed nature of its networks, it’s nearly impossible to study how misinformation moves through it. For users in countries with a history of state-sponsored violence, like Brazil, that secrecy is a feature. But it’s a bug for anyone trying to study the platform. “I think WhatsApp hoaxes and disinformation campaigns are a bit more pernicious [than Facebook] because their diffusion cannot be monitored,” says Pablo Ortellado, a fake news researcher and professor of public policy at the University of Sao Paulo. Misinformation on WhatsApp can only be identified when it jumps to other social media sites or bleeds into the real world.

In Brazil, it’s starting to do both. One of the videos Sacramento received from his mother-in-law is still up on YouTube, where it’s been viewed over a million times. Other stories circulated on WhatsApp are now being shared in Facebook groups with thousands of users, mostly worried mothers exchanging stories and fears. And in the streets of Rio and Sao Paulo, some people are staying away from the health workers in white coats. As of February 27, only 5.5 million people had received the shot, though it’s difficult to say how much of the slow start is due to fake news as opposed to logistical delays. A spokeswoman for the Brazilian Ministry of Health said in an email that the agency has seen an uptick in concern from residents regarding post-vaccination adverse events since the start of the year and acknowledged that the spread of false news through social media can interfere with vaccination coverage, but did not comment on its specific impact on this latest campaign.

A spokesperson for WhatsApp pointed out the public health benefits of making communications cheaper and more reliable for millions of people, while recognizing that some of them have unfortunately found ways to abuse the platform. "We encourage people to think twice before forwarding suspicious messages," the spokesperson wrote in an email. "People can always report problematic content to WhatsApp so we can be aware and help take action.”1

While the Ministry of Health has engaged in a very active pro-vaccine education operation—publishing weekly newsletters, posting on social media, and getting people on the ground at churches, temples, trade unions, and clinics—health communication researchers like Sacramento say health officials made one glaring mistake. They didn’t pay close enough attention to language.

You see, on top of all this, there’s a global yellow fever vaccine shortage going on at the moment. The vaccine is available at a limited number of clinics in the US, but it’s only used here as a travel shot. So far this year, the Centers for Disease Control and Prevention has registered no cases of the virus within US borders, though in light of the outbreak it did issue a Level 2 travel notice in January, urging all Americans traveling to the affected states in Brazil to get vaccinated first.

Because the virus is endemic in the country, Brazil makes its own vaccine, and it is currently ramping up production from 5 million to 10 million doses per month by June. But in the interim, authorities are administering smaller doses of what they have on hand, known as a “fractional dose.” It’s a well-demonstrated emergency maneuver, which staved off a yellow fever outbreak in the Democratic Republic of the Congo in 2016. According to the WHO, it’s “the best way to stretch vaccine supplies and protect as many people as possible.” But a partial dose, one that’s guaranteed for only 12 months, has been met with mistrust in Brazil, where a single vaccination had always been good for a lifetime of protection.

“The population in general understood the wording of ‘fractionated’ to mean weak,” says Sacramento. Although technically correct, the word took on a more sinister meaning as it spread through social media circles. Some videos even claimed the fractionated vaccine could cause renal failure. And while they may be unscientific, they’re not completely wrong.

Like any medicine, the yellow fever vaccine can cause side effects. Between 2 and 4 percent of people experience mild headaches, low-grade fevers, or pain at the site of injection. But there have also been rare reports of life-threatening allergic reactions and damage to the nervous system and other internal organs. According to the Health Ministry, six people died in 2017 on account of an adverse reaction to the vaccine. The agency estimates that one in 76,000 will have an anaphylactic reaction, one in 125,000 will experience a severe nervous system reaction, and one in 250,000 will suffer a life-threatening illness with organ failure. Which means that if 5 million people get vaccinated, you’ll wind up with about 20 organ failures, 40 nervous system issues, and 66 allergic shocks. Of course, if yellow fever infected 5 million people, 333,000 people could die.
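Those projections follow directly from the ministry’s rates. As a quick back-of-the-envelope check, here is a minimal Python sketch (an illustration for this story, not anything published by the Health Ministry) that reproduces the arithmetic from the figures quoted above:

    # Expected adverse events among 5 million vaccinees,
    # at the Brazilian Health Ministry's estimated rates.
    vaccinated = 5_000_000
    rates = {
        "anaphylactic reaction": 1 / 76_000,
        "severe nervous system reaction": 1 / 125_000,
        "life-threatening illness with organ failure": 1 / 250_000,
    }
    for event, rate in rates.items():
        print(f"{event}: ~{vaccinated * rate:.0f} expected cases")
    # Prints: ~66 anaphylactic reactions, ~40 severe nervous system
    # reactions, and ~20 cases of organ failure.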

Not every fake news story is 100 percent false. But such stories are out of proportion with reality. That’s the thing about social media. It can amplify real but statistically unlikely things just as much as it spreads totally made-up stuff. What you wind up with is a murky mix of information that has just enough truth to be credible.

And that makes it a whole lot harder to fight. You can’t just start by shouting it all down. Sacramento says too often health officials opt to frame these rumors as a dichotomy: “Is this true or is this a myth?” That alienates people from the science. Instead, the institution where he works has begun to produce social media-specific videos that start a dialogue about the importance of vaccines, while remaining open to people’s fears. “Brazil is a country full of social inequalities and contradictions,” he says. “The only way to understand what is happening is to talk to people who are different from you.” Unfortunately, that’s the one thing WhatsApp is designed not to let you do.

1 Update 03/10/18 7:29pm EST This story has been updated to include comment from WhatsApp.


Read more: https://www.wired.com/story/when-whatsapps-fake-news-problem-threatens-public-health/

School Shooting Suspect Made ‘Disturbing’ Social Media Posts

PARKLAND, Fla. (AP) — The suspect in a deadly rampage at a Florida high school is a troubled teenager who posted disturbing material on social media before the shooting spree that killed at least 17 people, according to a law enforcement official and former schoolmates.

Broward County Sheriff Scott Israel said the 19-year-old suspect, Nikolas Cruz, had been expelled from Marjory Stoneman Douglas High School for "disciplinary reasons."

"I don't know the specifics," the sheriff said.

However, Victoria Olvera, a 17-year-old junior, said Cruz was expelled last school year after a fight with his ex-girlfriend's new boyfriend. She said Cruz had been abusive to his girlfriend.

School officials said Cruz was attending another school in Broward County after his expulsion.

Broward County Mayor Beam Furr said during an interview with CNN that the shooter was getting treatment at a mental health clinic for a while, but that he hadn't been back to the clinic for more than a year.

"It wasn't like there wasn't concern for him," Furr said.

"We try to keep our eyes out on those kids who aren't connected … Most teachers try to steer them toward some kind of connections. … In this case, we didn't find a way to connect with this kid," Furr said.

Israel said investigators were dissecting the suspect's social media posts.

"And some of the things that have come to mind are very, very disturbing," he added without elaborating.

Daniel Huerfano, a student who fled Wednesday's attack, said he recognized Cruz from an Instagram photo in which Cruz posed with a gun in front of his face. Huerfano recalled Cruz as a shy student and remembered seeing him walking around with his lunch bag.

"He was that weird kid that you see … like a loner," he added.

Dakota Mentcher, a 17-year-old junior, said he used to be close friends with Cruz but hadn't seen him in more than a year following his expulsion from school.

"He started progressively getting a little more weird," Mentcher said.

Mentcher recalled Cruz posting on Instagram about killing animals and said he had talked about doing target practice in his backyard with a pellet gun.

"He started going after one of my friends, threatening her, and I cut him off from there," Mentcher said.

"I think everyone had in their minds if anybody was going to do it, it was going to be him," Mentcher said.

Broward County School District Superintendent Robert Runcie told reporters on Wednesday afternoon that he did not know of any threats posed by Cruz to the school.

"Typically you see in these situations that there potentially could have been signs out there," Runcie said. "I would be speculating at this point if there were, but we didn't have any warnings. There weren't any phone calls or threats that we know of that were made."

However, a teacher told The Miami Herald that Cruz may have been identified as a potential threat to other students. Jim Gard, a math teacher who said Cruz had been in his class last year, said he believes the school had sent out an email warning teachers that Cruz shouldn't be allowed on campus with a backpack.

"There were problems with him last year threatening students, and I guess he was asked to leave campus," Gard said.

___

This story has been corrected to show that Dakota Mentcher, not Victoria Olvera, said, "I think everyone had in their minds if anybody was going to do it, it was going to be him."

    Read more: https://www.bloomberg.com/news/articles/2018-02-15/school-shooting-suspect-made-disturbing-social-media-posts

    It’s the (Democracy-Poisoning) Golden Age of Free Speech

    For most of modern history, the easiest way to block the spread of an idea was to keep it from being mechanically disseminated. Shutter the newspaper, pressure the broadcast chief, install an official censor at the publishing house. Or, if push came to shove, hold a loaded gun to the announcer’s head.

    This actually happened once in Turkey. It was the spring of 1960, and a group of military officers had just seized control of the government and the national media, imposing an information blackout to suppress the coordination of any threats to their coup. But inconveniently for the conspirators, a highly anticipated soccer game between Turkey and Scotland was scheduled to take place in the capital two weeks after their takeover. Matches like this were broadcast live on national radio, with an announcer calling the game, play by play. People all across Turkey would huddle around their sets, cheering on the national team.

    Canceling the match was too risky for the junta; doing so might incite a protest. But what if the announcer said something political on live radio? A single remark could tip the country into chaos. So the officers came up with the obvious solution: They kept several guns trained on the announcer for the entire 2 hours and 45 minutes of the live broadcast.

    It was still a risk, but a managed one. After all, there was only one announcer to threaten: a single bottleneck to control of the airwaves.

    Variations on this general playbook for censorship—find the right choke point, then squeeze—were once the norm all around the world. That’s because, until recently, broadcasting and publishing were difficult and expensive affairs, their infrastructures riddled with bottlenecks and concentrated in a few hands.

    But today that playbook is all but obsolete. Whose throat do you squeeze when anyone can set up a Twitter account in seconds, and when almost any event is recorded by smartphone-wielding members of the public? When protests broke out in Ferguson, Missouri, in August 2014, a single livestreamer named Mustafa Hussein reportedly garnered an audience comparable in size to CNN’s for a short while. If a Bosnian Croat war criminal drinks poison in a courtroom, all of Twitter knows about it in minutes.


    In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of free speech.

    And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence? (Yes, there are systems that can create increasingly convincing fake videos.)

    Or let’s say you were the one who posted that video. If so, is anyone even watching it? Or has it been lost in a sea of posts from hundreds of millions of content producers? Does it play well with Facebook’s algorithm? Is YouTube recommending it?

    Maybe you’re lucky and you’ve hit a jackpot in today’s algorithmic public sphere: an audience that either loves you or hates you. Is your post racking up the likes and shares? Or is it raking in a different kind of “engagement”: Have you received thousands of messages, mentions, notifications, and emails threatening and mocking you? Have you been doxed for your trouble? Have invisible, angry hordes ordered 100 pizzas to your house? Did they call in a SWAT team—men in black arriving, guns drawn, in the middle of dinner?

    Standing there, your hands over your head, you may feel like you’ve run afoul of the awesome power of the state for speaking your mind. But really you just pissed off 4chan. Or entertained them. Either way, congratulations: You’ve found an audience.

    Here’s how this golden age of speech actually works: In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure. It’s limited instead by one’s ability to garner and distribute attention. And right now, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms: Facebook, Google (which owns YouTube), and, to a lesser extent, Twitter.

    These companies—which love to hold themselves up as monuments of free expression—have attained a scale unlike anything the world has ever seen; they’ve come to dominate media distribution, and they increasingly stand in for the public sphere itself. But at their core, their business is mundane: They’re ad brokers. To virtually anyone who wants to pay them, they sell the capacity to precisely target our eyeballs. They use massive surveillance of our behavior, online and off, to generate increasingly accurate, automated predictions of what advertisements we are most susceptible to and what content will keep us clicking, tapping, and scrolling down a bottomless feed.

    So what does this algorithmic public sphere tend to feed us? In tech parlance, Facebook and YouTube are “optimized for engagement,” which their defenders will tell you means that they’re just giving us what we want. But there’s nothing natural or inevitable about the specific ways that Facebook and YouTube corral our attention. The patterns, by now, are well known. As BuzzFeed famously reported in November 2016, “top fake election news stories generated more total engagement on Facebook than top election stories from 19 major news outlets combined.”

    Humans are a social species, equipped with few defenses against the natural world beyond our ability to acquire knowledge and stay in groups that work together. We are particularly susceptible to glimmers of novelty, messages of affirmation and belonging, and messages of outrage toward perceived enemies. These kinds of messages are to human community what salt, sugar, and fat are to the human appetite. And Facebook gorges us on them—in what the company’s first president, Sean Parker, recently called “a social-validation feedback loop.”


    There are, moreover, no nutritional labels in this cafeteria. For Facebook, YouTube, and Twitter, all speech—whether it’s a breaking news story, a saccharine animal video, an anti-Semitic meme, or a clever advertisement for razors—is but “content,” each post just another slice of pie on the carousel. A personal post looks almost the same as an ad, which looks very similar to a New York Times article, which has much the same visual feel as a fake newspaper created in an afternoon.

    What’s more, all this online speech is no longer public in any traditional sense. Sure, Facebook and Twitter sometimes feel like places where masses of people experience things together simultaneously. But in reality, posts are targeted and delivered privately, screen by screen by screen. Today’s phantom public sphere has been fragmented and submerged into billions of individual capillaries. Yes, mass discourse has become far easier for everyone to participate in—but it has simultaneously become a set of private conversations happening behind your back. Behind everyone’s backs.

    Not to put too fine a point on it, but all of this invalidates much of what we think about free speech—conceptually, legally, and ethically.

    The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

    These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

    Even when the big platforms themselves suspend or boot someone off their networks for violating “community standards”—an act that does look to many people like old-fashioned censorship—it’s not technically an infringement on free speech, even if it is a display of immense platform power. Anyone in the world can still read what the far-right troll Tim “Baked Alaska” Gionet has to say on the internet. What Twitter has denied him, by kicking him off, is attention.

    Many more of the most noble old ideas about free speech simply don’t compute in the age of social media. John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

    This is not a call for nostalgia. In the past, marginalized voices had a hard time reaching a mass audience at all. They often never made it past the gatekeepers who put out the evening news, who worked and lived within a few blocks of one another in Manhattan and Washington, DC. The best that dissidents could do, often, was to engineer self-sacrificing public spectacles that those gatekeepers would find hard to ignore—as US civil rights leaders did when they sent schoolchildren out to march on the streets of Birmingham, Alabama, drawing out the most naked forms of Southern police brutality for the cameras.

    But back then, every political actor could at least see more or less what everyone else was seeing. Today, even the most powerful elites often cannot effectively convene the right swath of the public to counter viral messages. During the 2016 presidential election, as Joshua Green and Sasha Issenberg reported for Bloomberg, the Trump campaign used so-called dark posts—nonpublic posts targeted at a specific audience—to discourage African Americans from voting in battleground states. The Clinton campaign could scarcely even monitor these messages, let alone directly counter them. Even if Hillary Clinton herself had taken to the evening news, that would not have been a way to reach the affected audience. Because only the Trump campaign and Facebook knew who the audience was.

    It’s important to realize that, in using these dark posts, the Trump campaign wasn’t deviantly weaponizing an innocent tool. It was simply using Facebook exactly as it was designed to be used. The campaign did it cheaply, with Facebook staffers assisting right there in the office, as the tech company does for most large advertisers and political campaigns. Who cares where the speech comes from or what it does, as long as people see the ads? The rest is not Facebook’s department.

    Mark Zuckerberg holds up Facebook’s mission to “connect the world” and “bring the world closer together” as proof of his company’s civic virtue. “In 2016, people had billions of interactions and open discussions on Facebook,” he said proudly in an online video, looking back at the US election. “Candidates had direct channels to communicate with tens of millions of citizens.”

    This idea that more speech—more participation, more connection—constitutes the highest, most unalloyed good is a common refrain in the tech industry. But a historian would recognize this belief as a fallacy on its face. Connectivity is not a pony. Facebook doesn’t just connect democracy-loving Egyptian dissidents and fans of the videogame Civilization; it brings together white supremacists, who can now assemble far more effectively. It helps connect the efforts of radical Buddhist monks in Myanmar, who now have much more potent tools for spreading incitement to ethnic cleansing—fueling the fastest-growing refugee crisis in the world.

    The freedom of speech is an important democratic value, but it’s not the only one. In the liberal tradition, free speech is usually understood as a vehicle—a necessary condition for achieving certain other societal ideals: for creating a knowledgeable public; for engendering healthy, rational, and informed debate; for holding powerful people and institutions accountable; for keeping communities lively and vibrant. What we are seeing now is that when free speech is treated as an end and not a means, it is all too possible to thwart and distort everything it is supposed to deliver.

    Creating a knowledgeable public requires at least some workable signals that distinguish truth from falsehood. Fostering a healthy, rational, and informed debate in a mass society requires mechanisms that elevate opposing viewpoints, preferably their best versions. To be clear, no public sphere has ever fully achieved these ideal conditions—but at least they were ideals to fail from. Today’s engagement algorithms, by contrast, espouse no ideals about a healthy public sphere.


    Some scientists predict that within the next few years, the number of children struggling with obesity will surpass the number struggling with hunger. Why? When the human condition was marked by hunger and famine, it made perfect sense to crave condensed calories and salt. Now we live in a food glut environment, and we have few genetic, cultural, or psychological defenses against this novel threat to our health. Similarly, we have few defenses against these novel and potent threats to the ideals of democratic speech, even as we drown in more speech than ever.

    The stakes here are not low. In the past, it has taken generations for humans to develop political, cultural, and institutional antibodies to the novelty and upheaval of previous information revolutions. If The Birth of a Nation and Triumph of the Will came out now, they’d flop; but both debuted when film was still in its infancy, and their innovative use of the medium helped fuel the mass revival of the Ku Klux Klan and the rise of Nazism.

    By this point, we’ve already seen enough to recognize that the core business model underlying the Big Tech platforms—harvesting attention with a massive surveillance infrastructure to allow for targeted, mostly automated advertising at very large scale—is far too compatible with authoritarianism, propaganda, misinformation, and polarization. The institutional antibodies that humanity has developed to protect against censorship and propaganda thus far—laws, journalistic codes of ethics, independent watchdogs, mass education—all evolved for a world in which choking a few gatekeepers and threatening a few individuals was an effective means to block speech. They are no longer sufficient.

    But we don’t have to be resigned to the status quo. Facebook is only 13 years old, Twitter 11, and even Google is but 19. At this moment in the evolution of the auto industry, there were still no seat belts, airbags, emission controls, or mandatory crumple zones. The rules and incentive structures underlying how attention and surveillance work on the internet need to change. But in fairness to Facebook and Google and Twitter, while there’s a lot they could do better, the public outcry demanding that they fix all these problems is fundamentally mistaken. There are few solutions to the problems of digital discourse that don’t involve huge trade-offs—and those are not choices for Mark Zuckerberg alone to make. These are deeply political decisions. In the 20th century, the US passed laws that outlawed lead in paint and gasoline, that defined how much privacy a landlord needs to give his tenants, and that determined how much a phone company can surveil its customers. We can decide how we want to handle digital surveillance, attention-channeling, harassment, data collection, and algorithmic decision-making. We just need to start the discussion. Now.



    Zeynep Tufekci (@zeynep) is an associate professor at the University of North Carolina and an opinion writer for The New York Times.


    Read more: https://www.wired.com/story/free-speech-issue-tech-turmoil-new-censorship/

    Facebook for 6-Year-Olds? Welcome to Messenger Kids

    Facebook says it built Messenger Kids, a new version of its popular communications app with parental controls, to help safeguard pre-teens who may be using unauthorized and unsupervised social-media accounts. Critics think Facebook is targeting children as young as 6 to hook them on its services.

    Facebook’s goal is to “push down the age” of when it’s acceptable for kids to be on social media, says Josh Golin, executive director of the Campaign for a Commercial-Free Childhood. Golin says 11-to-12-year-olds who already have a Facebook account, probably because they lied about their age, might find the animated emojis and GIFs of Messenger Kids “too babyish,” and are unlikely to convert to the new app.

    Facebook launched Messenger Kids for 6-to-12-year-olds in the US Monday, saying it took extraordinary care and precautions. The company said its 100-person team building apps for teens and kids consulted with parent groups, advocates, and childhood-development experts during the 18-month development process, and that the app reflects their concerns. Parents download Messenger Kids on their child’s device and approve the account after verifying their identity by logging into Facebook. Since kids cannot be found in search, parents must initiate and respond to friend requests.

    Facebook says Messenger Kids will not display ads, nor collect data on kids for advertising purposes. Kids’ accounts will not automatically be rolled into Facebook accounts once they turn 13.

    Nonetheless, advocates focused on marketing to children expressed concerns. The company will collect the content of children’s messages, photos they send, what features they use on the app, and information about the device they use. Facebook says it will use this information to improve the app and will share the information “within the family of companies that are part of Facebook,” and outside companies that provide customer support, analysis, and technical infrastructure.

    “It’s all that squishy language that we normally see in privacy policies,” says Golin. “It seems to give Facebook a lot of wiggle room to share this information.” He says Facebook should be clearer about the outsiders with which it may share data.

    In response to questions from WIRED, a spokesperson for Facebook said: “It’s important to remember that Messenger Kids does not have ads and we don’t use the data for advertising. This provision about sharing information with vendors from the privacy policy is for things like providing infrastructure to deliver messages.”

    Kristen Strader, campaign coordinator for the nonprofit group Public Citizen, says Facebook has proven in the past that it cannot be trusted with youth data, pointing to a leaked Facebook report from May that promised advertisers the ability to track teen emotions, such as insecurity, in real time. "Their response was just that they will not do similar experiments in the future," says Strader. At the time, advocacy groups asked for a copy of the report, but Facebook declined.

    Tech companies have made an increasingly aggressive push toward younger users, a strategy that began in earnest in 2015 when Google launched YouTube Kids, which includes advertising. Parents create an account for their child through Google’s Family Link, a product to help parents monitor screentime. Family Link is also used by parents who want to start an account for their kid on Google Home, which gets matched to their child’s voice.

    “There is no way a company can really close its doors to kids anymore,” says Jeffrey Chester, executive director of the Center for Digital Democracy. “By openly commercializing young children’s digital media use, Google has lowered the bar,” he says, pointing to what toy company Mattel described as “an eight-figure deal” that it signed with YouTube in August.

    Chester says services such as YouTube Kids and Messenger Kids are designed to capture the attention, and affinity, of the youngest users. “If they are weaned on Google and Facebook, you have socialized them to use your service when they become an adult,” he says. “On the one hand it’s diabolical and on the other hand it’s how corporations work.”

    In past years, tech companies avoided targeting younger users because of the Children’s Online Privacy Protection Act (COPPA), a law that requires parental permission in order to collect data on children under 13. But, “the weakness of COPPA is that you can do a lot of things if you get parental permission,” says Golin. In the past six months, new apps marketed as parent helpers have launched. “What they’re saying is this is a great way for parents to have control; what they are getting is parental permission,” says Golin.

    Several children-focused nonprofit groups endorsed Facebook’s approach, including ConnectSafely and Family Online Safety Institute (FOSI). Both groups have received funding from Facebook and each has at least one representative on Facebook’s 13-person advisory board for Messenger Kids. The board also includes two representatives from MediaSmarts, which is sponsored by Facebook.

    A Facebook spokesperson says, “We have long-standing relationships with some of these groups and we’ve been transparent about those relationships.” The spokesperson says many backers of Facebook’s approach, including Kristelle Lavallee of the Center on Media and Child Health, and Dr. Kevin Clark of George Mason University’s Center for Digital Media Innovation and Diversity, do not receive support from Facebook.

    UPDATE 3:25 PM: This story has been updated with information about the advisory board for Messenger Kids.

    Read more: https://www.wired.com/story/facebook-for-6-year-olds-welcome-to-messenger-kids/

    Get Rid of Capitalism? Millennials Are Ready to Talk About It

    One of the hottest tickets in New York City this weekend was a discussion on whether to overthrow capitalism.

    The first run of tickets to “Capitalism: A Debate” sold out in a day. So the organizers, a pair of magazines with clear ideological affiliations, the socialist Jacobin and the libertarian Reason, found a larger venue: Cooper Union’s 960-capacity Great Hall, the site of an 1860 antislavery speech by Abraham Lincoln. The event sold out once again, this time in eight hours.

    The crowd waiting in a long line to get inside on Friday night was mostly young and mostly male. Asher Kaplan and Gabriel Gutierrez, both 24, hoped the event would be a real-life version of the humorous, anarchic political debates on social media. “So much of this stuff is a battle that’s waged online,” said Gutierrez, who identifies, along with Kaplan, as a “leftist,” if not quite a socialist.

    These days, among young people, socialism is “both a political identity and a culture,” Kaplan said. And it looks increasingly attractive.

    Young Americans have soured on capitalism. In a Harvard University poll conducted last year, 51 percent of 18-to-29-year-olds in the U.S. said they opposed capitalism; only 42 percent expressed support. Among Americans of all ages, by contrast, a Gallup survey last year found that 60 percent held positive views of capitalism.

    A poll released last month found American millennials closely split on the question of what type of society they would prefer to live in: 44 percent picked a socialist country, 42 percent a capitalist one. The poll, conducted by YouGov and the Victims of Communism Memorial Foundation, found that 59 percent of Americans across all age groups preferred to live under capitalism.

    “I’ve seen the failings of modern-day capitalism,” said Grayson SussmanSquires, an 18-year-old student at Wesleyan University who had turned up for the capitalism debate. To him and many of his peers, he said, the notion of a well-functioning capitalist order is something recounted only by older people. He was 10 when the financial crisis hit, old enough to watch his older siblings struggle to get jobs out of college. In high school, SussmanSquires said, he volunteered for the presidential campaign of Vermont Senator Bernie Sanders, a self-described socialist. “It spoke to me in a way nothing had before,” he said.

    Although debate attendees leaned left, several expressed the desire to have their views challenged by the pro-capitalist side. “It’s very easy to exist in a social group where everyone has the same political vibe,” Kaplan said.

    “I’m immersed in one side of the debate,” said Thomas Doscher, 26, a labor organizer who is studying for his LSATs. “I want to hear the other side.”

    The debate pitted two socialist stalwarts, Jacobin founder Bhaskar Sunkara and New York University professor Vivek Chibber, against the defenders of capitalism, Katherine Mangu-Ward, Reason’s editor in chief, and Nick Gillespie, the editor in chief of Reason.com and Reason TV.

    And it was the attempt to rebuff criticism of capitalism that mostly riled up the crowd.

    Chibber argued that the problem with capitalism is the power it has over workers. With the weakening of U.S. labor unions, “we have a complete despotism of the employers,” he said, leading to stagnant wages. When Mangu-Ward countered that Americans aren’t coerced on the job, the crowd erupted in laughter. “Every morning you wake up and you have a decision about whether or not you’re going to go to work,” she insisted, and the audience laughed again.

    Sunkara summed up his argument for socialism as a society that helped people tackle the necessities of life—food, housing, education, health care, childcare. “Wherever we end up, it won’t be a utopia,” he said. “It will still be a place where you might get your heart broken,” or feel lonely, or get indigestion.

    Mangu-Ward replied: “Capitalism kind of [fixes] those things, actually.” There’s the app Tinder to find dates, and Pepto Bismol to cure your upset stomach. “Those are the gifts of capitalism,” she said.

    The arguments stayed mostly abstract. Sunkara and Chibber insisted their idea of democratic socialism shouldn’t be confused with the communist dictatorships that killed millions of people in the 20th century. Mangu-Ward and Gillespie likewise insisted on defending a capitalist ideal, not the current, corrupt reality. “Neither Nick nor I are fans of big business,” she said. “We’re not fans of crony capitalism.”

    Talking theory left little time to wrestle with concrete problems, such as inequality or climate change. That frustrated Nathaniel Granor, a 31-year-old from Brooklyn who said he was worried about millions of people being put out of work by automation such as driverless vehicles.

    “It didn't touch on what I feel is the heart of the matter,” Granor said. Both capitalism and socialism might ideally be ways to improve the world, he concluded, but both can fall short when applied in the real world. 

      Read more: http://www.bloomberg.com/news/articles/2017-11-06/get-rid-of-capitalism-millennials-are-ready-to-talk-about-it

      Are smartphones really making our children sad?

      US psychologist Jean Twenge, who has claimed that social media is having a malign effect on the young, answers critics who accuse her of crying wolf

      Last week, the children’s commissioner, Anne Longfield, launched a campaign to help parents regulate internet and smartphone use at home. She suggested that the overconsumption of social media was a problem akin to that of junk-food diets. “None of us, as parents, would want our children to eat junk food all the time – double cheeseburger, chips, every day, every meal,” she said. “For those same reasons, we shouldn’t want our children to do the same with their online time.”

      A few days later, former GCHQ spy agency chief Robert Hannigan responded to the campaign. “The assumption that time online or in front of a screen is life wasted needs challenging. It is driven by fear,” he said. “The best thing we can do is to focus less on the time they spend on screens at home and more on the nature of the activity.”

      This exchange is just one more example of how children’s screentime has become an emotive, contested issue. Last December, more than 40 educationalists, psychologists and scientists signed a letter in the Guardian calling for action on children’s screen-based lifestyles. A few days later, another 40-odd academics described the fears as moral panic and said that any guidelines needed to build on evidence rather than scaremongering.

      Faced with these conflicting expert views, how should concerned parents proceed? Into this maelstrom comes the American psychologist Jean Twenge, who has written a book entitled iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy and Completely Unprepared for Adulthood and What That Means for the Rest of Us.

      If the book’s title didn’t make her view clear enough, last weekend an excerpt was published in the American magazine the Atlantic with the emotive headline “Have smartphones destroyed a generation?” It quickly generated differing reactions that were played out on social media – these could be broadly characterised as praise from parents and criticism from scientists. In a phone interview and follow-up emails, Twenge explained her conclusions about the downsides of the connected world for teens, and answered some of her critics.

      The Atlantic excerpt from your book was headlined “Have smartphones destroyed a generation?” Is that an accurate reflection of what you think?
      Well, keep in mind that I didn’t write the headline. It’s obviously much more nuanced than that.

      So why did you write this book?
      I’ve been researching generations for a long time now, since I was an undergraduate, almost 25 years. The databases I draw from are large national surveys of high school and college students, and one of adults. In 2013-14 I started to see some really sudden changes and at first I thought maybe these were just blips, but the trends kept going.

      I’d never seen anything like it in all my years of looking at differences among generations. So I wondered what was going on.

      What were these sudden changes for teens?
      Loneliness and depressive symptoms started to go up, while happiness and life satisfaction started to go down. The other thing that I really noticed was the accelerated decline in seeing friends in person – it falls off a cliff. It’s an absolutely stunning pattern – I’d never seen anything like that. I really started to wonder, what is going on here? What happened around 2011-2012 [the survey data is a year or two behind] that would cause such sudden changes?

      And you concluded these changes were being brought about by increased time spent online?
      The high-school data detailed how much time teens spend online – on social media and games – and I noticed how that correlated with some of these indicators in terms of happiness, depression and so on.

      I was curious not just what the correlations were between these screen activities, mental health and wellbeing, but what were the links with non-screen activities, like spending time with friends in person, playing sports, going to religious services, doing homework, all these other things that teens do?

      And for happiness in particular, the pattern was so stark. Of the non-screen activities that were measured, they all correlated with greater happiness. All the screen activities correlated with lower happiness.

      You’ve called these post-millennials the iGeneration. What are their characteristics?
      I’m defining iGen as those born between 1995 and 2012 – that latter date could change based on future data. I’m reasonably certain about 1995, given the sudden changes in the trends. It also happens that 1995 was the year the internet was commercialised [Amazon launched that year, Yahoo in 1994 and Google in 1996], so if you were born in that year you have not known a time without the internet.

      But the introduction of the smartphone, exemplified by the iPhone, which was launched in 2007, is key?
      There are a lot of differences – some are large, some are subtle, some are sudden and some had been building for a while – but if I had to identify what really characterises them, the first influence is the smartphone.

      iGen is the first generation to spend their entire adolescence with the smartphone. This has led to many ripple effects for their wellbeing, their social interactions and the way they think about the world.

      Psychology professor Jean Twenge. Photograph: Gregory Bull/AP

      Why are you convinced they are unhappy because of social media, rather than it being a case of the unhappy kids being heavier users of social media?
      That is very unlikely to be true because of very good research on that very question. There is one experiment and two longitudinal studies that show the arrow goes from social media to lower wellbeing and not the other way around. For example, in one experiment people who gave up Facebook for a week had better wellbeing than those who had not.

      The other thing to keep in mind is that if you are spending eight hours a day with a screen you have less time to spend interacting with friends and family in person, and we know definitively from decades of research that spending time with other people is one of the keys to emotional wellbeing; if you’re doing that less, that’s a very bad sign.

A professor at Oxford University tweeted that your work is “a non-systematic review of sloppy social science as a tool for lazy intergenerational shaming”. How do you respond?
It is odd to equate documenting teens' mental health issues with intergenerational shaming. I'm not shaming anyone, and the data I analyse is from teens, not older people criticising them.

This comment is especially strange because this researcher's best-known paper, about what he calls the “Goldilocks” theory, shows the same thing I find: lower wellbeing after more hours of screen time. We're basically replicating each other's research across two different countries, which is usually considered a good thing. So I am confused.

      Your arguments also seem to have been drawn on by the conservative right as ammunition for claims that technology is leading to the moral degradation of the young. Are you comfortable about that?
My analyses look at what young people are saying about themselves and how they are feeling, so I don't think this idea that older people love to whine about the young is relevant. I didn't look at what older people have to say about young people. I looked at what young people are saying about their own experiences and their own lives, compared with young people 10, 20 or 30 years ago.

Nor is it fair or accurate to characterise this as youth-bashing. Teens are saying they are suffering, and documenting that should help them, not hurt them. I wrote the book because I wanted to give a voice to iGen and their experiences: through the 11 million who filled out national surveys, the 200-plus who answered open-ended questions for me, and the 23 I talked to for up to two hours. It had absolutely nothing to do with older people and their complaints about youth.

      Many of us have a nagging feeling that social media is bad for our wellbeing, but we all suffer from a fear of missing out.
      Teens feel that very intensely, which is one reason why they are so addicted to their phones. Yet, ironically, the teens who spend more time on social media are actually more likely to report feeling left out.

But is this confined to iGeners? One could go to a child's birthday party where the parents are glued to their smartphones and not talking to each other, too.
      It is important to consider that while this trend also affects adults, it is particularly worrisome for teens because their brain development is ongoing and adolescence is a crucial time for developing social skills.

      You say teens might know the right emoji but in real life might not know the right facial expression.
There is very little research on that question. There is one study that looked at the effects of screens on social skills among 11- to 12-year-olds: half used screens at their normal level, and half went to a five-day screen-free camp.

Those who attended the camp improved their social skills; reading emotions on faces was what they measured. That makes sense: that's the social skill you would expect to suffer if you weren't getting much in-person social interaction.

      So is it up to regulators or parents to improve the situation? Leaving this problem for parents to fix is a big challenge.
Yes, it is. I have three kids, and my oldest is 10, but in her class about half have a phone, so many of them are on social media already. Parents have a tough job, because there are temptations on the screen constantly.

      What advice would you give parents?
Put off getting your child a phone for as long as possible and, when you do, start with one that doesn't have internet access, so they don't have the internet in their pocket all the time.

But when your child says, “But all my friends have got one”, how do you reply?
Maybe with my parents' line: “If your friends all jumped in the lake, would you do it too?” Although at that age the answer is usually yes, which I understand. But you can do social media on a desktop computer for a limited time each day. When we looked at the data, we found that an hour a day of electronic device use doesn't have any negative effects on mental health; two hours a day or more is when you get the problems.

The majority of teens are on screens a lot more than that. So if they want to use Instagram, Snapchat or Facebook to keep up with their friends' activities, they can do that from a desktop computer.

      That sounds hard to enforce.
We need to be more understanding of the effects of smartphones. In many ways, parents are worried about the wrong things: they're worried about their kids driving and going out. They don't worry about their kids sitting by themselves in a room with their phone, and they should.

Lots of social media features, such as notifications or Snapchat's Snapstreak feature, are engineered to keep us glued to our phones. Should these types of features be outlawed?
Oh man. Parents can put an app [such as Kidslox or Screentime] on their kids' phone to limit the amount of time they spend on it. Do that right away. In terms of the bigger solutions, I think that's above my pay grade to figure out.

You've been accused by another psychologist of cherry-picking your data: of ignoring, say, studies that suggest active social media use is associated with positive outcomes such as resilience. Did you collect data to fit a theory?
It's impossible to judge that claim; she does not provide citations to these studies. I found a few studies finding no effects or positive effects, but they were all older, from before smartphones were on the scene. She says that in order to prove smartphones are responsible for these trends we need a large study randomly assigning teens to use smartphones or not. If we wait for that kind of study, we will wait for ever; that type of study is just about impossible to conduct.

She concludes by saying: “My suspicion is that the kids are gonna be OK.” However, it is not OK that 50% more teens suffer from major depression now versus just six years ago, and that three times as many girls aged 12 to 14 take their own lives. It is not OK that more teens say they are lonely and feel hopeless. It is not OK that teens aren't seeing their friends in person as much. If we twiddle our thumbs waiting for the perfect experiment, we are taking a big risk, and I for one am not willing to do that.

Are you expecting anyone from Silicon Valley to say: “How can we help?”
No, but what I think is interesting is that many tech-connected people in Silicon Valley restrict their own children's screen use, so they know. They're living off of it, but they know its effects. It indicates that pointing out the effects of smartphones doesn't make you a luddite.

iGen: Why Today's Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy and Completely Unprepared for Adulthood and What That Means for the Rest of Us by Jean Twenge is published by Simon & Schuster US ($27) on 22 August

      Read more: https://www.theguardian.com/technology/2017/aug/13/are-smartphones-really-making-our-children-sad