Why Zuckerberg's 14-Year Apology Tour Hasn't Fixed Facebook

In 2003, one year before Facebook was founded, a website called Facemash began nonconsensually scraping pictures of students at Harvard from the school’s intranet and asking users to rate their hotness. Obviously, it caused an outcry. The website’s developer quickly proffered an apology. "I hope you understand, this is not how I meant for things to go, and I apologize for any harm done as a result of my neglect to consider how quickly the site would spread and its consequences thereafter,” wrote a young Mark Zuckerberg. “I definitely see how my intentions could be seen in the wrong light.”

In 2004 Zuckerberg cofounded Facebook, which rapidly spread from Harvard to other universities. And in 2006 the young company blindsided its users with the launch of News Feed, which collated and presented in one place information that people had previously had to search for piecemeal. Many users were shocked and alarmed that there was no warning and that there were no privacy controls. Zuckerberg apologized. “This was a big mistake on our part, and I'm sorry for it,” he wrote on Facebook’s blog. "We really messed this one up," he said. "We did a bad job of explaining what the new features were and an even worse job of giving you control of them."

Zeynep Tufekci (@zeynep) is an associate professor at the University of North Carolina and an opinion writer for The New York Times. She recently wrote about the (democracy-poisoning) golden age of free speech.

Then in 2007, Facebook’s Beacon advertising system, which was launched without proper controls or consent, ended up compromising user privacy by making people’s purchases public. Fifty thousand Facebook users signed an e-petition titled “Facebook: Stop invading my privacy.” Zuckerberg responded with an apology: “We simply did a bad job with this release and I apologize for it." He promised to improve. “I'm not proud of the way we've handled this situation and I know we can do better,” he wrote.

By 2008, Zuckerberg had written only four posts on Facebook’s blog: Every single one of them was an apology or an attempt to explain a decision that had upset users.

In 2010, after Facebook violated users' privacy by making key types of information public without proper consent or warning, Zuckerberg again responded with an apology—this time published in an op-ed in The Washington Post. “We just missed the mark,” he said. “We heard the feedback,” he added. “There needs to be a simpler way to control your information.” “In the coming weeks, we will add privacy controls that are much simpler to use,” he promised.

I’m going to run out of space here, so let’s jump to 2018 and skip over all the other mishaps and apologies and promises to do better—oh yeah, and the consent decree that the Federal Trade Commission made Facebook sign in 2011, charging that the company had deceptively promised privacy to its users and then repeatedly broken that promise—in the intervening years.

Last month, Facebook once again garnered widespread attention with a privacy-related backlash when it became widely known that, between 2008 and 2015, it had allowed hundreds, maybe thousands, of apps to scrape voluminous data from Facebook users—not just from the users who had downloaded the apps, but detailed information from all their friends as well. One such app was run by a Cambridge University academic named Aleksandr Kogan, who apparently siphoned up detailed data on up to 87 million users in the United States and then surreptitiously forwarded the loot to the political data firm Cambridge Analytica. The incident caused a lot of turmoil because it connects to the rolling story of distortions in the 2016 US presidential election. But in reality, Kogan's app was just one among many, many apps that amassed a huge amount of information in a way most Facebook users were completely unaware of.

At first Facebook indignantly defended itself, claiming that people had consented to these terms; after all, the disclosures were buried somewhere in the dense language surrounding obscure user privacy controls. People were asking for it, in other words.

But the backlash wouldn't die down. Attempting to respond to the growing outrage, Facebook announced changes. "It's Time to Make Our Privacy Tools Easier to Find," the company announced, without a hint of irony—or any other kind of hint—that Zuckerberg had promised to do just that in the "coming few weeks" eight full years ago. On the company blog, Facebook's chief privacy officer wrote that instead of being "spread across nearly 20 different screens" (why were they ever spread all over the place?), the controls would now finally be in one place.

Zuckerberg again went on an apology tour, giving interviews to The New York Times, CNN, Recode, WIRED, and Vox (but not to the Guardian and Observer reporters who broke the story). In each interview he apologized. “I’m really sorry that this happened,” he told CNN. “This was certainly a breach of trust.”

But Zuckerberg didn’t stop at an apology this time. He also defended Facebook as an “idealistic company” that cares about its users and spoke disparagingly about rival companies that charge users money for their products while maintaining a strong record in protecting user privacy. In his interview with Vox’s Ezra Klein, Zuckerberg said that anyone who believes Apple cares more about users than Facebook does has “Stockholm syndrome”—the phenomenon whereby hostages start sympathizing and identifying with their captors.

This is an interesting argument coming from the CEO of Facebook, a company that essentially holds its users' data hostage. Yes, Apple charges handsomely for its products, but it also includes advanced encryption hardware on all its phones, delivers timely security updates to its whole user base, and has largely locked itself out of user data—to the chagrin of many governments, including that of the United States, and of Facebook itself.

Most Android phones, by contrast, gravely lag behind in receiving security updates, have no specialized encryption hardware, and often handle privacy controls in a way that is detrimental to user interests. Few governments or companies complain about Android phones. After the Cambridge Analytica scandal, it came to light that Facebook had been downloading and keeping all the text messages of its users on the Android platform—their content as well as their metadata. “The users consented!” Facebook again cried out. But people were soon posting screenshots that showed how difficult it was for a mere mortal to discern that’s what was going on, let alone figure out how to opt out, on the vague permission screen that flashed before users.

On Apple phones, however, Facebook couldn’t harvest people’s text messages because the permissions wouldn’t allow it.

In the same interview, Zuckerberg took wide aim at the oft-repeated notion that, if an online service is free, you—the user—are the product. He said that he found the argument that “if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth.” His rebuttal to that accusation, however, was itself glib; and as for whether it was aligned with the truth—well, we just have to take his word for it. “To the dissatisfaction of our sales team here,” he said, “I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business.”

As far as I can tell, not once in his apology tour was Zuckerberg asked what on earth he means when he refers to Facebook's 2 billion-plus users as "a community" or "the Facebook community." A community is a set of people with reciprocal rights, powers, and responsibilities. If Facebook really were a community, Zuckerberg would not be able to make so many statements about unilateral decisions he has made—often, as he boasts in many interviews, in defiance of Facebook's shareholders and various factions of the company's workforce. Zuckerberg's decisions are final, since he controls a majority of the voting power in Facebook, and always will until he decides not to—it's just the way he has structured the company.


Facebook’s 2 billion users are not Facebook’s “community.” They are its user base, and they have been repeatedly carried along by the decisions of the one person who controls the platform. These users have invested time and money in building their social networks on Facebook, yet they have no means to port the connectivity elsewhere. Whenever a serious competitor to Facebook has arisen, the company has quickly copied it (Snapchat) or purchased it (WhatsApp, Instagram), often at a mind-boggling price that only a behemoth with massive cash reserves could afford. Nor do people have any means to completely stop being tracked by Facebook. The surveillance follows them not just on the platform, but elsewhere on the internet—some of them apparently can’t even text their friends without Facebook trying to snoop in on the conversation. Facebook doesn’t just collect data itself; it has purchased external data from data brokers; it creates “shadow profiles” of nonusers and is now attempting to match offline data to its online profiles.

Again, this isn’t a community; this is a regime of one-sided, highly profitable surveillance, carried out on a scale that has made Facebook one of the largest companies in the world by market capitalization.

There is no way to interpret Facebook's privacy-invading moves over the years (even if it is, finally, "time to simplify"!) as anything other than decisions driven by a combination of self-serving impulses: namely, profit motives, the structural incentives inherent to the company's business model, and the one-sided ideology of its founders and some executives. All these are forces over which the users themselves have little input, aside from the regular opportunity to grouse through repeated scandals. And even the ideology, a vague philosophy that purports to prize openness and connectivity while having little to say about privacy and other values, is one that does not seem to apply to the people who run Facebook or work for it. Zuckerberg buys the houses surrounding his own and tapes over his computer's camera to preserve his privacy, and company employees were up in arms when a controversial internal memo arguing for growth at all costs was recently leaked to the press: a nonconsensual, surprising, and uncomfortable disclosure of the kind that Facebook has routinely imposed upon its billions of users over the years.

This isn’t to say Facebook doesn’t provide real value to its users, even as it locks them in through network effects and by crushing, buying, and copying its competition. I wrote a whole book in which I document, among other things, how useful Facebook has been to anticensorship efforts around the world. It doesn’t even mean that Facebook executives make all decisions merely to increase the company valuation or profit, or that they don’t care about users. But multiple things can be true at the same time; all of this is quite complicated. And fundamentally, Facebook’s business model and reckless mode of operating are a giant dagger threatening the health and well-being of the public sphere and the privacy of its users in many countries.

So, here’s the thing. There is indeed a case of Stockholm syndrome here. There are very few other contexts in which a person would be allowed to make a series of decisions that have obviously enriched them while eroding the privacy and well-being of billions of people; to make basically the same apology for those decisions countless times over the space of just 14 years; and then to profess innocence, idealism, and complete independence from the obvious structural incentives that have shaped the whole process. This should ordinarily cause all the other educated, literate, and smart people in the room to break into howls of protest or laughter. Or maybe tears.

Facebook has tens of thousands of employees, and reportedly an open culture with strong internal forums. Insiders often talk of how free employees feel to speak up, and indeed I’ve repeatedly been told how they are encouraged to disagree and discuss all the key issues. Facebook has an educated workforce.

By now, it ought to be plain to them, and to everyone, that Facebook’s 2 billion-plus users are surveilled and profiled, that their attention is then sold to advertisers and, it seems, practically anyone else who will pay Facebook—including unsavory dictators like the Philippines’ Rodrigo Duterte. That is Facebook’s business model. That is why the company has an almost half-a-trillion-dollar market capitalization, along with billions in spare cash to buy competitors.

These are such readily apparent facts that any denial of them is quite astounding.

And yet, it appears that nobody around Facebook's sovereign and singular ruler has managed to convince their leader that these are blindingly obvious truths whose acceptance may well provide us with some hints of a healthier way forward. That the repeated use of the word "community" to refer to Facebook's users is not appropriate and is, in fact, misleading. That the constant repetition of "sorry" and "we meant well" and "we will fix it this time!" to refer to what is basically the same betrayal over 14 years should no longer be accepted as a promise to do better, but should instead be seen as but one symptom of a profound crisis of accountability. When a large chorus of people outside the company raises alarms on a regular basis, it's not a sufficient explanation to say, "Oh, we were blindsided (again)."

Maybe, just maybe, that is the case of Stockholm syndrome we should be focusing on.

Zuckerberg’s outright denial that Facebook’s business interests play a powerful role in shaping its behavior doesn’t bode well for Facebook’s chances of doing better in the future. I don’t doubt that the company has, on occasion, held itself back from bad behavior. That doesn’t make Facebook that exceptional, nor does it excuse its existing choices, nor does it alter the fact that its business model is fundamentally driving its actions.

At a minimum, Facebook has long needed an ombudsman's office with real teeth and power: an institution within the company that can act as a check on its worst impulses and protect its users. And it needs a lot more employees whose task is to keep the platform healthier. But what would truly be disruptive and innovative would be for Facebook to alter its business model. Such a change could come from within, or it could be driven by regulations on data retention and opaque, surveillance-based targeting—regulations that would make such practices less profitable or even forbidden.

Facebook will respond to the latest crisis by keeping more of its data within its own walls (of course, that fits well with the business of charging third parties for access to users based on extensive profiling with data held by Facebook, so this is no sacrifice). Sure, it’s good that Facebook is now promising not to leak user data to unscrupulous third parties; but it should finally allow truly independent researchers better (and secure, not reckless) access to the company’s data in order to investigate the true effects of the platform. Thus far, Facebook has not cooperated with independent researchers who want to study it. Such investigation would be essential to informing the kind of political discussion we need to have about the trade-offs inherent in how Facebook, and indeed all of social media, operate.

Even without that independent investigation, one thing is clear: Facebook’s sole sovereign is neither equipped to, nor should he be in a position to, make all these decisions by himself, and Facebook’s long reign of unaccountability should end.



Read more: https://www.wired.com/story/why-zuckerberg-15-year-apology-tour-hasnt-fixed-facebook/

A Hurricane Flattens Facebook

Two weeks ago, Facebook learned that The New York Times, Guardian, and Observer were working on blockbuster stories based on interviews with a man named Christopher Wylie. The core of the tale was familiar but the details were new, and now the scandal was attached to a charismatic face with a top of pink hair. Four years ago, a slug of Facebook data on 50 million Americans was sucked down by a UK academic named Aleksandr Kogan, and wrongly sold to Cambridge Analytica. Wylie, who worked at the firm and has never talked publicly before, showed the newspapers a trove of emails and invoices to prove his allegations. Worse, Cambridge appears to have lied to Facebook about entirely deleting the data.

To Facebook, before the stories went live, the scandal appeared bad but manageable. The worst deeds had been done outside of Facebook and long ago. Plus, like weather forecasters in the Caribbean, Facebook has been busy lately. Just in the past month, they’ve had to deal with scandals created by vacuous Friday tweets from an ad executive, porn, the darn Russian bots, angry politicians in Sri Lanka, and even the United Nations. All of those crises have passed with limited damage. And perhaps that’s why the company appears to have underestimated the power of the storm clouds moving in.


On Friday night, the company made its first move, jumping out in front of the news reports to publish its own blog post announcing that it was suspending Cambridge Analytica’s use of the platform. It also made one last stern appeal to ask The Guardian not to use the word “breach” in its story. The word, the company argued, was inaccurate. Data had been misused, but moats and walls had not been breached. The Guardian apparently did not find that argument sympathetic or persuasive. On Saturday its story appeared, “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach.”

The crisis was familiar in a way: Facebook has burned its fingers on issues of data privacy frequently in its 14-year history. But this time it was different. The data leakage hadn't helped Unilever sell mayonnaise. It appeared to have helped Donald Trump sell a political vision of division and antipathy. The news made it look as if Facebook's data controls were lax and its executives indifferent. Around the world, lawmakers, regulators, and Facebook users began asking very publicly how they could support a platform that didn't do more to protect them. Soon, powerful politicians were chiming in and demanding to hear from Zuckerberg.

As the storm built over the weekend, Facebook’s executives, including Mark Zuckerberg and Sheryl Sandberg, strategized and argued late into the night. They knew that the public was hammering them, but they also believed that the fault lay much more with Cambridge Analytica than with them. Still, there were four main questions that consumed them. How could they tighten up the system to make sure this didn’t happen again? What should they do about all the calls for Zuckerberg to testify? Should they sue Cambridge Analytica? And what could they do about psychologist Joseph Chancellor, who had helped found Kogan’s firm and who now worked, of all places, at Facebook?

By Monday, Facebook remained frozen, and Zuckerberg and Sandberg stayed silent. Then, late in the afternoon in Menlo Park, more bad news came. The New York Times reported that Alex Stamos, the company's well-respected chief of security, had grown dissatisfied with senior management and was planning to exit in a few months. Some people had known this for a while, but it was still a very bad look. You don't want news about your head of data security bailing when you're having a crisis about how to secure your data. And then news broke that Facebook had been denied in its efforts to get access to Cambridge Analytica's servers. The United Kingdom's Information Commissioner's Office, which had started an investigation, would handle that.

A company-wide Q&A was called for Tuesday but for some reason it was led by Facebook’s legal counsel, not its leaders, both of whom have remained deafeningly silent and both of whom reportedly skipped the session. Meanwhile, the stock had collapsed, chopping $36 billion off the company’s market value on Monday. By mid-Tuesday morning, it had fallen 10 percent since the scandal broke. What the company expected to be a tough summer storm had turned into a Category 5 hurricane.

Walking in the Front Door

The story of how Kogan ended up with data on 50 million American Facebook users sounds like it should involve secret handshakes and black hats. But Kogan actually got his Facebook data by just walking in Facebook’s front door and asking for it. Like all technology platforms, Facebook encourages outside software developers to build applications to run inside it, just like Google does with its Android operating system and Apple does with iOS. And so in November 2013 Kogan, a psychology professor at the University of Cambridge, created an application developer account on Facebook and explained why he wanted access to Facebook’s data for a research project. He started work soon thereafter.

Kogan had created the most anodyne of tools for electoral manipulation: an app based on personality quizzes. Users signed up and answered a series of questions. Then the app would take those answers, mush them together with that person’s Facebook likes and declared interests, and spit out a profile that was supposed to know the test-taker better than he knew himself.

About 270,000 Americans participated. However, what they didn't know was that by agreeing to take the quiz and giving the app access to their Facebook data, they were also granting it access to many of their Facebook friends' likes and interests. Users could turn off this setting, but it's hard to turn off something you don't know exists and that you couldn't find if you did. Kogan quickly ended up with data on roughly 50 million people.
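
To make the mechanism concrete, here is a minimal sketch, in Python, of roughly how a quiz app of that era could have pulled in friends' data. It assumes the long-retired pre-2015 version of Facebook's Graph API and friend-level permissions of the friends_likes variety; the endpoints and the harvest() helper are illustrative assumptions, not a reconstruction of Kogan's actual code.

    import requests

    GRAPH = "https://graph.facebook.com"  # pre-2015 Graph API base URL (illustrative)

    def harvest(user_token):
        """Sketch of how one quiz taker's access token exposed friends' data."""
        # 1. The quiz taker's own profile and page likes.
        me = requests.get(f"{GRAPH}/me",
                          params={"access_token": user_token}).json()
        my_likes = requests.get(f"{GRAPH}/me/likes",
                                params={"access_token": user_token}).json()

        # 2. The quiz taker's friend list, granted by the same consent screen.
        friends = requests.get(f"{GRAPH}/me/friends",
                               params={"access_token": user_token}).json()

        # 3. Each friend's likes, even though those friends never saw the app;
        #    this is what friend-level permissions allowed at the time.
        friend_likes = {}
        for friend in friends.get("data", []):
            resp = requests.get(f"{GRAPH}/{friend['id']}/likes",
                                params={"access_token": user_token})
            friend_likes[friend["id"]] = resp.json().get("data", [])

        return me, my_likes, friend_likes

One participant with a few hundred friends therefore yielded a few hundred profiles, which is how 270,000 quiz takers could scale to data on tens of millions of people.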

About five months after Kogan began his research, Facebook announced that it was tightening its app review policies. For one: Developers couldn’t mine data from your friends anymore. The barn door was shut, but Facebook told all the horses already in the pasture that they had another year to run around. Kogan, then, got a year and a half to do his business. And when the stricter policies went into effect, Facebook promptly rejected version two of his app.

By then Kogan had already mined the data and sold it to Cambridge Analytica, violating his agreement with Facebook and revealing one of the strange asymmetries of this story. Facebook knows everything about its users—but in some ways it knows nothing about its developers. And so Facebook didn’t start to suspect that Kogan had misused its data until it read a blaring headline in The Guardian in December 2015: “Ted Cruz using firm that harvested data on millions of unwitting Facebook users.”

That story passed out of the cycle quickly though, swept away by news about the Iowa caucuses. And so while Facebook’s legal team might have been sweating at the end of 2015, outwardly Zuckerberg projected an air of total calm. His first public statement after the Guardian story broke was a Christmas note about all the books he’d read: “Reading has given me more perspective on a number of topics – from science to religion, from poverty to prosperity, from health to energy to social justice, from political philosophy to foreign policy, and from history to futuristic fiction.”

An Incomplete Response

When the 2015 Guardian story broke, Facebook immediately secured written assertions from Cambridge Analytica, Kogan, and Christopher Wylie that the data had been deleted. Lawyers on all sides started talking, and by the early summer of 2016 Facebook had more substantial legal agreements with Kogan and Wylie certifying that the data had been deleted. Cambridge Analytica signed similar documents, but their paperwork wasn’t submitted until 2017. Facebook’s lawyers describe it as a tortured and intense legal process. Wylie describes it as a pinkie promise. “All they asked me to do was tick a box on a form and post it back,” he told the Guardian.

Facebook’s stronger option would have been to insist on an audit of all of Cambridge Analytica’s machines. Did the data still exist, and had it been used at all? And in fact, according to the standard rules that developers agree to, Facebook reserves that right. “We can audit your app to ensure it is safe and does not violate our Terms. If requested, you must provide us with proof that your app complies with our terms,” the policy currently states, as it did then.

Kogan, too, may have merited closer scrutiny regardless, especially in the context of the 2016 presidential campaign. In addition to his University of Cambridge appointment, Kogan was also an associate professor at St. Petersburg State University, and had accepted research grants from the Russian government.

Why didn't Facebook conduct an audit—a decision that may go down as Facebook's most crucial mistake? Perhaps because no audit can ever be completely persuasive. Even if no trace of data exists on a server, it could still have been stuck on a hard drive and shoved in a closet. Facebook's legal team also insists that an audit would have been time-consuming and would have required a court order, even though the developer contract allows for one. A third possible explanation is fear of accusations of political bias. Most of the senior employees at Facebook are Democrats who blanch at allegations that they would let politics seep into the platform.

Whatever the reason, Facebook trusted the signed documents from Cambridge Analytica. In June 2016, Facebook staff even went down to San Antonio to sit with Trump campaign officials, with the Cambridge Analytica consultants at their side.

To Facebook, the story seemed to go away. In the year following Trump's victory, public interest advocates hammered Cambridge Analytica over its data practices, and other publications, particularly The Intercept, dug into its practices. But Facebook, according to executives at the company, never thought to double-check whether the data was gone until reporters began to call this winter. And it was only after the story broke that Facebook considered serious action, including suing Cambridge Analytica. A lawyer for the company, Paul Grewal, told WIRED on Monday evening that "all options are on the table."

What Comes Next

Of Facebook's many problems, one of the most confusing appears to be figuring out what to do with Chancellor, who currently works with the VR team. He may know about the fate of the user data, but this weekend the company was debating how forcefully it could ask him, since pressing him could be considered a violation of rules protecting employees from being forced to give up trade secrets from previous jobs.

A harder question is when, and how exactly, Zuckerberg and Sandberg should emerge from their bunkers. Sandberg, in particular, has passed through the crucible of the past two years relatively unscathed. Zuckerberg’s name now trends on Twitter when crises hit, and this magazine put his bruised face on the cover. Even Stamos has taken heat during the outcry over the Russia investigation. And a small bevy of brave employees have waded out into the rushing rivers of Twitter, where they have generally been sucked below the surface or swept over waterfalls.


The last, and most vexing, question is what to do to make Facebook data safer. For much of the past year, Facebook has been besieged by critics saying that it should make its data more open. It should let outsiders audit its data and peer around inside with a flashlight. But it was an excess of openness with developers—and opaque privacy practices—that got the company in trouble here. Facebook tightened up third-party access in 2015, meaning an exact replay of the Cambridge Analytica fiasco couldn't happen today. But if the company decides to close down even further, then what happens to the researchers doing genuinely important work using the platform? How well can you vet intentions? A possible solution would be for Facebook to change its data retention policies. But doing so could undermine how the service fundamentally works, and make it far more difficult to catch malevolent actors—like Russian propaganda teams—after the fact.

User data is now the foundation of the internet. Every time you download an app, you give the developer access to bits of your personal information. Every time you engage with any technology company—Facebook, Google, Amazon, and so on—you help build their giant database of information. In exchange, you trust that they won’t do bad things with that data, because you want the services they offer.

Responding to a thread about how to fix the problem, Stamos tweeted, “I don’t think a digital utopia where everybody has privacy, anonymity and choice, but the bad guys are magically kept out, can exist.”

At its core, according to a former Facebook executive, the problem is really an existential one. The company is very good at dealing with things that happen frequently and have very low stakes. When mistakes happen, they move on. According to the executive, the philosophy of the company has long been “We’re trying to do good things. We’ll make mistakes. But people are good and the world is forgiving.”

If Facebook doesn't find a satisfactory solution, it faces the unsavory prospect of heavy regulation. Already in Europe, the General Data Protection Regulation will give people much more insight into and control over what data companies like Facebook take, and how it's used. In the US, senators like Ron Wyden, Mark Warner, Amy Klobuchar, and others may have the appetite for similar legislation, if Facebook's privacy woes continue.

Facebook will hold its all-hands today, and hope for that inevitable moment when something horrible happens elsewhere and everyone’s attention turns. But it also knows that things might get worse, much worse. The nightmare scenario will come if the Cambridge Analytica story fully converges with the story of Russian meddling in American democracy: if it turns out that the Facebook data harvested by Cambridge Analytica ended up in the hands of Putin’s trolls.

At that point, Facebook will have to deal with yet another devastating asymmetry: data from a silly quiz app, created under obsolete rules, fueling a national security crisis. But those asymmetries are just part of the nature of Facebook today. The company has immense power, and it’s only begun to grapple with its immense responsibility. And the world isn’t as forgiving of Silicon Valley as it used to be.


This story has been updated to include further details about Tuesday's company-wide meeting.

Read more: https://www.wired.com/story/facebook-cambridge-analytica-response/

Facebook is more dangerous than ever. You can make it stop

Cambridge Analytica's chief executive officer Alexander Nix. His firm recently found itself in the spotlight for misrepresenting itself and harvesting data from millions of Facebook users to aid the Trump campaign. Image: AFP/Getty Images

Remember the Marlboro Man? He was a sexy vision of the American west, created by a cigarette corporation to sell a fatal product. People knew this and used that product anyway, at great detriment to themselves and those around them who quietly inhaled toxic secondhand smoke, day into long night.

An agreement between states and tobacco companies banished the rugged cowboy at the end of the 1990s, but the symbol is useful even 20 years later as we contend with a less deadly but no less frightening corporate force. Social networks that many of us signed up for in simpler times — a proverbial first smoke — have become gargantuan archives of our personal data. Now, that data is collected and leveraged by bad actors in an attempt to manipulate you and your friends. 

The time for ignorance is over. We need social responsibility to counterbalance a bad product. The public learned in alarming detail this weekend how a Trump-aligned firm called Cambridge Analytica managed to collect data on 50 million people using Facebook. All, as the Guardian put it, to “predict and influence choices at the ballot box.” Individuals who opted into Cambridge Analytica’s service — which was disguised as a personality quiz on Facebook — made their friends vulnerable to this manipulation, as well.

There were better days on the social network. When you signed up for Facebook, it's likely because it was an alluring way for you to connect with old friends and share pictures. You hadn't ever imagined "Russian trolls" or "fake news" or, lord knows, "Cambridge Analytica." Chances are, you signed up before 2016, the year Wired recently declared the start of the social network's "two years of hell," thanks in no small part to reporting efforts from current Mashable staffer Michael Nuñez.

By then, the vast majority of Facebook's 239 million monthly users in America had registered and had likely built an entire virtual life of friends and photos and status updates that were primed to be harvested by forces they couldn't yet see or understand. Unlike those who continued smoking after the Marlboro Man arrived (two years after a seminal 1952 article in Reader's Digest explained the dangers of cigarettes to the broad American public), these Facebook users lit up before they knew the cancer was coming.

Running with a health metaphor, Wired's "two years of hell" feature was promoted with a photo illustration by Jake Rowland that depicted a bloodied and bruised Mark Zuckerberg.

Image: photo illustration by Jake Rowland/Esto, courtesy Condé Nast.

Zuckerberg may have been assaulted from all sides, but we — his users — took more of a licking than he did.

That's because Facebook's past two years have been all about ethical and technological crises that hurt users most of all. A favorite editor of mine hated that word, "users," because it made it sound as though we were talking about something other than people. I can agree with that, but also see now that "users" is the word of the moment: Facebook's problems extend forever out of the idea that we are all different clumps of data generation. Human life is incidental.


The photos you post are interpreted by Facebook’s programs to automatically recognize your face; the interests you communicate via text are collated and cross-examined by algorithms to serve you advertising. Our virtual social connections enrich this marketing web and make advertisers more powerful.

And many of us open the app to scroll without really knowing why. Facebook literally presents us with a “feed.” We are users the way drug addicts are users, and we’re used like a focus group is used to vet shades of red in a new can of Coca-Cola.

None of this has been secret for some time. Braver, more fed up, or perhaps more responsible users have deactivated their Facebook accounts before. But any change they made was on the basis of their experience as individuals. New revelations demand we think more in terms of our online societies.

I have exactly 1,000 Facebook friends, and about 10 actual, best friends I see on a regular basis. It wouldn’t have occurred to me to care much about those other 990 Facebook friends until revelations from the Cambridge Analytica scandal. We have to admit now that the choices we make on Facebook can directly impact others.

The social network’s policies have changed since Cambridge Analytica’s 2016 operation. But Facebook’s business model — gather data on people and profit from that data — hasn’t. We cannot expect it to. But a reasonable person would anticipate it’s only a matter of time until the next major ethical breach is revealed to the public.

We know from bad faith campaigns surrounding Brexit and the 2016 U.S. election that individual users are extremely susceptible to viral disinformation. But until now, it was less clear how Facebook’s own tools could be used by third parties to manipulate an entire network of friends in an attempt to manipulate voter behavior.

Your irresponsibility on Facebook can impact a lot of people. A link you share can catch on and influence minds even if it’s totally falsified; more to this immediate concern, a stupid quiz you take could have opened your friends’ information up in a way they’d never have expected.

You could throw the pack away and deactivate your Facebook account altogether. It will get harder the longer you wait — the more photos you post there, or apps you connect to it.

Or you could be judicious about what you post and share, and what apps you allow access to your account. There are easy ways to review this.

But just remember: There’s no precedent for a social network of this size. We can’t guess what catastrophe it sets off next. Will a policy change someday mean it’s open season on your data, even if that data has limited protections in the here and now? 

Be smart: It’s not just you, or me, out there alone.

Read more: https://mashable.com/2018/03/19/protect-yourself-and-your-friends-from-facebook/

I made Steve Bannon's psychological warfare tool: meet the data war whistleblower

Christopher Wylie goes on the record to discuss his role in hijacking the profiles of millions of Facebook users in order to target the US electorate

The first time I met Christopher Wylie, he didn't yet have pink hair. That comes later. As does his mission to rewind time. To put the genie back in the bottle.

By the time I met him in person, I'd already been talking to him on a daily basis for hours at a time. On the phone, he was clever, funny, bitchy, profound, intellectually ravenous, compelling. A master storyteller. A politicker. A data science nerd.

Video: Cambridge Analytica whistleblower: ‘We spent $1m harvesting millions of Facebook profiles’

Two months later, when he arrived in London from Canada, he was all those things in the flesh. And yet the flesh was impossibly young. He was 27 then (he's 28 now), a fact that has always seemed glaringly at odds with what he has done. He may have played a pivotal role in the momentous political upheavals of 2016. At the very least, he played a consequential role. At 24, he came up with an idea that led to the foundation of a company called Cambridge Analytica, a data analytics firm that went on to claim a major role in the Leave campaign for Britain's EU membership referendum, and later became a key figure in digital operations during Donald Trump's election campaign.

Or, as Wylie describes it, he was the gay Canadian vegan who somehow ended up creating "Steve Bannon's psychological warfare mindfuck tool."

In 2014, Steve Bannon, then executive chairman of the alt-right news network Breitbart, was Wylie's boss. And Robert Mercer, the secretive US hedge-fund billionaire and Republican donor, was Cambridge Analytica's investor. And the idea they bought into was to bring big data and social media to an established military methodology, information operations, then turn it on the US electorate.

It was Wylie who came up with that idea and oversaw its realisation. And it was Wylie who, last spring, became my source. In May 2017, I wrote an article headlined "The great British Brexit robbery", which set out a skein of threads that linked Brexit to Trump to Russia. Wylie was one of a handful of individuals who provided the evidence behind it. I found him, via another Cambridge Analytica ex-employee, lying low in Canada: guilty, brooding, indignant, confused. "I haven't talked about this to anyone," he said at the time. And then he couldn't stop talking.


By that time, Steve Bannon had become Trump's chief strategist. Cambridge Analytica's parent company, SCL, had won contracts with the US State Department and was pitching to the Pentagon, and Wylie was genuinely freaked out. "It's insane," he told me one night. "The company has created psychological profiles of 230 million Americans. And now they want to work with the Pentagon? It's like Nixon on steroids."

He ended up showing me a tranche of documents that laid out the secret workings behind Cambridge Analytica. And in the months following publication of my article in May, it was revealed that the company had reached out to WikiLeaks to help distribute Hillary Clinton's stolen emails in 2016. And then we watched as it became a subject of special counsel Robert Mueller's investigation into possible Russian collusion in the US election.

The Observer also received the first of three letters from Cambridge Analytica threatening to sue Guardian News and Media for defamation. We are still only just starting to understand the maelstrom of forces that came together to create the conditions for what Mueller confirmed last month was information warfare. But Wylie offers a unique, worm's-eye view of the events of 2016. Of how Facebook was hijacked, repurposed to become a theatre of war: how it became a launchpad for what seems to be an extraordinary attack on the US's democratic process.

Wylie oversaw what may have been the first critical breach. Aged 24, while studying for a PhD in fashion trend forecasting, he came up with a plan to harvest the Facebook profiles of millions of people in the US, and to use their private and personal information to create sophisticated psychological and political profiles. And then target them with political ads designed to work on their particular psychological makeup.

"We broke Facebook," he says.

And he did it on behalf of his new boss, Steve Bannon.

"Is it fair to say you hacked Facebook?" I ask him one night.

He hesitates. "I'll point out that I assumed it was entirely legal and above board."

Last month, Facebook's UK director of policy, Simon Milner, told British MPs on a select committee inquiry into fake news, chaired by Conservative MP Damian Collins, that Cambridge Analytica did not have Facebook data. The official Hansard extract reads:

Christian Matheson (MP for Chester): Have you ever passed any user information over to Cambridge Analytica or any of its associated companies?

Simon Milner: No.

Matheson: But they do hold a large chunk of Facebook's user data, don't they?

Milner: No. They may have lots of data, but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.

Alexander Nix, Cambridge Analytica CEO. Photograph: The Washington Post/Getty Images

Two weeks later, on 27 February, as part of the same parliamentary inquiry, Rebecca Pow, MP for Taunton Deane, asked Cambridge Analytica's CEO, Alexander Nix: "Does any of the data come from Facebook?" Nix replied: "We do not work with Facebook data and we do not have Facebook data."

And through it all, Wylie and I, plus a handful of editors and a small, international group of academics and researchers, have known that, at least in 2014, that certainly wasn't the case, because Wylie has the paper trail. In our first phone call, he told me he had the receipts, invoices, emails, legal letters: records that showed how, between June and August 2014, the profiles of more than 50 million Facebook users had been harvested. Most damning of all, he had a letter from Facebook's own lawyers admitting that Cambridge Analytica had acquired the data illegitimately.

Going public involves an enormous amount of risk. Wylie is breaking a non-disclosure agreement and risks being sued. He is breaking the confidence of Steve Bannon and Robert Mercer.

It's taken a rollercoaster of a year to help get Wylie to a place where it's possible for him to finally come forward. A year in which Cambridge Analytica has been the subject of investigations on both sides of the Atlantic: Robert Mueller's in the US, and separate inquiries by the Electoral Commission and the Information Commissioner's Office in the UK, both triggered in February 2017, after the Observer's first article in this investigation.

It has been a year, too, in which Wylie has been trying his best to rewind, to undo events that he set in motion. Earlier this month, he submitted a dossier of evidence to the Information Commissioner's Office and the National Crime Agency's cybercrime unit. He is now in a position to go on the record: the data nerd who came in from the cold.

There are many points where this story could begin. One is in 2012, when Wylie was 21 and working for the Liberal Democrats in the UK, then in government as junior coalition partners. His career trajectory has been, like most aspects of his life so far, extraordinary, preposterous, implausible.

Profile

Cambridge Analytica: the key players

Alexander Nix, CEO

An Old Etonian with a degree from Manchester University, Nix, 42, worked as a financial analyst in Mexico and the UK before joining SCL, a strategic communications firm, in 2003. From 2007 he took over the company's elections division, and claims to have worked on 260 campaigns globally. He set up Cambridge Analytica to work in America, with investment from Robert Mercer.

Aleksandr Kogan, data miner

Aleksandr Kogan was born in Moldova and lived in Moscow until the age of seven, then moved with his family to the US, where he became a naturalised citizen. He studied at the University of California, Berkeley, and got his PhD at the University of Hong Kong before joining Cambridge as a lecturer in psychology and expert in social media psychometrics. He set up Global Science Research (GSR) to carry out CA's data research. While at Cambridge he accepted a position at St Petersburg State University, and also took Russian government grants for research. He changed his name to Spectre when he married, but later reverted to Kogan.

Steve Bannon, former board member

A former investment banker turned alt-right media svengali, Steve Bannon was boss at website Breitbart when he met Christopher Wylie and Nix and advised Robert Mercer to invest in political data research by setting up CA. In August 2016 he became Donald Trump's campaign CEO. Bannon encouraged the reality TV star to embrace the populist, economic nationalist agenda that would carry him into the White House. That earned Bannon the post of chief strategist to the president and for a while he was arguably the second most powerful man in America. By August 2017 his relationship with Trump had soured and he was out.

Robert Mercer, investor

Robert Mercer, 71, is a computer scientist and hedge fund billionaire, who used his fortune to become one of the most influential men in US politics as a top Republican donor. An AI expert, he made a fortune with quantitative trading pioneers Renaissance Technologies, then built a $60m war chest to back conservative causes by using an offshore investment vehicle to avoid US tax.

Rebekah Mercer, investor

Rebekah Mercer has a maths degree from Stanford, and worked as a trader, but her influence comes primarily from her father's billions. The fortysomething, the second of Mercer's three daughters, heads up the family foundation which channels money to rightwing groups. The conservative megadonors backed Breitbart, Bannon and, most influentially, poured millions into Trump's presidential campaign.

Wylie grew up in British Columbia and as a teenager he was diagnosed with ADHD and dyslexia. He left school at 16 without a single qualification. Yet at 17, he was working in the office of the leader of the Canadian opposition; at 18, he went to learn all things data from Obama's national director of targeting, which he then introduced to Canada for the Liberal party. At 19, he taught himself to code, and in 2010, age 20, he came to London to study law at the London School of Economics.

"Politics is like the mob, though," he says. "You never really leave. I got a call from the Lib Dems. They wanted to upgrade their databases and voter targeting. So, I combined working for them with studying for my degree."

Politics is also where he feels most comfortable. He hated school, but as an intern in the Canadian parliament he discovered a world where he could talk to adults and they would listen. He was the kid who did the internet stuff and within a year he was working for the leader of the opposition.

"He's one of the brightest people you will ever meet," a senior politician who's known Wylie since he was 20 told me. "Sometimes that's a blessing and sometimes a curse."

Meanwhile, at Cambridge University's Psychometrics Centre, two psychologists, Michal Kosinski and David Stillwell, were experimenting with a way of studying personality by quantifying it.

Starting in 2007, Stillwell, while a student, had devised various apps for Facebook, one of which, a personality quiz called myPersonality, had gone viral. Users were scored on the "big five" personality traits (Openness, Conscientiousness, Extroversion, Agreeableness and Neuroticism) and, in exchange, 40% of them consented to give him access to their Facebook profiles. Suddenly, there was a way of measuring personality traits across the population and correlating scores against Facebook likes across millions of people.
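
To give a rough sense of what "correlating scores against Facebook likes" can mean in practice, here is a toy sketch in Python. It fabricates random data and fits a simple ridge regression so that likes alone predict a quiz-derived Openness score; the data, the model choice, and the scale are all placeholder assumptions, far simpler than the actual myPersonality work.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_users, n_pages = 5000, 300

    # Binary user-by-page matrix: 1 means the user liked that page (random here).
    likes = rng.integers(0, 2, size=(n_users, n_pages))
    # Quiz-derived Openness score for each user (random here, so the fit is weak).
    openness = rng.normal(0.0, 1.0, size=n_users)

    X_train, X_test, y_train, y_test = train_test_split(
        likes, openness, test_size=0.2, random_state=0)

    model = Ridge(alpha=1.0).fit(X_train, y_train)
    print("R^2 on held-out users:", model.score(X_test, y_test))

    # The largest coefficients flag the likes most associated with the trait,
    # the kind of odd pattern ("Operation KitKat") described below.
    print("Columns most associated with Openness:", np.argsort(model.coef_)[-5:])

With real quiz scores and real likes, the same recipe is what turns a viral personality test into a population-scale psychometric model.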

Examples, above and below, of the visual messages trialled by GSR's online profiling test. Respondents were asked: "How important should this message be to all Americans?"

The research was original, groundbreaking and had obvious possibilities. "They had a lot of approaches from the security services," a member of the centre told me. "There was one called You Are What You Like and it was demonstrated to the intelligence services. And it showed these odd patterns; that, for example, people who liked 'I hate Israel' on Facebook also tended to like Nike shoes and KitKats."

"There are agencies that fund research on behalf of the intelligence services. And they were all over this research. That one was nicknamed Operation KitKat."

The defence and military establishment were the first to see the potential of the research. Boeing, a major US defence contractor, funded Kosinski's PhD, and Darpa, the US government's secretive Defense Advanced Research Projects Agency, is cited in at least two academic papers supporting Kosinski's work.

But when, in 2013, the first major paper was published, others saw this potential too, including Wylie. He had finished his degree and had started his PhD in fashion forecasting, and was thinking about the Lib Dems. It is fair to say that he didn't have a clue what he was walking into.


"I wanted to know why the Lib Dems sucked at winning elections when they used to run the country up to the end of the 19th century," Wylie explains. "And I began looking at consumer and demographic data to see what united Lib Dem voters, because apart from bits of Wales and the Shetlands it's weird, disparate regions. And what I found is there were no strong correlations. There was no signal in the data."

"And then I came across a paper about how personality traits could be a precursor to political behaviour, and it suddenly made sense. Liberalism is correlated with high openness and low conscientiousness, and when you think of Lib Dems they're absent-minded professors and hippies. They're the early adopters; they're highly open to new ideas. And it just clicked all of a sudden."

Here was a way for the party to identify potential new voters. The only problem was that the Lib Dems weren't interested.

"I did this presentation at which I told them they would lose half their 57 seats, and they were like: 'Why are you so pessimistic?' They actually lost all but eight of their seats, FYI."

Another Lib Dem connection introduced Wylie to a company called SCL Group, one of whose subsidiaries, SCL Elections, would go on to create Cambridge Analytica (an incorporated venture between SCL Elections and Robert Mercer, funded by the latter). For all intents and purposes, SCL/Cambridge Analytica are one and the same.

Alexander Nix, then CEO of SCL Elections, made Wylie an offer he couldn't resist. He said: "We'll give you total freedom. Experiment. Come and test out all your crazy ideas."

Another example of the visual messages trialled by GSR's online profiling test.

In the history of bad ideas, this turned out to be one of the worst. The job was research director across the SCL group, a private contractor that has both defence and elections operations. Its defence arm was a contractor to the UK's Ministry of Defence and the US's Department of Defense, among others. Its expertise was in psychological operations, or psyops: changing people's minds not through persuasion but through "informational dominance", a set of techniques that includes rumour, disinformation and fake news.

SCL Elections had used a similar suite of tools in more than 200 elections around the world, mostly in undeveloped democracies that Wylie would come to realise were unequipped to defend themselves.

Wylie holds a British Tier 1 Exceptional Talent visa, a UK work visa given to just 200 people a year. He was working inside government (with the Lib Dems) as a political strategist with advanced data science skills. But no one, least of all him, could have predicted what came next. When he turned up at SCL's offices in Mayfair, he had no clue that he was walking into the middle of a nexus of defence and intelligence projects, private contractors and cutting-edge cyberweaponry.

"The thing I think about all the time is, what if I'd taken a job at Deloitte instead? They offered me one. I just think if I'd taken literally any other job, Cambridge Analytica wouldn't exist. You have no idea how much I brood on this."

A few months later, in autumn 2013, Wylie met Steve Bannon. At the time, he was editor-in-chief of Breitbart, which he had brought to Britain to support his friend Nigel Farage in his mission to take Britain out of the European Union.

What was he like?

"Smart," says Wylie. "Interesting. Really interested in ideas. He's the only straight man I've ever talked to about intersectional feminist theory. He saw its relevance straightaway to the oppressions that conservative, young white men feel."

Wylie meeting Bannon was the moment petrol was poured on a flickering flame. Wylie lives for ideas. He speaks 19 to the dozen for hours at a time. He had a theory to prove. And at the time, this was a purely intellectual problem. Politics was like fashion, he told Bannon.

"[Bannon] got it immediately. He believes in the whole Andrew Breitbart doctrine that politics is downstream from culture, so to change politics you need to change culture. And fashion trends are a useful proxy for that. Trump is like a pair of Uggs, or Crocs, basically. So how do you get from people thinking 'Ugh. Totally ugly' to the moment when everyone is wearing them? That was the inflection point he was looking for."

But Wylie wasn't just talking about fashion. He had recently been exposed to a new discipline: information operations, which ranks alongside land, sea, air and space in the US military's doctrine of the five-dimensional battle space. His brief ranged across the SCL Group: the British government has paid SCL to conduct counter-extremism operations in the Middle East, and the US Department of Defense has contracted it to work in Afghanistan.

I tell him that another former employee described the firm as "MI6 for hire", and I'd never quite understood it.

"It's like dirty MI6 because you're not constrained. There's no having to go to a judge to apply for permission. It's normal for a market research company to amass data on domestic populations. And if you're working in some country and there's an auxiliary benefit to a current client with aligned interests, well, that's just a bonus."

When I ask how Bannon even found SCL, Wylie tells me what sounds like a tall tale, though it's one he can back up with an email about how Mark Block, a veteran Republican strategist, happened to sit next to a cyberwarfare expert for the US air force on a plane. "And the cyberwarfare guy is like, 'Oh, you should meet SCL. They do cyberwarfare for elections.'"

Steve Bannon: "He loved the gays," says Wylie. "He saw us as early adopters." Photograph: Tony Gentile/Reuters

It was Bannon who took this idea to the Mercers: Robert Mercer (the co-CEO of the hedge fund Renaissance Technologies, who used his billions to pursue a rightwing agenda, donating to Republican causes and supporting Republican candidates) and his daughter Rebekah.

Nix and Wylie flew to New York to meet the Mercers in Rebekah's Manhattan apartment.

"She loved me. She was like, 'Oh, we need more of your type on our side!'"

Your type?

"The gays. She loved the gays. So did Steve [Bannon]. He saw us as early adopters. He figured, if you can get the gays on board, everyone else will follow. It's why he was so into the whole Milo [Yiannopoulos] thing."

Robert Mercer was a pioneer in AI and machine translation. He helped invent algorithmic trading, which replaced hedge fund managers with computer programs, and he listened to Wylie's pitch. It was for a new kind of political message-targeting based on an influential and groundbreaking 2014 paper researched at Cambridge's Psychometrics Centre, called "Computer-based personality judgments are more accurate than those made by humans".

"In politics, the money man is usually the dumbest person in the room. Whereas it's the opposite way around with Mercer," says Wylie. "He said very little, but he really listened. He wanted to understand the science. And he wanted proof that it worked."

And to do that, Wylie needed data.

How Cambridge Analytica acquired the data has been the subject of internal reviews at Cambridge University, of many news articles and much speculation and rumour.

When Nix was interviewed by MPs last month, Damian Collins asked him:

Does any of your data come from Global Science Research company?

Nix: GSR?

Collins: Yes.

Nix: We had a relationship with GSR. They did some research for us back in 2014. That research proved to be fruitless and so the answer is no.

Collins: They have not supplied you with data or information?

Nix: No.

Collins: Your datasets are not based on information you have received from them?

Nix: No.

Collins: At all?

Nix: At all.

The problem with Nix's response to Collins is that Wylie has a copy of an executed contract, dated 4 June 2014, which confirms that SCL, the parent company of Cambridge Analytica, entered into a commercial arrangement with a company called Global Science Research (GSR), owned by Cambridge-based academic Aleksandr Kogan, specifically premised on the harvesting and processing of Facebook data, so that it could be matched to personality traits and voter rolls.

He has receipts showing that Cambridge Analytica spent $7m to amass this data, about $1m of it with GSR. He has the bank records and wire transfers. Emails reveal Wylie first negotiated with Michal Kosinski, one of the co-authors of the original myPersonality research paper, to use the myPersonality database. But when negotiations broke down, another psychologist, Aleksandr Kogan, offered a solution that many of his colleagues considered unethical. He offered to replicate Kosinski and Stillwell's research and cut them out of the deal. For Wylie it seemed a perfect solution. Kosinski was asking for $500,000 for the IP, but Kogan said he could replicate it and just harvest his own set of data. (Kosinski says the fee was to fund further research.)

An unethical solution? Dr Aleksandr Kogan. Photograph: Alex Kogan

Kogan then set up GSR to do the work, and proposed to Wylie they use the data to set up an interdisciplinary institute working across the social sciences. What happened to that idea, I ask Wylie. "It never happened. I don't know why. That's one of the things that upsets me the most."

It was Bannon's interest in culture as war that ignited Wylie's intellectual concept. But it was Robert Mercer's millions that created a firestorm. Kogan was able to throw money at the hard problem of acquiring personal data: he advertised for people who were willing to be paid to take a personality quiz on Amazon's Mechanical Turk and Qualtrics, at the end of which Kogan's app, called thisismydigitallife, gave him permission to access their Facebook profiles. And not just theirs, but their friends' too. On average, each seeder (the people who had taken the personality test, around 320,000 in total) unwittingly gave access to at least 160 other people's profiles, none of whom would have known or had reason to suspect.
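The scale implied by those figures can be checked with simple arithmetic; here is a minimal sketch using the article's own numbers, treating the 160 friends as the stated lower bound:

```python
# Rough lower-bound estimate of the harvest implied by the article's figures.
seeders = 320_000          # people who took the personality quiz
friends_per_seeder = 160   # stated as a minimum average

friend_profiles = seeders * friends_per_seeder
print(f"At least {friend_profiles:,} friend profiles")  # At least 51,200,000
print(f"Plus the {seeders:,} seeders themselves")
```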

What the email correspondence between Cambridge Analytica employees and Kogan shows is that Kogan had collected millions of profiles in a matter of weeks. But neither Wylie nor anyone else at Cambridge Analytica had checked that it was legal. It certainly wasn't authorised. Kogan did have permission to pull Facebook data, but for academic purposes only. What's more, under British data protection laws, it's illegal for personal data to be sold to a third party without consent.

"Facebook could see it was happening," says Wylie. "Their security protocols were triggered because Kogan's apps were pulling this enormous amount of data, but apparently Kogan told them it was for academic use. So they were like, 'Fine.'"

Kogan maintains that everything he did was legal and he had a close working relationship with Facebook, which had granted him permission for his apps.

Cambridge Analytica had its data. This was the foundation of everything it did next: how it extracted psychological insights from the seeders and then built an algorithm to profile millions more.

For more than a year, the reporting around what Cambridge Analytica did or didn't do for Trump has revolved around the question of psychographics, but Wylie points out: "Everything was built on the back of that data. The models, the algorithm. Everything. Why wouldn't you use it in your biggest campaign ever?"

In December 2015, the Guardian's Harry Davies published the first report about Cambridge Analytica acquiring Facebook data and using it to support Ted Cruz in his campaign to be the US Republican candidate. But it wasn't until many months later that Facebook took action. And then, all they did was write a letter. In August 2016, shortly before the US election, and two years after the breach took place, Facebook's lawyers wrote to Wylie, who had left Cambridge Analytica in 2014, and told him the data had been illicitly obtained and that GSR was not authorised to share or sell it. They said it must be deleted immediately.

Christopher Wylie: "It's like Nixon on steroids"

"I already had. But literally all I had to do was tick a box and sign it and send it back, and that was it," says Wylie. "Facebook made zero effort to get the data back."

There were multiple copies of it. It had been emailed in unencrypted files.

Cambridge Analytica rejected all allegations the Observer put to them.

Dr Kogan (who later changed his name to Dr Spectre, but has subsequently changed it back to Dr Kogan) is still a faculty member at Cambridge University, a senior research associate. But what his fellow academics didn't know until Kogan revealed it in emails to the Observer (although Cambridge University says that Kogan told the head of the psychology department) is that he is also an associate professor at St Petersburg University. Further research revealed that he's received grants from the Russian government to research "Stress, health and psychological wellbeing in social networks". The opportunity came about "on a trip to the city to visit friends and family", he said.

There are other dramatic documents in Wylie's stash, including a pitch made by Cambridge Analytica to Lukoil, Russia's second biggest oil producer. In an email dated 17 July 2014, about the US presidential primaries, Nix wrote to Wylie: "We have been asked to write a memo to Lukoil (the Russian oil and gas company) to explain to them how our services are going to apply to the petroleum business." Nix said that they "understand behavioural microtargeting in the context of elections" but that they were "failing to make the connection between voters and their consumers". The work, he said, would be shared with the CEO of the business, a former Soviet oil minister and associate of Putin, Vagit Alekperov.

"It didn't make any sense to me," says Wylie. "I didn't understand either the email or the pitch presentation we did. Why would a Russian oil company want to target information on American voters?"

Mueller's investigation traces the first stages of the Russian operation to disrupt the 2016 US election back to 2014, when the Russian state made what appears to be its first concerted efforts to harness the power of America's social media platforms, including Facebook. And it was in late summer of the same year that Cambridge Analytica presented the Russian oil company with an outline of its datasets, capabilities and methodology. The presentation had little to do with consumers. Instead, documents show it focused on election disruption techniques. The first slide illustrates how a "rumour campaign" spread fear in the 2007 Nigerian election (in which the company worked) by spreading the idea that the election would be rigged. The final slide, branded with Lukoil's logo and that of SCL Group and SCL Elections, headlines its deliverables: "psychographic messaging".

https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump

No one wants to build a feel good internet

If there is one policy dilemma facing nearly every tech company today, it is what to do about “content moderation,” the almost-Orwellian term for censorship.

Charlie Warzel of Buzzfeed pointedly asked the question a little more than a week ago: “How is it that the average untrained human can do something that multibillion-dollar technology companies that pride themselves on innovation cannot? And beyond that, why is it that — after multiple national tragedies politicized by malicious hoaxes and misinformation — such a question even needs to be asked?”

For years, companies like Facebook, Twitter, YouTube, and others have avoided putting serious resources behind implementing moderation, preferring relatively small teams of moderators coupled with basic crowdsourced flagging tools to prioritize the worst offending content.

There has been something of a revolution in thinking over the past few months, though, as opposition to content moderation retreats in the face of repeated public outcries.

In his message on global community, Mark Zuckerberg asked “How do we help people build a safe community that prevents harm, helps during crises and rebuilds afterwards in a world where anyone across the world can affect us?” (emphasis mine) Meanwhile, Jack Dorsey tweeted this week that “We’re committing Twitter to help increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress.”

Both messages are wonderful paeans to better community and integrity. There is just one problem: neither company truly wants to wade into the politics of censorship, which is what it will take to make a “feel good” internet.

Take just the most recent example. The New York Times on Friday wrote that Facebook will allow a photo of a bare-chested male on its platform, but will block photos of women showing the skin on their backs. “For advertisers, debating what constitutes ‘adult content’ with those human reviewers can be frustrating,” the article notes. “Goodbye Bread, an edgy online retailer for young women, said it had a heated debate with Facebook in December over the image of a young woman modeling a leopard-print mesh shirt. Facebook said the picture was too suggestive.”

Or rewind a bit in time to the controversy over Nick Ut’s famous Vietnam War photograph entitled “Napalm Girl.” Facebook’s content moderation initially banned the photo, then the company unbanned it following a public outcry over censorship. Is it nudity? Well, yes, there are breasts exposed. Is it violent? Yes, it is a picture from a war.

Whatever your politics, and whatever your proclivities toward or against suggestive or violent imagery, the reality is that there is simply no obviously “right” answer in many of these cases. Facebook and other social networks are determining taste, but taste differs widely from group to group and person to person. It’s as if you have melded the audiences from Penthouse and Focus on the Family Magazine together and delivered to them the same editorial product.

The answer to Warzel’s question is obvious in retrospect. Yes, tech companies have failed to invest in content moderation, and for a specific reason: it’s intentional. There is an old saw about work: if you don’t want to be asked to do something, be really, really bad at it, so no one will ask you to do it again. Silicon Valley tech companies are really, really bad at content moderation, not because they can’t do it, but because they specifically don’t want to.

It’s not hard to understand why. Suppressing speech is anathema not just to the U.S. constitution and its First Amendment, and not just to the libertarian ethos that pervades Silicon Valley companies, but also to the safe harbor legal framework that protects online sites from taking responsibility for their content in the first place. No company wants to cross so many simultaneous tripwires.

Let’s be clear too that there are ways of doing content moderation at scale. China does it today through a set of technologies generally referred to as the Great Firewall, as well as an army of content moderators that some estimate reaches past two million individuals. South Korea, a democracy rated free by Freedom House, has had a complicated history of requiring comments on the internet to be attached to a user’s national identification number to prevent “misinformation” from spreading.

Facebook, Google (and by extension, YouTube), and Twitter are at a scale where they can do content moderation this way if they really wanted to. Facebook could hire hundreds of thousands of people in the Midwest, which Zuckerberg just toured, and provide decent paying, flexible jobs reading over posts and verifying images. Posts could require a user’s Social Security Number to ensure that content came from bona fide humans.

As of last year, users on YouTube uploaded 400 hours of video per minute. Maintaining real-time content moderation would require 24,000 people working every hour of the day, at a cost of $8.6 million per day or $3.1 billion per year (assuming a $15 hourly wage). That’s of course a very liberal estimate: artificial intelligence and crowdsourced flagging can provide at least some level of leverage, and it is almost certainly the case that not every video needs to be reviewed as carefully or in real time.
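Those figures follow from back-of-the-envelope arithmetic; here is a minimal sketch reproducing the estimate under the article's own assumptions (400 hours uploaded per minute, $15 an hour, round-the-clock review):

```python
# Back-of-the-envelope cost of reviewing every YouTube upload in real time,
# using the assumptions stated in the article.
hours_uploaded_per_minute = 400
hourly_wage = 15  # dollars

# 400 hours of video arrive each minute, i.e. 24,000 hours arrive each hour,
# so keeping up in real time needs 24,000 reviewers watching around the clock.
reviewers_needed = hours_uploaded_per_minute * 60   # 24,000
cost_per_day = reviewers_needed * 24 * hourly_wage  # $8,640,000 (~$8.6M)
cost_per_year = cost_per_day * 365                  # ~$3.15 billion

print(reviewers_needed, f"${cost_per_day:,}", f"${cost_per_year:,}")
```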

Yes, it’s expensive — YouTube financials are not disclosed by Alphabet, but analysts put the service’s revenues as high as $15 billion. And yes, hiring and training tens of thousands of people is a huge undertaking, but the internet could be made “safe” for its users if any of these companies truly wanted to.

But then we go back to the challenge laid out before: what is YouTube’s taste? What is allowed and what is not? China solves this by declaring certain online discussions illegal. China Digital Times, for instance, has extensively covered the evolving blacklists of words disseminated by the government around particularly contentious topics.

That doesn’t mean the rules lack nuance. Gary King and a team of researchers at Harvard concluded in a brilliant study that China allows for criticism of the government, but specifically bans any conversation that calls for collective action — often even if it is in favor of the government. That’s a very clear bright line for content moderators to follow, not to mention that mistakes are fine: if one post accidentally gets blocked, the Chinese government really doesn’t care.

The U.S. has thankfully very few rules around speech, and today’s content moderation systems generally handle those expeditiously. What’s left is the ambiguous speech that crosses the line for some people and not for others, which is why Facebook and other social networks get castigated by the press for blocking Napalm Girl or the back of a female’s body.

Facebook, ingeniously, has a solution for all of this. It has declared that it wants the feed to show more content from family and friends, rather than the sort of viral content that has been controversial in the past. By focusing on content from friends, the feed can show more positive, engaging content that improves a user’s state of mind.

I say it is ingenious though, because emphasizing content from family and friends is really just a method of insulating a user’s echo chamber even further. Sociologists have long studied social network homophily, the strong tendency of people to know those similar to themselves. A friend sharing a post isn’t just more organic, it’s also content you’re more likely to agree with in the first place.

Do we want to live in an echo chamber, or do we want to be bombarded by negative, and sometimes hurtful content? That ultimately is what I mean when I say that building a feel good internet is impossible. The more we want positivity and uplifting stories in our streams of content, the more we need to blank out not just the racist and vile material that Twitter and other social networks purvey, but also the kinds of negative stories about politics, war, and peace that are necessary for democratic citizenship.

Ignorance is ultimately bliss, but the Internet was designed to provide the greatest amount of information at the greatest speed. The two goals directly compete, and Silicon Valley companies are rightfully dragging their heels in avoiding deep content moderation.

Read more: https://techcrunch.com/2018/03/03/no-one-wants-to-build-a-feel-good-internet/

InfoSum's first product touts decentralized big data insights

Nick Halstead’s new startup, InfoSum, is launching its first product today — moving one step closer to his founding vision of a data platform that can help businesses and organizations unlock insights from big data silos without compromising user privacy, data security or data protection law. So a pretty high bar then.

If the underlying tech lives up to the promises being made for it, the timing for this business looks very good indeed, with the European Union’s new General Data Protection Regulation (GDPR) mere months away from applying across the region — ushering in a new regime of eye-wateringly large penalties to incentivize data handling best practice.

InfoSum bills its approach to collaboration around personal data as fully GDPR compliant — because it says it doesn’t rely on sharing the actual raw data with any third parties.

Rather, a mathematical model is used to make a statistical comparison, and the platform delivers aggregated (but still, says Halstead, useful) insights. Though he says the regulatory angle is fortuitous, rather than the full inspiration for the product.

“Two years ago, I saw that the world definitely needed a different way to think about working on knowledge about people,” he tells TechCrunch. “Both for privacy [reasons] — there isn’t a week where we don’t see some kind of data breach… they happen all the time — but also privacy isn’t enough by itself. There has to be a commercial reason to change things.”

The commercial imperative he reckons he’s spied is around how “unmanageable” big data can become when it’s pooled for collaborative purposes.

Datasets invariably need a lot of cleaning up to make different databases align and overlap. And the process of cleaning and structuring data so it can be usefully compared can run to multiple weeks. Yet that effort has to be put in before you really know if it will be worth your while doing so.

That snag of time + effort is a major barrier preventing even large companies from doing more interesting things with their data holdings, argues Halstead.

So InfoSum’s first product — called Link — is intended to give businesses a glimpse of the “art of the possible”, as he puts it — in just a couple of hours, rather than the “nine, ten weeks” he says it might otherwise take them.

“I set myself a challenge… could I get through the barriers that companies have around privacy, security, and the commercial risks when they handle consumer data. And, more importantly, when they need to work with third parties or need to work across their corporation where they’ve got numbers of consumer data and they want to be able to look at that data and look at the combined knowledge across those.

“That’s really where I came up with this idea of non-movement of data. And that’s the core principle of what’s behind InfoSum… I can connect knowledge across two data sets, as if they’ve been pooled.”

Halstead says that the problem with the traditional data pooling route — so copying and sharing raw data with all sorts of partners (or even internally, thereby expanding the risk vector surface area) — is that it’s risky. The myriad data breaches that regularly make headlines nowadays are a testament to that.

But that’s not the only commercial consideration in play, as he points out that raw data which has been shared is immediately less valuable — because it can’t be sold again.

“If I give you a data set in its raw form, I can’t sell that to you again — you can take it away, you can slice it and dice it as many ways as you want. You won’t need to come back to me for another three or four years for that same data,” he argues. “From a commercial point of view [what we’re doing] makes the data more valuable. In that data is never actually having to be handed over to the other party.”

Not blockchain for privacy

Decentralization, as a technology approach, is also of course having a major moment right now — thanks to blockchain hype. But InfoSum is definitely not blockchain. Which is a good thing. No sensible person should be trying to put personal data on a blockchain.

“The reality is that all the companies that say they’re doing blockchain for privacy aren’t using blockchain for the privacy part, they’re just using it for a trust model, or recording the transactions that occur,” says Halstead, discussing why blockchain is terrible for privacy.

“Because you can’t use the blockchain and say it’s GDPR compliant or privacy safe. Because the whole transparency part of it and the fact that it’s immutable. You can’t have an immutable database where you can’t then delete users from it. It just doesn’t work.”

Instead he describes InfoSum’s technology as “blockchain-esque” — because “everyone stays holding their data”. “The trust is then that because everyone holds their data, no one needs to give their data to everyone else. But you can still crucially, through our technology, combine the knowledge across those different data sets.”

So what exactly is InfoSum doing to the raw personal data to make it “privacy safe”? Halstead claims it goes “beyond hashing” or encrypting it. “Our solution goes beyond that — there is no way to re-identify any of our data because it’s not ever represented in that way,” he says, further claiming: “It is absolutely 100 per cent data isolation, and we are the only company doing this in this way.

“There are solutions out there where traditional models are pooling it but with encryption on top of it. But again if the encryption gets broken the data is still ending up being in a single silo.”

InfoSum’s approach is based on mathematically modeling users, using a “one way model”, and using that to make statistical comparisons and serve up aggregated insights.

“You can’t read things out of it, you can only test things against it,” he says of how it’s transforming the data. “So it’s only useful if you actually knew who those users were beforehand — which obviously you’re not going to. And you wouldn’t be able to do that unless you had access to our underlying code-base. Everyone else either uses encryption or hashing or a combination of both of those.”

This one-way modeling technique is in the process of being patented — so Halstead says he can’t discuss the “fine details” — but he does mention a long-standing technique for optimizing database communications, called Bloom filters, saying those sorts of “principles” underpin InfoSum’s approach.

Although he also says it’s using those kinds of techniques differently. Here’s how InfoSum’s website describes this process (which it calls Quantum):

InfoSum Quantum irreversibly anonymises data and creates a mathematical model that enables isolated datasets to be statistically compared. Identities are matched at an individual level and results are collated at an aggregate level – without bringing the datasets together.
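Halstead does not disclose the mechanism, but the Bloom filter he name-checks illustrates the "you can only test things against it" property he describes. A minimal, purely illustrative sketch (this is a textbook Bloom filter, not InfoSum's patented model; the size and hash count are arbitrary):

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: supports adding items and membership tests, but the
    bit array cannot be 'read back' to list what was added."""
    def __init__(self, size=10_000, hashes=5):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item: str):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item: str) -> bool:
        # False means definitely absent; True means probably present.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("alice@example.com")
print(bf.might_contain("alice@example.com"))  # True
print(bf.might_contain("bob@example.com"))    # False (with high probability)
```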

On the surface, the approach shares a similar structure to Facebook’s Custom Audiences Product, where advertisers’ customer lists are locally hashed and then uploaded to Facebook for matching against its own list of hashed customer IDs — with any matches used to create a custom audience for ad targeting purposes.
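The hashed-list matching that comparison refers to can be sketched roughly as follows. This is a generic illustration of locally hashing identifiers and intersecting the hashes, not Facebook's or InfoSum's exact pipeline; the normalisation rules are assumptions:

```python
import hashlib

def hash_id(email: str) -> str:
    # Normalise (trim, lowercase) before hashing so formats line up.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# The advertiser hashes its customer list locally...
advertiser_hashes = {hash_id(e) for e in ["Alice@example.com ", "bob@example.com"]}

# ...and the platform hashes its own user list the same way.
platform_hashes = {hash_id(e) for e in ["alice@example.com", "carol@example.com"]}

# Only the intersection of hashes is used to build the matched audience.
matches = advertiser_hashes & platform_hashes
print(len(matches))  # 1 matched customer
```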

Though Halstead argues InfoSum’s platform offers more for even this kind of audience building marketing scenario, because its users can use “much more valuable knowledge” to model on — knowledge they would not comfortably share with Facebook “because of the commercial risks of handing over that first person valuable data”.

“For instance if you had an attribute that defined which were your most valuable customers, you would be very unlikely to share that valuable knowledge — yet if you could safely then it would be one of the most potent indicators to model upon,” he suggests.

He also argues that InfoSum users will be able to achieve greater marketing insights via collaborations with other users of the platform vs being a customer of Facebook Custom Audiences — because Facebook simply “does not open up its knowledge”.

“You send them your customer lists, but they don’t then let you have the data they have,” he adds. “InfoSum for many DMPs [data management platforms] will allow them to collaborate with customers so the whole purchasing of marketing can be much more transparent.”

He also emphasizes that marketing is just one of the use-cases InfoSum’s platform can address.

Decentralized bunkers of data

One important clarification: InfoSum customers’ data does get moved — but it’s moved into a “private isolated bunker” of their choosing, rather than being uploaded to a third party.

“The easiest one to use is where we basically create you a 100 per cent isolated instance in Amazon [Web Services],” says Halstead. “We’ve worked with Amazon on this so that we’ve used a whole number of techniques so that once we create this for you, you put your data into it — we don’t have access to it. And when you connect it to the other part we use this data modeling so that no data then moves between them.”

“The ‘bunker’ is… an isolated instance,” he adds, elaborating on how communications with these bunkers are secured. “It has its own firewall, a private VPN, and of course uses standard SSL security. And once you have finished normalising the data it is turned into a form in which all PII [personally identifiable information] is deleted.

“And of course like any other security related company we have had independent security companies penetration test our solution and look at our architecture design.”

Other key pieces of InfoSum’s technology are around data integration and identity mapping — aimed at tackling the (inevitable) problem of data in different databases/datasets being stored in different formats, which again is one of the commercial reasons why big data silos often stay just that: silos.

Halstead gave TechCrunch a demo showing how the platform ingests and connects data, with users able to use “simple steps” to teach the system what is meant by data types stored in different formats — such as that ‘f’ means the same as ‘female’ for gender category purposes — to smooth the data mapping and “try to get it as clean as possible”.
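The mapping step he demonstrated can be pictured as a small set of normalisation rules applied before linking. A hypothetical sketch (the field names and rules here are illustrative, not InfoSum's actual interface):

```python
# Hypothetical normalisation rules "taught" to the system so that differently
# formatted datasets line up before linking.
GENDER_MAP = {"f": "female", "female": "female", "m": "male", "male": "male"}

def normalise_record(record: dict) -> dict:
    cleaned = dict(record)
    if "gender" in cleaned:
        key = str(cleaned["gender"]).strip().lower()
        cleaned["gender"] = GENDER_MAP.get(key, "unknown")
    return cleaned

print(normalise_record({"gender": "F", "age": 34}))
# {'gender': 'female', 'age': 34}
```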

Once that step has been completed, the user (or collaborating users) are able to get a view on how well linked their data sets are — and thus to glimpse “the start of the art of the possible”.

In practice this means they can choose to run different reports atop their linked datasets — such as if they want to enrich their data holdings by linking their own users across different products to gain new insights, such as for internal research purposes.

Or, where there’s two InfoSum users linking different data sets, they could use it for propensity modeling or lookalike modeling of customers, says Halstead. So, for example, a company could link models of their users with models of the users of a third party that holds richer data on its users to identify potential new customer types to target marketing at.

“Because I’ve asked to look at the overlap I can literally say I only know the gender of these people but I would also like to know what their income is,” he says, fleshing out another possible usage scenario. “You can’t drill into this, you can’t do really deep analytics — that’s what we’ll be launching later. But Link allows you to get this idea of what would it look like if I combine our datasets.

“The key here is it’s opening up a whole load of industries where sensitivity around doing this — and where, even in industries that share a lot of data already but where GDPR is going to be a massive barrier to it in the future.”

Halstead says he expects big demand from the marketing industry, which is of course having to scramble to rework its processes to ensure they don’t fall foul of GDPR.


“Within marketing there is going to be a whole load of new challenges for companies where they were currently enhancing their databases, buying up large raw datasets and bringing their data into their own CRM. That world’s gone once we’ve got GDPR.

“Our model is safer, faster, and actually still really lets people do all the things they did before but while protecting the customers.”

But it’s not just marketing that excites him. Halstead believes InfoSum’s approach to lifting insights from personal data could be very widely applicable — arguing, for example, that it’s only in a minority of use-cases, such as credit risk and fraud within banking, where companies actually need to look at data at an individual level.

One area where he says he’s “very passionate” about InfoSum’s potential is the healthcare space.

“We believe that this model isn’t just about helping marketing and helping a whole load of others — healthcare especially for us I think is going to be huge. Because [this affords] the ability to do research against health data where health data is never been actually shared,” he says.

“In the UK especially we’ve had a number of massive false starts where companies have, for very good reasons, wanted to be able to look at health records and combine data — which can turn into vital research to help people. But actually their way of doing it has been about giving out large datasets. And that’s just not acceptable.”

He even suggests the platform could be used for training AIs within the isolated bunkers — flagging a developer interface that will be launching after Link which will let users query the data as a traditional SQL query.

Though he says he sees most initial healthcare-related demand coming from analytics that need “one or two additional attributes” — such as, for example, comparing health records of people with diabetes with activity tracker data to look at outcomes for different activity levels.

“You don’t need to drill down into individuals to know that the research capabilities could give you incredible results to understand behavior,” he adds. “When you do medical research you need bodies of data to be able to prove things so the fact that we can only work at an aggregate level is not, I don’t think, any barrier to being able to do the kind of health research required.”

Another area he believes could really benefit is M&A — saying InfoSum’s platform could offer companies a way to understand how their user bases overlap before they sign on the line. (It is also of course handling and thus simplifying the legal side of multiple entities collaborating over data sets.)

“There hasn’t been the technology to allow them to look at whether there’s an overlap before,” he claims. “It puts the power in the hands of the buyer to be able to say we’d like to be able to look at what your user base looks like in comparison to ours.

“The problem right now is you could do that manually but if they then backed out there’s all kinds of legal problems because I’ve had to hand the raw data over… so no one does it. So we’re going to change the M&A market for allowing people to discover whether I should acquire someone before they go through to the data room process.”

While Link is something of a taster of what InfoSum’s platform aims to ultimately offer (with this first product priced low but not freemium), the SaaS business it’s intending to get into is data matchmaking — whereby, once it has a pipeline of users, it can start to suggest links that might be interesting for its customers to explore.

“There is no point in us reinventing the wheel of being the best visualization company because there’s plenty that have done that,” he says. “So we are working on data connectors for all of the most popular BI tools that plug in to then visualize the actual data.

“The long term vision for us moves more into being more of an introductory service — i.e. once we’ve got 100 companies in this, how do we help those companies work out what other companies they should be working with.”

“We’ve got some very good systems for — in a fully anonymized way — helping you understand what the intersection is from your data to all of the other datasets, obviously with their permission if they want us to calculate that for them,” he adds.

“The way our investors looked at this, this is the big opportunity going forward. There is no limit, in a decentralized world… imagine 1,000 bunkers around the world in these different corporates who all can start to collaborate. And that’s our ultimate goal — that all of them are still holding onto their own knowledge, 100% privacy safe, but then they have that opportunity to work with each other, which they don’t right now.”

Engineering around privacy risks?

But does he not see any risks to privacy of enabling the linking of so many separate datasets — even with limits in place to avoid individuals being directly outed as connected across different services?

“However many data sets there are, the only thing it can reveal extra is whether every extra data has an extra bit of knowledge,” he responds on that. “And every party has the ability to define what bit of data they would then want to be open to others to then work on.

“There are obviously sensitivities around certain combinations of attributes, around religion, gender and things like that. Where we already have a very clever permission system where the owners can define what combinations are acceptable and what aren’t.”

“My experience of working with all the social networks has meant — I hope — that we are ahead of the game of thinking about those,” he adds, saying that the matchmaking stage is also six months out at this point.

“I don’t see any down sides to it, as long as the controls are there to be able to limit it. It’s not like it’s going to be a sudden free for all. It’s an introductory service, rather than an open platform so everyone can see everything else.”

The permission system is clearly going to be important. But InfoSum does essentially appear to be heading down the platform route of offloading responsibility for ethical considerations — in its case around dataset linkages — to its customers.

That essentially opens the door to problematic data linkages down the line, and all sorts of unintended dots being joined.

Say, for example, a health clinic decides to match people with particular medical conditions to users of different dating apps — and the relative proportions of HIV rates across straight and gay dating apps in the local area gets published. What unintended consequences might spring from that linkage being made?

Other equally problematic linkages aren’t hard to imagine. And we’ve seen the appetite businesses have for making creepy observations about their users public.

“Combining two sets of aggregate data meaningfully is not easy,” says Eerke Boiten, professor of cyber security at De Montfort University, discussing InfoSum’s approach. “If they can make this all work out in a way that makes sense, preserves privacy, and is GDPR compliant, then they deserve a patent I suppose.”

On data linkages, Boiten points to the problems Facebook has had with racial profiling as illustrative of the potential pitfalls.

He also says there may be GDPR-specific risks around customer profiling enabled by the platform. In an edge case scenario, for example, where two overlapped datasets are linked and found to have a 100% user match, that would mean people’s personal data had been processed by default — so that processing would have required a legal basis to be in place beforehand.

And there may be wider legal risks around profiling too. If, for example, linkages are used to deny services or vary pricing to certain types or blocks of customers, is that legal or ethical?

“From a company’s perspective, if it already has either consent or a legitimate purpose (under GDPR) to use customer data for analytical/statistical purposes then it can use our products,” says InfoSum’s COO Danvers Baillieu, on data processing consent. “Where a company has an issue using InfoSum as a sub-processor, then… we can set up the system differently so that we simply supply the software and they run it on their own machines (so we are not a data processor) –- but this is not yet available in Link.”

Baillieu also notes that the bin sizes InfoSum’s platform aggregates individuals into are configurable in its first product. “The default bin size is 10, and the absolute minimum is three,” he adds.
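A minimum bin size is a standard disclosure-control device: any aggregate covering fewer people than the threshold is simply withheld. A rough, illustrative sketch of the idea (not InfoSum's implementation, which is not public):

```python
from collections import Counter

MIN_BIN_SIZE = 10  # Link's stated default; the absolute minimum is 3

def aggregate(records, key):
    """Count records per value of `key`, withholding bins below the threshold."""
    counts = Counter(r[key] for r in records)
    return {value: n for value, n in counts.items() if n >= MIN_BIN_SIZE}

records = [{"income_band": "40-60k"}] * 12 + [{"income_band": "100k+"}] * 4
print(aggregate(records, "income_band"))
# {'40-60k': 12}  (the 4-person bin is suppressed)
```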

“The other key point around disclosure control is that our system never needs to publish the raw data table. All the famous breaches from Netflix onwards are because datasets have been pseudonymised badly and researchers have been able to run analysis across the visible fields and then figure out who the individuals are — this is simply not possible with our system as this data is never revealed.”

‘Fully GDPR compliant’ is certainly a big claim — and one that is going to have a lot of slings and arrows thrown at it as data gets ingested by InfoSum’s platform.

It’s also fair to say that a whole library of books could be written about technology’s unintended consequences.

Indeed, InfoSum’s own website credits Halstead as the inventor of the embedded retweet button, noting the technology is “something that is now ubiquitous on almost every website in the world”.

Those ubiquitous social plugins are also of course a core part of the infrastructure used to track Internet users wherever and almost everywhere they browse. So does he have any regrets about the invention, given how that bit of innovation has ended up being so devastating for digital privacy?

“When I invented it, the driving force for the retweet button was only really as a single number to count engagement. It was never to do with tracking. Our version of the retweet button never had any trackers in it,” he responds on that. “It was the number that drove our algorithms for delivering news in a very transparent way.

“I don’t need to add my voice to all the US pundits of the regrets of the beast that’s been unleashed. All of us feel that desire to unhook from some of these networks now because they aren’t being healthy for us in certain ways. And I certainly feel that what we’re now doing for improving the world of data is going to be good for everyone.”

When we first covered the UK-based startup it was going under the name CognitiveLogic — a placeholder name, as three weeks in Halstead says he was still figuring out exactly how to take his idea to market.

The founder of DataSift has not had difficulties raising funding for his new venture. There was an initial $3M from Upfront Ventures and IA Ventures, with the seed topped up by a further $5M last year, with new investors including Saul Klein (formerly Index Ventures) and Mike Chalfen of Mosaic Ventures. Halstead says he’ll be looking to raise “a very large Series A” over the summer.

In the meantime he says he has a “very long list” of hundreds of customers wanting to get their hands on the platform to kick its tires. “The last three months has been a whirlwind of me going back to all of the major brands, all of the big data companies; there’s no large corporate that doesn’t have these kinds of challenges,” he adds.

“I saw a very big client this morning… they’re a large multinational, they’ve got three major brands where the three customer sets had never been joined together. So they don’t even know what the overlap of those brands are at the moment. So even giving them that insight would be massively valuable to them.”

Read more: https://techcrunch.com/2018/03/09/infosums-first-product-touts-decentralized-big-data-insights/

The world is watching: How Florida shooting made U.S. gun control a global conversation

AR-15 "Sport" rifles on sale at deep discounts in an Arizona store.
Image: john moore/Getty Images

When you move to America from a country with more effective gun control laws, one of the first things you learn is how hard it is to talk to Americans — even on the sympathetic side of the political divide — about the gun issue. 

It was particularly difficult when I arrived on these shores in 1996, direct from living in Scotland during its (and Britain’s) worst-ever school shooting. In the tiny town of Dunblane, a 43-year-old former shopkeeper and scoutmaster brought four handguns to a school gymnasium full of five-year-olds. He shot and killed 16 of them and their teacher, then turned his handgun on himself.

After Dunblane, the British plunged into a state of collective mourning that was at least as widespread as the better-known grieving process for Princess Diana the following year. (Americans don’t always believe that part, to which I usually say: the kids were five, for crying out loud. Five.)

In a country where nobody would dream of pulling public funding for studies into gun violence, the solution was amazingly rational and bipartisan. After a year, and an official inquiry into Dunblane, the Conservative government passed a sweeping piece of legislation restricting handguns. Then after Labour won the 1997 election, it passed another. Britain hasn’t seen a school shooting since. (Same with Australia, which also passed major gun control legislation in 1996). 

But trying to talk about all that in America over the last two decades, I’ve learned from experience, has been like touching the proverbial third rail: only tourists would be dumb enough to try it. Even gun control advocates now think they’re dealing with an intractable, generational problem. Many tell me that we need to tackle mental health services or gun fetishization in Hollywood movies first. The legislation route couldn’t possibly be that easy, they say.

But what if it is that easy? What if the rest of the world also loves Hollywood action movies and has mental health problems, but manages to have fewer shootings simply because it has fewer guns available? What if the rest of the world has been shouting at America for years that gun control is less intractable than you think — you just have to vote in large numbers for the politicians that favor it, and keep doing so at every election? 

If that’s the case, then perhaps some powerful, leveling international marketplace of ideas could help the U.S. see what everyone else has already seen. Something like social media. 

In one sense, Wednesday’s massacre in Parkland, Florida — a school shooting as shocking and senseless as Dunblane — was evidence that America was further away from a gun control solution than ever. In 1996, buying an AR-15 assault rifle was illegal under federal law. Now, in Florida and many other states, a 19-year-old can walk into any gun store and walk out with this military-grade weapon of mass destruction.

Yet anecdotally, I have noticed one glimmer of hope. Since the last American gun massacre that got everyone talking, there has been a small shift in the online conversation. It has become a little more global. The students of Parkland have been broadcasting to the world via social media, and the world is taking notice. 

I’m not suggesting some kind of slam-dunk situation where every American on Twitter and Facebook and Snapchat has an epiphany about gun control because they’re more frequently interacting with people from other nations with different laws. 

But I am saying it’s noticeably harder for pro-gun accounts to spread lies about the situation in other countries without people from those countries chiming in. 

Meanwhile, there is a mountain of evidence that Russian bots and troll accounts are attempting to hijack the online conversation using the same playbook from the 2016 elections — manufacture conflict to destabilize American discourse. That means taking the most trollishly pro-NRA position they can think of, in a bid to counteract the large majority of Americans who want sensible gun control. 

So the voices from other countries are chiming in just in time. If anything, we need more of them to balance out cynical foreign influence in a pro-gun direction. 

How effective gun control can happen

Twenty years of trying to have this debate in the U.S. have worn me down. As you might expect, I’ve been on the receiving end of a lot of Second Amendment-splaining from the pro-gun lobby. (Yep, I’m very familiar with the two centuries of debate over the militia clause, thanks.) I’ve been told I didn’t understand the power of the NRA (which, again, I’m quite familiar with: the organization supported sensible gun restrictions until it was radicalized in 1977).

I’ve heard every argument you could imagine: the notion that British police must now be lording it over the poor defenseless population; the blinkered insistence that there must have been a rise in crime with illegal guns and legal knives now all the good people with guns have been taken out of the equation. (Violent crime is still too high in the UK, but it is a fraction of America’s total — and has declined significantly since 1996.) 

I no longer have the dream that a UK-Australia-style handgun ban would work here. There are as many as 300 million firearms in private hands, according to a 2012 Congressional estimate; even though most of them are concentrated in the hands of a small percentage of owners, it’s simply impractical to talk about removing a significant percentage of them from the equation. 

But if anything, I’m more aware of creative legal solutions: laws that require gun insurance the way we require car insurance, or tax ammunition, or hold manufacturers responsible for gun deaths. I’ve seen my adopted state of California implement some of the toughest gun laws in the nation, laws that just went into effect. The fight to prevent future massacres is just getting started.

And any time you want to talk about how it can happen, the rest of a shrinking world is listening — and ready to talk. 

Read more: https://mashable.com/2018/02/17/gun-control-social-media/

Facebook's plan to build community with hiring and $10M in grants

Facebook will award up to $1 million to five community leaders and $50,000 to 100 more to fund their ideas for how to strengthen people’s online and offline connections. Facebook will also hire double the number of engineers currently working on safety-related causes in its London office. These were a few of the big announcements made at today’s Facebook Communities Summit Europe.

The moves show Facebook isn’t scared to throw its sizable wallet around on behalf of its new mission statement to bring people closer together. The company is dealing with an onslaught of criticism around polarization, election interference, fake news, censorship, sucking money out of journalism, and negative impacts on well-being from overuse of its social network. Facebook knows it can’t come up with all the ideas to offset these issues and become more of a force for good in the world, so it’s crowdsourcing them with the new Community Leadership grants.

Facebook threw its first Communities Summit in Chicago back in June to bring together hundreds of admins of the most influential Facebook Groups. The company sees the Groups product as one of its best contributions to the fabric of society, so it’s investing more in both the actual product and the people who use it.

“2017 was a really tough year for Facebook,” said Chief Product Officer Chris Cox. “We faced tough questions about our role in democracy, our role in discourse, our role in journalism, and our role in well-being . . . We have our best teams internally partnered with the best experts externally to work through each set of the issues.”

At the event, Facebook announced a new set of Groups features including the combination of member requests, Group Insights, and other admin tools into a centralized space. Group Announcements let admins pin up to 10 important messages at the top of their Group to keep members on the same page. Admins can personalize their Group with a color theme. And admins can post a specific set of Group rules to keep members in line and discussion civil.

The announcements and rules features should be especially helpful for large Groups that want to allow open contributions from members, but thwart spam by defining exactly what content people can post and in what format. I’ve been using an ad hoc rules section at the top of the 10,000 person event discovery group I started in 2010, and it’s been critical to dampening noise and getting people to keep their notifications on. Formalizing this tool could make organizing easier for other admins.

To build out these features plus other safety and community infrastructure, Facebook says that “by the end of 2018, we will double the number of people working in London on these issues”, specifically the “engineering team for community safety” that Cox says are “the folks that build tools to make it easy to moderate content and proactively take it down”. Facebook had previously said it would double the size of its content review team from 10,000 to 20,000, and hire 1,000 people to work on political ads transparency. These hires could ensure Facebook has enough staff to tackle its problems in a timely manner before they spread.

Finally, Facebook announced its Facebook Community Leadership Program that will dish out grants and education. The program is open to community builders who’ve made a proven impact, foster a tolerant space for diverse voices, drive offline as well as online connections, and use Facebook’s family of apps in their efforts. Applications to the program including proposals for how to spend the grant money can be submitted here until March 8th.

The five selected community leaders in residence will get up to $1 million for their proposals, plus customized leadership development training during five one-week sessions in residence at Facebook’s offices. The 100 people selected for the community fellowship program will get up to $50,000 for their proposals and participate in four in-person gatherings with other fellows.

Taking a cue from Facebook COO Sheryl Sandberg’s Lean In female empowerment “circles”, Facebook will organize in-person Community Leadership Circles around the world after assembling three in the U.S. in 2017. Facebook will also add more people to its Groups for Facebook Power Admins that let Group leaders share tips and tricks.


Facebook still often concludes that the best thing for society is more Facebook, as with Groups. Hence the requirement that grant recipients be using Facebook to organize their communities. One of Facebook’s big milestones is its goal of increasing the number of users in meaningful groups from 100 million to 1 billion.

Recently, though, the company has put a bigger stress on encouraging and measuring “time well spent” instead of just total time on Facebook. That’s mostly taken the form of significant News Feed algorithm changes that reduce public posts, news, and viral videos. Facebook even admitted its first decline in daily active users, a drop of 700,000 in Q4 2017 in the U.S. & Canada region, which resulted in part from changes that caused an overall 5 percent reduction in time spent on Facebook.

By promoting Groups and making them more powerful, Facebook could create a natural place to shift our attention that’s more enriching than passively scrolling through its feed. The question is how far Facebook is willing to go to push well-being at the expense of its bottom line or how its products operate. Investing $10 million in grants and more in hiring is one thing. It’d be another to see Facebook implement material changes like batched notifications, stronger do not disturb controls, or even a timer showing how long people spend on the site per day.

For more on the social network’s impact on people’s health, read our feature piece: “The difference between good and bad Facebooking”

Read more: https://techcrunch.com/2018/02/09/facebook-community-grants/

Inside the Two Years That Shook Facebook – and the World

One day in late February of 2016, Mark Zuckerberg sent a memo to all of Facebook’s employees to address some troubling behavior in the ranks. His message pertained to some walls at the company’s Menlo Park headquarters where staffers are encouraged to scribble notes and signatures. On at least a couple of occasions, someone had crossed out the words “Black Lives Matter” and replaced them with “All Lives Matter.” Zuckerberg wanted whoever was responsible to cut it out.

“ ‘Black Lives Matter’ doesn’t mean other lives don’t,” he wrote. “We’ve never had rules around what people can write on our walls,” the memo went on. But “crossing out something means silencing speech, or that one person’s speech is more important than another’s.” The defacement, he said, was being investigated.

All around the country at about this time, debates about race and politics were becoming increasingly raw. Donald Trump had just won the South Carolina primary, lashed out at the Pope over immigration, and earned the enthusiastic support of David Duke. Hillary Clinton had just defeated Bernie Sanders in Nevada, only to have an activist from Black Lives Matter interrupt a speech of hers to protest racially charged statements she’d made two decades before. And on Facebook, a popular group called Blacktivist was gaining traction by blasting out messages like “American economy and power were built on forced migration and torture.”

So when Zuckerberg’s admonition circulated, a young contract employee named Benjamin Fearnow decided it might be newsworthy. He took a screenshot on his personal laptop and sent the image to a friend named Michael Nuñez, who worked at the tech-news site Gizmodo. Nuñez promptly published a brief story about Zuckerberg’s memo.

A week later, Fearnow came across something else he thought Nuñez might like to publish. In another internal communication, Facebook had invited its employees to submit potential questions to ask Zuckerberg at an all-hands meeting. One of the most up-voted questions that week was “What responsibility does Facebook have to help prevent President Trump in 2017?” Fearnow took another screenshot, this time with his phone.

Fearnow, a recent graduate of the Columbia Journalism School, worked in Facebook’s New York office on something called Trending Topics, a feed of popular news subjects that popped up when people opened Facebook. The feed was generated by an algorithm but moderated by a team of about 25 people with backgrounds in journalism. If the word “Trump” was trending, as it often was, they used their news judgment to identify which bit of news about the candidate was most important. If The Onion or a hoax site published a spoof that went viral, they had to keep that out. If something like a mass shooting happened, and Facebook’s algorithm was slow to pick up on it, they would inject a story about it into the feed.


Facebook prides itself on being a place where people love to work. But Fearnow and his team weren’t the happiest lot. They were contract employees hired through a company called BCforward, and every day was full of little reminders that they weren’t really part of Facebook. Plus, the young journalists knew their jobs were doomed from the start. Tech companies, for the most part, prefer to have as little as possible done by humans—because, it’s often said, they don’t scale. You can’t hire a billion of them, and they prove meddlesome in ways that algorithms don’t. They need bathroom breaks and health insurance, and the most annoying of them sometimes talk to the press. Eventually, everyone assumed, Facebook’s algorithms would be good enough to run the whole project, and the people on Fearnow’s team—who served partly to train those algorithms—would be expendable.

The day after Fearnow took that second screenshot was a Friday. When he woke up after sleeping in, he noticed that he had about 30 meeting notifications from Facebook on his phone. When he replied to say it was his day off, he recalls, he was nonetheless asked to be available in 10 minutes. Soon he was on a videoconference with three Facebook employees, including Sonya Ahuja, the company’s head of investigations. According to his recounting of the meeting, she asked him if he had been in touch with Nuñez. He denied that he had been. Then she told him that she had their messages on Gchat, which Fearnow had assumed weren’t accessible to Facebook. He was fired. “Please shut your laptop and don’t reopen it,” she instructed him.

That same day, Ahuja had another conversation with a second employee at Trending Topics named Ryan Villarreal. Several years before, he and Fearnow had shared an apartment with Nuñez. Villarreal said he hadn’t taken any screenshots, and he certainly hadn’t leaked them. But he had clicked “like” on the story about Black Lives Matter, and he was friends with Nuñez on Facebook. “Do you think leaks are bad?” Ahuja demanded to know, according to Villarreal. He was fired too. The last he heard from his employer was in a letter from BCforward. The company had given him $15 to cover expenses, and it wanted the money back.

The firing of Fearnow and Villarreal set the Trending Topics team on edge—and Nuñez kept digging for dirt. He soon published a story about the internal poll showing Facebookers’ interest in fending off Trump. Then, in early May, he published an article based on conversations with yet a third former Trending Topics employee, under the blaring headline “Former Facebook Workers: We Routinely Suppressed Conservative News.” The piece suggested that Facebook’s Trending team worked like a Fox News fever dream, with a bunch of biased curators “injecting” liberal stories and “blacklisting” conservative ones. Within a few hours the piece popped onto half a dozen highly trafficked tech and politics websites, including Drudge Report and Breitbart News.

The post went viral, but the ensuing battle over Trending Topics did more than just dominate a few news cycles. In ways that are only fully visible now, it set the stage for the most tumultuous two years of Facebook’s existence—triggering a chain of events that would distract and confuse the company while larger disasters began to engulf it.

This is the story of those two years, as they played out inside and around the company. WIRED spoke with 51 current or former Facebook employees for this article, many of whom did not want their names used, for reasons anyone familiar with the story of Fearnow and Villarreal would surely understand. (One current employee asked that a WIRED reporter turn off his phone so the company would have a harder time tracking whether it had been near the phones of anyone from Facebook.)

The stories varied, but most people told the same basic tale: of a company, and a CEO, whose techno-optimism has been crushed as they’ve learned the myriad ways their platform can be used for ill. Of an election that shocked Facebook, even as its fallout put the company under siege. Of a series of external threats, defensive internal calculations, and false starts that delayed Facebook’s reckoning with its impact on global affairs and its users’ minds. And—in the tale’s final chapters—of the company’s earnest attempt to redeem itself.

In that saga, Fearnow plays one of those obscure but crucial roles that history occasionally hands out. He’s the Franz Ferdinand of Facebook—or maybe he’s more like the archduke’s hapless young assassin. Either way, in the rolling disaster that has enveloped Facebook since early 2016, Fearnow’s leaks probably ought to go down as the screenshots heard round the world.

II

By now, the story of Facebook’s all-consuming growth is practically the creation myth of our information era. What began as a way to connect with your friends at Harvard became a way to connect with people at other elite schools, then at all schools, and then everywhere. After that, your Facebook login became a way to log on to other internet sites. Its Messenger app started competing with email and texting. It became the place where you told people you were safe after an earthquake. In some countries like the Philippines, it effectively is the internet.

The furious energy of this big bang emanated, in large part, from a brilliant and simple insight. Humans are social animals. But the internet is a cesspool. That scares people away from identifying themselves and putting personal details online. Solve that problem—make people feel safe to post—and they will share obsessively. Make the resulting database of privately shared information and personal connections available to advertisers, and that platform will become one of the most important media technologies of the early 21st century.

But as powerful as that original insight was, Facebook’s expansion has also been driven by sheer brawn. Zuckerberg has been a determined, even ruthless, steward of the company’s manifest destiny, with an uncanny knack for placing the right bets. In the company’s early days, “move fast and break things” wasn’t just a piece of advice to his developers; it was a philosophy that served to resolve countless delicate trade-offs—many of them involving user privacy—in ways that best favored the platform’s growth. And when it comes to competitors, Zuckerberg has been relentless in either acquiring or sinking any challengers that seem to have the wind at their backs.

Facebook’s Reckoning

Two years that forced the platform to change

by Blanca Myers

March 2016

Facebook suspends Benjamin Fearnow, a journalist-curator for the platform’s Trending Topics feed, after he leaks to Gizmodo.

May 2016

Gizmodo reports that Trending Topics “routinely suppressed conservative news.” The story sends Facebook scrambling.

July 2016

Rupert Murdoch tells Zuckerberg that Facebook is wreaking havoc on the news industry and threatens to cause trouble.

August 2016

Facebook cuts loose all of its Trending Topics journalists, ceding authority over the feed to engineers in Seattle.

November 2016

Donald Trump wins. Zuckerberg says it’s “pretty crazy” to think fake news on Facebook helped tip the election.

December 2016

Facebook declares war on fake news, hires CNN alum Campbell Brown to shepherd relations with the publishing industry.

September 2017

Facebook announces that a Russian group paid $100,000 for roughly 3,000 ads aimed at US voters.

October 2017

Researcher Jonathan Albright reveals that posts from six Russian propaganda accounts were shared 340 million times.

November 2017

Facebook general counsel Colin Stretch gets pummeled during congressional Intelligence Committee hearings.

January 2018

Facebook begins announcing major changes, aimed to ensure that time on the platform will be “time well spent.”

In fact, it was in besting just such a rival that Facebook came to dominate how we discover and consume news. Back in 2012, the most exciting social network for distributing news online wasn’t Facebook, it was Twitter. The latter’s 140-character posts accelerated the speed at which news could spread, allowing its influence in the news industry to grow much faster than Facebook’s. “Twitter was this massive, massive threat,” says a former Facebook executive heavily involved in the decisionmaking at the time.

So Zuckerberg pursued a strategy he has often deployed against competitors he cannot buy: He copied, then crushed. He adjusted Facebook’s News Feed to fully incorporate news (despite its name, the feed was originally tilted toward personal news) and adjusted the product so that it showed author bylines and headlines. Then Facebook’s emissaries fanned out to talk with journalists and explain how to best reach readers through the platform. By the end of 2013, Facebook had doubled its share of traffic to news sites and had started to push Twitter into a decline. By the middle of 2015, it had surpassed Google as the leader in referring readers to publisher sites and was now referring 13 times as many readers to news publishers as Twitter. That year, Facebook launched Instant Articles, offering publishers the chance to publish directly on the platform. Posts would load faster and look sharper if they agreed, but the publishers would give up an element of control over the content. The publishing industry, which had been reeling for years, largely assented. Facebook now effectively owned the news. “If you could reproduce Twitter inside of Facebook, why would you go to Twitter?” says the former executive. “What they are doing to Snapchat now, they did to Twitter back then.”

It appears that Facebook did not, however, carefully think through the implications of becoming the dominant force in the news industry. Everyone in management cared about quality and accuracy, and they had set up rules, for example, to eliminate pornography and protect copyright. But Facebook hired few journalists and spent little time discussing the big questions that bedevil the media industry. What is fair? What is a fact? How do you signal the difference between news, analysis, satire, and opinion? Facebook has long seemed to think it has immunity from those debates because it is just a technology company—one that has built a “platform for all ideas.”

This notion that Facebook is an open, neutral platform is almost like a religious tenet inside the company. When new recruits come in, they are treated to an orientation lecture by Chris Cox, the company’s chief product officer, who tells them Facebook is an entirely new communications platform for the 21st century, as the telephone was for the 20th. But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.

And so, because of the company’s self-image, as well as its fear of regulation, Facebook tried never to favor one kind of news content over another. But neutrality is a choice in itself. For instance, Facebook decided to present every piece of content that appeared on News Feed—whether it was your dog pictures or a news story—in roughly the same way. This meant that all news stories looked roughly the same as each other, too, whether they were investigations in The Washington Post, gossip in the New York Post, or flat-out lies in the Denver Guardian, an entirely bogus newspaper. Facebook argued that this democratized information. You saw what your friends wanted you to see, not what some editor in a Times Square tower chose. But it’s hard to argue that this wasn’t an editorial decision. It may be one of the biggest ever made.

In any case, Facebook’s move into news set off yet another explosion of ways that people could connect. Now Facebook was the place where publications could connect with their readers—and also where Macedonian teenagers could connect with voters in America, and operatives in Saint Petersburg could connect with audiences of their own choosing in a way that no one at the company had ever seen before.

III

In February of 2016, just as the Trending Topics fiasco was building up steam, Roger McNamee became one of the first Facebook insiders to notice strange things happening on the platform. McNamee was an early investor in Facebook who had mentored Zuckerberg through two crucial decisions: to turn down Yahoo’s offer of $1 billion to acquire Facebook in 2006; and to hire a Google executive named Sheryl Sandberg in 2008 to help find a business model. McNamee was no longer in touch with Zuckerberg much, but he was still an investor, and that month he started seeing things related to the Bernie Sanders campaign that worried him. “I’m observing memes ostensibly coming out of a Facebook group associated with the Sanders campaign that couldn’t possibly have been from the Sanders campaign,” he recalls, “and yet they were organized and spreading in such a way that suggested somebody had a budget. And I’m sitting there thinking, ‘That’s really weird. I mean, that’s not good.’ ”

But McNamee didn’t say anything to anyone at Facebook—at least not yet. And the company itself was not picking up on any such worrying signals, save for one blip on its radar: In early 2016, its security team noticed an uptick in Russian actors attempting to steal the credentials of journalists and public figures. Facebook reported this to the FBI. But the company says it never heard back from the government, and that was that.

Instead, Facebook spent the spring of 2016 very busily fending off accusations that it might influence the elections in a completely different way. When Gizmodo published its story about political bias on the Trending Topics team in May, the article went off like a bomb in Menlo Park. It quickly reached millions of readers and, in a delicious irony, appeared in the Trending Topics module itself. But the bad press wasn’t what really rattled Facebook—it was the letter from John Thune, a Republican US senator from South Dakota, that followed the story’s publication. Thune chairs the Senate Commerce Committee, which in turn oversees the Federal Trade Commission, an agency that has been especially active in investigating Facebook. The senator wanted Facebook’s answers to the allegations of bias, and he wanted them promptly.

The Thune letter put Facebook on high alert. The company promptly dispatched senior Washington staffers to meet with Thune’s team. Then it sent him a 12-page single-spaced letter explaining that it had conducted a thorough review of Trending Topics and determined that the allegations in the Gizmodo story were largely false.

Facebook decided, too, that it had to extend an olive branch to the entire American right wing, much of which was raging about the company’s supposed perfidy. And so, just over a week after the story ran, Facebook scrambled to invite a group of 17 prominent Republicans out to Menlo Park. The list included television hosts, radio stars, think tankers, and an adviser to the Trump campaign. The point was partly to get feedback. But more than that, the company wanted to make a show of apologizing for its sins, lifting up the back of its shirt, and asking for the lash.

According to a Facebook employee involved in planning the meeting, part of the goal was to bring in a group of conservatives who were certain to fight with one another. They made sure to have libertarians who wouldn’t want to regulate the platform and partisans who would. Another goal, according to the employee, was to make sure the attendees were “bored to death” by a technical presentation after Zuckerberg and Sandberg had addressed the group.

The power went out, and the room got uncomfortably hot. But otherwise the meeting went according to plan. The guests did indeed fight, and they failed to unify in a way that was either threatening or coherent. Some wanted the company to set hiring quotas for conservative employees; others thought that idea was nuts. As often happens when outsiders meet with Facebook, people used the time to try to figure out how they could get more followers for their own pages.

Afterward, Glenn Beck, one of the invitees, wrote an essay about the meeting, praising Zuckerberg. “I asked him if Facebook, now or in the future, would be an open platform for the sharing of all ideas or a curator of content,” Beck wrote. “Without hesitation, with clarity and boldness, Mark said there is only one Facebook and one path forward: ‘We are an open platform.’”

Inside Facebook itself, the backlash around Trending Topics did inspire some genuine soul-searching. But none of it got very far. A quiet internal project, codenamed Hudson, cropped up around this time to determine, according to someone who worked on it, whether News Feed should be modified to better deal with some of the most complex issues facing the product. Does it favor posts that make people angry? Does it favor simple or even false ideas over complex and true ones? Those are hard questions, and the company didn’t have answers to them yet. Ultimately, in late June, Facebook announced a modest change: The algorithm would be revised to favor posts from friends and family. At the same time, Adam Mosseri, Facebook’s News Feed boss, posted a manifesto titled “Building a Better News Feed for You.” People inside Facebook spoke of it as a document roughly resembling the Magna Carta; the company had never spoken before about how News Feed really worked. To outsiders, though, the document came across as boilerplate. It said roughly what you’d expect: that the company was opposed to clickbait but that it wasn’t in the business of favoring certain kinds of viewpoints.

The most important consequence of the Trending Topics controversy, according to nearly a dozen former and current employees, was that Facebook became wary of doing anything that might look like stifling conservative news. It had burned its fingers once and didn’t want to do it again. And so a summer of deeply partisan rancor and calumny began with Facebook eager to stay out of the fray.

IV

Shortly after Mosseri published his guide to News Feed values, Zuckerberg traveled to Sun Valley, Idaho, for an annual conference hosted by billionaire Herb Allen, where moguls in short sleeves and sunglasses cavort and make plans to buy each other’s companies. But Rupert Murdoch broke the mood in a meeting that took place inside his villa. According to numerous accounts of the conversation, Murdoch and Robert Thomson, the CEO of News Corp, explained to Zuckerberg that they had long been unhappy with Facebook and Google. The two tech giants had taken nearly the entire digital ad market and become an existential threat to serious journalism. According to people familiar with the conversation, the two News Corp leaders accused Facebook of making dramatic changes to its core algorithm without adequately consulting its media partners, wreaking havoc according to Zuckerberg’s whims. If Facebook didn’t start offering a better deal to the publishing industry, Thomson and Murdoch conveyed in stark terms, Zuckerberg could expect News Corp executives to become much more public in their denunciations and much more open in their lobbying. They had helped to make things very hard for Google in Europe. And they could do the same for Facebook in the US.

Facebook thought that News Corp was threatening to push for a government antitrust investigation or maybe an inquiry into whether the company deserved its protection from liability as a neutral platform. Inside Facebook, executives believed Murdoch might use his papers and TV stations to amplify critiques of the company. News Corp says that was not at all the case; the company threatened to deploy executives, but not its journalists.

Zuckerberg had reason to take the meeting especially seriously, according to a former Facebook executive, because he had firsthand knowledge of Murdoch’s skill in the dark arts. Back in 2007, Facebook had come under criticism from 49 state attorneys general for failing to protect young Facebook users from sexual predators and inappropriate content. Concerned parents had written to Connecticut attorney general Richard Blumenthal, who opened an investigation, and to The New York Times, which published a story. But according to a former Facebook executive in a position to know, the company believed that many of the Facebook accounts and the predatory behavior the letters referenced were fakes, traceable to News Corp lawyers or others working for Murdoch, who owned Facebook’s biggest competitor, MySpace. “We traced the creation of the Facebook accounts to IP addresses at the Apple store a block away from the MySpace offices in Santa Monica,” the executive says. “Facebook then traced interactions with those accounts to News Corp lawyers. When it comes to Facebook, Murdoch has been playing every angle he can for a long time.” (Both News Corp and its spinoff 21st Century Fox declined to comment.)

When Zuckerberg returned from Sun Valley, he told his employees that things had to change. They still weren’t in the news business, but they had to make sure there would be a news business. And they had to communicate better. One of those who got a new to-do list was Andrew Anker, a product manager who’d arrived at Facebook in 2015 after a career in journalism (including a long stint at WIRED in the ’90s). One of his jobs was to help the company think through how publishers could make money on the platform. Shortly after Sun Valley, Anker met with Zuckerberg and asked to hire 60 new people to work on partnerships with the news industry. Before the meeting ended, the request was approved.

But having more people out talking to publishers just drove home how hard it would be to resolve the financial problems Murdoch wanted fixed. News outfits were spending millions to produce stories that Facebook was benefiting from, and Facebook, they felt, was giving too little back in return. Instant Articles, in particular, struck them as a Trojan horse. Publishers complained that they could make more money from stories that loaded on their own mobile web pages than on Facebook Instant. (They often did so, it turned out, in ways that short-changed advertisers, by sneaking in ads that readers were unlikely to see. Facebook didn’t let them get away with that.) Another seemingly irreconcilable difference: Outlets like Murdoch’s Wall Street Journal depended on paywalls to make money, but Instant Articles banned paywalls; Zuckerberg disapproved of them. After all, he would often ask, how exactly do walls and toll booths make the world more open and connected?

The conversations often ended at an impasse, but Facebook was at least becoming more attentive. This newfound appreciation for the concerns of journalists did not, however, extend to the journalists on Facebook’s own Trending Topics team. In late August, everyone on the team was told that their jobs were being eliminated. Simultaneously, authority over the algorithm shifted to a team of engineers based in Seattle. Very quickly the module started to surface lies and fiction. A headline days later read, “Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary."

V

While Facebook grappled internally with what it was becoming—a company that dominated media but didn’t want to be a media company—Donald Trump’s presidential campaign staff faced no such confusion. To them Facebook’s use was obvious. Twitter was a tool for communicating directly with supporters and yelling at the media. Facebook was the way to run the most effective direct-marketing political operation in history.

In the summer of 2016, at the top of the general election campaign, Trump’s digital operation might have seemed to be at a major disadvantage. After all, Hillary Clinton’s team was flush with elite talent and got advice from Eric Schmidt, known for running Google. Trump’s was run by Brad Parscale, known for setting up the Eric Trump Foundation’s web page. Trump’s social media director was his former caddie. But in 2016, it turned out you didn’t need digital experience running a presidential campaign, you just needed a knack for Facebook.

Over the course of the summer, Trump’s team turned the platform into one of its primary vehicles for fund-raising. The campaign uploaded its voter files—the names, addresses, voting history, and any other information it had on potential voters—to Facebook. Then, using a tool called Lookalike Audiences, Facebook identified the broad characteristics of, say, people who had signed up for Trump newsletters or bought Trump hats. That allowed the campaign to send ads to people with similar traits. Trump would post simple messages like “This election is being rigged by the media pushing false and unsubstantiated charges, and outright lies, in order to elect Crooked Hillary!” that got hundreds of thousands of likes, comments, and shares. The money rolled in. Clinton’s wonkier messages, meanwhile, resonated less on the platform. Inside Facebook, almost everyone on the executive team wanted Clinton to win; but they knew that Trump was using the platform better. If he was the candidate for Facebook, she was the candidate for LinkedIn.

Trump’s candidacy also proved to be a wonderful tool for a new class of scammers pumping out massively viral and entirely fake stories. Through trial and error, they learned that memes praising the former host of The Apprentice got many more readers than ones praising the former secretary of state. A website called Ending the Fed proclaimed that the Pope had endorsed Trump and got almost a million comments, shares, and reactions on Facebook, according to an analysis by BuzzFeed. Other stories asserted that the former first lady had quietly been selling weapons to ISIS, and that an FBI agent suspected of leaking Clinton’s emails was found dead. Some of the posts came from hyperpartisan Americans. Some came from overseas content mills that were in it purely for the ad dollars. By the end of the campaign, the top fake stories on the platform were generating more engagement than the top real ones.

Even current Facebookers acknowledge now that they missed what should have been obvious signs of people misusing the platform. And looking back, it’s easy to put together a long list of possible explanations for the myopia in Menlo Park about fake news. Management was gun-shy because of the Trending Topics fiasco; taking action against partisan disinformation—or even identifying it as such—might have been seen as another act of political favoritism. Facebook also sold ads against the stories, and sensational garbage was good at pulling people into the platform. Employees’ bonuses can be based largely on whether Facebook hits certain growth and revenue targets, which gives people an extra incentive not to worry too much about things that are otherwise good for engagement. And then there was the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.

Roger McNamee, however, watched carefully as the nonsense spread. First there were the fake stories pushing Bernie Sanders, then he saw ones supporting Brexit, and then helping Trump. By the end of the summer, he had resolved to write an op-ed about the problems on the platform. But he never ran it. “The idea was, look, these are my friends. I really want to help them.” And so on a Sunday evening, nine days before the 2016 election, McNamee emailed a 1,000-word letter to Sandberg and Zuckerberg. “I am really sad about Facebook,” it began. “I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months. Now I am disappointed. I am embarrassed. I am ashamed.”

VI

It’s not easy to recognize that the machine you’ve built to bring people together is being used to tear them apart, and Mark Zuckerberg’s initial reaction to Trump’s victory, and Facebook’s possible role in it, was one of peevish dismissal. Executives remember panic the first few days, with the leadership team scurrying back and forth between Zuckerberg’s conference room (called the Aquarium) and Sandberg’s (called Only Good News), trying to figure out what had just happened and whether they would be blamed. Then, at a conference two days after the election, Zuckerberg argued that filter bubbles are worse offline than on Facebook and that social media hardly influences how people vote. “The idea that fake news on Facebook—of which, you know, it’s a very small amount of the content—influenced the election in any way, I think, is a pretty crazy idea,” he said.

Zuckerberg declined to be interviewed for this article, but people who know him well say he likes to form his opinions from data. And in this case he wasn’t without it. Before the interview, his staff had worked up a back-of-the-envelope calculation showing that fake news was a tiny percentage of the total amount of election-related content on the platform. But the analysis was just an aggregate look at the percentage of clearly fake stories that appeared across all of Facebook. It didn’t measure their influence or the way fake news affected specific groups. It was a number, but not a particularly meaningful one.

Zuckerberg’s comments did not go over well, even inside Facebook. They seemed clueless and self-absorbed. “What he said was incredibly damaging,” a former executive told WIRED. “We had to really flip him on that. We realized that if we didn’t, the company was going to start heading down this pariah path that Uber was on.”

A week after his “pretty crazy” comment, Zuckerberg flew to Peru to give a talk to world leaders about the ways that connecting more people to the internet, and to Facebook, could reduce global poverty. Right after he landed in Lima, he posted something of a mea culpa. He explained that Facebook did take misinformation seriously, and he presented a vague seven-point plan to tackle it. When a professor at the New School named David Carroll saw Zuckerberg’s post, he took a screenshot. Alongside it on Carroll’s feed ran a headline from a fake CNN with an image of a distressed Donald Trump and the text “DISQUALIFIED; He’s GONE!”

At the conference in Peru, Zuckerberg met with a man who knows a few things about politics: Barack Obama. Media reports portrayed the encounter as one in which the lame-duck president pulled Zuckerberg aside and gave him a “wake-up call” about fake news. But according to someone who was with them in Lima, it was Zuckerberg who called the meeting, and his agenda was merely to convince Obama that, yes, Facebook was serious about dealing with the problem. He truly wanted to thwart misinformation, he said, but it wasn’t an easy issue to solve.

Meanwhile, at Facebook, the gears churned. For the first time, insiders really began to question whether they had too much power. One employee told WIRED that, watching Zuckerberg, he was reminded of Lennie in Of Mice and Men, the farm-worker with no understanding of his own strength.

Very soon after the election, a team of employees started working on something called the News Feed Integrity Task Force, inspired by a sense, one of them told WIRED, that hyperpartisan misinformation was “a disease that’s creeping into the entire platform.” The group, which included Mosseri and Anker, began to meet every day, using whiteboards to outline different ways they could respond to the fake-news crisis. Within a few weeks the company announced it would cut off advertising revenue for ad farms and make it easier for users to flag stories they thought false.

In December the company announced that, for the first time, it would introduce fact-checking onto the platform. Facebook didn’t want to check facts itself; instead it would outsource the problem to professionals. If Facebook received enough signals that a story was false, it would automatically be sent to partners, like Snopes, for review. Then, in early January, Facebook announced that it had hired Campbell Brown, a former anchor at CNN. She immediately became the most prominent journalist hired by the company.

Soon Brown was put in charge of something called the Facebook Journalism Project. “We spun it up over the holidays, essentially,” says one person involved in discussions about the project. The aim was to demonstrate that Facebook was thinking hard about its role in the future of journalism—essentially, it was a more public and organized version of the efforts the company had begun after Murdoch’s tongue-lashing. But sheer anxiety was also part of the motivation. “After the election, because Trump won, the media put a ton of attention on fake news and just started hammering us. People started panicking and getting afraid that regulation was coming. So the team looked at what Google had been doing for years with News Lab”—a group inside Alphabet that builds tools for journalists—“and we decided to figure out how we could put together our own packaged program that shows how seriously we take the future of news.”

Facebook was reluctant, however, to issue any mea culpas or action plans with regard to the problem of filter bubbles or Facebook’s noted propensity to serve as a tool for amplifying outrage. Members of the leadership team regarded these as issues that couldn’t be solved, and maybe even shouldn’t be solved. Was Facebook really more at fault for amplifying outrage during the election than, say, Fox News or MSNBC? Sure, you could put stories into people’s feeds that contradicted their political viewpoints, but people would turn away from them, just as surely as they’d flip the dial back if their TV quietly switched them from Sean Hannity to Joy Reid. The problem, as Anker puts it, “is not Facebook. It’s humans.”

VII

Zuckerberg’s “pretty crazy” statement about fake news caught the ear of a lot of people, but one of the most influential was a security researcher named Renée DiResta. For years, she’d been studying how misinformation spreads on the platform. If you joined an antivaccine group on Facebook, she observed, the platform might suggest that you join flat-earth groups or maybe ones devoted to Pizzagate—putting you on a conveyor belt of conspiracy thinking. Zuckerberg’s statement struck her as wildly out of touch. “How can this platform say this thing?” she remembers thinking.

Roger McNamee, meanwhile, was getting steamed at Facebook’s response to his letter. Zuckerberg and Sandberg had written him back promptly, but they hadn’t said anything substantial. Instead he ended up having a months-long, ultimately futile set of email exchanges with Dan Rose, Facebook’s VP for partnerships. McNamee says Rose’s message was polite but also very firm: The company was doing a lot of good work that McNamee couldn’t see, and in any event Facebook was a platform, not a media company.

“And I’m sitting there going, ‘Guys, seriously, I don’t think that’s how it works,’” McNamee says. “You can assert till you’re blue in the face that you’re a platform, but if your users take a different point of view, it doesn’t matter what you assert.”

As the saying goes, heaven has no rage like love to hatred turned, and McNamee’s concern soon became a cause—and the beginning of an alliance. In April 2017 he connected with a former Google design ethicist named Tristan Harris when they appeared together on Bloomberg TV. Harris had by then gained a national reputation as the conscience of Silicon Valley. He had been profiled on 60 Minutes and in The Atlantic, and he spoke eloquently about the subtle tricks that social media companies use to foster an addiction to their services. “They can amplify the worst aspects of human nature,” Harris told WIRED this past December. After the TV appearance, McNamee says he called Harris up and asked, “Dude, do you need a wingman?”

The next month, DiResta published an article comparing purveyors of disinformation on social media to manipulative high-frequency traders in financial markets. “Social networks enable malicious actors to operate at platform scale, because they were designed for fast information flows and virality,” she wrote. Bots and sock puppets could cheaply “create the illusion of a mass groundswell of grassroots activity,” in much the same way that early, now-illegal trading algorithms could spoof demand for a stock. Harris read the article, was impressed, and emailed her.

The three were soon out talking to anyone who would listen about Facebook’s poisonous effects on American democracy. And before long they found receptive audiences in the media and Congress—groups with their own mounting grievances against the social media giant.

VIII

Even at the best of times, meetings between Facebook and media executives can feel like unhappy family gatherings. The two sides are inextricably bound together, but they don’t like each other all that much. News executives resent that Facebook and Google have captured roughly three-quarters of the digital ad business, leaving the media industry and other platforms, like Twitter, to fight over scraps. Plus they feel like the preferences of Facebook’s algorithm have pushed the industry to publish ever-dumber stories. For years, The New York Times resented that Facebook helped elevate BuzzFeed; now BuzzFeed is angry about being displaced by clickbait.

And then there’s the simple, deep fear and mistrust that Facebook inspires. Every publisher knows that, at best, they are sharecroppers on Facebook’s massive industrial farm. The social network is roughly 200 times more valuable than the Times. And journalists know that the man who owns the farm has the leverage. If Facebook wanted to, it could quietly turn any number of dials that would harm a publisher—by manipulating its traffic, its ad network, or its readers.

Emissaries from Facebook, for their part, find it tiresome to be lectured by people who can’t tell an algorithm from an API. They also know that Facebook didn’t win the digital ad market through luck: It built a better ad product. And in their darkest moments, they wonder: What’s the point? News makes up only about 5 percent of the total content that people see on Facebook globally. The company could let it all go and its shareholders would scarcely notice. And there’s another, deeper problem: Mark Zuckerberg, according to people who know him, prefers to think about the future. He’s less interested in the news industry’s problems right now; he’s interested in the problems five or 20 years from now. The editors of major media companies, on the other hand, are worried about their next quarter—maybe even their next phone call. When they bring lunch back to their desks, they know not to buy green bananas.

This mutual wariness—sharpened almost to enmity in the wake of the election—did not make life easy for Campbell Brown when she started her new job running the nascent Facebook Journalism Project. The first item on her to-do list was to head out on yet another Facebook listening tour with editors and publishers. One editor describes a fairly typical meeting: Brown and Chris Cox, Facebook’s chief product officer, invited a group of media leaders to gather in late January 2017 at Brown’s apartment in Manhattan. Cox, a quiet, suave man, sometimes referred to as “the Ryan Gosling of Facebook Product,” took the brunt of the ensuing abuse. “Basically, a bunch of us just laid into him about how Facebook was destroying journalism, and he graciously absorbed it,” the editor says. “He didn’t much try to defend them. I think the point was really to show up and seem to be listening.” Other meetings were even more tense, with the occasional comment from journalists noting their interest in digital antitrust issues.

As bruising as all this was, Brown’s team became more confident that their efforts were valued within the company when Zuckerberg published a 5,700-word corporate manifesto in February. He had spent the previous three months, according to people who know him, contemplating whether he had created something that did more harm than good. “Are we building the world we all want?” he asked at the beginning of his post, implying that the answer was an obvious no. Amid sweeping remarks about “building a global community,” he emphasized the need to keep people informed and to knock out false news and clickbait. Brown and others at Facebook saw the manifesto as a sign that Zuckerberg understood the company’s profound civic responsibilities. Others saw the document as blandly grandiose, showcasing Zuckerberg’s tendency to suggest that the answer to nearly any problem is for people to use Facebook more.

Shortly after issuing the manifesto, Zuckerberg set off on a carefully scripted listening tour of the country. He began popping into candy shops and dining rooms in red states, camera crew and personal social media team in tow. He wrote an earnest post about what he was learning, and he deflected questions about whether his real goal was to become president. It seemed like a well-meaning effort to win friends for Facebook. But it soon became clear that Facebook’s biggest problems emanated from places farther away than Ohio.

IX

One of the many things Zuckerberg seemed not to grasp when he wrote his manifesto was that his platform had empowered an enemy far more sophisticated than Macedonian teenagers and assorted low-rent purveyors of bull. As 2017 wore on, however, the company began to realize it had been attacked by a foreign influence operation. “I would draw a real distinction between fake news and the Russia stuff,” says an executive who worked on the company’s response to both. “With the latter there was a moment where everyone said ‘Oh, holy shit, this is like a national security situation.’”

That holy shit moment, though, didn’t come until more than six months after the election. Early in the campaign season, Facebook was aware of familiar attacks emanating from known Russian hackers, such as the group APT28, which is believed to be affiliated with Moscow. They were hacking into accounts outside of Facebook, stealing documents, then creating fake Facebook accounts under the banner of DCLeaks, to get people to discuss what they’d stolen. The company saw no signs of a serious, concerted foreign propaganda campaign, but it also didn’t think to look for one.

During the spring of 2017, the company’s security team began preparing a report about how Russian and other foreign intelligence operations had used the platform. One of its authors was Alex Stamos, head of Facebook’s security team. Stamos was something of an icon in the tech world for having reportedly resigned from his previous job at Yahoo after a conflict over whether to grant a US intelligence agency access to Yahoo servers. According to two people with direct knowledge of the document, he was eager to publish a detailed, specific analysis of what the company had found. But members of the policy and communications team pushed back and cut his report way down. Sources close to the security team suggest the company didn’t want to get caught up in the political whirlwind of the moment. (Sources on the politics and communications teams insist they edited the report down, just because the darn thing was hard to read.)

On April 27, 2017, the day after the Senate announced it was calling then FBI director James Comey to testify about the Russia investigation, Stamos’ report came out. It was titled “Information Operations and Facebook,” and it gave a careful step-by-step explanation of how a foreign adversary could use Facebook to manipulate people. But there were few specific examples or details, and there was no direct mention of Russia. It felt bland and cautious. As Renée DiResta says, “I remember seeing the report come out and thinking, ‘Oh, goodness, is this the best they could do in six months?’”

One month later, a story in Time suggested to Stamos’ team that they might have missed something in their analysis. The article quoted an unnamed senior intelligence official saying that Russian operatives had bought ads on Facebook to target Americans with propaganda. Around the same time, the security team also picked up hints from congressional investigators that made them think an intelligence agency was indeed looking into Russian Facebook ads. Caught off guard, the team members started to dig into the company’s archival ads data themselves.

Eventually, by sorting transactions according to a series of data points—Were ads purchased in rubles? Were they purchased within browsers whose language was set to Russian?—they were able to find a cluster of accounts, funded by a shadowy Russian group called the Internet Research Agency, that had been designed to manipulate political opinion in America. There was, for example, a page called Heart of Texas, which pushed for the secession of the Lone Star State. And there was Blacktivist, which pushed stories about police brutality against black men and women and had more followers than the verified Black Lives Matter page.

Numerous security researchers express consternation that it took Facebook so long to realize how the Russian troll farm was exploiting the platform. After all, the group was well known to Facebook. Executives at the company say they’re embarrassed by how long it took them to find the fake accounts, but they point out that they were never given help by US intelligence agencies. A staffer on the Senate Intelligence Committee likewise voiced exasperation with the company. “It seemed obvious that it was a tactic the Russians would exploit,” the staffer says.

When Facebook finally did find the Russian propaganda on its platform, the discovery set off a crisis, a scramble, and a great deal of confusion. First, due to a miscalculation, word initially spread through the company that the Russian group had spent millions of dollars on ads, when the actual total was in the low six figures. Once that error was resolved, a disagreement broke out over how much to reveal, and to whom. The company could release the data about the ads to the public, release everything to Congress, or release nothing. Much of the argument hinged on questions of user privacy. Members of the security team worried that the legal process involved in handing over private user data, even if it belonged to a Russian troll farm, would open the door for governments to seize data from other Facebook users later on. “There was a real debate internally,” says one executive. “Should we just say ‘Fuck it’ and not worry?” But eventually the company decided it would be crazy to throw legal caution to the wind “just because Rachel Maddow wanted us to.”

Ultimately, a blog post appeared under Stamos’ name in early September announcing that, as far as the company could tell, the Russians had paid Facebook $100,000 for roughly 3,000 ads aimed at influencing American politics around the time of the 2016 election. Every sentence in the post seemed to downplay the substance of these new revelations: The number of ads was small, the expense was small. And Facebook wasn’t going to release them. The public wouldn’t know what they looked like or what they were really aimed at doing.

This didn’t sit at all well with DiResta. She had long felt that Facebook was insufficiently forthcoming, and now it seemed to be flat-out stonewalling. “That was when it went from incompetence to malice,” she says. A couple of weeks later, while waiting at a Walgreens to pick up a prescription for one of her kids, she got a call from a researcher at the Tow Center for Digital Journalism named Jonathan Albright. He had been mapping ecosystems of misinformation since the election, and he had some excellent news. “I found this thing,” he said. Albright had started digging into CrowdTangle, one of the analytics platforms that Facebook uses. And he had discovered that the data from six of the accounts Facebook had shut down were still there, frozen in a state of suspended animation. There were the posts pushing for Texas secession and playing on racial antipathy. And then there were political posts, like one that referred to Clinton as “that murderous anti-American traitor Killary.” Right before the election, the Blacktivist account urged its supporters to stay away from Clinton and instead vote for Jill Stein. Albright downloaded the most recent 500 posts from each of the six groups. He reported that, in total, their posts had been shared more than 340 million times.

X

To McNamee, the way the Russians used the platform was neither a surprise nor an anomaly. “They find 100 or 1,000 people who are angry and afraid and then use Facebook’s tools to advertise to get people into groups,” he says. “That’s exactly how Facebook was designed to be used.”

McNamee and Harris had first traveled to DC for a day in July to meet with members of Congress. Then, in September, they were joined by DiResta and began spending all their free time counseling senators, representatives, and members of their staffs. The House and Senate Intelligence Committees were about to hold hearings on Russia’s use of social media to interfere in the US election, and McNamee, Harris, and DiResta were helping them prepare. One of the early questions they weighed in on was the matter of who should be summoned to testify. Harris recommended that the CEOs of the big tech companies be called in, to create a dramatic scene in which they all stood in a neat row swearing an oath with their right hands in the air, roughly the way tobacco executives had been forced to do a generation earlier. Ultimately, though, it was determined that the general counsels of the three companies—Facebook, Twitter, and Google—should head into the lion’s den.

And so on November 1, Colin Stretch arrived from Facebook to be pummeled. During the hearings themselves, DiResta was sitting on her bed in San Francisco, watching them with her headphones on, trying not to wake up her small children. She listened to the back-and-forth in Washington while chatting on Slack with other security researchers. She watched as Marco Rubio smartly asked whether Facebook even had a policy forbidding foreign governments from running an influence campaign through the platform. The answer was no. Rhode Island senator Jack Reed then asked whether Facebook felt an obligation to individually notify all the users who had seen Russian ads that they had been deceived. The answer again was no. But maybe the most threatening comment came from Dianne Feinstein, the senior senator from Facebook’s home state. “You’ve created these platforms, and now they’re being misused, and you have to be the ones to do something about it,” she declared. “Or we will.”

After the hearings, yet another dam seemed to break, and former Facebook executives started to go public with their criticisms of the company too. On November 8, billionaire entrepreneur Sean Parker, Facebook’s first president, said he now regretted pushing Facebook so hard on the world. “I don’t know if I really understood the consequences of what I was saying,” he said.

Read more: https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell/

Facebook is overhauling its News Feed so users feel better again

Facebook is tweaking its News Feed again.

This time it wants to bring it back to friends and family instead of viral videos and media posts, Facebook CEO Mark Zuckerberg announced in a post Thursday. 

“I’m changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions,” he wrote.

He said the change should make everyone feel better: “The research shows that when we use social media to connect with people we care about, it can be good for our well-being. We can feel more connected and less lonely, and that correlates with long term measures of happiness and health.”

With fewer posts from businesses, brands, and media, expect to see more of what your “friends” are sharing and liking. 

Zuckerberg didn’t mention Facebook’s role in the 2016 election or Russian meddling through the platform as motivation to change what shows up on the social network.

A breakdown of the “closer together” initiative indicates news stories will get de-prioritized, while conversations that Facebook thinks will spark a lot of engagement will get a boost.

To achieve a happier Facebook user base, it looks like Facebook will focus on comment-heavy posts — and not just quick comments like, “Oh no!” or “Thanks!” but lengthy (meaningful!) comments.

All those “likes” won’t mean as much as full-on engagement, which under the new rules seems to mean back-and-forth conversations. Sounds like posting links back and forth won’t count as much in the meaningfulness meter.

In other words, publishers will almost certainly see traffic drop and video views decrease.

Zuckerberg rationalized that the changes will ultimately make for a better Facebook experience, naturally, but might actually cause people to spend less time on the social network.

“I also expect the time you do spend on Facebook will be more valuable,” he wrote.

Read more: http://mashable.com/2018/01/11/facebook-news-feed-algorithm-changes-family-friends/