Why Zuckerberg’s 14-Year Apology Tour Hasn’t Fixed Facebook

In 2003, one year before Facebook was founded, a website called Facemash began nonconsensually scraping pictures of students at Harvard from the school’s intranet and asking users to rate their hotness. Obviously, it caused an outcry. The website’s developer quickly proffered an apology. “I hope you understand, this is not how I meant for things to go, and I apologize for any harm done as a result of my neglect to consider how quickly the site would spread and its consequences thereafter,” wrote a young Mark Zuckerberg. “I definitely see how my intentions could be seen in the wrong light.”

In 2004 Zuckerberg cofounded Facebook, which rapidly spread from Harvard to other universities. And in 2006 the young company blindsided its users with the launch of News Feed, which collated and presented in one place information that people had previously had to search for piecemeal. Many users were shocked and alarmed that there was no warning and that there were no privacy controls. Zuckerberg apologized. “This was a big mistake on our part, and I’m sorry for it,” he wrote on Facebook’s blog. “We really messed this one up,” he said. “We did a bad job of explaining what the new features were and an even worse job of giving you control of them.”

Zeynep Tufekci (@zeynep) is an associate professor at the University of North Carolina and an opinion writer for The New York Times. She recently wrote about the (democracy-poisoning) golden age of free speech.

Then in 2007, Facebook’s Beacon advertising system, which was launched without proper controls or consent, ended up compromising user privacy by making people’s purchases public. Fifty thousand Facebook users signed an e-petition titled “Facebook: Stop invading my privacy.” Zuckerberg responded with an apology: “We simply did a bad job with this release and I apologize for it.” He promised to improve. “I’m not proud of the way we’ve handled this situation and I know we can do better,” he wrote.

By 2008, Zuckerberg had written only four posts on Facebook’s blog: Every single one of them was an apology or an attempt to explain a decision that had upset users.

In 2010, after Facebook violated users' privacy by making key types of information public without proper consent or warning, Zuckerberg again responded with an apology—this time published in an op-ed in The Washington Post. “We just missed the mark,” he said. “We heard the feedback,” he added. “There needs to be a simpler way to control your information.” “In the coming weeks, we will add privacy controls that are much simpler to use,” he promised.

I’m going to run out of space here, so let’s jump to 2018 and skip over all the other mishaps and apologies and promises to do better—oh yeah, and the consent decree that the Federal Trade Commission made Facebook sign in 2011, charging that the company had deceptively promised privacy to its users and then repeatedly broken that promise—in the intervening years.

Last month, Facebook once again garnered widespread attention with a privacy-related backlash when it became widely known that, between 2008 and 2015, it had allowed hundreds, maybe thousands, of apps to scrape voluminous data from Facebook users—not just from the users who had downloaded the apps, but detailed information from all their friends as well. One such app was run by a Cambridge University academic named Aleksandr Kogan, who apparently siphoned up detailed data on up to 87 million users, most of them in the United States, and then surreptitiously forwarded the loot to the political data firm Cambridge Analytica. The incident caused a lot of turmoil because it connects to the rolling story of distortions in the 2016 US presidential election. But in reality, Kogan’s app was just one among many, many apps that amassed a huge amount of information in a way most Facebook users were completely unaware of.

At first Facebook indignantly defended itself, claiming that people had consented to these terms; after all, the disclosures were buried somewhere in the dense language surrounding obscure user privacy controls. People were asking for it, in other words.

But the backlash wouldn’t die down. Attempting to respond to the growing outrage, Facebook announced changes. “It’s Time to Make Our Privacy Tools Easier to Find,” the company announced without a hint of irony—or any other kind of hint—that Zuckerberg had promised to do just that in the “coming weeks” eight full years ago. On the company blog, Facebook’s chief privacy officer wrote that instead of being “spread across nearly 20 different screens” (why were they ever spread all over the place?), the controls would now finally be in one place.

Zuckerberg again went on an apology tour, giving interviews to The New York Times, CNN, Recode, WIRED, and Vox (but not to the Guardian and Observer reporters who broke the story). In each interview he apologized. “I’m really sorry that this happened,” he told CNN. “This was certainly a breach of trust.”

But Zuckerberg didn’t stop at an apology this time. He also defended Facebook as an “idealistic company” that cares about its users and spoke disparagingly about rival companies that charge users money for their products while maintaining a strong record in protecting user privacy. In his interview with Vox’s Ezra Klein, Zuckerberg said that anyone who believes Apple cares more about users than Facebook does has “Stockholm syndrome”—the phenomenon whereby hostages start sympathizing and identifying with their captors.

This is an interesting argument coming from the CEO of Facebook, a company that essentially holds its users' data hostage. Yes, Apple charges handsomely for its products, but it also includes advanced encryption hardware on all its phones, delivers timely security updates to its whole user base, and has largely locked itself out of user data—to the chagrin of many governments, including that of the United States, and of Facebook itself.

Most Android phones, by contrast, gravely lag behind in receiving security updates, have no specialized encryption hardware, and often handle privacy controls in a way that is detrimental to user interests. Few governments or companies complain about Android phones. After the Cambridge Analytica scandal, it came to light that Facebook had been downloading and keeping all the text messages of its users on the Android platform—their content as well as their metadata. “The users consented!” Facebook again cried out. But people were soon posting screenshots that showed how difficult it was for a mere mortal to discern that’s what was going on, let alone figure out how to opt out, on the vague permission screen that flashed before users.

On Apple phones, however, Facebook couldn’t harvest people’s text messages because the permissions wouldn’t allow it.

In the same interview, Zuckerberg took wide aim at the oft-repeated notion that, if an online service is free, you—the user—are the product. He said that he found the argument that “if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth.” His rebuttal to that accusation, however, was itself glib; and as for whether it was aligned with the truth—well, we just have to take his word for it. “To the dissatisfaction of our sales team here,” he said, “I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business.”

As far as I can tell, not once in his apology tour was Zuckerberg asked what on earth he means when he refers to Facebook’s 2 billion-plus users as “a community” or “the Facebook community.” A community is a set of people with reciprocal rights, powers, and responsibilities. If Facebook really were a community, Zuckerberg would not be able to make so many statements about unilateral decisions he has made—often, as he boasts in many interviews, in defiance of Facebook’s shareholders and various factions of the company’s workforce. Zuckerberg’s decisions are final, since he controls a majority of the voting power in Facebook, and always will until he decides not to—it’s just the way he has structured the company.

Facebook’s 2 billion users are not Facebook’s “community.” They are its user base, and they have been repeatedly carried along by the decisions of the one person who controls the platform. These users have invested time and money in building their social networks on Facebook, yet they have no means to port the connectivity elsewhere. Whenever a serious competitor to Facebook has arisen, the company has quickly copied it (Snapchat) or purchased it (WhatsApp, Instagram), often at a mind-boggling price that only a behemoth with massive cash reserves could afford. Nor do people have any means to completely stop being tracked by Facebook. The surveillance follows them not just on the platform, but elsewhere on the internet—some of them apparently can’t even text their friends without Facebook trying to snoop on the conversation. Facebook doesn’t just collect data itself; it has purchased external data from data brokers; it creates “shadow profiles” of nonusers; and it is now attempting to match offline data to its online profiles.

Again, this isn’t a community; this is a regime of one-sided, highly profitable surveillance, carried out on a scale that has made Facebook one of the largest companies in the world by market capitalization.

There is no way to interpret Facebook’s privacy-invading moves over the years—even if it’s time to simplify! finally!—as anything other than decisions driven by a combination of self-serving impulses: namely, profit motives, the structural incentives inherent to the company’s business model, and the one-sided ideology of its founders and some executives. All these are forces over which the users themselves have little input, aside from the regular opportunity to grouse through repeated scandals. And even the ideology—a vague philosophy that purports to prize openness and connectivity, with little to say about privacy and other values—is one that does not seem to apply to the people who run Facebook or work for it. Zuckerberg buys the houses surrounding his own and tapes over his computer’s camera to preserve his privacy, and company employees were up in arms when a controversial internal memo arguing for growth at all costs was recently leaked to the press—a nonconsensual, surprising, and uncomfortable disclosure of the kind that Facebook has routinely imposed upon its billions of users over the years.

This isn’t to say Facebook doesn’t provide real value to its users, even as it locks them in through network effects and by crushing, buying, and copying its competition. I wrote a whole book in which I document, among other things, how useful Facebook has been to anticensorship efforts around the world. Nor is it to say that Facebook executives make all their decisions merely to increase the company’s valuation or profits, or that they don’t care about users. But multiple things can be true at the same time; all of this is quite complicated. And fundamentally, Facebook’s business model and reckless mode of operating are a giant dagger threatening the health and well-being of the public sphere and the privacy of its users in many countries.

So, here’s the thing. There is indeed a case of Stockholm syndrome here. There are very few other contexts in which a person would be allowed to make a series of decisions that have obviously enriched them while eroding the privacy and well-being of billions of people; to make basically the same apology for those decisions countless times over the space of just 14 years; and then to profess innocence, idealism, and complete independence from the obvious structural incentives that have shaped the whole process. This should ordinarily cause all the other educated, literate, and smart people in the room to break into howls of protest or laughter. Or maybe tears.

Facebook has tens of thousands of employees, and reportedly an open culture with strong internal forums. Insiders often talk of how free employees feel to speak up, and indeed I’ve repeatedly been told how they are encouraged to disagree and discuss all the key issues. Facebook has an educated workforce.

By now, it ought to be plain to them, and to everyone, that Facebook’s 2 billion-plus users are surveilled and profiled, that their attention is then sold to advertisers and, it seems, practically anyone else who will pay Facebook—including unsavory dictators like the Philippines’ Rodrigo Duterte. That is Facebook’s business model. That is why the company has an almost half-a-trillion-dollar market capitalization, along with billions in spare cash to buy competitors.

These are such readily apparent facts that any denial of them is quite astounding.

And yet, it appears that nobody around Facebook’s sovereign and singular ruler has managed to convince their leader that these are blindingly obvious truths whose acceptance may well provide us with some hints of a healthier way forward. That the repeated use of the word “community” to refer to Facebook’s users is not appropriate and is, in fact, misleading. That the constant repetition of “sorry” and “we meant well” and “we will fix it this time!” to refer to what is basically the same betrayal over 14 years should no longer be accepted as a promise to do better, but should instead be seen as but one symptom of a profound crisis of accountability. When a large chorus of people outside the company raises alarms on a regular basis, it’s not a sufficient explanation to say, “Oh, we were blindsided (again).”

Maybe, just maybe, that is the case of Stockholm syndrome we should be focusing on.

Zuckerberg’s outright denial that Facebook’s business interests play a powerful role in shaping its behavior doesn’t bode well for Facebook’s chances of doing better in the future. I don’t doubt that the company has, on occasion, held itself back from bad behavior. That doesn’t make Facebook that exceptional, nor does it excuse its existing choices, nor does it alter the fact that its business model is fundamentally driving its actions.

At a minimum, Facebook has long needed an ombudsman’s office with real teeth and power: an institution within the company that can act as a check on its worst impulses and protect its users. And it needs a lot more employees whose task is to keep the platform healthier. But what would truly be disruptive and innovative would be for Facebook to alter its business model. Such a change could come from within, or it could be driven by regulations on data retention and opaque, surveillance-based targeting—regulations that would make such practices less profitable or even forbidden.

Facebook will respond to the latest crisis by keeping more of its data within its own walls (of course, that fits well with the business of charging third parties for access to users based on extensive profiling with data held by Facebook, so this is no sacrifice). Sure, it’s good that Facebook is now promising not to leak user data to unscrupulous third parties; but it should finally allow truly independent researchers better (and secure, not reckless) access to the company’s data in order to investigate the true effects of the platform. Thus far, Facebook has not cooperated with independent researchers who want to study it. Such investigation would be essential to informing the kind of political discussion we need to have about the trade-offs inherent in how Facebook, and indeed all of social media, operate.

Even without that independent investigation, one thing is clear: Facebook’s sole sovereign is neither equipped to, nor should he be in a position to, make all these decisions by himself, and Facebook’s long reign of unaccountability should end.


Facebook in Crisis

  • Initially, Facebook said that Cambridge Analytica got unauthorized access to some 50 million users' data. The social network has now raised that number to 87 million.
  • Next week, Mark Zuckerberg will testify before Congress. The question on our minds: How can Facebook prevent the next crisis if its guiding principle is and always has been connection at all costs?
  • Facebook has a long history of privacy gaffes. Here are just a few.


Read more: https://www.wired.com/story/why-zuckerberg-15-year-apology-tour-hasnt-fixed-facebook/

Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning

Sundar Pichai, the chief executive of Google, has said that AI “is more profound than … electricity or fire.” Andrew Ng, who founded Google Brain and now invests in AI startups, wrote that “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”

Their enthusiasm is pardonable. There have been remarkable advances in AI, after decades of frustration. Today we can tell a voice-activated personal assistant like Alexa to “Play the band Television,” or count on Facebook to tag our photographs; Google Translate is often almost as accurate as a human translator. Over the last half decade, billions of dollars in research funding and venture capital have flowed towards AI; it is the hottest course in computer science programs at MIT and Stanford. In Silicon Valley, newly minted AI specialists command half a million dollars in salary and stock.

But there are many things that people can do quickly that smart machines cannot. Natural language is beyond deep learning; new situations baffle artificial intelligences, like cows brought up short at a cattle grid. None of these shortcomings is likely to be solved soon. Once you’ve seen it, you can’t un-see it: deep learning, now the dominant technique in artificial intelligence, will not lead to an AI that abstractly reasons and generalizes about the world. By itself, it is unlikely to automate ordinary human activities.

Jason Pontin (@jason_pontin) is an Ideas contributor for WIRED. He is a senior partner at Flagship Pioneering, a firm in Boston that creates, builds, and funds companies that solve problems in health, food, and sustainability. From 2004 to 2017 he was the editor in chief and publisher of MIT Technology Review. Before that he was the editor of Red Herring magazine, a business magazine that was popular during the dot-com boom.

To see why modern AI is good at a few things but bad at everything else, it helps to understand how deep learning works. Deep learning is math: a statistical method where computers learn to classify patterns using neural networks. Such networks possess inputs and outputs, a little like the neurons in our own brains; they are said to be “deep” when they possess multiple hidden layers that contain many nodes, with a blooming multitude of connections. Deep learning employs an algorithm called backpropagation, or backprop, that adjusts the mathematical weights between nodes, so that an input leads to the right output. In speech recognition, the phonemes c-a-t should spell the word “cat;” in image recognition, a photograph of a cat must not be labeled “a dog;” in translation, qui canem et faelem ut deos colunt should spit out “who worship dogs and cats as gods.” Deep learning is “supervised” when neural nets are trained to recognize phonemes, photographs, or the relation of Latin to English using millions or billions of prior, laboriously labeled examples.
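To make those mechanics concrete, here is a minimal sketch of that training loop, written in plain Python with numpy. It illustrates the idea rather than any production system: the toy dataset (the XOR truth table), the single hidden layer of eight nodes, and the learning rate are all arbitrary choices for the example.

```python
import numpy as np

# Labeled training examples: inputs and the outputs they should produce (XOR).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The mathematical weights between nodes, plus bias terms, randomly initialized.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden layer -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: propagate each input through the weighted connections.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation: push the output error backward through the layers and
    # adjust the weights so that each input leads closer to its labeled output.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2))  # typically close to the labels: [0, 1, 1, 0]
```

Supervised learning at industrial scale is this same adjust-the-weights loop, run over millions or billions of labeled examples instead of four.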

Deep learning’s advances are the product of pattern recognition: neural networks memorize classes of things and more-or-less reliably know when they encounter them again. But almost all the interesting problems in cognition aren’t classification problems at all. “People naively believe that if you take deep learning and scale it 100 times more layers, and add 1000 times more data, a neural net will be able to do anything a human being can do,” says François Chollet, a researcher at Google. “But that’s just not true.”

Gary Marcus, a professor of cognitive psychology at NYU and briefly director of Uber’s AI lab, recently published a remarkable trilogy of essays, offering a critical appraisal of deep learning. Marcus believes that deep learning is not “a universal solvent, but one tool among many.” And without new approaches, Marcus worries that AI is rushing toward a wall, beyond which lie all the problems that pattern recognition cannot solve. His views are quietly shared with varying degrees of intensity by most leaders in the field, with the exceptions of Yann LeCun, the director of AI research at Facebook, who curtly dismissed the argument as “all wrong,” and Geoffrey Hinton, a professor emeritus at the University of Toronto and the grandfather of backpropagation, who sees “no evidence” of a looming obstacle.

According to skeptics like Marcus, deep learning is greedy, brittle, opaque, and shallow. The systems are greedy because they demand huge sets of training data. They are brittle because, when a neural net is given a “transfer test”—confronted with scenarios that differ from the examples used in training—it cannot contextualize the situation and frequently breaks. They are opaque because, unlike traditional programs with their formal, debuggable code, the parameters of neural networks can only be interpreted in terms of their weights within a mathematical geography. Consequently, they are black boxes whose outputs cannot be explained, raising doubts about their reliability and biases. Finally, they are shallow because they are programmed with little innate knowledge and possess no common sense about the world or human psychology.
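Brittleness, at least, is easy to demonstrate in miniature. The following sketch (a generic illustration, not an example drawn from Marcus’s essays) fits a small ReLU network to y = x² on the interval [−2, 2], then gives it a transfer test at x = 5, far outside its training data; because such networks extrapolate linearly beyond what they have seen, the answer typically lands nowhere near 25.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 200).reshape(-1, 1)  # training inputs
y = x ** 2                                  # training targets

W1, b1 = rng.normal(size=(1, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)) * 0.1, np.zeros(1)

def net(inp):
    """Forward pass: one hidden ReLU layer, linear output."""
    return np.maximum(0, inp @ W1 + b1) @ W2 + b2

for _ in range(5000):
    h = np.maximum(0, x @ W1 + b1)
    pred = h @ W2 + b2
    g = 2 * (pred - y) / len(x)         # gradient of mean squared error
    gh = (g @ W2.T) * (h > 0)           # backpropagate through the ReLU
    W2 -= 0.05 * (h.T @ g);  b2 -= 0.05 * g.sum(0)
    W1 -= 0.05 * (x.T @ gh); b1 -= 0.05 * gh.sum(0)

print(net(np.array([[1.5]])).item())   # inside the training range: close to 2.25
print(net(np.array([[5.0]])).item())   # transfer test: typically far from 25
```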

These limitations mean that a lot of automation will prove more elusive than AI hyperbolists imagine. “A self-driving car can drive millions of miles, but it will eventually encounter something new for which it has no experience,” explains Pedro Domingos, the author of The Master Algorithm and a professor of computer science at the University of Washington. “Or consider robot control: A robot can learn to pick up a bottle, but if it has to pick up a cup, it starts from scratch.” In January, Facebook abandoned M, a text-based virtual assistant that used humans to supplement and train a deep learning system, but which never offered useful suggestions or employed language naturally.

What’s wrong? “It must be that we have a better learning algorithm in our heads than anything we’ve come up with for machines,” Domingos says. We need to invent better methods of machine learning, skeptics aver. The remedy for artificial intelligence, according to Marcus, is syncretism: combining deep learning with unsupervised learning techniques that don’t depend so much on labeled training data, as well as the old-fashioned description of the world with logical rules that dominated AI before the rise of deep learning. Marcus claims that our best model for intelligence is ourselves, and humans think in many different ways. His young children could learn general rules about language without many examples, but they were also born with innate capacities. “We are born knowing there are causal relationships in the world, that wholes can be made of parts, and that the world consists of places and objects that persist in space and time,” he says. “No machine ever learned any of that stuff using backprop.”

Other researchers have different ideas. “We’ve used the same basic paradigms [for machine learning] since the 1950s,” says Domingos, “and at the end of the day, we’re going to need some new ideas.” Chollet looks for inspiration in program synthesis, programs that automatically create other programs. Hinton’s current research explores an idea he calls “capsules,” which preserves backpropagation, the algorithm for deep learning, but addresses some of its limitations.

“There are a lot of core questions in AI that are completely unsolved,” says Chollet, “and even largely unasked.” We must answer these questions because there are tasks that a lot of humans don’t want to do, such as cleaning toilets and classifying pornography, or which intelligent machines would do better, such as discovering drugs to treat diseases. More: there are things that we can’t do at all, most of which we cannot yet imagine.


AI Anxieties

  • You can stop panicking about a superhuman AI. As Kevin Kelly writes, that’s a myth.

  • Another worry you can cross off your list? The fear that robots will take all of our jobs. It’s not nearly that simple.

  • But AI is becoming an ever-more integral factor in the future of work. Say hello to your new AI coworkers.


Read more: https://www.wired.com/story/greedy-brittle-opaque-and-shallow-the-downsides-to-deep-learning/