Why Zuckerberg's 14-Year Apology Tour Hasn't Fixed Facebook

In 2003, one year before Facebook was founded, a website called Facemash began nonconsensually scraping pictures of students at Harvard from the school’s intranet and asking users to rate their hotness. Obviously, it caused an outcry. The website’s developer quickly proffered an apology. "I hope you understand, this is not how I meant for things to go, and I apologize for any harm done as a result of my neglect to consider how quickly the site would spread and its consequences thereafter,” wrote a young Mark Zuckerberg. “I definitely see how my intentions could be seen in the wrong light.”

In 2004 Zuckerberg cofounded Facebook, which rapidly spread from Harvard to other universities. And in 2006 the young company blindsided its users with the launch of News Feed, which collated and presented in one place information that people had previously had to search for piecemeal. Many users were shocked and alarmed that there was no warning and that there were no privacy controls. Zuckerberg apologized. “This was a big mistake on our part, and I'm sorry for it,” he wrote on Facebook’s blog. "We really messed this one up," he said. "We did a bad job of explaining what the new features were and an even worse job of giving you control of them."

Zeynep Tufekci (@zeynep) is an associate professor at the University of North Carolina and an opinion writer for The New York Times. She recently wrote about the (democracy-poisoning) golden age of free speech.

Then in 2007, Facebook’s Beacon advertising system, which was launched without proper controls or consent, ended up compromising user privacy by making people’s purchases public. Fifty thousand Facebook users signed an e-petition titled “Facebook: Stop invading my privacy.” Zuckerberg responded with an apology: “We simply did a bad job with this release and I apologize for it." He promised to improve. “I'm not proud of the way we've handled this situation and I know we can do better,” he wrote.

By 2008, Zuckerberg had written only four posts on Facebook’s blog: Every single one of them was an apology or an attempt to explain a decision that had upset users.

In 2010, after Facebook violated users' privacy by making key types of information public without proper consent or warning, Zuckerberg again responded with an apology—this time published in an op-ed in The Washington Post. “We just missed the mark,” he said. “We heard the feedback,” he added. “There needs to be a simpler way to control your information.” “In the coming weeks, we will add privacy controls that are much simpler to use,” he promised.

I’m going to run out of space here, so let’s jump to 2018 and skip over all the other mishaps and apologies and promises to do better—oh yeah, and the consent decree that the Federal Trade Commission made Facebook sign in 2011, charging that the company had deceptively promised privacy to its users and then repeatedly broken that promise—in the intervening years.

Last month, Facebook once again garnered widespread attention with a privacy-related backlash when it became widely known that, between 2008 and 2015, it had allowed hundreds, maybe thousands, of apps to scrape voluminous data from Facebook users—not just from the users who had downloaded the apps, but detailed information from all their friends as well. One such app was run by a Cambridge University academic named Aleksandr Kogan, who apparently siphoned up detailed data on up to 87 million users in the United States and then surreptitiously forwarded the loot to the political data firm Cambridge Analytica. The incident caused a lot of turmoil because it connects to the rolling story of distortions in the 2016 US presidential election. But in reality, Kogan’s app was just one among many, many apps that amassed a huge amount of information in a way most Facebook users were completely unaware of.

At first Facebook indignantly defended itself, claiming that people had consented to these terms; after all, the disclosures were buried somewhere in the dense language surrounding obscure user privacy controls. People were asking for it, in other words.

But the backlash wouldn’t die down. Attempting to respond to the growing outrage, Facebook announced changes. “It’s Time to Make Our Privacy Tools Easier to Find,” the company declared without a hint of irony—or any other kind of hint—that Zuckerberg had promised to do just that in the “coming weeks” eight full years ago. On the company blog, Facebook’s chief privacy officer wrote that instead of being “spread across nearly 20 different screens” (why were they ever spread all over the place?), the controls would now finally be in one place.

Zuckerberg again went on an apology tour, giving interviews to The New York Times, CNN, Recode, WIRED, and Vox (but not to the Guardian and Observer reporters who broke the story). In each interview he apologized. “I’m really sorry that this happened,” he told CNN. “This was certainly a breach of trust.”

But Zuckerberg didn’t stop at an apology this time. He also defended Facebook as an “idealistic company” that cares about its users and spoke disparagingly about rival companies that charge users money for their products while maintaining a strong record in protecting user privacy. In his interview with Vox’s Ezra Klein, Zuckerberg said that anyone who believes Apple cares more about users than Facebook does has “Stockholm syndrome”—the phenomenon whereby hostages start sympathizing and identifying with their captors.

This is an interesting argument coming from the CEO of Facebook, a company that essentially holds its users' data hostage. Yes, Apple charges handsomely for its products, but it also includes advanced encryption hardware on all its phones, delivers timely security updates to its whole user base, and has largely locked itself out of user data—to the chagrin of many governments, including that of the United States, and of Facebook itself.

Most Android phones, by contrast, gravely lag behind in receiving security updates, have no specialized encryption hardware, and often handle privacy controls in a way that is detrimental to user interests. Few governments or companies complain about Android phones. After the Cambridge Analytica scandal, it came to light that Facebook had been downloading and keeping all the text messages of its users on the Android platform—their content as well as their metadata. “The users consented!” Facebook again cried out. But people were soon posting screenshots that showed how difficult it was for a mere mortal to discern that’s what was going on, let alone figure out how to opt out, on the vague permission screen that flashed before users.

On Apple phones, however, Facebook couldn’t harvest people’s text messages because the permissions wouldn’t allow it.

In the same interview, Zuckerberg took wide aim at the oft-repeated notion that, if an online service is free, you—the user—are the product. He said that he found the argument that “if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth.” His rebuttal to that accusation, however, was itself glib; and as for whether it was aligned with the truth—well, we just have to take his word for it. “To the dissatisfaction of our sales team here,” he said, “I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business.”

As far as I can tell, not once in his apology tour was Zuckerberg asked what on earth he means when he refers to Facebook’s 2 billion-plus users as “a community” or “the Facebook community.” A community is a set of people with reciprocal rights, powers, and responsibilities. If Facebook really were a community, Zuckerberg would not be able to make so many statements about unilateral decisions he has made—often, as he boasts in many interviews, in defiance of Facebook’s shareholders and various factions of the company’s workforce. Zuckerberg’s decisions are final, since he controls a majority of Facebook’s voting stock, and always will until he decides not to—it’s just the way he has structured the company.

This isn’t a community; this is a regime of one-sided, highly profitable surveillance, carried out on a scale that has made Facebook one of the largest companies in the world by market capitalization.

Facebook’s 2 billion users are not Facebook’s “community.” They are its user base, and they have been repeatedly carried along by the decisions of the one person who controls the platform. These users have invested time and money in building their social networks on Facebook, yet they have no means to port the connectivity elsewhere. Whenever a serious competitor to Facebook has arisen, the company has quickly copied it (Snapchat) or purchased it (WhatsApp, Instagram), often at a mind-boggling price that only a behemoth with massive cash reserves could afford. Nor do people have any means to completely stop being tracked by Facebook. The surveillance follows them not just on the platform, but elsewhere on the internet—some of them apparently can’t even text their friends without Facebook trying to snoop in on the conversation. Facebook doesn’t just collect data itself; it has purchased external data from data brokers; it creates “shadow profiles” of nonusers and is now attempting to match offline data to its online profiles.

Again, this isn’t a community; this is a regime of one-sided, highly profitable surveillance, carried out on a scale that has made Facebook one of the largest companies in the world by market capitalization.

There is no way to interpret Facebook’s privacy-invading moves over the years—even if it is now, finally, “time to simplify!”—as anything other than decisions driven by a combination of self-serving impulses: namely, profit motives, the structural incentives inherent to the company’s business model, and the one-sided ideology of its founders and some executives. All these are forces over which the users themselves have little input, aside from the regular opportunity to grouse through repeated scandals. And even the ideology—a vague philosophy that purports to prize openness and connectivity, with little to say about privacy and other values—is one that does not seem to apply to the people who run Facebook or work for it. Zuckerberg buys up the houses surrounding his own and tapes over his computer’s camera to preserve his privacy, and company employees were up in arms when a controversial internal memo making the argument for growth at all costs was recently leaked to the press—a nonconsensual, surprising, and uncomfortable disclosure of the kind that Facebook has routinely imposed upon its billions of users over the years.

This isn’t to say Facebook doesn’t provide real value to its users, even as it locks them in through network effects and by crushing, buying, and copying its competition. I wrote a whole book in which I document, among other things, how useful Facebook has been to anticensorship efforts around the world. It doesn’t even mean that Facebook executives make all decisions merely to increase the company valuation or profit, or that they don’t care about users. But multiple things can be true at the same time; all of this is quite complicated. And fundamentally, Facebook’s business model and reckless mode of operating are a giant dagger threatening the health and well-being of the public sphere and the privacy of its users in many countries.

So, here’s the thing. There is indeed a case of Stockholm syndrome here. There are very few other contexts in which a person would be allowed to make a series of decisions that have obviously enriched them while eroding the privacy and well-being of billions of people; to make basically the same apology for those decisions countless times over the space of just 14 years; and then to profess innocence, idealism, and complete independence from the obvious structural incentives that have shaped the whole process. This should ordinarily cause all the other educated, literate, and smart people in the room to break into howls of protest or laughter. Or maybe tears.

Facebook has tens of thousands of employees, and reportedly an open culture with strong internal forums. Insiders often talk of how free employees feel to speak up, and indeed I’ve repeatedly been told how they are encouraged to disagree and discuss all the key issues. Facebook has an educated workforce.

By now, it ought to be plain to them, and to everyone, that Facebook’s 2 billion-plus users are surveilled and profiled, that their attention is then sold to advertisers and, it seems, practically anyone else who will pay Facebook—including unsavory dictators like the Philippines’ Rodrigo Duterte. That is Facebook’s business model. That is why the company has an almost half-a-trillion-dollar market capitalization, along with billions in spare cash to buy competitors.

These are such readily apparent facts that any denial of them is quite astounding.

And yet, it appears that nobody around Facebook’s sovereign and singular ruler has managed to convince their leader that these are blindingly obvious truths whose acceptance may well provide us with some hints of a healthier way forward. That the repeated use of the word “community” to refer to Facebook’s users is not appropriate and is, in fact, misleading. That the constant repetition of “sorry” and “we meant well” and “we will fix it this time!” in response to what is basically the same betrayal over 14 years should no longer be accepted as a promise to do better, but should instead be seen as but one symptom of a profound crisis of accountability. When a large chorus of people outside the company raises alarms on a regular basis, it’s not a sufficient explanation to say, “Oh we were blindsided (again).”

Maybe, just maybe, that is the case of Stockholm syndrome we should be focusing on.

Zuckerberg’s outright denial that Facebook’s business interests play a powerful role in shaping its behavior doesn’t bode well for Facebook’s chances of doing better in the future. I don’t doubt that the company has, on occasion, held itself back from bad behavior. That doesn’t make Facebook that exceptional, nor does it excuse its existing choices, nor does it alter the fact that its business model is fundamentally driving its actions.

At a minimum, Facebook has long needed an ombudsman’s office with real teeth and power: an institution within the company that can act as a check on its worst impulses and protect its users. And it needs a lot more employees whose task is to keep the platform healthier. But what would truly be disruptive and innovative would be for Facebook to alter its business model. Such a change could come from within, or it could be driven by regulations on data retention and opaque, surveillance-based targeting—regulations that would make such practices less profitable or even forbidden.

Facebook will respond to the latest crisis by keeping more of its data within its own walls (of course, that fits well with the business of charging third parties for access to users based on extensive profiling with data held by Facebook, so this is no sacrifice). Sure, it’s good that Facebook is now promising not to leak user data to unscrupulous third parties; but it should finally allow truly independent researchers better (and secure, not reckless) access to the company’s data in order to investigate the true effects of the platform. Thus far, Facebook has not cooperated with independent researchers who want to study it. Such investigation would be essential to informing the kind of political discussion we need to have about the trade-offs inherent in how Facebook, and indeed all of social media, operate.

Even without that independent investigation, one thing is clear: Facebook’s sole sovereign is neither equipped to, nor should he be in a position to, make all these decisions by himself, and Facebook’s long reign of unaccountability should end.




Read more: https://www.wired.com/story/why-zuckerberg-15-year-apology-tour-hasnt-fixed-facebook/

Finland has found the answer to homelessness. It couldn't be simpler | Harry Quilter-Pinner

With the number of rough sleepers in Britain soaring, it's time we got over our prejudices, writes Harry Quilter-Pinner, who works at the homelessness charity SCT

"I was born in Liverpool and grew up on a council estate. I had a clean home, toys and nice meals as a kid. When I was nine years old, the sexual abuse started. My abusers made me feel special. They gave me gifts, money, cigarettes and sweets. When I was 13 I ran away from home and soon found myself in the murky world of prostitution on the streets. My life was out of control."

This is how it all started for Simon. I met him 23 years later at SCT, a local charity I help to run in east London that offers support to people who are homeless and face alcohol and drug addiction. He used to make me coffee every morning at the social enterprise cafe we run. In the intervening period he had spent years in and out of hostels and institutions, as well as long spells on the streets.

When I met him, Simon was sober and working for the first time in years. He said at the time: "SCT offered me the opportunity to get my life back on track. Life is worth living now. I'm looking forward to my future." Tragically, this future wasn't to be: soon afterwards he decided to return to the streets and died as a result.

I would like to be able to say that Simon's story is an exception. But in reality it is all too familiar, as new statistics published by the Guardian showed on Wednesday. The number of homeless people dying on the streets or in temporary accommodation in the UK has more than doubled over the past five years to more than one per week. The average age of a rough sleeper when they die is 43, about half the UK life expectancy.

The tragedy is that it's entirely within our power to do something about it: homelessness is not a choice made by the individual, it is a reality forced by government policy. As homelessness has rocketed in the UK (up 134% since 2010), it has fallen by 35% in Finland over a similar period of time. The Finnish government is now aiming to abolish it altogether in the coming years.

I recently travelled to Finland to understand how it had done this. It turns out its solution is painfully simple and blindingly obvious: give homes to homeless people. As Juha Kaakinen, who has led much of the work on housing first in Finland, explained to me when I met him in Helsinki, this takes housing as a basic human right rather than being conditional on engaging in services for addictions or mental health.

This is fundamentally different to our model in the UK, where stable accommodation is only provided as a reward for engaging in treatment services. The problem with this is obvious if you stop and think about it: how do we expect people to address complex personal problems while exposed to the chaos of life on the streets?

Sceptics will argue that giving homes to homeless people is a recipe for disaster. Aren't we just subsidising addiction? Won't we end up with huge bills when it all goes wrong? Don't people need an incentive to get their lives back on track and engage in services?

Actually, no. The evidence from Finland, as well as numerous other pilot schemes across the world, shows the opposite is true. When people are given homes, homelessness is radically reduced, engagement in support services goes up and recovery rates from addiction are comparable to a "treatment first" approach. Even more impressive is that there are overall savings for government, as people's use of emergency health services and the criminal justice system is lessened.

At the last election, the government committed to pilot a housing first approach in the UK. This isn't good enough: we don't need another pilot. During my time in Finland I didn't see one homeless person. Within a few hours of coming back to London I walked past more than 100 rough sleepers queuing for food in the rain, just a few minutes from parliament. What we need is action. Ending homelessness is eminently achievable if we have the moral capacity and will to take proper action. We must overcome our prejudices and our apathy. The status quo is simply not good enough.

Harry Quilter-Pinner is director of strategy at SCT, a homelessness and addictions charity in east London. He is also a research fellow at IPPR, the UK's progressive thinktank. He writes here in a personal capacity

Read more: https://www.theguardian.com/commentisfree/2018/apr/12/finland-homelessness-rough-sleepers-britain

From AI to Russia, Here's How Estonia's President Is Planning for the Future

At 48 years old, Kersti Kaljulaid is Estonia’s youngest president ever, and its first female president. A marathon runner with degrees in genetics and an MBA, she spent a career behind the scenes—mostly as a European government auditor—before being elected by Estonia’s legislature in 2016. Two years later, she’s continuing Estonia’s push for global digital security while deflecting military and cyber threats from Russia, which occupied Estonia for 50 years until its liberation in 1991.

Known for its digital government, tax, and medical systems, Estonia is planning for the future. The country’s “e-resident” program—which allows global citizens to obtain a government-issued ID card and set up remotely operated businesses in Estonia—has attracted 35,000 people since 2014. Now the government is discussing a proposal to grant some rights to artificially intelligent systems. Such a law could make it easier to regulate decision-making by autonomous systems, robots, or driverless cars.

This week, Kaljulaid visited the White House along with the leaders of Latvia and Lithuania, to meet with President Donald Trump about issues including security along the Russian border. The visit coincided with the 100th anniversary of Baltic independence after World War I, and Trump took the opportunity to reaffirm the US's commitment to protecting the Baltic States in accordance with the NATO Treaty. After attending the US-Baltic Trade Summit and laying a wreath at Arlington National Cemetery on Wednesday, Kaljulaid sat down with WIRED’s Eric Niiler for an interview at the Estonian Embassy in Washington.

EN: With various efforts over the past decade, Estonia is moving from a traditional state to a digital society in many ways. Where does that effort stand now and what do you hope to see happen during your next few years in office?

KK: Digital society is born when your people refuse to use paper. And in our country we know that our people refuse to use paper. If you arrive at such a point in your development, you have to make your digital state always secure. You need several alternatives if something goes wrong. All the time you need to worry about security; it doesn’t differ much from your paper archives.

We have already a society which is digitally disrupted. We also see that it changes how people think about technology and work and what possibilities the internet can offer for new types of careers. For example, people don’t need enterprises to work; they can sell their skills online independently.

In our case, the government is also within the digital sphere. We recognize that there is the need to think about tax systems if people work in five different companies in five different countries at the same time. This needs to be sorted out. We cannot sort it alone, we need to sort it globally.

Estonian citizens seem to trust their government when it comes to sharing digital information. Here in the US, we trust Facebook and Amazon to a point, but with the government, it’s quite the opposite. How have you done this?

The way we have created our trust is because our people are not anonymous on the internet. It has always been secure. If you try to transact with someone online, you would not do it with an email and pay with a credit card. What we do instead is create an encrypted channel and sign a contract that is time stamped. Estonians are much more used to internet banking rather than an online credit card. You can create trust, but you have to create tools and the legal space that supports the security for these tools. The state has to promise people to keep them safe on the internet. I find it astonishing that globally businesses are on the internet. Very few states have followed them.

What about external threats? What other sort of steps might be needed to prevent Russian aggression in places like Ukraine, or the kind of cyber-attacks and hacking that have occurred in the United States during the 2016 presidential election?

With conventional aggression, since we got the sanctions in place, Russia has not made any further advances in any other region. In cyber, we must not get narrowly concentrated on Russia only. Cyber attacks rain down on us from many places. You have to make your systems secure and safe and teach your people cyber hygiene. If you are able to attribute some attacks, it's good to be open about it as the United States has been. We need to have an understanding globally about how international rules apply in the internet sphere. Right now, that is massively missing.

What do you mean, global rules?

There’s lot of academic work on this, for example the Tallinn Manual 1 and 2. For example, we don’t attack each other’s sovereignty. Could attacking some vital electronic systems be considered an attack? What are the rights of the defender in that case? What are the rights where you fall under attack from a country you can identify, but not from the government? And if this government cannot go after the attacker because it is too weak—what are your rights then?

Speaking of rights, Estonia is looking to become perhaps the first nation to grant legal rights to artificial intelligence agents, such as fully autonomous robots or vehicles. How will it affect ordinary Estonians?

The discussion centers on whether we need to create a special legal entity for autonomous systems. If you regulate for AI, you also regulate for machine learning, self-acting and autonomous systems. We want our state to be proactive to offer services to people. You need to carefully think how to make this offer safe to our people and their private data. We want AI to be safely grown in Estonia.

Was this pushed by the advent of driverless cars?

No, it's pushed by the Estonian people's demand to get more proactive state services. For example, if a couple has a child, they are entitled to universal child support. In the Estonian people’s minds, it is unnecessary to apply for this. They say, “I had my baby, just pay me." For that, this is proactive. People demand efficiency from an automated system that is making decisions. We have to regulate. Once you go digital, you are constantly pushed by your people to provide better services.

You’ve just launched a new genetic testing program for 100,000 Estonian citizens adding to the 52,000 who have already been tested. How will this information be used to improve public health? And what kind of safeguards are there to prevent possible genetic discrimination by employers, for example?

This information belongs to those people whose genome has been analyzed. This information does not belong to the Estonian Genome Bank or the government, and it's not shared with other individuals. People’s genetic data is in an anonymous form. The aim of this program is so people will know their diabetes risk, or their heart attack risk. They can share this information with their family doctor, but they are not obliged to. They can keep it to themselves, but most people will probably share it with their doctor.

Are there any other big things on the horizon in Estonia that we should be looking for?

I wouldn’t tell you if I had. The genome bank and the digital society are the projects that have flied. I am sure there are others that have not. Our people are willing to work with the government on new technologies. Now it’s a habit; every Estonian looks at it as part of our national identity. We understand that this allows us to provide better services to our people than our money would allow.

Read more: https://www.wired.com/story/from-ai-to-russia-heres-how-estonias-president-is-planning-for-the-future/

Sociologists Examine Hackathons and See Exploitation

As the gospel of Silicon Valley-style disruption spreads to every sector of the economy, so too has the industry’s favorite competitive ritual: the hackathon. The contests, where small teams of “hackers” build tech products in marathon all-night coding sessions, are a hallmark of Silicon Valley culture. Recall Facebook’s most famous hackathon, thrown on the eve of its IPO to show the world that the demands of being a public company would not kill the “hacker way” at One Hacker Way.

Now, sponsors ranging from Fortune 500 conglomerates to conference organizers host them. Even New York Fashion Week and the Vatican have hosted hackathons. They’ve become part of a “toolkit” for large organizations seeking a veneer of innovation. Some organizers view them as recruiting opportunities, others as opportunities to evangelize their company’s technology platforms, and others simply want to be associated with something cool and techie. They’re so common that hackathon enthusiast Mike Swift started a company dedicated to organizing and building community around them called Major League Hacking. Last year the company provided services for more than 200 hackathons with more than 65,000 participants.

The phenomenon is attracting attention from academics. One pair of sociologists recently examined hackathons and emerged with troubling conclusions. Sharon Zukin, professor of sociology at Brooklyn College and CUNY Graduate Center, spent a year observing seven hackathons, mostly sponsored by corporations, in New York City, interviewing participants, organizers, and sponsors. In a study called “Hackathons As Co-optation Ritual: Socializing Workers and Institutionalizing Innovation in the ‘New’ Economy,” she and co-author Max Papadantonakis argue that hackathons create “fictional expectations of innovation that benefits all,” which Zukin writes is a “powerful strategy for manufacturing workers’ consent in the ‘new’ economy.” In other words, institutions use the allure of hackathons, with sponsors, prizes, snacks, and potential for career advancement, to get people to work for free.

To Zukin, this is a problem, because hackathons are making the “hacker subculture” they promote into the new work norm. That norm, which coincides with the labor market trend of less-secure employment, encourages professional workers to adopt an “entrepreneurial” career and market themselves for continually shifting jobs. The trend also includes motivating workers with Soviet-style slogans venerating the pleasures of work.

Zukin tells WIRED the unpaid labor of hackathons recalls sociological research on fashion models, who are also expected to spend time promoting themselves on social media, and party girls, who go to nightclubs with male VIPs in hopes of boosting acting or modeling aspirations. Participants are combining self-investment with self-exploitation, she says. It’s rational given the demands of the modern labor market. It’s just precarious work.

Zukin was surprised to find that hackathon participants almost universally view the events positively. Hackathons are often social, emotionally charged, and a way to learn. Swift says his company found that 86 percent of student participants say they learn skills they can’t get in the classroom, and a third of them believe skills they learned at a hackathon helped them get a job.

Zukin observed hackathon sponsors fueling the “romance of digital innovation by appealing to the hackers’ aspiration to be multi-dimensional agents of change,” she writes. The themes of exhaustion (participants often work for 24 or 36 hours straight), achievement, and the belief that this work could bring future financial reward, were prevalent at the events she observed.

To the tech industry and its imitators, these are normal ideas. To a sociologist, they’re exploitative. “From my perspective, they’re doing unpaid work for corporations,” Zukin says. (Even hackathons thrown by schools, non-profits, publishers, and civic organizations tend to have corporate sponsors.)

Viewed through a sociologist’s framework, Zukin says the events’ aspirational messaging—typical Silicon Valley-style futurebabble about changing the world—feels dystopian. Hackathons show “the fault lines of an emerging production system” by embodying a set of “quasi-Orwellian” ideas that are prevalent in the current economic climate, she writes. Zukin encapsulates those ideas in slogans that could be at home on the walls of a WeWork lobby: “Work is Play,” “Exhaustion is Effervescent,” and “Precarity is Opportunity.”

Zukin only examined hackathons that were open to the public. But many companies, like Facebook, host internal hackathons over weekends. Zukin notes that such events, in which employees may feel obligated to participate, are a form of labor control. “They’re just trying to squeeze the innovation out of [their workers],” she says.

Hackathons reflect an asymmetry of power between the hackathons’ corporate sponsors and their participants, the study argues. Their corporate sponsors outsource work, crowdsource innovation, and burnish their reputations while concealing their business goals.

I noticed this phenomenon while reporting on a dozen hackathons between 2012 and 2014. At a 2013 college-sponsored hackathon, it seemed that everyone involved wanted something from the participants: Sponsors wanted to lay the groundwork for potential investments, hire the hackers, convince them to use particular software to build tools and apps, and boost their own reputations by offering cash, snacks and other prizes.

Swift, of Major League Hacking, doesn’t think sponsor involvement is bad for participants. “The corporate sponsors enable these amazing experiences that the students have at these hackathons,” he says. Their sponsorship “demonstrates that the companies understand developers, care about their interest and goals, and are investing in this community,” he says. He notes that because of sponsors, participants get to work with tools they might not have access to, like VR headsets or expensive software platforms.

The irony is that, regardless of whether hackathon participants willingly participate in self-exploitation or are simply having fun and learning, they rarely produce useful innovations that last beyond the event’s 36 hours. Startup lore has plenty of tales of successful companies that were created at hackathons—a popular example is GroupMe, the messaging app created at a TechCrunch hackathon, which sold to Skype for $85 million one year later. But such examples are rare. “Hacks are hacks, not startups,” Swift wrote in a blog post. “Most hackers don’t want to work on their hackathon project after the hackathon ends.”

Hackathons are not particularly effective as recruiting strategies for large companies, either, the study finds. But they sell the dream of self-improvement via technology, something companies want to be associated with regardless of any immediate benefit to their bottom line. As symbols of innovation, they’re not likely to go anywhere anytime soon.


Read more: https://www.wired.com/story/sociologists-examine-hackathons-and-see-exploitation/

Promising Cancer “Vaccine” Is Set To Begin Human Trials

An immune system-boosting cancer treatment that recently demonstrated astounding results in mice is now advancing to human testing.

Published less than two months ago in Science Translational Medicine, a study by Stanford University researchers showed that injection of two immune-stimulating agents directly into a tumor caused T-cells to recognize and destroy cancerous cells in both the local tumor as well as a distantly located secondary mass.

Because the combination treatment provokes an immune response and can be easily administered by an injection, the scientists have referred to it as a cancer “vaccine”, although technically speaking it is not a true vaccine.

Normally, T-cells are ineffective against tumors because the malignant cells within are either too similar to healthy cells to be recognized, or because the cancerous cells secrete chemicals that allow them to go undetected.

Existing antibody-based cancer treatments get around this by targeting cancerous cells through highly specific mutations, but consequently only work on certain cancers. The newly approved CAR T-cell therapies also work by boosting T-cell function, yet the treatment requires each individual patient’s immune cells to be genetically engineered.

Thus, the Stanford team’s finding that a simple injection of two agents caused mouse T-cells to mobilize themselves against genetically identical nearby cancer cells – plus far away ones that mimic metastasized cells – is quite remarkable. Moreover, the treatment was effective against multiple types of cancer. The best results, a whopping 97 percent cure rate, were seen against lymphoma. 

Now, the researchers will evaluate the injection in humans with low-grade B-cell non-Hodgkin lymphoma, a subtype of the disease.

Dr Ronald Levy, leader of the planned phase 1 trial and senior author of the mouse study, told the SF Gate that he and his colleagues hope to enroll a total of 35 adult patients for two study groups by the end of this year.

Each participant will first receive low-dose radiation therapy to kill some cancer cells and weaken those that remain, followed by two rounds of treatment injection.

The aim of the trial will be to determine the optimal dose and examine the treatment’s side effects.

“The two drugs we are injecting are made by two different companies and have already been proven safe for people,” Levy said. “It’s the combination we are testing.”

One of the treatment’s components is an antibody called anti-OX40 that activates both CD4 T-cells, helper cells that communicate with other immune cells, and CD8 killer cells, which, as the name suggests, release chemicals that destroy targeted cells.

The other ingredient is a short strand of synthetic DNA that acts as a ligand for the immune receptor TLR9 – this, in turn, boosts antibody production and leads to the creation of specialized memory cells whose purpose is to quickly sound the alarm if the same threatening cell reappears in the future.

Read more: http://www.iflscience.com/health-and-medicine/promising-cancer-vaccine-is-set-to-begin-human-trials/

When do you know you’re old enough to die? Barbara Ehrenreich has some answers

With her latest book, Natural Causes, Barbara Ehrenreich notes that there's an age at which death no longer requires much explanation

Four years ago, Barbara Ehrenreich, 76, reached the realisation that she was old enough to die. Not that the author, journalist and political activist was sick; she just didn't want to spoil the time she had left undergoing myriad preventive medical tests or restricting her diet in pursuit of a longer life.

While she would seek help for an urgent health issue, she wouldn't look for problems.

Now Ehrenreich felt free to enjoy herself. "I tend to worry that a lot of my friends who are my age don't get to that point," she tells the Guardian. "They're frantically scrambling for new things that might prolong their lives."

It is not a suicidal decision, she stresses. Ehrenreich has what she calls "a very keen bullshit detector" and she has done her research.

The results of this are detailed in her latest book, Natural Causes: An Epidemic of Wellness, the Certainty of Dying, and Killing Ourselves to Live Longer, published on 10 April.

Part polemic, part autobiographical, Ehrenreich, who holds a PhD in cellular immunology, casts a skeptical, sometimes witty, and scientifically rigorous eye over the beliefs we hold that we think will give us longevity.

She targets the medical examinations, screenings and tests we're subjected to in older age, as well as the multibillion-dollar wellness industry, the cult of mindfulness and food fads.

These all give us the illusion that we are in control of our bodies. But in the latter part of the book, Ehrenreich argues this is not so. For example, she details how our immune systems can turn on us, promoting rather than preventing the spread of cancer cells.

When Ehrenreich talks of being old enough to die, she does not mean that each of us has an expiration date. It's more that there's an age at which death no longer requires much explanation.

"That thought had been forming in my mind for some time," she says. "I really have no hard evidence about when exactly one gets old enough to die, but I notice in obituaries if the person is over 70 there's not a big mystery, there's no investigation called for. It's usually not called tragic because we do die at some age. I found that rather refreshing."

In 2000, Ehrenreich was diagnosed with breast cancer (she wrote the critical, award-winning essay Welcome to Cancerland about the pink ribbon culture).

The experience of cancer treatment helped shape her thoughts on ageing, she says.

"Within this last decade, I realised I was not going to go through chemotherapy again. That's like a year out of your life when you consider the recovery time and everything. I don't have a year to spare."

In Natural Causes, Ehrenreich writes about how you receive more calls to screenings and tests in the US, including mammograms, colonoscopies and bone density scans, as you get older. She claims most fail the evidence-based test and are at best unnecessary and at worst harmful.

Ehrenreich would rather relax with family and friends or take a long walk than sit in a doctor's waiting room. She lives near her daughter in Alexandria, Virginia, and likes to pick up her 13-year-old granddaughter from school and hang out with her a while.

Work is still a passion too. She fizzes with ideas for articles and books on subjects that call for her non-conformist take.

Once a prominent figure in the Democratic Socialists of America, she is also busy with the Economic Hardship Reporting Project she founded, which promotes journalism about inequality and poverty in the US, and gives opportunity to journalists who are struggling financially. (The Guardian often partners with the organisation.)

Ehrenreich, who is divorced, has talked to her children, Rosa, a law professor, and Ben, a journalist and novelist, about her realisation that she is old enough to die, but not in a grim way. That wouldn't be her style. While it is a sombre subject, she chats about it with a matter-of-fact humour.

"I just said: 'This is bullshit. I'm not going to go through this and that and the other. I'm not going to spend my time, which is very precious, being screened and probed and subjected to various kinds of machine surveillance.' I think they're with me. I raised them right," she laughs.

"The last time I had to get a new primary care doctor I told her straight out: 'I will come to you if I have a problem, but do not go looking for problems.'"

She pauses: "I think I beat her into submission."

Natural Causes is Ehrenreich's 23rd book in 50 years. Much of her work is myth-busting, such as Bright-sided, which looks at the false promises of positive thinking; other work highlights her keen sense of social justice. For her best-selling 2001 book Nickel and Dimed, she went undercover for three months, working in cleaning, waitressing and retail jobs to experience the difficulties of life on a minimum wage.

A recent exchange with a friend summed up what Ehrenreich hoped to achieve with Natural Causes.

"I gave the book to a dear friend of mine a week ago. She's 86 and she's a very distinguished social scientist and has had a tremendous career. She said: 'I love this, Barbara, it's making me happy.' I felt wow. I want people to read it and relax. I see so many people my age (and this has been going on for a while) who are obsessed, for example, with their diets."

"I'm sorry, I'm not going out of this life without butter on my bread. I've had so much grief from people about butter. The most important thing is that food tastes good enough to eat it. I like a glass of wine or a bloody mary, too."

Barbara Ehrenreich: "Cancer is a cellular rebellion." Photograph: Stephen Voss for the Guardian

Yet despite her thoughts on the wellness industry with its expensive health clubs (fitness has become a middle-class signifier, she says) and corporate wellness programs (flabby employees are less likely to be promoted, she writes), Ehrenreich won't be giving up the gym anytime soon. She works out most days because she enjoys cardio and weight training and lots of stretching, not because it might make her live longer.

"That is the one way in which I participated in the health craze that set in this country in the 70s," she says. "I just discovered there was something missing in my life. I don't understand the people who say, 'I'm so relieved my workout is over, it was torture, but I did it.' I'm not like that."

In Natural Causes, Ehrenreich uses the latest biomedical research to challenge our assumption that we have agency over our bodies and minds. Microscopic cells called macrophages make their own decisions, and not always to our benefit: they can aid the growth of tumours and attack other cells, with life-threatening results.

"This was totally shocking to me," she says. "My research in graduate school was on macrophages and they were heroes [responsible for removing cell corpses and trash, the garbage collectors of the body]. About 10 years ago I read in Scientific American about the discovery that they enable tumour cells to metastasise. I felt like it was treason!"

She continues: "The really shocking thing is that they can do what they want to do. I kept coming across the phrase in the scientific literature: cellular decision-making."

This changed her whole sense of her body, she says.

"The old notion of the body was like a communist dictatorship: every cell in it was obediently performing its function and in turn was getting nourished by the bloodstream and everything. But no, there are rebels. I mean, cancer is a cellular rebellion."

Ehrenreich, an atheist, finds comfort in the idea that humans do not live alone in a lifeless universe where the natural world is devoid of agency (which she describes as the ability to initiate an action).

"When you think about some of these issues, like how a cell can make decisions, and a lot of other things I talk about in the book, like an electron deciding whether to go through this place in a grid or that place. When you see there's agency even in the natural world. When you think about it all being sort of alive like that, it's very different from dying if you think there's nothing but your mind in the universe, or your mind and God's mind."

Death becomes less a terrifying leap into the abyss and more like an embrace of ongoing life, she believes.

"If you think of the whole thing as potentially thriving and jumping around and having agency at some level, it's fine to die," she adds reassuringly.

Read more: https://www.theguardian.com/lifeandstyle/2018/apr/07/barbara-ehrenreich-natural-causes-book-old-enough-to-die

China Now Has the Most Valuable AI Startup in the World

  • It becomes the world’s most richly valued private AI startup
  • The company drives China’s ambition to dominate global AI

SenseTime Group Ltd. has raised $600 million from Alibaba Group Holding Ltd. and other investors at a valuation of more than $3 billion, becoming the world’s most valuable artificial intelligence startup.

The company, which specializes in systems that analyze faces and images on an enormous scale, said it closed a Series C round in recent months in which Singaporean state investment firm Temasek Holdings Pte and retailer Suning.com Co. also participated. SenseTime didn’t outline individual investments, but Alibaba was said to have sought the biggest stake in the three-year-old startup.

With the deal, SenseTime has doubled its valuation in a few months. The round underscores the Qualcomm Inc.-backed company’s status as one of a crop of homegrown firms spearheading Beijing’s ambition to become the leader in AI by 2030. And it’s a contributor to the world’s biggest system of surveillance: if you’ve ever been photographed with a Chinese-made phone or walked the streets of a Chinese city, chances are your face has been digitally crunched by SenseTime software built into more than 100 million mobile devices.

The latest financing will bankroll investments in parallel fields such as autonomous driving and augmented reality, cover the growing cost of AI talent and shore up its computing power. It’s developing a service code-named “Viper” to parse data from thousands of live camera feeds — a platform it hopes will prove invaluable in mass surveillance. And it’s already in talks to raise another round of funds and targeting a valuation of more than $4.5 billion, according to people familiar with the matter.

“We’re going to explore several new strategic directions and that’s why we shall spend more money on building infrastructure,” SenseTime co-founder Xu Li said in an interview. The company turned profitable in 2017 and wants to grow its workforce by a third to 2,000 by the end of this year. “For the past three years the average revenue growth has been 400 percent.”


[Chart: AI Supremacy. China houses some of the world's largest privately backed artificial intelligence startups. Source: CB Insights, Bloomberg. Note: Mobvoi's valuation is based on a 2015 investment.]

Alibaba, the e-commerce giant that’s also the country’s biggest cloud service provider, could help with its enormous infrastructure needs. SenseTime plans to build at least five supercomputers in top-tier cities over the coming year to drive Viper and other services. As envisioned, it will stream thousands of live feeds into a single system where they are automatically processed and tagged, via devices from office face-scanners to ATMs and traffic cameras (so long as the resolution is high enough). The ultimate goal is to juggle 100,000 feeds simultaneously.

Police can use Viper to track everything from vice and accidents to suspects on blacklists. While civil libertarians say such systems have been used to track activists and oppress minorities in places like the Western region of Xinjiang, Xu believes the technology is essential and deployed in various ways by authorities around the world. China’s police forces and surveillance footage are also important sources of training data for SenseTime’s image recognition systems — it claims to work with 40 city authorities in the country.

“It will not affect privacy because only authorized persons can access it,” he said.

SenseTime claims some 400 clients and partners, including Qualcomm, chipmaker Nvidia Corp. and smartphone maker Xiaomi Corp. For 2018, it’s projecting several billion yuan in revenues, Xu said. The startup is expanding its reach into augmented reality, popularized by services like Snapchat that superimpose digital stickers and images on the real world. And it’s working with Honda Motor Co. to develop autonomous driving systems and is in talks to work with health institutions.

“In China there is an advantage in areas like facial recognition because of the privacy that exists in the U.S. and elsewhere in the EU, and some of the very best facial recognition technology in the world that I’ve seen is in China,” said Breyer Capital founder Jim Breyer, an indirect investor in SenseTime through IDG.

SenseTime is the largest, according to CB Insights, of a plethora of private AI outfits. Fellow facial-recognition startup Megvii Inc. raised $460 million last year, while smaller niche players from Yitu to Malong Technologies have also won funding. A key partner, Hangzhou Hikvision Digital Technology, is one of the world’s biggest suppliers of security cameras and is developing its own competing AI technology.

Xu says its ability to work across wide datasets and diverse products sets it apart from rivals. “What is difficult is if you’re dealing with different video streams in different formats,” he said. While the company has long considered an initial public offering, Xu said those plans are on hold pending rules that facilitate tech listings in mainland China.

“We are still waiting for a fixed rule to come up with a new strategy,” he said. “Probably not this year.”

Read more: http://www.bloomberg.com/news/articles/2018-04-09/sensetime-snags-alibaba-funding-at-a-record-3-billion-valuation

How The Milk Of The Humble Platypus Could Help Us Beat Antibiotic-Resistant Superbugs

The first scientists to examine a platypus in 1799 dismissed it as a fake made of different animals sewn together. Centuries later, this bizarre Aussie critter continues to confound us. Now, it appears its milk could be our savior in the fight against antibiotic resistance.

Back in 2010, scientists realized that platypus moms produce very special milk. They found that it has unique bacteria-fighting properties that could be used to kill superbugs. Unlike most other mammals, including us, these weird creatures don’t have teats, so their milk is expressed onto their stomachs where their babies lap it up. This exposes it to the bacteria-filled world so bug-fighting properties are pretty useful.

The duck-billed platypus is certainly an oddity in nature. Although it’s a mammal, it doesn’t really follow mammalian rules. It has the bill of a duck, it lays eggs, and it has venomous spurs poking out of its feet. It belongs to an Australian group of animals called monotremes, which also includes the prickly but adorable pointy-nosed echidnas, otherwise known as spiny anteaters. Monotremes certainly wouldn’t look out of place in an episode of Pokémon.  

But as well as being bizarre and adorable in equal measures, the platypus could help us treat infections at a time when overprescribing antibiotics is a serious threat to humanity.

Antibiotics are used to treat bacterial infections, which include many ailments from a chest infection to life-threatening meningitis. They’re our best defense against bacterial diseases, but recently we’ve encountered a problem. Antibiotics are used so much that ever-evolving bacteria are mutating to be resistant to them, forming nasty superbugs like MRSA. If a serious antibiotic-resistant disease broke out and spread across the world, we’d be in pretty big trouble.

Enter the duck-billed platypus.

To investigate exactly how a platypus’ milk is so potent, a team of researchers from CSIRO took a closer look. They managed to isolate the monotreme lactation protein and analyze its structure, which, like the platypus itself, is totally unique.

“Platypus are such weird animals that it would make sense for them to have weird biochemistry,” lead author Dr Janet Newman said in a statement.

“We’ve characterised a new protein that has unique antibacterial properties with the potential to save lives.”

The team’s findings are published in the journal Structural Biology Communications. They replicated the protein in the lab so they could get a good look at it and discovered a strange never-before-seen 3D fold. This is important because the shape of a protein controls its function.

Adding to this story’s strangeness, the protein has a ringlet-style structure, so obviously, the researchers decided to call it Shirley Temple, after the child-star’s golden locks.

Excitingly, discovering the unique structure of the “Shirley Temple” protein will help scientists in their quest to find alternatives to antibiotics. Platypuses, we salute you.  

[Video: Saving Lives with Platypus Milk, from CSIRO on Vimeo]

Read more: http://www.iflscience.com/plants-and-animals/how-the-milk-of-the-humble-platypus-could-help-us-beat-antibioticresistant-superbugs-/

Dead lizard found in bag of Trader Joe’s kale

If you’ve ever groaned at the prospect of eating kale, now you have the perfect excuse to back away from your health-conscious friend’s green smoothie. 

Grace Goldstein opened her fresh bag of Trader Joe’s kale on Tuesday, only to discover a dead lizard nestled among the leafy greens. Her friend shared the mildly gross image on Twitter to the joy of all those who reject the superfood. 

Goldstein told People magazine that after she made the shocking discovery, there was a lot of “asking [her] boyfriend to see the bag of kale and identify the lizard and shrieking and pushing it away and refusing to go near it…and then asking to see it again.” 

An understandable reaction to this grotesque find.

If you wanted a closer look at the unexpected salad guest, Goldstein also shared the photo on Instagram.

Goldstein told People that she reached out to Trader Joe’s corporate. The chain is investigating, but there have been no further updates. 

Trader Joe’s responded to Mashable’s inquiry about the incident:

“We are committed to providing customers with great products of the highest quality and are currently working with our vendor to look into and address the matter.”

Hopefully, this is an isolated incident that does not speak for all Trader Joe’s stores, bags of kale, or corporate-minded lizards.

UPDATE: April 5, 2018, 3:05 p.m. EDT: This story was updated with comments from Trader Joe’s.

Read more: https://mashable.com/2018/04/05/trader-joes-kale-lizard/