China Now Has the Most Valuable AI Startup in the World

  • The funding makes SenseTime the world’s most highly valued private AI startup
  • The company is helping drive China’s ambition to dominate global AI

SenseTime Group Ltd. has raised $600 million from Alibaba Group Holding Ltd. and other investors at a valuation of more than $3 billion, becoming the world’s most valuable artificial intelligence startup.

The company, which specializes in systems that analyze faces and images on an enormous scale, said it closed a Series C round in recent months in which Singaporean state investment firm Temasek Holdings Pte and retailer Suning.com Co. also participated. SenseTime didn’t outline individual investments, but Alibaba was said to have sought the biggest stake in the three-year-old startup.

With the deal, SenseTime has doubled its valuation in a few months. The round underscores the Qualcomm Inc.-backed company’s status as one of a crop of homegrown firms spearheading Beijing’s ambition to become the leader in AI by 2030. SenseTime is also a contributor to the world’s biggest system of surveillance: if you’ve ever been photographed with a Chinese-made phone or walked the streets of a Chinese city, chances are your face has been digitally crunched by SenseTime software built into more than 100 million mobile devices.

The latest financing will bankroll investments in parallel fields such as autonomous driving and augmented reality, cover the growing cost of AI talent and shore up its computing power. It’s developing a service code-named “Viper” to parse data from thousands of live camera feeds — a platform it hopes will prove invaluable in mass surveillance. And it’s already in talks to raise another round of funds and targeting a valuation of more than $4.5 billion, according to people familiar with the matter.

“We’re going to explore several new strategic directions and that’s why we shall spend more money on building infrastructure,” SenseTime co-founder Xu Li said in an interview. The company turned profitable in 2017 and wants to grow its workforce by a third to 2,000 by the end of this year. “For the past three years the average revenue growth has been 400 percent.”

Read more: China’s Plan for World Domination in AI Isn’t So Crazy After All

[Chart: AI Supremacy. China houses some of the world's largest privately backed artificial intelligence startups. Source: CB Insights, Bloomberg. Note: Mobvoi's valuation is based on a 2015 investment.]

Alibaba, the e-commerce giant that’s also the country’s biggest cloud service provider, could help with SenseTime’s enormous infrastructure needs. SenseTime plans to build at least five supercomputers in top-tier cities over the coming year to drive Viper and other services. As envisioned, Viper streams thousands of live feeds, from office face-scanners to ATMs and traffic cameras (so long as the resolution is high enough), into a single system that automatically processes and tags them. The ultimate goal is to juggle 100,000 feeds simultaneously.
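
To make that architecture concrete, here is a minimal Python sketch of the kind of pipeline such a system implies: many live feeds fanned into one shared tagging service. This is not SenseTime's implementation; detect_and_tag, read_frame and the feed URLs are hypothetical placeholders for a real recognition model and real camera streams.

```python
# Minimal sketch of a multi-feed tagging pipeline, loosely analogous to what a
# service like Viper is described as doing. Everything here is hypothetical.
import concurrent.futures
import queue
import time

FEEDS = ["rtsp://camera-001/stream", "rtsp://camera-002/stream"]  # placeholder URLs

tagged_events = queue.Queue()  # shared sink for (feed, timestamp, tag) records

def read_frame(feed_url):
    """Hypothetical stand-in for grabbing one frame from a live feed."""
    return b""

def detect_and_tag(frame):
    """Hypothetical stand-in for a face/object recognition model; returns tags."""
    return []

def process_feed(feed_url, duration_s=5):
    """Pull frames from one feed and push tagged events onto the shared queue."""
    deadline = time.time() + duration_s
    while time.time() < deadline:
        for tag in detect_and_tag(read_frame(feed_url)):
            tagged_events.put((feed_url, time.time(), tag))

# One worker per feed; a production system would shard feeds across machines
# (the article cites a target of 100,000 simultaneous feeds).
with concurrent.futures.ThreadPoolExecutor(max_workers=len(FEEDS)) as pool:
    list(pool.map(process_feed, FEEDS))
```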

Police can use Viper to track everything from vice and accidents to suspects on blacklists. While civil libertarians say such systems have been used to track activists and oppress minorities in places like the Western region of Xinjiang, Xu believes the technology is essential and deployed in various ways by authorities around the world. China’s police forces and surveillance footage are also important sources of training data for SenseTime’s image recognition systems — it claims to work with 40 city authorities in the country.

“It will not affect privacy because only authorized persons can access it,” he said.

SenseTime claims some 400 clients and partners including Qualcomm, chipmaker Nvidia Corp. and smartphone maker Xiaomi Corp. For 2018, it’s projecting several billion yuan in revenue, Xu said. The startup is also expanding into augmented reality, popularized by services like Snapchat that superimpose digital stickers and images on the real world. And it’s working with Honda Motor Co. to develop autonomous driving systems and is in talks to work with health institutions.

“In China there is an advantage in areas like facial recognition because of the privacy that exists in the U.S. and elsewhere in the EU, and some of the very best facial recognition technology in the world that I’ve seen is in China,” said Breyer Capital founder Jim Breyer, an indirect investor in SenseTime through IDG.

SenseTime is the largest, according to CB Insights, of a plethora of private AI outfits. Fellow facial-recognition startup Megvii Inc. raised $460 million last year, while smaller niche players from Yitu to Malong Technologies have also won funding. A key partner, Hangzhou Hikvision Digital Technology, is one of the world’s biggest suppliers of security cameras and is developing its own competing AI technology.

Xu says SenseTime’s ability to work across wide datasets and diverse products sets it apart from rivals. “What is difficult is if you’re dealing with different video streams in different formats,” he said. While the company has long considered an initial public offering, Xu said those plans are on hold pending rules that facilitate tech listings in mainland China.

“We are still waiting for a fixed rule to come up with a new strategy,” he said. “Probably not this year.”

Read more: http://www.bloomberg.com/news/articles/2018-04-09/sensetime-snags-alibaba-funding-at-a-record-3-billion-valuation

France wants to become an artificial intelligence hub

Emmanuel Macron and his government are launching a major artificial intelligence initiative today, aiming to turn France into one of the world’s leading countries in the field.

“[Artificial intelligence] is a technological, economical, social and obviously ethical revolution,” Macron said in a speech. “This revolution won’t happen in 50 or 60 years, it’s happening right now. There are new opportunities and we can choose to follow some innovations or not.”

Some of the best mathematics and engineering schools are in France, and some of the best data scientists and AI researchers come from France. Many of them now work in California or London for Facebook, DeepMind and others. The French government wants to capitalize on that soft power to make an AI push.

So how do you attract those engineers and scientists back? France’s answer is deliberately multi-pronged: rather than injecting a ton of public money and calling it a day, the government wants to build an AI ecosystem with strong pillars.

France’s AI strategy

First, many private companies have opened or plan to open AI research centers in France. Facebook and Google already work with hundreds of researchers in Paris. Today, Samsung, Fujitsu, DeepMind, IBM and Microsoft all announced plans to open offices in France to focus on AI research.

This represents tens of millions of dollars in investments and hundreds of employees. “Everybody is saying that Silicon Valley is overflowing right now,” a source close to the French President told me. That’s why big tech companies need to find talent outside of the U.S.

Foreign companies creating hundreds of jobs isn’t by itself going to foster public research or produce European tech giants, though; these companies are just tapping the smartest brains they can find. That’s why the French government wants to make it easier for researchers at private companies to keep working on fundamental research papers.

Inria, France’s national computer science research institute, is going to create a national AI research program with four or five partners. The goal is simple: Macron said there should be twice as many people studying and researching AI in France. Researchers will also get access to cloud computing capacity for their work.

It’s also going to get easier to create a startup based on your research work, or to work for a private company during your PhD.

Second, France is going to open up more data. French administrations are going to share new data sets so that anyone can build AI services on top of them.

When it comes to health data, it looks like France wants to avoid another NHS/DeepMind scandal. While multiple French governments have worked on some kind of health data hub, Macron announced that this time it’s going to happen for real. The INDS, France’s national institute for health data, is going to make sure that services and public institutions respect your privacy, granting authorizations on a case-by-case basis.

Third, when it comes to regulation, companies will be able to run experiments in multiple industries, starting with autonomous vehicles: companies will be able to test Level 4 self-driving cars in 2019.

Overall, France is going to invest $1.85 billion (€1.5 billion) in AI projects, from public research to startup investments. Last year, Macron announced that there would be a new $11.2 billion (€10 billion) public fund managed by Bpifrance. Macron said today that AI startups should be the first priority of this new fund.

Making AI as neutral as possible

Arguably, the most interesting part of Macron’s speech was about the moral impact of artificial intelligence. As algorithms become more prominent, there’s a risk that they become black boxes that decide for you.

Under Axelle Lemaire’s law, French administrations already have to share the algorithms and data they use. But that’s still not the case for some sensitive subjects, and Macron said it’s still a work in progress.

Research projects or companies financed with public money will also have to share everything — this could impact public infrastructure companies for instance.

But it’s more complicated when you’re talking about private companies. Macron said GDPR was a step in the right direction. And now, he wants to go further.

He doesn’t have a practical proposal yet, but he said that there should be an international certification authority. For instance, this authority could make sure that there’s no bias in training data sets. I don’t think Facebook or Google would enjoy this new regulation.

Finally, a workforce that isn’t diverse enough introduces bias of its own. That’s why schools and universities should make sure that they train a diverse group of people.

Not the first AI push

As Next INpact pointed out, there have been multiple reports on artificial intelligence over the past few years — FranceIA, the CNIL, the OPECST and the European Economic and Social Committee all wrote their own recommendations when it comes to AI policies.

Today, Fields Medal winner and member of parliament Cédric Villani shared a new report on artificial intelligence. It’s an interesting read, and it was the inspiration for Macron’s speech today.

According to a source close to the French President, multiple ministers now have to focus on artificial intelligence for their own industries.

Today’s report feels more like a starting point than a conclusion. The French President thinks that AI is something important but not urgent. Now, it’s all about convincing the rest of the government to put aside all the urgent tasks for a minute and look at what’s important.

Read more: https://techcrunch.com/2018/03/29/france-wants-to-become-an-artificial-intelligence-hub/

Atomwise, which uses AI to improve drug discovery, raises $45M Series A

Atomwise, which uses deep learning to shorten the process of discovering new drugs, has raised a $45 million Series A. The round was led by Monsanto Growth Ventures, Data Collective (DCVC) and B Capital Group. Baidu Ventures, Tencent and Dolby Family Ventures, which are all new investors in Atomwise, also participated, as well as returning investors Y Combinator, Khosla Ventures and DFJ.

This means Atomwise, which was founded in 2012, has now raised more than $51 million in funding. The company, which aims to reduce the amount of money and time researchers spend on finding compounds for medications, says it now has more than 50 molecular discovery programs. Atomwise’s technology is also being used to develop safer, more effective agricultural pesticides.

In a press statement, Monsanto Growth Ventures partner Dr. Kiersten Stead said “We chose to invest based on the impressive results we saw from Atomwise in our own hands. Atomwise was able to find promising compounds against crop protection targets that are important areas of focus for agrochemical R&D.”

Atomwise’s software analyzes simulations of molecules, reducing the time researchers need to spend synthesizing and testing compounds. The company says it currently screens more than 10 million compounds each day. Atomwise’s AtomNet system uses deep learning algorithms to analyze molecules and predict how they might act in the human body, including their potential efficacy as medication, toxicity and side effects, at an earlier stage than in the traditional drug discovery process.
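
As a rough illustration of the screening workflow the article describes (not Atomwise's actual AtomNet code), a virtual screen boils down to scoring a large pool of candidate molecules with a trained model and passing only the top-ranked ones to the lab. The sketch below assumes the compounds have already been turned into fixed-length fingerprint vectors; the data, model choice and cutoff are all hypothetical.

```python
# Hypothetical virtual-screening sketch: rank candidate compounds by a model's
# predicted probability of activity against a target. Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in data: 2048-bit fingerprints for labeled training compounds
# (1 = active, 0 = inactive) and for a large unlabeled candidate library.
X_train = rng.integers(0, 2, size=(500, 2048))
y_train = rng.integers(0, 2, size=500)
candidates = rng.integers(0, 2, size=(10_000, 2048))

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score every candidate and keep the 100 most promising for synthesis and testing.
scores = model.predict_proba(candidates)[:, 1]
top_hits = np.argsort(scores)[::-1][:100]
print("Indices of top-ranked candidates:", top_hits[:10])
```

The point of the sketch is the economics the article describes: scoring millions of compounds in software is cheap, so only the highest-ranked handful need to be synthesized and tested.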

In an email, Atomwise chief executive officer Dr. Abraham Heifets told TechCrunch that the company’s vision “is to become one of the most prolific and diverse life science research groups in the world, working at a scale that is truly unprecedented. This is a large Series A and we will use these resources to grow our technical and business organization. We may eventually find ourselves simulating hundreds of millions of compounds per day. The ultimate upshot is more shots on goal for the many diseases that urgently need new treatments.”

Lead optimization “has historically been the most expensive step in the pharma pipeline,” Heifets said, adding that it also has a very high failure rate, with “about two-thirds of projects failing to even make it to the clinic and it takes five and a half years to get that far.”

When Atomwise launched six years ago, its technology seemed almost like something out of science fiction. Now there is a roster of companies using artificial intelligence and machine learning to analyze molecules and fix bottlenecks in the drug discovery process, including Recursion Pharmaceuticals, BenevolentAI, TwoXAR, Cyclica and Reverie Labs.

Heifets said one of Atomwise’s main advantages is the large number of projects it has worked on, which in turn improves its AI systems. The company’s clients include four of the 10 biggest pharmaceutical companies in the United States (Merck among them), as well as Monsanto, more than 40 major research universities (Harvard, Duke, Stanford and Baylor College of Medicine among them) and biotech firms.

He added that Atomwise also differentiates in its focus.

“There are two distinct problems in drug discovery: biology and chemistry,” he said. “If you’re working on biology, you’re trying to decide which disease protein is the best one to target.  A lot of AI companies in drug discovery are working on this target identification problem. Once you’ve chosen a target, you can start working on chemistry problems: how to deliver a non-toxic molecule that can hit the chosen disease protein. Atomwise is focused on these chemistry problems; specifically, Atomwise invented the use of deep neural networks for structure-based drug design.”

Read more: https://techcrunch.com/2018/03/07/atomwise-which-uses-ai-to-improve-drug-discovery-raises-45m-series-a/

Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning

Sundar Pichai, the chief executive of Google, has said that AI “is more profound than … electricity or fire.” Andrew Ng, who founded Google Brain and now invests in AI startups, wrote that “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”

Their enthusiasm is pardonable. There have been remarkable advances in AI, after decades of frustration. Today we can tell a voice-activated personal assistant like Alexa to “Play the band Television,” or count on Facebook to tag our photographs; Google Translate is often almost as accurate as a human translator. Over the last half decade, billions of dollars in research funding and venture capital have flowed towards AI; it is the hottest course in computer science programs at MIT and Stanford. In Silicon Valley, newly minted AI specialists command half a million dollars in salary and stock.

But there are many things that people can do quickly that smart machines cannot. Natural language is beyond deep learning; new situations baffle artificial intelligences, like cows brought up short at a cattle grid. None of these shortcomings is likely to be solved soon. Once you’ve seen it, you can’t un-see it: deep learning, now the dominant technique in artificial intelligence, will not lead to an AI that abstractly reasons and generalizes about the world. By itself, it is unlikely to automate ordinary human activities.

Jason Pontin (@jason_pontin) is an Ideas contributor for WIRED. He is a senior partner at Flagship Pioneering, a firm in Boston that creates, builds, and funds companies that solve problems in health, food, and sustainability. From 2004 to 2017 he was the editor in chief and publisher of MIT Technology Review. Before that he was the editor of Red Herring magazine, a business magazine that was popular during the dot-com boom.

To see why modern AI is good at a few things but bad at everything else, it helps to understand how deep learning works. Deep learning is math: a statistical method where computers learn to classify patterns using neural networks. Such networks possess inputs and outputs, a little like the neurons in our own brains; they are said to be “deep” when they possess multiple hidden layers that contain many nodes, with a blooming multitude of connections. Deep learning employs an algorithm called backpropagation, or backprop, that adjusts the mathematical weights between nodes, so that an input leads to the right output. In speech recognition, the phonemes c-a-t should spell the word “cat;” in image recognition, a photograph of a cat must not be labeled “a dog;” in translation, qui canem et faelem ut deos colunt should spit out “who worship dogs and cats as gods.” Deep learning is “supervised” when neural nets are trained to recognize phonemes, photographs, or the relation of Latin to English using millions or billions of prior, laboriously labeled examples.
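
For readers who want to see what “adjusting the mathematical weights between nodes” looks like in practice, here is a minimal, self-contained sketch of a tiny network trained with backpropagation on a toy task (the classic XOR problem). It is deliberately simplistic, one hidden layer and plain gradient descent, and is illustrative only; real deep learning systems stack many layers and run on optimized frameworks.

```python
# Minimal backpropagation sketch: a one-hidden-layer network learning XOR.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs (XOR)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden weights and biases
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output weights and biases
lr = 1.0                                         # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backprop): push the output error back through each layer.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)

    # Nudge the weights so each input maps a little closer to the right output.
    W2 -= lr * h.T @ err_out;  b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_h;    b1 -= lr * err_h.sum(axis=0)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```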

Deep learning’s advances are the product of pattern recognition: neural networks memorize classes of things and more-or-less reliably know when they encounter them again. But almost all the interesting problems in cognition aren’t classification problems at all. “People naively believe that if you take deep learning and scale it 100 times more layers, and add 1000 times more data, a neural net will be able to do anything a human being can do,” says François Chollet, a researcher at Google. “But that’s just not true.”

Gary Marcus, a professor of cognitive psychology at NYU and briefly director of Uber’s AI lab, recently published a remarkable trilogy of essays, offering a critical appraisal of deep learning. Marcus believes that deep learning is not “a universal solvent, but one tool among many.” And without new approaches, Marcus worries that AI is rushing toward a wall, beyond which lie all the problems that pattern recognition cannot solve. His views are quietly shared with varying degrees of intensity by most leaders in the field, with the exceptions of Yann LeCun, the director of AI research at Facebook, who curtly dismissed the argument as “all wrong,” and Geoffrey Hinton, a professor emeritus at the University of Toronto and the grandfather of backpropagation, who sees “no evidence” of a looming obstacle.

According to skeptics like Marcus, deep learning is greedy, brittle, opaque, and shallow. The systems are greedy because they demand huge sets of training data. Brittle because when a neural net is given a “transfer test”—confronted with scenarios that differ from the examples used in training—it cannot contextualize the situation and frequently breaks. They are opaque because, unlike traditional programs with their formal, debuggable code, the parameters of neural networks can only be interpreted in terms of their weights within a mathematical geography. Consequently, they are black boxes, whose outputs cannot be explained, raising doubts about their reliability and biases. Finally, they are shallow because they are programmed with little innate knowledge and possess no common sense about the world or human psychology.
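
The “brittle” charge is the easiest to demonstrate concretely: a classifier that performs well on data drawn from its training distribution can fall apart when the inputs shift even slightly. The toy sketch below, using made-up Gaussian data and a scikit-learn logistic regression, shows the effect; it is a stand-in for, and far simpler than, the transfer tests Marcus describes.

```python
# Toy "transfer test": a model trained on one data distribution loses accuracy
# when evaluated on shifted data it never saw during training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two Gaussian classes; `shift` moves the whole distribution at test time."""
    X0 = rng.normal(loc=-1.0 + shift, size=(n, 2))
    X1 = rng.normal(loc=+1.0 + shift, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(1000)
model = LogisticRegression().fit(X_train, y_train)

X_iid, y_iid = make_data(1000)               # same distribution as training
X_shift, y_shift = make_data(1000, shift=2.0)  # shifted distribution

print("accuracy, same distribution:   ", model.score(X_iid, y_iid))
print("accuracy, shifted distribution:", model.score(X_shift, y_shift))
```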

These limitations mean that a lot of automation will prove more elusive than AI hyperbolists imagine. “A self-driving car can drive millions of miles, but it will eventually encounter something new for which it has no experience,” explains Pedro Domingos, the author of The Master Algorithm and a professor of computer science at the University of Washington. “Or consider robot control: A robot can learn to pick up a bottle, but if it has to pick up a cup, it starts from scratch.” In January, Facebook abandoned M, a text-based virtual assistant that used humans to supplement and train a deep learning system but that never offered useful suggestions or employed language naturally.

What’s wrong? “It must be that we have a better learning algorithm in our heads than anything we’ve come up with for machines,” Domingos says. We need to invent better methods of machine learning, skeptics aver. The remedy for artificial intelligence, according to Marcus, is syncretism: combining deep learning with unsupervised learning techniques that don’t depend so much on labeled training data, as well as the old-fashioned description of the world with logical rules that dominated AI before the rise of deep learning. Marcus claims that our best model for intelligence is ourselves, and humans think in many different ways. His young children could learn general rules about language, and without many examples, but they were also born with innate capacities. “We are born knowing there are causal relationships in the world, that wholes can be made of parts, and that the world consists of places and objects that persist in space and time,” he says. “No machine ever learned any of that stuff using backprop.”

Other researchers have different ideas. “We’ve used the same basic paradigms [for machine learning] since the 1950s,” says Pedro Domingos, “and at the end of the day, we’re going to need some new ideas.” Chollet looks for inspiration in program synthesis, programs that automatically create other programs. Hinton’s current research explores an idea he calls “capsules,” which preserves backpropagation, the algorithm for deep learning, but addresses some of its limitations.

“There are a lot of core questions in AI that are completely unsolved,” says Chollet, “and even largely unasked.” We must answer these questions because there are tasks that a lot of humans don’t want to do, such as cleaning toilets and classifying pornography, or which intelligent machines would do better, such as discovering drugs to treat diseases. More: there are things that we can’t do at all, most of which we cannot yet imagine.


AI Anxieties

  • You can stop panicking about a superhuman AI. As Kevin Kelly writes, that’s a myth.

  • Another worry you can cross off your list? The fear that robots will take all of our jobs. It’s not nearly that simple.

  • But AI is becoming an ever-more integral factor in the future of work. Say hello to your new AI coworkers.


Read more: https://www.wired.com/story/greedy-brittle-opaque-and-shallow-the-downsides-to-deep-learning/

Element AI opens London outpost with focus on AI for good

Element AI — which last year raised $102 million to build an incubator-meets-consultancy to work with multiple businesses as they launch new services and systems based on artificial intelligence — is entering the next phase of its growth this week.

The Canadian startup — co-founded by Jean-François Gagné, Nicolas Chapados, and Yoshua Bengio — is opening an outpost in London, its first international expansion. One main focus will be to work with charities, non-governmental organizations and others on “AI for good,” alongside Element AI’s existing work in finance, cybersecurity, manufacturing, logistics and robotics.

The idea is smart and timely: at a moment when many might see the advances of AI as potentially more dystopian and encroaching than beneficial — not to mention reserved for the select few who have the financial and human resources to build AI systems — Element AI wants to position itself, and AI, as something that can be used to help everyone.

“There are so many exciting people in the world of academia and artificial intelligence wanting to do interesting stuff. How can we make sure our tools are not just meeting the needs of big companies? There are others who could also use them,” Dr Julien Cornebise, who will be leading Element AI’s offices in London, said in an interview.

Notably for a startup that is looking to take on the likes of Google in a bid to “democratize” the intellectual and operational capital needed to build out innovative products based on AI, Cornebise himself comes to Element AI from DeepMind, the UK startup Google acquired back in 2014 that now plays a central role in the company’s AI efforts.

He had been one of the first employees at DeepMind, and since leaving in 2016 — in search of, he tells TechCrunch, an environment that felt more like a smaller startup again — he’s been working pro bono at Amnesty International.

He wouldn’t get too specific about what he’s been doing at the human rights organization, except to say that it involved work in “conflict resolution” and working out a way to build algorithms that could help identify bad practices on a larger scale than the organisation could do with limited human resources. (It sounds like the aim is to launch this service later this year.)

While the Amnesty work pre-dates Element AI’s arrival in London, the idea, he said, is to use it as a template for how the startup hopes to collaborate with such groups.

“I want to be super careful here because I’ve seen too many enthusiastic machine learning people say, ‘Stand back! We are going to save the world!’” Cornebise said (a comment you can imagine might apply just as easily to a more controversial project at DeepMind as it might to the best/misguided intentions of Facebook’s algorithms). “We’ve actually got the easy part. There are people who are going to jail for doing their jobs. I want now to say, ‘How can we help?’”

The startup’s move to London underscores how the UK has put itself on the map when it comes to AI research and applications, with universities like Oxford, Cambridge, Imperial College and others all breaking ground and producing leading thinkers and startups in areas like computer vision, deep learning and robotics.

“Element AI’s establishment of a London office is testament to the strengths of the UK research and innovation ecosystem in AI, and the potential for industrial collaborations with our businesses,” noted Matt Sansam, Innovation Lead for Digital at Innovate UK, in a statement.

Element AI’s plan is to hire about 20 engineers and developers in London to start. But it also wants to build out a bigger global network. This second wave of people might start out in London and Montreal (where the company is based) before returning to more spread-out locations to expand work and intellectual capital elsewhere, potentially counteracting the ongoing “brain drain” in which the brightest people migrate to where the jobs and money are.

“Why would it be someone in London who is trying to solve a problem in Burundi, for example?” Cornebise says. “There are tons of minds who we just don’t see from here because they are not part of the academic fabric.”

Element AI itself has raised at least $102 million in funding to date (there was also an undisclosed Seed round in addition to the large Series A). The Series A was led by Data Collective, with Microsoft, Intel, Nvidia and others participating.

Cornebise and a spokesperson for Element AI only smiled, but would not comment, when I asked if the startup would be raising more money to help finance this new aspect of the business and its London expansion. Watch this space.

Read more: https://techcrunch.com/2018/01/23/element-ai-opens-london-outpost-with-focus-on-ai-for-good/

After beating the world's elite Go players, Google's AlphaGo AI is retiring

Google's AlphaGo, the AI developed to tackle the world's most demanding strategy game, is stepping down from competitive matches after defeating the world's best talent. The latest to succumb is Go's top-ranked player, Ke Jie, who lost 3-0 in a series hosted in China this week.

The AI, developed by London-based DeepMind, which was acquired by Google for around $500 million in 2014, also overcame a team of five top players during a week of matches. AlphaGo first drew headlines last year when it beat former Go world champion Lee Sedol, and the China event took things to the next level with matches against 19-year-old Jie, and doubles games with and against other top Go pros.

Challengers defeated, AlphaGo has cast its last competitive stone, DeepMind CEO Demis Hassabis explained.

“This week’s series of thrilling games with the world’s best players, in the country where Go originated, has been the highest possible pinnacle for AlphaGo as a competitive program. For that reason, the Future of Go Summit is our final match event with AlphaGo.”

“The research team behind AlphaGo will now throw their energy into the next set of grand challenges, developing advanced general algorithms that could one day help scientists as they tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials.”

Go is revered as the planet's most demanding strategy game, and that's why it made for an ideal field both to develop AI technology and to pit machines against humans. Beyond Google, Tencent is among the other tech firms to have unleashed AIs on the game. While it whips up curiosity and attention, the game simply serves as a stepping stone for future plans, which is why DeepMind says it is moving on.

Indeed, the British company has already made a foray into more practical everyday solutions. Last year, it agreed to a data-sharing partnership with the UK's National Health Service; however, the partnership has been criticized for giving a for-profit company access to personally identifiable health data of around 1.6 million NHS patients. The original arrangement remains under investigation by the UK's data protection watchdog, the ICO.

Those snafus aren't a reflection on the technology itself, however, and Hassabis remains bullish on the impact his firm can make.

“If AI systems prove they are able to unearth significant new knowledge and strategies in these domains too, the breakthroughs could be truly remarkable. We can’t wait to see what comes next,” he said.

While AlphaGo is bowing out at the top, it isn't done with Go altogether. DeepMind is planning to publish a final review paper on how the AI has developed since its matches with Lee Sedol last year. It is also developing a teaching tool to help newcomers pick up the ropes of the highly complicated game, and to let more experienced hands learn the new and innovative moves that AlphaGo has introduced. Top players, even Ke Jie himself, have studied up on AlphaGo's moves and added some to their arsenal.

Read more: https://techcrunch.com/2017/05/27/googles-alphago-ai-is-retiring/