From AI to Russia, Here's How Estonia's President Is Planning for the Future

At 48 years old, Kersti Kaljulaid is Estonia’s youngest president ever, and its first female president. A marathon runner with degrees in genetics and an MBA, she spent a career behind the scenes—mostly as a European government auditor—before being elected by Estonia’s legislature in 2016. Two years later, she’s continuing Estonia’s push for global digital security while deflecting military and cyber threats from Russia, whose Soviet predecessor occupied Estonia for roughly 50 years until the country regained its independence in 1991.

Known for its digital government, tax, and medical systems, Estonia is planning for the future. The country’s “e-resident” program—which allows global citizens to obtain a government-issued ID card and set up remotely-operated businesses in Estonia—has attracted 35,000 people since 2014. Now the government is discussing a proposal to grant some rights to artificially intelligent systems. The law could make it easier to regulate decision-making by autonomous systems, robots, or driverless cars.

This week, Kaljulaid visited the White House along with the leaders of Latvia and Lithuania, to meet with President Donald Trump about issues including security along the Russian border. The visit coincided with the 100th anniversary of Baltic independence after World War I, and Trump took the opportunity to reaffirm the US's commitment to protecting the Baltic States in accordance with the NATO Treaty. After attending the US-Baltic Trade Summit and laying a wreath at Arlington National Cemetery on Wednesday, Kaljulaid sat down with WIRED’s Eric Niiler for an interview at the Estonian Embassy in Washington.

EN: Through various efforts over the past decade, Estonia has been moving from a traditional state to a digital society. Where does that effort stand now, and what do you hope to see happen during your next few years in office?

KK: Digital society is born when your people refuse to use paper. And in our country we know that our people refuse to use paper. If you arrive at such a point in your development, you have to make sure your digital state is always secure. You need several alternatives if something goes wrong. You need to worry about security all the time; in that sense, it doesn’t differ much from your paper archives.

We already have a society that has been digitally disrupted. We also see that it changes how people think about technology and work, and what possibilities the internet can offer for new types of careers. For example, people don’t need enterprises to work; they can sell their skills online independently.

In our case, the government, too, is in the digital sphere. We recognize that there is a need to think about tax systems if people work for five different companies in five different countries at the same time. This needs to be sorted out. We cannot sort it alone; we need to sort it globally.

Estonian citizens seem to trust their government when it comes to sharing digital information. Here in the US, we trust Facebook and Amazon to a point, but with the government, it’s quite the opposite. How have you done this?

We have created trust because our people are not anonymous on the internet. It has always been secure. If you transact with someone online, you should not do it over email and pay with a credit card. What we do instead is create an encrypted channel and sign a contract that is time stamped. Estonians are much more used to internet banking than to paying online with a credit card. You can create trust, but you have to create the tools and the legal space that support the security of those tools. The state has to promise to keep people safe on the internet. I find it astonishing that globally, businesses are on the internet. Very few states have followed them.
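To make the mechanism concrete, here is a minimal sketch of a signed, timestamped transaction in Python. It assumes an Ed25519 keypair standing in for the chip on an Estonian ID card and uses the open-source `cryptography` library; the actual Estonian infrastructure (ID-card certificates, X-Road, qualified timestamping authorities) is considerably more involved and is not shown here.

```python
# Minimal sketch of a signed, timestamped contract, assuming an Ed25519
# keypair stands in for the signing chip on an ID card (hypothetical setup).
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # in reality, held on the ID card
public_key = private_key.public_key()

contract = {
    "parties": ["buyer", "seller"],
    "terms": "example contract body",
    "timestamp": int(time.time()),           # a real system would use a trusted timestamping authority
}
payload = json.dumps(contract, sort_keys=True).encode()
signature = private_key.sign(payload)

# Any counterparty can verify the signature against the public key;
# verify() raises InvalidSignature if the payload was tampered with.
public_key.verify(signature, payload)
print("signature verified")
```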

What about external threats? What other sort of steps might be needed to prevent Russian aggression in places like Ukraine, or the kind of cyber-attacks and hacking that have occurred in the United States during the 2016 presidential election?

With conventional aggression, since we got the sanctions in place, Russia has not made any further advances in any other region. In cyber, we must not get narrowly concentrated on Russia only. Cyber attacks rain down on us from many places. You have to make your systems secure and safe and teach your people cyber hygiene. If you are able to attribute some attacks, it's good to be open about it as the United States has been. We need to have an understanding globally about how international rules apply in the internet sphere. Right now, that is massively missing.

What do you mean, global rules?

There’s a lot of academic work on this, for example the Tallinn Manual 1 and 2. One principle is that we don’t attack each other’s sovereignty. Could attacking some vital electronic systems be considered an attack? What are the rights of the defender in that case? What are your rights when you fall under attack from a country you can identify, but not from its government? And if that government cannot go after the attacker because it is too weak—what are your rights then?

Speaking of rights, Estonia is looking to become perhaps the first nation to grant legal rights to artificial intelligence agents, such as fully autonomous robots or vehicles. How will it affect ordinary Estonians?

The discussion centers on whether we need to create a special legal entity for autonomous systems. If you regulate for AI, you also regulate for machine learning and self-acting, autonomous systems. We want our state to be proactive and to offer services to people. You need to think carefully about how to make this offer safe for our people and their private data. We want AI to be grown safely in Estonia.

Was this pushed by the advent of driverless cars?

No, it's pushed by the Estonian people's demand for more proactive state services. For example, if a couple has a child, they are entitled to universal child support. In the Estonian people’s minds, it is unnecessary to apply for this. They say, “I had my baby, just pay me.” That is what we mean by proactive. People demand efficiency from an automated system that is making decisions. We have to regulate. Once you go digital, you are constantly pushed by your people to provide better services.

You’ve just launched a new genetic testing program for 100,000 Estonian citizens, adding to the 52,000 who have already been tested. How will this information be used to improve public health? And what kind of safeguards are there to prevent possible genetic discrimination by employers, for example?

This information belongs to those people whose genome has been analyzed. This information does not belong to the Estonian Genome Bank or the government, and it's not shared with other individuals. People’s genetic data is kept in an anonymous form. The aim of this program is for people to know their diabetes risk, or their heart attack risk. They can share this information with their family doctor, but they are not obliged to. They can keep it to themselves, but most people will probably share it with their doctor.

Are there any other big things on the horizon in Estonia that we should be looking for?

I wouldn’t tell you if I did. The genome bank and the digital society are the projects that have taken off. I am sure there are others that have not. Our people are willing to work with the government on new technologies. Now it’s a habit; every Estonian looks at it as part of our national identity. We understand that this allows us to provide better services to our people than our money alone would allow.

Read more: https://www.wired.com/story/from-ai-to-russia-heres-how-estonias-president-is-planning-for-the-future/

France wants to become an artificial intelligence hub

Emmanuel Macron and his government are launching a big initiative around artificial intelligence today. They want to turn France into one of the world’s leading countries in AI.

“[Artificial intelligence] is a technological, economical, social and obviously ethical revolution,” Macron said in a speech. “This revolution won’t happen in 50 or 60 years, it’s happening right now. There are new opportunities and we can choose to follow some innovations or not.”

Some of the best mathematics and engineering schools are in France, and some of the best data scientists and AI researchers come from France. Many of them now work in California or London for companies like Facebook and DeepMind. And the French government wants to capitalize on that soft power to make an AI push.

And yet, how do you attract engineers and scientists? France’s answer is quite complicated because the government doesn’t want to inject a ton of public money and call it a day. It’s all about creating an AI ecosystem with strong pillars.

France’s AI strategy

First, many private companies have opened or plan to open AI research centers in France. Facebook and Google already work with hundreds of researchers in Paris. Today, Samsung, Fujitsu, DeepMind, IBM and Microsoft all announced plans to open offices in France to focus on AI research.

This represents tens of millions of dollars in investments and hundreds of employees. “Everybody is saying that Silicon Valley is overflowing right now,” a source close to the French President told me. That’s why big tech companies need to find talent outside of the U.S.

Foreign companies creating hundreds of jobs won’t, by itself, foster public research or European tech giants, though; these companies are just tapping the smartest brains they can find. That’s why the French government wants to make it easier to work on fundamental research papers when you work for a private company.

The INRIA is going to create a national AI research program with four or five partners. The goal is quite simple — Macron said that there should be twice as many people studying and researching AI projects in France. Researchers will also be able to access and use some cloud computing capacities for their work.

It’s also going to get easier if you want to create a startup based on your research work or if you want to work for a private company during your PhD.

Second, France is going to set some new boundaries when it comes to data. French administrations are going to share new data sets so that anyone can build AI services using those data sets.

When it comes to health data, it looks like France wants to avoid another NHS/DeepMind scandal. While multiple French governments have worked on some kind of health data hub, Macron announced that this time it’s going to happen for real. The INDS is going to make sure that services and public institutions respect your privacy and grant authorizations on a case-by-case basis.

Third, when it comes to regulation, companies will be able to experiment in multiple industries. And it starts with autonomous vehicles. Companies will be able to test level 4 self-driving cars in 2019.

Overall, France is going to invest $1.85 billion (€1.5 billion) in AI projects, from public research to startup investments. Last year, Macron announced that there would be a new $11.2 billion (€10 billion) public fund managed by Bpifrance. Macron said today that AI startups should be the first priority of this new fund.

Making AI as neutral as possible

Arguably, the most interesting part of Macron’s speech was about the moral impact of artificial intelligence. As algorithms become more prominent, there’s a risk that they become black boxes that decide for you.

Under Axelle Lemaire’s law, the French administration already has to share the algorithms and data it uses. But that’s still not the case for some touchy subjects. Macron said it’s still a work in progress.

Research projects or companies financed with public money will also have to share everything — this could impact public infrastructure companies for instance.

But it’s more complicated when you’re talking about private companies. Macron said GDPR was a step in the right direction. And now, he wants to go further.

He didn’t offer any practical suggestions for now, but he said that there should be an international certification authority. For instance, this authority could make sure that there’s no bias in training data sets. I don’t think Facebook or Google would enjoy this new regulation.

Finally, you introduce a bias if your staff is not diverse enough. That’s why schools and universities should make sure that they train a diverse group of people.

Not the first AI push

As Next INpact pointed out, there have been multiple reports on artificial intelligence over the past few years — FranceIA, the CNIL, the OPECST and the European Economic and Social Committee all wrote their own recommendations when it comes to AI policies.

Today, Fields Medal winner and parliament member Cédric Villani shared a new report on artificial intelligence. It’s an interesting read, and it was the inspiration for Macron’s speech today.

According to a source close to the French President, multiple ministers now have to focus on artificial intelligence for their own industries.

Today’s report feels more like a starting point than a conclusion. The French President thinks that AI is something important but not urgent. Now, it’s all about convincing the rest of the government to put aside all the urgent tasks for a minute and look at what’s important.

Read more: https://techcrunch.com/2018/03/29/france-wants-to-become-an-artificial-intelligence-hub/

Lily raises $2M from NEA and others for a personal stylist service that considers feelings, not just fit

One of the reasons recently IPO’d Stitch Fix became so popular among female shoppers is because of how it pairs the convenience of home try-on for clothing and accessories with a personal styling service that adapts to your tastes over time. But often, personal stylists bring their own subjective takes on fashion to their customers. A new startup called Lily aims to offer a more personalized service that takes into account not just what’s on trend or what looks good, but also how women feel about their bodies and how the right clothing can impact those perceptions.

The company has now closed on $2 million in seed funding from NEA and other investors to further develop its technology, which today involves an iOS application, web app and API platform that retailers can integrate with their own catalogs and digital storefronts.

To better understand a woman’s personal preferences around fashion, Lily uses a combination of algorithms and machine learning techniques to recommend clothing that fits, flatters and makes a woman feel good.

At the start, Lily asks the user a few basic questions about body type and style preferences, but it also asks women how they perceive their bodies.

For example, when Lily asks about bra size, it doesn’t just ask for the size a woman wears, but also how she thinks about that part of her body.

“I’m well-endowed,” a woman might respond, even if she’s only a full B or smaller C – which is not necessarily the reality. This sort of response helps to teach Lily about how the woman thinks of her body and its various parts, to help it craft its recommendations. That same woman may want to minimize her chest, or she may like to show off her cleavage, she may say.

But as she shops Lily’s recommendations in this area, the service learns what sorts of items the woman actually chooses and then adapts accordingly.

This focus on understanding women’s feelings about clothing is something that sets Lily apart.

“Women are looking for clothes to spotlight the parts of their body they feel most comfortable with and hide the ones that make them feel insecure,” explains Lily co-founder and CEO, Purva Gupta. “A customer makes a decision based on whether a specific cut will hide her belly or downplay a feature they don’t like. Yet stores do nothing to guide women toward these preferences or take the time to understand the reasons behind their selections,” she says.

Gupta came up with the idea for Lily after moving to New York from India, where she felt overwhelmed by the foreign shopping culture. She was surrounded by so much choice, but didn’t know how to find the clothing that would fit her well, or those items that would make her feel good when wearing them.

She wondered if her intimidation was something American women – not just immigrants like herself – also felt. For a year, Gupta interviewed others, asking them one question: what prompted them to buy the last item of clothing they purchased, either online or offline? She learned that those choices were often prompted by emotions.

Being able to create a service that could match up the right clothing based on those feelings was a huge challenge, however.

“I knew that this was a very hard problem, and this was a technology problem,” says Gupta. “There’s only one way to solve this at scale – to use technology, especially artificial intelligence, deep learning and machine learning. That’s going to help me do this at scale at any store.”

To train Lily’s algorithms, the company spent two-and-a-half years building out a collection of more than 50 million data points and analyzing over a million product recommendations for users. The end result is that an individual item of clothing may have over 1,000 attributes assigned to it, which are then matched against the thousands of attributes associated with the user in question.
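As a rough illustration of what attribute-based matching can look like, here is a minimal Python sketch in which both garments and users are represented as weighted attribute vectors and scored by cosine similarity. The attribute names and weights are hypothetical; Lily’s actual feature set, attribute taxonomy, and models are proprietary and not public.

```python
# Minimal sketch of attribute-vector matching between a user and a garment.
# Attribute names and weights below are invented for illustration only.
import numpy as np

ATTRIBUTES = ["empire_waist", "v_neck", "bold_print", "stretch_fabric", "knee_length"]

def to_vector(weights: dict) -> np.ndarray:
    """Expand a sparse {attribute: weight} dict into a dense vector."""
    return np.array([weights.get(a, 0.0) for a in ATTRIBUTES])

def match_score(user: dict, item: dict) -> float:
    """Cosine similarity between a user's preference vector and an item's attributes."""
    u, v = to_vector(user), to_vector(item)
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# Negative weights can express attributes a shopper wants to avoid.
user_prefs = {"empire_waist": 0.9, "bold_print": -0.4, "knee_length": 0.7}
dress      = {"empire_waist": 1.0, "v_neck": 1.0, "knee_length": 1.0}

print(f"match score: {match_score(user_prefs, dress):.2f}")
```

In a system like the one described, the preference weights would be updated continuously as the shopper accepts or skips recommendations, rather than set once at sign-up.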

“This level of detail is not available anywhere,” notes Gupta.

In Lily’s app, which works as something of a demo of the technology at hand, users can shop recommendations from 60 stores ranging in price from Forever 21 to Nordstrom. (Lily today makes affiliate revenue from sales.)

In addition, the company is now beginning to pilot its technology with a handful of retailers on their own sites — details it plans to announce in a few months’ time. This will allow shoppers to get unique, personalized recommendations online that could also be translated to the offline store in the form of reserved items awaiting them when they’re out shopping.

Though it’s early days for Lily, its hypothesis is proving correct, says Gupta.

“We’ve seen between 10x to 20x conversion rates,” she claims. “That’s what’s very exciting and promising, and why these big retailers are talking to us.”

The pilot tests are paid, but the pricing details for Lily’s service for retailers are not yet set in stone so the company declined to speak about them.

The startup was also co-founded by CTO Sowmiya Chocka Narayanan, previously of Box and Pocket Gems. It now has a team of 16 full-time employees in Palo Alto.

In addition to NEA, other backers include Global Founders Capital, Triplepoint Capital, Think + Ventures, Varsha Rao (Ex-COO of Airbnb, COO of Clover Health), Geoff Donaker (Ex-COO of Yelp), Jed Nachman (COO, Yelp), Unshackled Ventures and others.

Read more: https://techcrunch.com/2017/12/13/lily-raises-2m-from-nea-and-others-for-a-personal-stylist-service-that-considers-feelings-not-just-fit/

TrueFace.AI is here to catch the facial recognition tricksters

TrueFace.AI knows if it's looking at a real face or just a photo of one.

Facial recognition technology is more prevalent than ever before. It’s being used to identify people in airports, put a stop to child sex trafficking, and shame jaywalkers.

But the technology isn’t perfect. One major flaw: It sometimes can’t tell the difference between a living person’s face and a photo of that person held up in front of a scanner.

TrueFace.AI facial recognition is trying to fix that flaw. Launched on Product Hunt in June, it’s meant to detect “picture attacks.”

The company originally created Chui in 2014 to work with customized smart homes. Then they realized clients were using it more for security purposes, and TrueFace.AI was born.

Shaun Moore, one of the creators of TrueFace.AI, gave us some more insight into the technology.

“We saw an opportunity to expand our reach further and support use cases from ATM identity verification to access control for data centers,” said Moore. “The only way we could reach scale across industries would be by stripping out the core tech and building a platform that allows anyone to use the technology we developed.”

“We knew we had to focus on spoof detection and how we could lower false positives.”

TrueFace.AI can detect when one or more faces are present in a frame and extract 68 raw landmark points for facial recognition. But its more distinctive feature is spoof detection, which can tell real faces from photos.
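For a sense of what 68-point landmarking looks like in practice, here is a minimal sketch using the open-source dlib library and its standard 68-landmark predictor model (which must be downloaded separately). This is only an illustrative stand-in; TrueFace.AI’s own detection pipeline is proprietary and not shown here.

```python
# Minimal sketch: detect faces and extract 68 landmark points with dlib.
# "shape_predictor_68_face_landmarks.dat" is dlib's publicly available model
# and must be downloaded separately; "face.jpg" is a placeholder input image.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = dlib.load_rgb_image("face.jpg")
for rect in detector(image):
    shape = predictor(image, rect)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(f"found a face with {len(points)} landmark points")
```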

“While working on our hardware, we tested and used every major facial recognition provider. We believe that doing that (testing every solution available) and applying facial recognition to a very hard use case, like access control and the smart home, allowed us to make a better, more applicable solution,” said Moore. “All of these steps led us to understand how we could effectively deploy technology like ours in a commercial environment.”

They made their final product by using deep learning. They trained classifiers with thousands of attack examples they collected over the years, and liked the results.
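To give a flavor of what training such a spoof classifier involves, here is a minimal PyTorch sketch of a tiny binary "real face vs. photo attack" network run on one dummy batch. The architecture, input size, and data are hypothetical; TrueFace.AI’s actual models and attack dataset are not public.

```python
# Minimal sketch of a binary spoof classifier (real face = 1, photo attack = 0).
# Architecture and data are placeholders, not TrueFace.AI's actual system.
import torch
import torch.nn as nn

class SpoofNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, 2)
        )

    def forward(self, x):  # x: (N, 3, 64, 64) face crops
        return self.classifier(self.features(x))

model = SpoofNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for labeled face crops and photo-attack examples.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```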

A “freemium” package is available to encourage the development community that helped TrueFace.AI come up with a solution. Beyond that, the Startup Package is $99 per month while the Scale Package is $199 per month. An Enterprise Plan is available via a custom agreement with TrueFace.AI.

While Moore couldn’t divulge exactly which companies are using the technology, he did say some of them are in the banking, telecommunications, and health care industries.

It’s a service that could become increasingly valuable as companies turn to facial recognition technology.

Read more: http://mashable.com/2017/07/07/trueface-ai-facial-recognition-photo-attack-detection/

Apple working on dedicated AI chip for iOS devices, report says


Apple is reportedly working on a new kind of chip, potentially for future iOS devices, that would be used just for processing AI, Bloomberg reports.

Bloomberg says the chip is called the Apple Neural Engine internally, and could be used for “offloading facial recognition in the photos application, some parts of speech recognition, and the iPhone’s predictive keyboard to the chip.”

By moving AI processing to a dedicated chip, battery life in devices could also see a boost since the main CPU and GPU wouldn’t be crunching as much data and gobbling as much power.

The report says Apple plans to integrate the chip into its devices, but it’s unclear when that’ll happen, and if any iOS devices launching this year will have it.

Apple’s work on an AI chip shouldn’t surprise anyone who’s paying attention to the competition. Virtually every tech company is working on improving AI processing on mobile devices.

Qualcomm’s latest Snapdragon 835 chip, which is already in devices like the Samsung Galaxy S8, has a special module dedicated to processing AI tasks.

Years ago, Apple started designing its own mobile processors to improve performance and reduce power consumption, and it’s really paid off.

Despite having fewer cores, the iPhone 7 still crushes the Galaxy S8 when it comes to sheer performance.

iPhones and iPads also come with an Apple-designed “M”-branded motion coprocessor to collect sensor data from the various included sensors (accelerometer, gyroscope, compass, etc.). It’s this M chip that helps with tracking health and fitness data.

Furthermore, in addition to the main Intel processors in the new MacBook Pros, there’s also a small Apple-made “T1” chip for powering the Touch Bar. Apple’s AirPods also have a custom W1 chip that helps with pairing them to iOS devices.

Clearly, Apple loves making custom chips for things. We’re all for it, especially if that means longer battery life.

If the future is AI everywhere (and it definitely looks like that’s where things are headed), it’s in Apple’s best interests to control the stack (like it always does) with its own AI chip.

Read more: http://mashable.com/2017/05/26/apple-artificial-intelligence-chip-iphone/