The Dark Side of AI

AI and the Superficiality Paradox

Artificial Intelligence, what a concept!

Some say that AI might be one of the Great Filters for Mankind, i.e. an existential risk that might cause the entire human species to go extinct. If we are all going to die, you might be interested in knowing how that might happen, so here is a very educated take by Tim Urban and here is a more dystopian one by Elon Musk. Artificial General Intelligence will dramatically change the way we live but, fortunately or not, it still seems to be at least a few decades ahead of us. Applied AI, on the other hand, built by startups to create value, is already reshaping our lives and, as an investor, it is my main area of interest.

I joined Elaia Partners 18 months ago, drawn by the intellectual challenge that awaited. The track record, the team and the investment focus made it even clearer and more exciting: I was there to dig up the best startups among those with a strong technological edge or rare skills. Ambitious plan.

My relationship with AI started like any other area of interest, but as I kept digging, it soon turned into a love story. A love story is made of ups and downs, and I wanted to share them with you in this paper.

Recently, many blog posts and public announcements have made it evident that AI is the next (current?) big thing, just as “2.0”, “User Generated Content”, “Mobile First”, “IoT” or “Big Data” used to be: concepts that now sound outdated when said out loud.

Every wave of innovation has its buzzwords and in hindsight, the dots connect pretty well. The “2.0” and “UGC” paradigms turned people into data providers, “Mobile First” and “IoT” made this collection of data ubiquitous, and “Big Data” provided the infrastructure and services to store, clean and exploit it. AI now appears as the last block, where meaning and relevance are extracted. But AI also seems a little different from its ancestors: it appears so global, with such infinite potential, that nothing could be built upon it to supplant it. As if, from now on, there could just be better and better AI innovation ’til the end of time.

As Sundar Pichai, Google’s CEO, illustrated this April, “We will move from mobile first to an AI first world”: we are building a world where AI will be pervasive in businesses and societies.

However, over the past 18 months, I’ve seen AI being treated just like another business segment, a buzzword among others. This is what I call the Superficiality Paradox.

Even though opinion leaders might disagree on the terminology (Satya Nadella, Microsoft’s CEO, thinks that “We are not building an AI-first world, we are building a people-first world with AI everywhere”), everyone seems to agree on one thing:

AI capability is unprecedented; it is the result of a series of major innovations that took place over the last decades and might be an ultimate evolution.

Despite this consensus, it feels like we are only seeing the tip of a huge iceberg. Yes, AI is everywhere. Yes, everyone is doing AI. But AI is not standardized yet, and the many investments we see in this first wave might result in a series of disappointments.

Events everywhere, the trend is here

The first symptom of the Superficiality Paradox is the explosion of AI-related events.

Sure enough, these events (meetups, corporate events, startup weekends, …) are awesome for the ecosystem because they contribute to building and maintaining momentum around AI in France by:
 1- Acknowledging that AI is a serious topic and that society needs to brace itself.
 2- Enabling better knowledge sharing between entrepreneurs, experts, students and the curious. We need a knowledgeable base of entrepreneurs, employees and clients.
 3- Breaking the ice and turning wannabes into entrepreneurs. We need less friction at ignition among our talent base.

September was a studious month for AI lovers: Machine Learning Paris meetup, AI.labs, Paris.ai and France is AI.

Each was great with regard to its respective objectives. Some featured profound, technical discussions while others, like France is AI, gathered the gotha of the Paris AI scene to highlight the effervescence of the ecosystem. I attended France is AI, to which I had happily been invited (special thanks to Paul Strachman), eager to hear from so many brilliant minds about the inevitability and impact of an AI-led transformation of society. I may have been a little frustrated by the lack of depth of the talks and the quick pace at which one panel replaced the other, but this was not the point. The first edition was a successful landmark and calls for more.

So, let’s get out there, let’s gather and give our entrepreneurs, students and corporate partners some frameworks and some material to sleep on.

AI is everywhere but is it AI?

Today, two major trends are transforming the economy: Uberisation and AI. You might often have heard that France is a haven for AI startups because of its huge talent base, highly qualified in mathematics. You might also have heard that big companies have started to put some of their research and AI eggs in Paris. Well, you heard right. Between these two major trends, at Elaia, we have been betting on the second for a while now.

Over the past 18 months, I have met many companies that said they were implementing AI in their products: either explicit mentions of AI areas like “Machine Learning” or “Deep Learning”, or implicit mentions of specific problems that are now only tackled with AI, such as “Natural Language Processing” or “Image Recognition”.

Among these companies, some have built impressive products that confirm the potential we see in AI.

Some of them (most?), on the other hand, fell into at least one of these three categories:

a) Misconception of what AI means and how it differs from classical algorithms
As used to happen with its older brothers, the word “AI” gets overused for marketing purposes, often in the wrong way. That is why you can easily meet an “AI company” that is actually barely sorting e-commerce listings by price or houses by location. This certainly brings value to someone, but it was mastered decades ago.

Here, the problem seems to be semantic. “Intelligence” refers to either basic or more complex tasks accomplished by the human brain, among which you can find computation. So, any machine that can compute (you can call that a computer) can be seen as a form of machine intelligence. AI companies are not talking about this kind of “Intelligence”.

The fundamental difference is that AI companies are building machines that can learn by themselves.

The complexity of your algorithm has little relevance here: as long as you are implementing a deterministic algorithm that will produce the same output every time you run it, you are just building a nice-looking Excel spreadsheet. Only the developer’s intelligence is involved.
In the previous sorting examples, intelligence is about finding the meta-criteria to be weighted and the corresponding optimal weights to produce a desired output. Sometimes this set of criteria and weights is hard-coded by a developer (Human Intelligence, or HI) and sometimes the machine finds it by itself (AI).

HI is about coding the world; AI is about coding the brain.
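The distinction above can be sketched in a few lines of Python. This is a toy illustration with made-up listings, features and scores, using plain least squares as the simplest possible learning process: in the HI version a developer hard-codes the weights, in the AI version the machine derives them from observed examples.

```python
import numpy as np

# Hypothetical listings: each row is [price, distance_km, rating];
# y is a relevance score observed from user behavior (made-up numbers).
X = np.array([[30.0, 1.2, 4.5],
              [80.0, 0.4, 4.9],
              [55.0, 2.5, 3.8],
              [20.0, 3.0, 4.1]])
y = np.array([0.8, 0.9, 0.3, 0.5])

# Human Intelligence (HI): the developer picks the weights once and for all.
def hi_score(row):
    return -0.01 * row[0] - 0.1 * row[1] + 0.2 * row[2]

# Artificial Intelligence (AI): the machine finds the weights itself
# from examples, here via ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Both produce a ranking of the listings, but only the second one
# changes when fresh examples come in.
hi_ranking = sorted(range(len(X)), key=lambda i: hi_score(X[i]), reverse=True)
ai_ranking = sorted(range(len(X)), key=lambda i: X[i] @ w, reverse=True)
print("HI ranking:", hi_ranking)
print("AI ranking:", ai_ranking)
```

In the HI version the weights never move unless a developer edits the code; in the AI version, feeding new examples yields new weights, which is the learning process the rest of this post takes as the defining trait of an AI company.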

b) A company using AI to incrementally augment the value proposition
Some companies deliver value through a non-AI product or service but use AI-based features on the side, for instance to marginally improve the value proposition with a little personalization. Think of Uber, which uses AI to dispatch drivers.
This is definitely smart, and any company with the internal resources to look into such strategies should give it a try, but you cannot sell that as a competitive advantage.
If you do, you are probably tarnishing your true value proposition by forcing yourself into the wrong box.
Talking about superficiality: here, you take the risk of having to explain and sell only the tiny part of the value you create, just because you are focusing all the attention on it.

c) A company using AI to improve its internal processes
Another smart move from some companies is to use AI to improve their internal processes, for instance an analytics tool that learns from their user base, either to update the product roadmap accordingly or to better segment their prospects. Go for it if you have the resources and the know-how; you will probably save a few months of iteration. But don’t say you’re an AI company.

That being said, I am not being totally fair to AI. Some French AI companies are already nailing it at a national or global level. If you agree with me that every future business will have a layer of machine intelligence at some point, the questions to ask are rather “what are the characteristics shared by the most promising AI startups today?” and “what are the areas most likely to see a fast emergence of AI players?”

The basic common characteristic shared by every AI startup we are going to discuss is that they have implemented a learning process somewhere.

AI Ambition

What I mean by AI ambition can be at first summed up in the distinction between better-than-human-AI and almost-as-good-as-human-AI.

In the better-than-human approach, you are building a decision-making system which will produce fewer errors than a human making similar decisions, or a similar output with far fewer resources (time, money, …). By contrast, the almost-as-good-as-human approach accepts making more mistakes as long as the resources saved or the brand new possibilities it enables are worth it.

I definitely think that the better-than-human approach is more defensible over time and will more likely lead to a successful business.

Why is that so? And why is it important to make this distinction?

First, in today’s AI, some companies are trying to reproduce elementary features of the human brain. This is the case, for instance, in Computer Vision and Natural Language Processing. These tasks may seem elementary for humans, partly because most humans perform them almost perfectly by the time they leave elementary school, but also because on a daily basis they require basically no energy or particular focus.
Evolution (in its Darwinist form) is no stranger to that matter, as it fine-tuned our brain over millions of years to interpret light or sound signals. We have developed over time a competitive advantage over machines on those tasks. These are typically the tasks that fall into the almost-as-good-as-human category. To catch up even a little, machines need to be exposed to insanely huge data sets, and that is why today only big companies with mainstream products that gather proprietary data sets are able to compete in this area.

Until someone finds a significantly better way to approach the human brain architecture in terms of learning methodology, these tasks will be best done by the likes of Google, Facebook, Amazon, Apple or Microsoft. But in the end, a cat is a cat, and we are nowhere near a machine that spots cats in pictures better than a human being.

But there are plenty of remaining tasks on which we did not develop any particular advantage over time. Often, it’s because we only recently created the datasets, when we invented specific measurement tools. Think about electrocardiograms, seismographs, clicks on a web page or even resumes. Each of these is a signal built by humans to best approximate a much more complex phenomenon: the beating of a heart, the diffusion of waves in the ground, the intentions of a person in front of a screen or someone’s professional life. On Evolution’s time frame, these signals are brand new to us and we haven’t had time to develop any particular ability to process or decode them. Machines can compete on an equal footing. Analyzing such signals to extract meaning is the kind of task that typically falls into the better-than-human category.

The above list is obviously non-exhaustive and can also include complex decision-making situations involving several tasks performed at once, like driving, which is also a brand new behavior for human beings.

There are a lot of areas where machines already are, or will soon be, better than humans. This is where you should dig, and more specifically where humans have purposely developed a competitive disadvantage.

Come on, there is no such thing as a competitive disadvantage, right?

Well, actually, there is. Humans are limited in their information processing capabilities by the power of their brain. To bypass that, they have developed over time shortcuts, or heuristics, that produce almost correct results almost every single time at almost no cognitive cost, which is great because it means easy and real-time. These heuristics may induce errors in reasoning or decision-making processes. The origin of those errors is what we call cognitive biases.

Such biases include, for instance, the Confirmation Bias, through which you tend to overestimate the validity of arguments that confirm what you already think, or the Availability Heuristic, through which you overestimate the importance of easily retrievable information.

Let’s focus on a specific bias called the Misconception of Chance. You are misconceiving chance when you tend to think that small samples are representative of the larger datasets they are drawn from, or, said differently, when you think that chance is going to correct itself over time. As a consequence, you tend to underestimate the probability of an event occurring again if it just occurred, and to overestimate its probability if it has been a long time since it last occurred. To illustrate that, let’s say there is a volcano that has been erupting randomly over time, but on average every 500 years. Would you move to live next to it if it last erupted 50 years ago? Would you go if it were 499 years ago? In the end, you are exposing yourself to the same level of risk in both cases, but most of us will feel more comfortable in the first situation.

This bias is very present in everyone’s daily decision-making, and sometimes it is more dramatic than others. One of the best startups I saw at France is AI is Cardiologs, which develops an AI that goes through patients’ electrocardiograms (ECGs) to detect heart conditions. As it turns out, a cardiologist is just as affected as the rest of us by the Misconception of Chance bias: when he looks through your ECG for a heart condition, he underestimates the probability of you having the same condition he just detected in the previous patients. Even more, statistical outliers that occur, let’s say, once in a million times usually just go unnoticed. It is as if you had a disease so rare that only Doctor House could find it, but you went to your regular real-life physician to be treated. Chances are you are doomed. In this specific case, we probably need a real-life Doctor House, and if you need AI to build it, so be it!

In this situation as in a myriad of others, an Artificial Intelligence is potentially way better than a Human Intelligence.

AI > HI is what I’m looking for.

Level of Innovation

We have indirectly and only partially discussed the attractiveness of different sectors for an AI company; let’s now discuss the business paradigms available.

I see three main ones:
 1) The Last-Mile: Using mainstream AI technology developed by others to focus your efforts on the last mile value creation.
 2) The Full-Stack: Building the whole stack to adapt the existing AI paradigms to a specific use case.
 3) The Infrastructure: Being the horizontal player selling AI capabilities to the companies taking the first approach.

In the Last-Mile play, you bet on a commoditization of the technology, which will either be open sourced or provided by a few big players like IBM. The bet is similar to the one many companies made when they moved their servers to AWS early on, for instance.
This approach may be less complex, but it is definitely not easier. It requires a deep understanding of how the customer wants the value to be delivered, so that you make up for the commoditized efficiency of the solution. It is what we saw during the big data movement with a company like Docker, which packaged a better product on top of an open source technology.
One way to defend this bet is to secure access to proprietary data sets that turn commoditized AI tools into more business-specific engines. Access to first-party data is definitely a key asset for an AI company.

The Full-Stack play requires a deeper understanding of the algorithmic part and its subtleties. You identify an addressable business case and you build the stack bottom-up, optimizing every single layer for the specificities of your business case. At Elaia Partners, we have been betting early on this particular approach for over a decade now, by being the first institutional investor in Criteo (AI applied to advertising performance), adomik (AI applied to price optimization on ad exchanges), Shift Technology (AI applied to insurance fraud detection) and Tinyclues (AI applied to behavior prediction in eCommerce). The complexity here is to adapt general AI paradigms to various constraints. Be it a fragmented and diverse data set, a need for real-time response, a legal restriction or particular in-house processes from your clients, you have to build your engine and the value-delivering top layer hand in hand.

Alternatively, with the Infrastructure approach, you can bet that you will be one of these horizontal players selling the commodity. But be careful: there won’t be room for everybody, and some players, like IBM with Watson, already have a significant edge on the AI-as-a-Service market.

For now, the Full-Stack approach seems more relevant in France, where the insanely qualified talent pool makes it the perfect spot to gather a team able to handle a full-stack AI product.
The infrastructure part of the equation (AI as a Service) seems bound to be provided by the tech giants, and their acquisition pace in AI suggests as much.

Only better AI ’til the end of time?

First, let’s all remember that this discussion stands “until someone finds a significantly better way to approach the human brain architecture in terms of learning methodology”. For now, even if machines can learn by themselves, they are still tackling issues presented by their developers and answering the specific questions they have been asked. These questions can be “who will most likely buy this product from me?” or “is this claim fraudulent?”. But open problems, like “what would you do if you were elected president?”, cannot be addressed by AI yet.

That is why a frontier is yet to be broken for AI: Machine Creativity.

Machine creativity is when machines will not only be able to find the best answer to a specific problem but will also be able to find problems to address by themselves. One recent result that made the headlines was Watson, the AI developed by IBM, which watched a movie and made a trailer out of it. The articles said “Watson created a movie trailer”, but make no mistake: here, we are talking about content creation, not creativity. To do that, Watson had to watch thousands of movie trailers to understand the geometry of a good one before being able to produce its own. In doing so, it absorbed all kinds of explicit or implicit trailer-creation rules that many humans have been using for a while now.

Even if there are many AI-related academic challenges ahead, I believe that AI applied to specific business cases has reached a point where it can significantly impact the world. The initial challenge I took on when I joined Elaia becomes more exciting every day, as France is quickly turning into a reference and a central hub for high-tech B2B companies aiming at global leadership. AI is likely to lead the way and we are lucky enough to be there at this turning point.

If we are not at The Singularity yet, let’s at least try to make a singularity out of France!