
Deep Learning And Machine Learning Simply Explained

January 17, 2016 · 3 mins read

In a recent article, we demystified some of the technical jargon that’s being thrown around these days like “artificial intelligence”, “SaaS”, “the cloud”, and “deep learning”. While the techies can debate among themselves the difference between “machine learning” and “deep learning”, we’re going to consider the two terms synonymous and henceforth just talk about “deep learning”. So just what is “deep learning”? Wanting to understand more, we came across an excellent TED talk by Jeremy Howard that finally explains in layman’s terms just what deep learning is. If you have 20 minutes, watch the video now and there’s no need to read any further. If you don’t have time to watch the video, here’s what we learned.

When you use Google Images to search for a “grey cat”, Google Images shows you grey cats. Is this because Google can recognize what a grey cat looks like? No. This is simply because Google searches the text associated with images, not the images themselves, to find grey cat pictures. So how can we train Google to identify grey cats by looking only at images? Here’s how we do it.
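To make that distinction concrete, here’s a minimal Python sketch of what text-based image search amounts to. The filenames and captions are made up purely for illustration:

```python
# Text-based image search: the engine never "sees" the pictures,
# it only matches the query against text attached to them.
# Filenames and captions below are invented for this example.

images = [
    {"file": "IMG_001.jpg", "caption": "my grey cat sleeping on the sofa"},
    {"file": "IMG_002.jpg", "caption": "sunset over the beach"},
    {"file": "IMG_003.jpg", "caption": "a grey cat chasing a laser pointer"},
]

def search(query):
    words = query.lower().split()
    # Return every image whose caption contains all the query words.
    return [img["file"] for img in images
            if all(w in img["caption"].lower() for w in words)]

print(search("grey cat"))  # ['IMG_001.jpg', 'IMG_003.jpg']
```

Swap the captions for anything else and the “cat detector” breaks; the pictures themselves were never consulted.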

Let’s start with a sample of 10 million random pictures from Facebook and teach Google how to learn from them. The first step entails scanning this massive set of pictures using an algorithm developed by a software developer at Google. What does this algorithm do? It looks at the relationships between pixels in a digital photo and tries to find objects of similar shape. Let’s try this with a simple example.

Let’s say the pictures were black and white and composed of circles, triangles, and squares. You could quite easily imagine an algorithm that first identifies differences in color (every color is actually a unique code in software) and then maps the sharp color boundaries that denote shapes. Each outline could then be classified by the direction and number of its edges as a circle, triangle, or square. You could even extend this to color pictures, and the computer could then point out a “red triangle” or even a “beige circle”. Without much coding at all, the computer now has the intelligence of a small child when it comes to identifying shapes.
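Here’s a rough Python sketch of that idea using the OpenCV library. The test image, thresholds, and shape names are our own toy construction, not Google’s actual algorithm:

```python
import cv2
import numpy as np

# Draw a test image ourselves so the sketch is self-contained:
# a red square, a beige circle, and a blue triangle on black.
canvas = np.zeros((200, 300, 3), dtype=np.uint8)
cv2.rectangle(canvas, (20, 40), (90, 110), (0, 0, 255), -1)
cv2.circle(canvas, (160, 80), 40, (200, 220, 245), -1)
pts = np.array([[230, 120], [270, 120], [250, 40]], dtype=np.int32)
cv2.fillPoly(canvas, [pts], (255, 0, 0))

# Step 1: find the sharp differences in color (the shape outlines).
gray = cv2.cvtColor(canvas, cv2.COLOR_BGR2GRAY)
contours, _ = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Step 2: describe each outline by how many straight edges it has.
for c in contours:
    approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
    name = {3: "triangle", 4: "square"}.get(len(approx), "circle")
    # Step 3: read the fill color back out of the original image.
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.drawContours(mask, [c], -1, 255, -1)
    b, g, r = [int(v) for v in cv2.mean(canvas, mask=mask)[:3]]
    print(f"{name} with (B, G, R) color ({b}, {g}, {r})")
```

Nothing here “understands” shapes; it just traces color boundaries and counts edges, which is exactly the small-child level of intelligence described above.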

Now let’s take this to the next level. Let’s take a sophisticated deep learning algorithm and feed it 100 million pictures from Facebook. Let’s tell the algorithm to find similar objects in this “big data” set and group them. These groups are then displayed to a developer who can label them. Humans would perhaps be the most obvious and frequent object the computer identifies. The developer might then be shown 50 humans the computer identified and could start to label sets within the group like “old person”, “baby”, “Chinese person”, or “freckled person”.
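A sketch of that grouping step, assuming each picture has already been reduced to a feature vector (we fake the vectors here so the example runs on its own), might use k-means clustering from scikit-learn:

```python
import numpy as np
from sklearn.cluster import KMeans

# In practice each picture would be reduced to a feature vector by the
# learning algorithm; we invent 1,000 random 64-dim vectors instead so
# this example is self-contained.
rng = np.random.default_rng(0)
feature_vectors = rng.normal(size=(1000, 64))

# Group similar pictures together without knowing what any group "means".
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(feature_vectors)

# Show the developer a few pictures from each group so they can label it
# ("old person", "baby", and so on).
for cluster_id in range(10):
    members = np.where(kmeans.labels_ == cluster_id)[0][:5]
    print(f"cluster {cluster_id}: pictures {members.tolist()} -> needs a label")
```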

After many, many iterations, the algorithm starts to recognize patterns in the mistakes it makes. It can now tell the difference between a young person with their hair dyed grey and an old person with grey hair. It knows this because a developer pointed out a facial feature called “wrinkles”, and the algorithm now associates wrinkles with old age.
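As a toy illustration of that correction, here’s a sketch with an invented two-feature dataset, showing how a “wrinkles” feature lets a simple model stop equating grey hair with old age:

```python
from sklearn.linear_model import LogisticRegression

# Invented toy data: each row is [has_grey_hair, has_wrinkles].
X = [
    [1, 1], [1, 1], [0, 1],   # old people: wrinkled, grey-haired or not
    [1, 0], [0, 0], [0, 0],   # young people, incl. one with dyed-grey hair
]
y = [1, 1, 1, 0, 0, 0]        # 1 = "old person", 0 = "young person"

model = LogisticRegression().fit(X, y)

# Grey hair but no wrinkles -> "young" (the dyed-hair case).
print(model.predict([[1, 0]]))   # [0]
# Grey hair and wrinkles -> "old".
print(model.predict([[1, 1]]))   # [1]
```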

Once the algorithm has learned sufficiently from all that “big data”, it can be fed new pictures, which it labels through visual identification alone, as seen in the example below:

[Image: machine learning example]
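In code, that final labeling step reduces to something like the sketch below, where `trained_model` and `load_image_features` are hypothetical stand-ins for whatever the real pipeline provides:

```python
# Hypothetical inference step: once trained, the model labels pictures
# it has never seen before.
def label_picture(path, trained_model, load_image_features):
    features = load_image_features(path)        # pixels -> feature vector
    return trained_model.predict([features])[0]

# e.g. label_picture("holiday_photo.jpg", model, featurizer)
# might return "grey cat" or "old person".
```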

Conclusion

If you’re still reading this, it probably means you didn’t watch the TED talk, so there are a few more takeaways you should know. “Deep learning” is expected to be so disruptive that 80% of service jobs globally will be replaced by deep learning machines. That’s not a typo. Here’s perhaps the most compelling proof of how powerful “deep learning” is: it’s industry agnostic, meaning that deep learning developers don’t have to know anything about the industry they are evaluating. The speaker of that TED talk, Jeremy Howard, has started a company that can detect malignant lung nodules in X-ray scans 50% better than humans, with developers who have no medical background at all. How incredible is that?



  1. So what about the deep learning that deals with human cognition and understanding, such as empathy, emotion, feelings, and holding sympathy for someone? These are, IMO, considered “deep learning”: understanding and filling in the connections between these traits, being able to put yourself in another’s circumstance.
    In short, I want to hear more about human thinking & interaction in this “deep learning” topic. After all, the biggest difference I see between AI and humans is, simply said, that machines can only categorize what they have been programmed to look for and identify. Humans go beyond categorization. I think you’re overestimating the intelligence of a robot when comparing it to the sensitive and moral feelings/beliefs a human can have. Therefore, I oppose the idea that 80% of service jobs will be lost to AI.