Robots are learning how to learn. And they might soon be outsmarting us.

Google claims to be on the verge of achieving human-level AI, according to this article in the Independent. So we decided to dig into the science behind machine learning, and ask when AI will actually reach human level.

Independent: ‘The Game is Over’: Google’s DeepMind says it is on verge of achieving human-level AI

Today’s machine learning is strikingly similar to human learning

Machine learning is in many ways similar to the way we humans learn. Roughly speaking, robots can learn new things in three ways: under complete supervision, under no supervision, or somewhere between the two.

  • Supervised learning is very structured, with a human teacher teaching the machine everything there is to know about its ABC.
  • Unsupervised learning is the least developed form of learning, where robots learn totally independently.
  • But the method that is used most often sits between the two. This works the way kids learn: the robot first behaves in random ways and evaluates how successful each of these behaviours has been, supported along the way by human intervention. Once it has found the best behaviour, it repeats it with some changes in parameters, such as the environment. This kind of learning is called reinforcement learning.
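That trial-and-error loop can be sketched in a few lines of code. This is a minimal, illustrative example (a so-called multi-armed bandit with an epsilon-greedy strategy), not how any real robot is trained: the agent tries actions at random, keeps a running estimate of how well each one worked, and increasingly repeats the best one. The reward values and parameters here are made up for the demo.

```python
import random

def run_bandit(true_rewards, steps=5000, epsilon=0.1, seed=0):
    """Trial-and-error learning: behave randomly at first, evaluate how
    successful each behaviour was, then repeat the best one found."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)  # learned value of each action
    counts = [0] * len(true_rewards)       # how often each action was tried
    for _ in range(steps):
        if rng.random() < epsilon:
            # explore: behave in a random way
            action = rng.randrange(len(true_rewards))
        else:
            # exploit: repeat the best behaviour found so far
            action = max(range(len(true_rewards)), key=lambda a: estimates[a])
        # noisy feedback from the environment
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        counts[action] += 1
        # update the running average estimate for this action
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# After enough trials, the agent should discover that the third
# action (average reward 0.9) is the best behaviour.
values = run_bandit([0.2, 0.5, 0.9])
```

The key point the example makes concrete: nobody tells the agent which action is best; it has to discover that from thousands of noisy trials, which is exactly why machines need so much data.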

… but machines have their own unique skills and struggles.

When we look at these ways machines learn, it's very appealing to compare a robot to a kid. But alongside the similarities, there are also some major differences. For one, it's striking how much data a robot needs to process to learn a new skill. Take Google Translate, arguably one of the best translation programs available. It took about 15 years to get to the level it's at now. And although its translation skills are really impressive, it's still far from perfect.

And this is one of the major obstacles for machine learning: the huge amount of data machines need to learn. One of the ways to deal with this, scientists realized, is to simplify the data that's fed to the algorithms. And in many ways, that's again similar to the way kids learn. Cartoons show simplified versions of reality. Parents talk to kids in a simplified language.
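To make "simplifying the data" concrete, here is one common flavour of it: shrinking the vocabulary of a text dataset so the algorithm only has to deal with the most frequent words, a bit like the simplified language parents use with kids. This is an illustrative sketch, not the preprocessing pipeline of any particular system; the sentences and the `<unk>` placeholder convention are just for the demo.

```python
from collections import Counter

def simplify(sentences, vocab_size=3):
    """Lowercase the text and keep only the `vocab_size` most frequent
    words; everything rarer is replaced by an <unk> placeholder."""
    words = [w for s in sentences for w in s.lower().split()]
    vocab = {w for w, _ in Counter(words).most_common(vocab_size)}
    return [" ".join(w if w in vocab else "<unk>" for w in s.lower().split())
            for s in sentences]

simplified = simplify(["The cat sat", "the cat ran", "The dog sat"])
# rare words like "ran" and "dog" are collapsed into <unk>
```

The algorithm now sees far fewer distinct symbols, which means it needs less data to learn the patterns among them.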

Wanna know how the different learning approaches of Google Home and Microsoft's Cortana led to stark differences in skills? You can hear more about it in the podcast!

Listen from 11:10

Machines are specialists

So going back to the comparison between machines and kids… you'll realize how limited machines are compared to humans in terms of their total skill set. Google Translate knows over a hundred languages (109, to be exact), but that's really the only skill it has: translating. The input may come in different formats, via voice, via photo, or simple text, but in the end it's just a translator.

Typically, machines are developed to be really good at one specific task. Google Translate doesn't translate 100% as naturally as humans do, and it has a lot of flaws, but it's capable of doing an 80% job across 109 languages. A robot arm can only pick a certain range of things, but it can pick them 24/7, and faster than any human picker.

So for bots like Google Translate, photos and voice add complexity to the task. But remember, Google Translate is just a virtual bot. Which brings us to the third major obstacle in machine learning: the complexity of the real world. A robot is more than a computer; it's a computer linked to a physical body, one that physically interacts with its environment.

It is much more complex to learn from a dynamic environment than it is to learn by processing static data like pictures and language samples through algorithms. A moving robot keeps seeing the world a little differently, and it has to make real-time decisions that are also precise and reliable. If a robot makes a mistake, it's a lot more costly in the physical world than in the virtual one.

To stay with the example of language: what happens if you take it to the next level and have a machine translate in real time, in a real-life situation? You have different voice inputs. Surrounding noises. Physical context. Etc. And for a robot to handle all this successfully… that's a whole other level.

Wrapping it up

  • Robots learn in a similar way to kids, but
  • The main difference is that robots can handle huge amounts of data, like 109 languages, yet they don't manage to master any of it 100%
  • The complexity of the physical world adds to the learning challenge for robots
  • Mistakes are more costly in the physical world than in the virtual one

A question for everyone

Would you feel excited or frightened that your virtual assistant is getting smarter every day?
