Lex Fridman Interview – Pieter Abbeel

For those of you in the field of Artificial Intelligence, you have probably come across Lex Fridman from MIT. He runs a series of videos in which he interviews many prominent figures in the field. These interviews provide many nuggets of information, for instance on research directions, constraints, and concerns in AI. If you have not started digesting them, here is the YouTube playlist to go through.

Pieter Abbeel is a professor at UC Berkeley, director of the Berkeley Robot Learning Lab, and one of the top researchers in the world working on how to make robots understand and interact with the world around them, especially through imitation and deep reinforcement learning. (Description taken from Lex Fridman's site.)

This talk, although short, contains many nuggets on robotics. The video starts off with an interesting question.

How long does it take to build a robot that can play as well as Roger Federer?

Immediately, Prof Pieter started thinking about a bipedal robot, and one of the first questions that came to his mind was, "Is the robot going to play on a clay or a grass court?" For those not familiar with tennis, there is a difference between the two: clay allows sliding while grass does not.

Lesson 1: We Bring Assumptions into Our Thought Process

So the assumption made was that the robot is bipedal, until Lex pointed out that it need not be so; it just needs to be a machine. This showed me that when we think about how to solve a problem, we may unknowingly bring in certain assumptions. To solve the challenge effectively, it is worth taking a step back and checking our assumptions.

Lesson 2: Signal-to-Noise Training

I found it interesting that Prof Pieter, when looking at how to train a robot to solve a particular problem, approached it from a signal-to-noise point of view. In other words, how can we deliver as much signal as possible to the robot, so that it learns to perform a task better and faster? Take the autonomous driving problem: is it better to have the robot drive and learn at the same time (through reinforcement learning), or to observe how a human drives and pick up the necessary rules of driving from those demonstrations? Or can simulation be used to train the robot to a certain level before moving the learning over to the actual environment?
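
To make that contrast concrete, here is a minimal sketch of my own (not from the interview) using a toy 1-D "steer toward the target" task. The environment, reward, and expert demonstrations are all invented for illustration; the point is simply that trial-and-error reinforcement learning receives one scalar reward per step, while imitation learning receives the correct action for every state, a much denser signal.

```python
# Hypothetical toy example: compare how much "signal" each training setup
# delivers. Everything here is a made-up illustration, not a real robot setup.
import random

def env_step(position, action):
    """Toy environment: the agent should steer its position toward 0."""
    new_position = position + action
    reward = -abs(new_position)          # one scalar per step: sparse signal
    return new_position, reward

def train_by_reinforcement(episodes=200):
    """Learn a steering gain purely from the agent's own trial and error."""
    best_gain, best_return = 0.0, float("-inf")
    for _ in range(episodes):
        gain = random.uniform(-2.0, 0.0)   # candidate policy: action = gain * position
        position, total = 5.0, 0.0
        for _ in range(20):
            position, reward = env_step(position, gain * position)
            total += reward
        if total > best_return:
            best_gain, best_return = gain, total
    return best_gain

def train_by_imitation(demonstrations):
    """Human demos give the correct action for every state (dense signal),
    so a simple least-squares fit recovers the steering gain directly."""
    num = sum(s * a for s, a in demonstrations)
    den = sum(s * s for s, _ in demonstrations)
    return num / den

if __name__ == "__main__":
    demos = [(s, -0.5 * s) for s in (-4.0, -1.0, 2.0, 5.0)]  # expert uses gain -0.5
    print("RL-style search found gain:", round(train_by_reinforcement(), 2))
    print("Imitation recovered gain:  ", round(train_by_imitation(demos), 2))
```

The sim-to-real option from the interview would correspond to running the cheap trial-and-error loop in simulation first and only fine-tuning on the real system afterwards.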

Such a thought process tells me that Artificial General Intelligence (AGI) is still some distance away, because human design decisions are still needed to ensure that our AI learns the correct behavior, and does so efficiently.

Lesson 3: Building a Suitable Level of Knowledge Abstraction

There was a discussion on building reasoning methods into AI so that it can learn about the existing world better. I am on the same page here. In my opinion, what is stopping current development from moving AI towards AGI is the knowledge representation of the world: how we can represent the world in terms of symbols at various abstraction levels, and teach the AI to move through these different abstraction levels so as to carry out the necessary reasoning.

For instance, when do we need to know that an apple is a fruit, and when do we need to know that an apple is not just a fruit but a provider of vitamins, and, continuing, that this apple provides Vitamin A, which is an antioxidant? How do we move through the different entities/labels and their representations so that we can build a smarter AI?
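
As a rough sketch of what "moving between abstraction levels" could look like, here is a tiny hand-built relation graph using the apple example above. The relations and the traversal are entirely made up for illustration; real knowledge representation is of course far richer than a lookup table.

```python
# Hypothetical relation graph: entity -> (relation) -> more abstract entity.
relations = {
    ("apple", "is_a"): "fruit",
    ("fruit", "is_a"): "food",
    ("apple", "provides"): "vitamin A",
    ("vitamin A", "is_a"): "antioxidant",
}

def climb(entity, relation, max_depth=3):
    """Follow one relation repeatedly, climbing abstraction levels step by step."""
    chain = [entity]
    for _ in range(max_depth):
        nxt = relations.get((entity, relation))
        if nxt is None:
            break
        chain.append(nxt)
        entity = nxt
    return chain

if __name__ == "__main__":
    print(climb("apple", "is_a"))        # ['apple', 'fruit', 'food']
    print(climb("apple", "provides"))    # ['apple', 'vitamin A']
    print(climb("vitamin A", "is_a"))    # ['vitamin A', 'antioxidant']
```

The open question is how an AI decides which chain, and which level of that chain, is the right one for the reasoning task at hand.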

I am very interested in understanding knowledge representation/abstraction and how we can build it into our Artificial Intelligence, but let us see if there is a suitable opportunity to pursue this research direction. 🙂

Lesson 4: Successful Reinforcement Learning Is About Design

Can we build kindness into our robots?

That was, I believe, the last question asked, and Prof Pieter mentioned that it is possible, and I concur. What we need is to build "acts of kindness" into our objective function and ensure that we send back the right signal/feedback so that these "acts" stay with the robot.

We have come very far when it comes to Reinforcement Learning, given the developments on the deep learning front. But I feel that, at the end of the day, how well reinforcement learning agents perform to specification will greatly depend on the AI scientist: how they design the objective function, how fast feedback is sent to the agent, how the agent interprets the signals, and more. Designing the agent's behavior and environment is an iterative, experimental process. There is only a very small chance that we get it right on the first try, so be prepared to work on it iteratively.
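
Here is a hedged sketch of that design lever: the behavior an agent ends up with is largely shaped by how the reward is written. The reward terms, weights, and "kindness" bonus below are invented purely for illustration, not a proposal from the interview.

```python
# Hypothetical composite objective: task performance, efficiency, and an
# explicit bonus for helpful ("kind") behavior toward people nearby.
def reward(task_progress, energy_used, helped_human, kindness_weight=0.5):
    return task_progress - 0.1 * energy_used + kindness_weight * helped_human

# Iterating on the design: the same trajectory scores differently as we
# re-weight kindness, which is exactly the tuning the designer owns.
trajectory = {"task_progress": 1.0, "energy_used": 2.0, "helped_human": 1.0}
for weight in (0.0, 0.5, 2.0):
    print(weight, reward(kindness_weight=weight, **trajectory))
```

Each re-weighting changes what the agent is incentivized to do, which is why getting the objective right is an iterative, experimental process rather than a one-shot decision.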

If you are interested in discussing this article or have any AI-related questions, feel free to link up with me on LinkedIn.
