Letter #19 - In which we learn to be dexterous in order to get to where we want to go.
If you really want to get where you set out to go, you’re going to have to get off the wrong bus.
Our approach, domain randomization, learns in a simulation which is designed to provide a variety of experiences rather than maximizing realism. This gives us the best of both approaches: by learning in simulation, we can gather more experience quickly by scaling up, and by de-emphasizing realism, we can tackle problems that simulators can only model approximately.
…By building simulations that support transfer, we have reduced the problem of controlling a robot in the real world to accomplishing a task in simulation, which is a problem well-suited for reinforcement learning. While the task of manipulating an object in a simulated hand is already somewhat difficult, learning to do so across all combinations of randomized physical parameters is substantially more difficult.
This should be kept in mind for future projects, too:
Generally, we found better performance from using a limited set of sensors that could be modeled effectively in the simulator instead of a rich sensor set with values that were hard to model.
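The quoted idea can be sketched in a few lines: instead of calibrating one realistic simulator, you resample the physical parameters at every episode so the policy must work across the whole randomized range. Everything below is an illustrative assumption (a toy environment with made-up parameter ranges), not OpenAI's actual setup.

```python
import random

class RandomizedSim:
    """Toy simulator whose physical parameters are resampled each episode.

    Hypothetical sketch of domain randomization; the parameter names and
    ranges are illustrative, not taken from the original work.
    """

    def __init__(self):
        self.friction = 0.5
        self.object_mass = 1.0

    def randomize(self):
        # Sample fresh physics so the policy cannot overfit to one
        # calibrated value and must generalize across the range.
        self.friction = random.uniform(0.2, 1.2)
        self.object_mass = random.uniform(0.5, 2.0)

    def reset(self):
        self.randomize()
        return (self.friction, self.object_mass)

def train(episodes=3):
    sim = RandomizedSim()
    seen = []
    for _ in range(episodes):
        obs = sim.reset()  # new randomized physics every episode
        seen.append(obs)
        # ... run the RL policy against this randomized instance ...
    return seen

params = train()
```

A policy trained this way never sees the "true" real-world parameters, but if the randomized range covers them, transfer becomes a special case of what it already handles.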
“Our all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement”
Not sure if 3D-printed diffractive deep neural networks are the future of AI, but being able to run DNNs on custom hardware sure looks cool.
P.S. You may have noticed there was no letter last week. This is by design :) — from now on, Quotes, Songs, and Machine Learning will appear bi-weekly, due to time constraints on my part. Do let me know if that’s an issue.
Not sure what you just read? Take a look at this post.
Enjoyed it? Subscribe to my tiny newsletter and get new issues straight to your inbox. Every week, on Tuesdays.