
Two Minute Papers: Google’s New AI Learns Table Tennis!


Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Do you see this new table tennis robot? It barely played any games in the real world, yet it can return the ball more than a hundred times without failing. Wow! So, how is this even possible? Well, this is a sim-to-real paper, which means that the robot first starts learning in a simulation. OpenAI did this earlier by teaching a robot hand in a simulated environment to manipulate a Rubik's Cube, and Tesla also trains its cars in a computer simulation. Why? Well, in the real world, some things are possible, but in a simulated world, anything is possible. Yes, even this. And the self-driving car can safely train in this environment, and when it is ready, it can be safely brought into the real world. How cool is that?

Now, how do we apply this concept to table tennis? Hmm, well, in this case, the robot would not move, but it would play a computer game in its head, if you will. But not so fast. That is impossible. What are we simulating exactly? The machine doesn't even know how humans play. There is no one to play against. Now, check this out. To solve this, first the robot asks for some human data. Look, it won't do anything; it just observes how we play. And it only requires short sequences. Then, it builds a model of how we play and embeds us into a computer simulation, where it plays against us over and over again without any real physical movement. It is training the brain, if you will. And now comes the key step: this knowledge from the computer simulation is transferred to the real robot.

And now, let's see if this computer game knowledge really translates to the real world. So, can it return this ball? It can? Well, kind of. One more time. Okay, better. And now, well, it missed again. I see some signs of learning here, but this is not great. So, is that it? So much for learning in a simulation and bringing this knowledge into the real world. Right?
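The first phase described above — observe short sequences of human play, build a model of the opponent from them, then train against that model entirely in simulation — can be sketched in a few lines. Everything below (the function names, the one-dimensional "dynamics," and the trivially simple policy) is an invented toy stand-in for illustration, not the actual system from the paper:

```python
import random

def observe_human(num_sequences=20, length=5):
    """Collect short sequences of (incoming_ball, human_return) pairs.

    Stands in for the brief observation phase: the robot does nothing,
    it just watches how the human returns incoming balls.
    """
    random.seed(0)
    return [[(random.uniform(-1, 1), random.uniform(-1, 1))
             for _ in range(length)]
            for _ in range(num_sequences)]

def fit_opponent_model(sequences):
    """Toy opponent model: the human's average return offset per incoming ball."""
    pairs = [pair for seq in sequences for pair in seq]
    bias = sum(ret - ball for ball, ret in pairs) / len(pairs)
    return lambda ball: ball + bias  # predicted human return

def train_in_simulation(opponent, episodes=1000):
    """Toy 'policy training': learn a correction that cancels the opponent's
    bias by playing against the model over and over, with no physical movement."""
    correction, lr = 0.0, 0.05
    for _ in range(episodes):
        ball = random.uniform(-1, 1)
        ret = opponent(ball)
        error = ret - (ball + correction)  # miss distance of the current policy
        correction += lr * error           # nudge the policy toward the return
    return correction

sequences = observe_human()
opponent = fit_opponent_model(sequences)
policy_correction = train_in_simulation(opponent)
```

The point of the sketch is the structure, not the learning rule: a cheap model of the human is fitted from very little data, and all of the expensive trial-and-error then happens against that model in simulation.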
Well, do not despair, because there is still hope. What can we do? Well, now it knows how it failed and how it interacted with the human. Yes, that is great. Why? Because it can feed this new knowledge back into the simulation. The simulation can now be fired up once again, and with all this knowledge, the process can repeat until the simulation starts looking very similar to the real world. That is where the real fun begins. Why? Well, check this out. This is the previous version of this technique, and as you see, it does not play well. So, how about the new method? Now, hold on to your papers and marvel at this rally. Eighty-two hits and not one mistake. This is so much better. Wow, this sim-to-real concept really works.

And wait a minute, we are experienced Fellow Scholars here, so we have a question. If the training set was built from data where it played against this one human being, does it only know how to play against this person? Or did it obtain more general knowledge, and can it play with others? Well, let's have a look. The robot hasn't played against this person before. And let's see how the previous technique fares. Well, that was not a long rally. And neither is this one. Now, let's see the new method. Oh my, this is so much better. It learns much more general information from the very limited human data it was given, so it can play really well with all kinds of players of different skill levels. Here you see a selection of them. And all this from learning in a computer game with just a tiny bit of human behavioral data. And it can even perform a rally of over a hundred hits. What a time to be alive!

So, does this get your mind going? What would you use this sim-to-real concept for? Let me know in the comments below. If you're looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required.
Sign up and launch an instance, and hold on to your papers, because with Lambda GPU Cloud, you can get on-demand A100 instances for $1.10 per hour versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So, join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambda-labs.com/papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time!
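For the technically curious, the sim-to-real feedback loop from earlier in the video — deploy, observe how real-world outcomes differ from the simulator's predictions, fold those logs back into the simulator, and repeat until the gap closes — is essentially iterative system identification. Here is a toy one-dimensional sketch; `real_world`, `Simulator`, and the linear dynamics are invented stand-ins for illustration, not the actual physics model from the paper:

```python
def real_world(ball):
    """Hidden 'true' dynamics that the simulator is trying to match."""
    return 0.8 * ball + 0.3

class Simulator:
    """A crude, adjustable model of the world, refined from real logs."""

    def __init__(self):
        self.scale, self.offset = 1.0, 0.0  # deliberately wrong initial physics

    def predict(self, ball):
        return self.scale * ball + self.offset

    def update(self, logs, lr=0.1):
        """Nudge simulator parameters toward logged real-world outcomes."""
        for ball, real in logs:
            err = real - self.predict(ball)  # sim-to-real gap on this sample
            self.scale += lr * err * ball
            self.offset += lr * err

sim = Simulator()
for _ in range(200):  # each round: deploy, log real outcomes, refit the sim
    logs = [(b / 10.0, real_world(b / 10.0)) for b in range(-10, 11)]
    sim.update(logs)

# Largest remaining disagreement between simulator and reality
sim_gap = max(abs(sim.predict(b / 10.0) - real_world(b / 10.0))
              for b in range(-10, 11))
```

After enough rounds, the simulator's parameters converge to the true dynamics and the sim-to-real gap shrinks toward zero — which is exactly why, in the video, knowledge learned in the refined simulation finally transfers to the real table.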
