
Two Minute Papers: New AI Makes Amazing DeepFakes In a Blink of an Eye!

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to transform ourselves into cartoon characters, and it is going to be amazing. But how? Well, let's start out from Style Transfer. Style Transfer means mixing two images together: one provides the content, reimagined in the style of the other. This also works for video. What's more, there are also computer graphics techniques that can update these virtual worlds in real time as we mark up these examples on a piece of paper. How cool is that?

Now, that is all well and good, but it's nothing compared to what you're going to see today, because now we are going to run Style Transfer on ourselves. That sounds great, but wait, that's not exactly a new idea. Previous techniques have already tried it, so the question is, how well have they done? Well, let's have a look. Oh my, these are not so good, and even the better ones are not ready for prime time. However, hold on to your papers, because we now have a new technique where, again, in goes a video of us and a target style, and we get this. Whoa, these are so much better. So cool. And if only we could try it ourselves right now. Maybe that is possible; I'll tell you in a moment.

And it doesn't stop there, this paper has a ton more in the tank. For instance, this slider is incredible. By using it, we can tell the AI how much the style should influence the video. These results are going by quickly, so I'll stop the process here and there so we can have a little more time to have a look together. I particularly like the fact that we have a ton of control over the jawline and the eyes, and of course, if we wish, these features can be exaggerated a great deal, or we can be a little more subtle with them. And everything in between these two is also possible. And have a look at this one too. This is one of my favorite parts of the paper. Are you seeing it? Well, look here.
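As an aside, the content-plus-style mixing described above can be made concrete with a tiny sketch. This is not the paper's actual method — classic neural style transfer (in the spirit of Gatys et al.) compares Gram matrices of deep feature maps, and the weight below is only a stand-in for the style-strength slider mentioned in the video. The feature arrays here are random placeholders, not real network activations.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of feature maps: channel-to-channel correlations.
    features has shape (channels, height * width)."""
    return features @ features.T / features.shape[1]

def style_content_loss(generated, content, style, style_weight=0.5):
    """Weighted mix of a content term (feature distance) and a style term
    (Gram matrix distance). style_weight acts like the slider in the video:
    0.0 keeps only the content, 1.0 pushes fully toward the style."""
    content_loss = np.mean((generated - content) ** 2)
    style_loss = np.mean((gram_matrix(generated) - gram_matrix(style)) ** 2)
    return (1.0 - style_weight) * content_loss + style_weight * style_loss

# Stand-in feature maps (in practice these come from a pretrained network).
rng = np.random.default_rng(42)
content_feats = rng.standard_normal((8, 16))
style_feats = rng.standard_normal((8, 16))
generated_feats = content_feats.copy()

# With the slider at 0, an exact copy of the content is a perfect match.
print(style_content_loss(generated_feats, content_feats, style_feats, 0.0))  # 0.0
```

An optimizer would then adjust the generated image to reduce this loss; sliding `style_weight` between 0 and 1 trades content fidelity for style strength.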
Our input person has long hair, but the style reference has short hair. And I love how the technique re-imagines the input person's hair in the style of the reference as well. It doesn't even break a sweat. This is such an amazing usability feature. And it supports a variety of different styles; just choose the movie and the character of your liking. And there we go, I love it.

And just think about the fact that a couple of papers before this one, in 2020, this was what was possible. Even inpainting real images of human faces was quite challenging. And today, just a couple more papers down the line, we can do so much better, with so much more artistic control, and all this for video.

So, let's pop the question: how long do we have to wait for such a result? And this is where I fell off the chair when reading this paper. Don't blink. Why? Because that's exactly how long it takes for each image. We are talking high-resolution images, and we get 5 to 10 of those every second. 5 to 10. Wow. I would absolutely love to see what the amazing artists among you Fellow Scholars will be able to do with this. This could make the job of creating virtual actors for animation movies easier. And I bet it will be a super fun tool for video conferencing with our friends and beloved ones, and even for putting our copies into virtual worlds. What a time to be alive.

Now, wait, all that is well and good, but when do we get to use it? Well, I have two pieces of good news for you. Good news number one: the source code of this project is available free of charge for everyone. And good news number two: as of the making of this video, you can also try it yourself online. The links are available in the video description, and make sure to read the instructions carefully. Also note that the web app is a bit slower than running it locally, but of course it is so much more convenient for most people. Now, not even this technique is perfect; as for almost all deepfake-related techniques, teeth are usually a problem.
And oh yes, sure enough, it is a problem here too. But just think about how far we have come in just a couple of papers. And imagine what we will be able to do a couple more papers down the line. My goodness. So, this was a paper from the amazing SIGGRAPH Asia conference, which is one of the most prestigious venues in computer graphics research. Having a paper published there is perhaps the equivalent of an Olympic gold medal for a computer graphics researcher. Huge congratulations to the authors. So, what do you think? What would you use this for? Let me know in the comments below.

If you are looking for inexpensive cloud GPUs for AI, Lambda now offers the best prices in the world for GPU cloud compute. No commitments or negotiation required. Just sign up and launch an instance. And hold on to your papers, because with Lambda GPU Cloud you can get on-demand A100 instances for $1.10 per hour, versus $4.10 per hour with AWS. That's 73% savings. Did I mention they also offer persistent storage? So join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
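As a closing back-of-the-envelope note, the "don't blink" speed claim from earlier checks out numerically: a throughput of 5 to 10 high-resolution frames per second works out to 100–200 milliseconds per frame, which is roughly the duration of a human blink.

```python
def frame_latency_ms(fps):
    """Per-frame latency in milliseconds for a given frames-per-second throughput."""
    return 1000.0 / fps

# 5 to 10 frames per second, as quoted in the video:
slow, fast = frame_latency_ms(5), frame_latency_ms(10)
print(f"{fast:.0f}-{slow:.0f} ms per frame")  # 100-200 ms per frame
```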
