Greg Diamos Interview – Accelerating Deep Learning with Mixed Precision Arithmetic
In this show I speak with Greg Diamos, senior computer systems researcher at Baidu. Greg joined me before his talk at the Deep Learning Summit, where he spoke on “The Next Generation of AI Chips.”
Greg’s talk focused on some work his team was involved in that accelerates deep learning training by using mixed 16-bit and 32-bit floating point arithmetic. We cover a ton of interesting ground in this conversation, and if you’re interested in systems-level thinking around scaling and accelerating deep learning, you’re really going to like this one. And of course, if you like this one, you’re also going to like TWiML Talk #14 with Greg’s former colleague, Shubho Sengupta, which covers a bunch of related topics.
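To give a flavor of the general idea behind mixed-precision training (this is an illustrative sketch, not Baidu’s actual implementation): a float32 “master” copy of the weights is kept for updates, the forward and backward math runs in float16, and gradients are scaled up before the float16 backward pass so small values survive float16’s narrow dynamic range, then unscaled in float32.

```python
import numpy as np

# Illustrative mixed-precision sketch (assumed setup, not the system from
# the talk): minimize (w . x)^2 for a toy linear model.

rng = np.random.default_rng(0)
master_w = rng.normal(size=4).astype(np.float32)  # float32 master weights
x = rng.normal(size=4).astype(np.float32)
lr, loss_scale = 0.01, 1024.0

for _ in range(10):
    w16, x16 = master_w.astype(np.float16), x.astype(np.float16)
    pred = np.dot(w16, x16)                              # forward in float16
    grad16 = (2 * pred * x16) * np.float16(loss_scale)   # scaled float16 gradient
    grad32 = grad16.astype(np.float32) / loss_scale      # unscale in float32
    master_w -= lr * grad32                              # update master copy

print(master_w.dtype)  # the master weights stay float32 throughout
```

The loss scale here (1024) is a hypothetical constant choice; production systems typically adjust it dynamically to avoid float16 overflow and underflow.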
This show is part of a series of shows recorded at the RE•WORK Deep Learning Summit in Montreal back in October. This was a great event and, in fact, their next event, the Deep Learning Summit San Francisco is right around the corner on January 25th and 26th, and will feature more leading researchers and technologists like the ones you’ll hear here on the show this week, including Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration.
The notes for this show can be found at twimlai.com/talk/97.