Podcast: 249 Videos


Machine Learning as a Software Engineering Enterprise with Charles Isbell – #441

As we continue our NeurIPS 2020 series, we’re joined by friend-of-the-show Charles Isbell, Dean, John P. Imlay, Jr. Chair, and professor at the Georgia Tech College of Computing. Charles gave an invited talk at this year’s conference, You Can’t Escape Hyperparameters and Latent Variables: Machine Learning as a Software Engineering Enterprise. In our […]

AD as it relates to Differentiable Programming for ML @ TWiML Online Meetup Americas 20 March 2019

This video is a recap of our March 2019 Americas TWiML Online Meetup: Automatic Differentiation as it relates to Differentiable Programming for Machine Learning. In this month’s community segment, we discuss our upcoming April Meetups, NVIDIA’s Jetson Nano platform, NVIDIA’s cloud strategy, attention in NLP, and Sam’s Kubernetes eBook. […]

Approaches to Fairness in Machine Learning with Richard Zemel – TWiML Talk #209

Today we continue our exploration of Trust in AI with this interview with Richard Zemel, Professor in the Department of Computer Science at the University of Toronto and Research Director at the Vector Institute. In our conversation, Rich describes some of his work on fairness in machine learning algorithms, including how he defines both group and […]

Neural Synthesis of Binaural Speech From Mono Audio with Alexander Richard – #514

Today we’re joined by Alexander Richard, a research scientist at Facebook Reality Labs, and recipient of the ICLR Best Paper Award for his paper “Neural Synthesis of Binaural Speech From Mono Audio.” We begin our conversation with a look into the charter of Facebook Reality Labs, and Alex’s specific Codec Avatar project, where they’re developing […]

Benchmarking ML with MLPerf w/ Peter Mattson – #434

Today we’re joined by Peter Mattson, General Chair at MLPerf, a Staff Engineer at Google, and President of MLCommons. In our conversation with Peter, we discuss MLCommons and MLPerf: the former an open engineering group with the goal of accelerating machine learning innovation, and the latter a set of standardized machine learning speed benchmarks used […]

Evolving AI Systems Gracefully with Stefano Soatto – #502

Today we’re joined by Stefano Soatto, VP of AI applications science at AWS and a professor of computer science at UCLA. Our conversation with Stefano centers on recent research of his called Graceful AI, which focuses on how to make trained systems evolve gracefully. We discuss the broader motivation for this research and the potential […]

TWiML x Fast.ai v3 Deep Learning Part 2 Study Group – Lesson 11 – Spring 2019

This video is a recap of our TWiML Online Fast.ai Deep Learning Part 2 review study group. In this session, we review Lesson 11 of the Fast.ai v3 Deep Learning Part 2 course, covering Data Blocks, Optimization, Weight Decay, and Transforms, plus a mini-presentation. It’s not too late to […]

Causal Modeling in Machine Learning with Robert Ness 5/27/21

Causality and probabilistic modeling are some of the hottest topics in machine learning. In early 2020 we launched a new cohort-based course on the topic with instructor Robert Osazuwa Ness. The course has received great feedback from students: “I liked the workshop very much. Robert did a great job of reaching out to students to […]

Machine Learning for Equitable Healthcare Outcomes with Irene Chen – #479

Today we’re joined by Irene Chen, a Ph.D. student at MIT. Irene’s research is focused on developing new machine learning methods specifically for healthcare, through the lens of questions of equity and inclusion. In our conversation, we explore some of the various projects that Irene has worked on, including an early detection program for intimate […]

Deep Reinforcement Learning for Game Testing at EA with Konrad Tollmar – #517

Today we’re joined by Konrad Tollmar, research director at Electronic Arts and an associate professor at KTH. In our conversation, we explore his role as the lead of EA’s applied research team SEED and the ways that they’re applying ML/AI across popular franchises like Apex Legends, Madden, and FIFA. We break down a few papers […]

Advancing Machine Learning at Capital One with Dave Castillo – #328

Today we’re joined by Dave Castillo, Managing Vice President for ML at Capital One and head of their Center for Machine Learning. We caught up with Dave at re:Invent to discuss the aforementioned Center for Machine Learning, and what has changed since our last conversation with Capital One, which you can find at twimlai.com/talk/147. In […]

MOReL: Model-Based Offline Reinforcement Learning with Aravind Rajeswaran – #442

Today we close out our NeurIPS series joined by Aravind Rajeswaran, a PhD student in machine learning and robotics at the University of Washington. At NeurIPS, Aravind presented his paper “MOReL: Model-Based Offline Reinforcement Learning.” In our conversation, we explore model-based reinforcement learning and whether models are a “prerequisite” to achieve something analogous to transfer […]

On George Floyd, Empathy, and the Road Ahead

Visit twimlai.com/blacklivesmatter for resources to support organizations pushing for social equity like Black Lives Matter, and groups offering relief for those jailed for exercising their rights to peaceful protest.

Generating SQL [Database Queries] from Natural Language with Yanshuai Cao – #519

Today we’re joined by Yanshuai Cao, a senior research team lead at Borealis AI. In our conversation with Yanshuai, we explore his work on Turing, their natural language to SQL engine that allows users to get insights from relational databases without having to write code. We do a bit of compare and contrast with the […]

Common Sense as an Algorithmic Framework with Dileep George – #430

Today we’re joined by Dileep George, Founder and CTO of Vicarious. Dileep, who was also a co-founder of Numenta, works at the intersection of AI research and neuroscience, and pioneered hierarchical temporal memory. In our conversation, we explore the importance of mimicking the brain when looking to achieve artificial general intelligence, the […]

Noah Gift Interview – Growth Hacking Sports with Machine Learning

In this episode of our AI in Sports series I’m joined by Noah Gift, Founder and Consulting CTO at Pragmatic Labs and professor at UC Davis. Noah previously worked for a startup called Score Sports, which used machine learning to uncover athlete influence on social media and internet platforms. We look into some of his […]

Trends in Machine Learning & Deep Learning with Zack Lipton – #334

Today we kick off our 2019 AI Rewind Series joined by Zack Lipton, a jointly appointed Professor in the Tepper School of Business and the Machine Learning Department at CMU. You might remember Zack from our conversation earlier this year, “Fairwashing” and the Folly of ML Solutionism, which you can find at twimlai.com/talk/285. In our […]

TWiML x Fast.ai Deep Learning Part 2 Study Group – Lesson 1

This is a recording of the TWiML study group session reviewing Fast.ai’s Deep Learning from the Foundations course (aka Part 2). This session covers Lesson 8a from Part 2, Infrastructure and Broadcasting. It’s not too late to join the study group. Just follow these simple steps: 1. Head over to twimlai.com/meetup, and sign up for […]

Helping Fish Farmers Feed the World with Deep Learning w/ Bryton Shang – #327

Today we’re joined by Bryton Shang, Founder & CEO at Aquabyte. We caught up with Bryton after his talk at re:Invent’s ML Summit to discuss Aquabyte, a company focused on the application of computer vision to fish farming; how Bryton identified the various problems associated with mass fish farming; and how he eventually moved to Norway […]

The Future of Autonomous Systems with Gurdeep Pall – #450

Today we’re joined by Gurdeep Pall, Corporate Vice President at Microsoft. Gurdeep, who we had the pleasure of speaking with on his 31st anniversary at the company, has had a hand in creating quite a few influential projects, including Skype for Business (and Teams), and was part of the first team that shipped Wi-Fi as […]

Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez – #378

Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. Our main focus in the conversation is Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which explores compute-efficient training strategies, based on model size. We discuss the two main problems being solved; […]