#74 Dr. ANDREW LAMPINEN – Symbolic behaviour in AI [UNPLUGGED]

Please note that in this interview Dr. Lampinen was expressing his personal opinions, which do not necessarily represent those of DeepMind. Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB Pod version: https://anchor.fm/machinelearningstreettalk/episodes/74-Dr–ANDREW-LAMPINEN—Symbolic-behaviour-in-AI-UNPLUGGED-e1h6far Dr. Andrew Lampinen is a Senior Research Scientist at DeepMind, and he thinks that symbols are subjective in the relativistic sense. Dr. Lampinen completed his PhD […]

#77 – Vitaliy Chiley (Cerebras)

Vitaliy Chiley is a Machine Learning Research Engineer at the next-generation computing hardware company Cerebras Systems. We spoke about how DL workloads, including sparse workloads, can run faster on Cerebras hardware. Pod: https://anchor.fm/machinelearningstreettalk/episodes/77—Vitaliy-Chiley-Cerebras-e1k1hvu [00:00:00] Housekeeping [00:01:08] Preamble [00:01:50] Vitaliy Chiley Introduction [00:03:11] Cerebras architecture [00:08:12] Memory management and FLOP utilisation [00:18:01] Centralised vs decentralised compute […]

#TWIMLfest: Accessibility and Computer Vision

Digital imagery is pervasive today. More than a billion images per day are produced and uploaded to social media sites, with many more embedded within websites, apps, digital documents, and eBooks. Engaging with digital imagery has become fundamental to participating in contemporary society, including education, the professions, e-commerce, civics, entertainment, and social interactions. However, most […]

#TWIMLfest: Deep Learning for Time Series in Industry

A survey of the promise and barriers to leveraging deep time series models. In this session we will go over some of the latest research on using LSTMs, other RNNs, and transformer models to forecast multivariate time series data and dive into how they compare to more classical methods. This session aims to present attendees […]

#TWIMLfest: Live Keynote Interview with Jeremy Howard – #421

Join Sam and friend of the show Jeremy Howard, creator of the popular fast.ai coursework, for a Keynote Interview! Sam and Jeremy explore topics such as the importance of building a strong community as a foundation, the current landscape and future of AI education, and of course, the recently released fast.ai course: Practical Deep Learning […]

#TWIMLfest: Live Keynote Interview with Shakir Mohamed – #418

In this special #TWIMLfest edition of the podcast, we’re joined by Shakir Mohamed, a Senior Research Scientist at DeepMind. Shakir is also a leader of Deep Learning Indaba, a non-profit organization whose mission is to Strengthen African Machine Learning and Artificial Intelligence. In our conversation with Shakir, we discuss his recent paper ‘Decolonial AI,’ the […]

#TWIMLfest: Live Keynote Interview with Suzana Ilić

In this special #TWIMLfest episode, we’re joined by Suzana Ilić, a computational linguist at Causaly and founder of Machine Learning Tokyo (MLT). Suzana joined us as a keynote speaker to discuss the origins of the MLT community, but we cover a lot of ground in this conversation. We briefly discuss Suzana’s work at Causaly, touching […]

#TWIMLfest: Live Watch Party and AMA with Milind Tambe

In this session, Sam is joined by Milind Tambe, Director of AI for Social Good at Google Research India, and Director of the Center for Research in Computation and Society (CRCS) at Harvard University. In the conversation, we explore Milind’s various research interests, most of which fall under the umbrella of AI for Social Impact, […]

#TWIMLfest: Office Hours – Natural Language Processing

In the Office Hours series, we invite experts and practitioners in various topic areas for AMA (ask-me-anything) style sessions to answer community member questions. The intent is to answer technical questions and/or help participants advance their specific projects and interests. This week’s topic will be centered on NLP! For more sessions like this, visit https://twimlfest.com! […]

#TWIMLfest: Office Hours – Reinforcement Learning

In the Office Hours series, we invite experts and practitioners in various topic areas for AMA (ask-me-anything) style sessions to answer community member questions. The intent is to answer technical questions and/or help participants advance their specific projects and interests. This week’s topic will be centered on Reinforcement Learning! Resources: Show notebooks in Drive – […]

Kai-Fu Lee on the Future of Artificial Intelligence

Kai-Fu Lee is one of the world’s leading Artificial Intelligence experts and a bestselling author. In conversation with Kamal Ahmed, former Editorial Director of the BBC, he discussed his new work of “scientific fiction”, AI 2041, co-authored with the celebrated novelist Chen Qiufan. He founded Microsoft Asia’s research lab that has trained CTOs and AI […]

039 – Lena Voita – NLP

Lena Voita is a Ph.D. student at the University of Edinburgh and University of Amsterdam. Previously, she was a research scientist at Yandex Research and worked closely with the Yandex Translate team. She still teaches NLP at the Yandex School of Data Analysis. She has created an exciting new NLP course on her website lena-voita.github.io […]

061: Interpolation, Extrapolation and Linearisation (Prof. Yann LeCun, Dr. Randall Balestriero)

We are now sponsored by Weights and Biases! Please visit our sponsor link: http://wandb.me/MLST Yann LeCun thinks that it’s specious to say neural network models are interpolating because in high dimensions, everything is extrapolation. Recently Dr. Randall Balestriero, Dr. Jerome Pesenti and Prof. Yann LeCun released their paper Learning in High Dimension Always Amounts to […]

100x Improvements in Deep Learning Performance with Sparsity with Subutai Ahmad – #562

Today we’re joined by Subutai Ahmad, VP of research at Numenta. While we’ve had numerous conversations about the biological inspirations of deep learning models with folks working at the intersection of deep learning and neuroscience, we dig into uncharted territory with Subutai. We set the stage by digging into some of the fundamental ideas behind Numenta’s […]

2018 Isaac Asimov Memorial Debate: Artificial Intelligence

Isaac Asimov’s famous Three Laws of Robotics might be seen as early safeguards for our reliance on artificial intelligence, but as Alexa guides our homes and automated cars replace human drivers, are those Three Laws enough?

2022: The Year in which Virtual Reality goes Mainstream

The Future of Virtual Reality has been shown at CES 2022 in the form of retina-display VR headsets, full-body tracking solutions and brain-computer interfaces, previewing what the future of full-dive virtual reality could look like. Companies such as Meta/Facebook, Google, Apple and Valve are all investing millions into making Virtual Reality […]

A breakthrough unfolds – DeepMind: The Podcast (Season 2, Episode 1)

In December 2019, DeepMind’s AI system, AlphaFold, solved a 50-year-old grand challenge in biology, known as the protein-folding problem. A headline in the journal Nature read, “It will change everything” and the President of the UK’s Royal Society called it a “stunning advance [that arrived] decades before many in the field would have predicted”. In […]

A developer's guide to responsible AI review processes

From startups to corporations across industries, organizations are creating AI principles and ethics review processes to complement technical approaches to developing ML and AI responsibly. Hear about emerging socio-technical practices, ML tools, and lessons learned from Google’s ethics review teams, who support developers as they build products. Resource: TensorFlow website → https://goo.gle/3KejoUZ Speakers: Madeleine Elish, […]

A friendly introduction to distributed training (ML Tech Talks)

Google Cloud Developer Advocate Nikita Namjoshi introduces how distributed training can dramatically reduce machine learning training times, explains how to make use of multiple GPUs with Data Parallelism vs Model Parallelism, and explores Synchronous vs Asynchronous Data Parallelism. Mesh TensorFlow → https://goo.gle/3sFPrHw Distributed Training with Keras tutorial → https://goo.gle/3FE6QEa GCP Reduction Server Blog → […]
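The core idea behind the synchronous data parallelism discussed in the talk can be illustrated with a toy sketch (the setup below is ours, not from the talk): each "worker" computes the gradient of a simple model's loss on its own data shard, and averaging the shard gradients reproduces the full-batch gradient a single machine would compute.

```python
# Toy illustration of synchronous data parallelism: two hypothetical
# workers each hold half of a batch for a 1-D linear model. Averaging
# their per-shard gradients (equal shard sizes) matches the full-batch
# gradient, which is what an all-reduce achieves in real training.

def grad(w, xs, ys):
    """Gradient of mean squared error 0.5 * mean((w*x - y)^2) w.r.t. w."""
    return sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [0.5, -1.0, 2.0, 3.0]
ys = [1.0, 0.0, 4.0, 5.5]
w = 0.0

full = grad(w, xs, ys)  # single-machine, full-batch gradient
# Each worker sees only its shard; gradients are then averaged.
avg = (grad(w, xs[:2], ys[:2]) + grad(w, xs[2:], ys[2:])) / 2

assert abs(full - avg) < 1e-12  # the two paths agree
```

In TensorFlow, the same averaging happens automatically when a Keras model is built and compiled inside a `tf.distribute.MirroredStrategy` scope, which is the style of API the linked Keras tutorial covers.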

A friendly introduction to linear algebra for ML (ML Tech Talks)

In this session of Machine Learning Tech Talks, Tai-Danae Bradley, Postdoc at X, the Moonshot Factory, will share a few ideas for linear algebra that appear in the context of Machine Learning. Chapters: 0:00 – Introduction 1:37 – Data Representations 15:02 – Vector Embeddings 31:52 – Dimensionality Reduction 37:11 – Conclusion Resources: Google Developer’s ML […]

A journey to protect the Great Barrier Reef using Machine Learning

Explore how Google teamed up with CSIRO to enhance monitoring efforts of harmful species on the Great Barrier Reef. Through a Kaggle competition, machine learning developers collaborated to train ML models identifying crown-of-thorns starfish outbreaks degrading the coral reef ecosystem. This project was designed to protect the Great Barrier Reef for generations to come. Subscribe […]

A Multi-tool for your Quantum Algorithmic Toolbox (Quantum Summer Symposium 2020)

Shelby Kimmel of Middlebury College presents a tool that can be used to design all kinds of quantum algorithms. This presentation was recorded on Day 2 of Google’s Quantum Summer Symposium 2020 (July 23, 2020). Check out the playlist for more videos from QSS 2020. Google’s Quantum Summer Symposium 2020 playlist → https://goo.gle/2Z149sN Subscribe to […]

A Social Scientist’s Perspective on AI with Eric Rice – #511

Today we’re joined by Eric Rice, associate professor at USC, and the co-director of the USC Center for Artificial Intelligence in Society. Eric is a sociologist by trade, and in our conversation, we explore how he has made extensive inroads within the machine learning community through collaborations with ML academics and researchers. We discuss some […]