Su-In Lee (UW): Interpretable Machine Learning in Precision Medicine
Modern machine learning models can accurately predict patient progress and outcomes; however, they are not interpretable, in the sense that they do not explain why selected features make sense or why a particular prediction was made. I will talk about my group’s efforts to address these challenges by developing interpretable machine learning techniques for a wide range of applications, including treating cancer based on a patient’s own molecular profile, finding therapeutic targets for Alzheimer’s disease, predicting chronic kidney disease, preventing complications during surgery, enabling pre-hospital predictions for trauma patients, and improving our understanding of pan-cancer biology and genome biology. Among these, I will mainly focus on two projects: MERGE, which uses machine learning to enable targeted treatment of acute myeloid leukemia, published in Nature Communications earlier this year; and Prescience, our explainable artificial intelligence system for preventing hypoxemia in patients under anesthesia, just featured on the cover of the most recent issue of Nature Biomedical Engineering.
We hope you will enjoy this and some of our 14k+ other artificial intelligence videos. We keep adding new channels and playlists all the time, so the number of fresh videos grows every day.