XAI: Learning Fairness with Interpretable Machine Learning

Event Agenda:
10-min: Trends in Artificial Intelligence and Data Fairness: Tim Denley, Board Director, Chief Solutions Officer, KPMG Ignition Tokyo
A talk on KPMG’s view of data reliability and results, along with our efforts at KPMG Ignition Tokyo.

30-min: Learning Fairness with Interpretable Machine Learning: Serg Masis, Author of “Interpretable Machine Learning with Python”; Climate & Agronomic Data Scientist, Syngenta
An overview of methods used to detect and mitigate bias and to place guardrails that help ensure fairness, illustrated with Python examples (see the sketch after the agenda).

20-min: Q&A (please submit your questions on https://app.sli.do/event/duj7nyzp)
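
As a taste of the bias-detection methods the talk surveys, here is a minimal Python sketch (our own illustration, not material from the talk or the book) that computes demographic parity difference, the gap in positive-prediction rates between groups; all data and names are hypothetical:

import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rate across the groups in `group`."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Toy data: binary model predictions for members of groups "A" and "B".
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(demographic_parity_difference(y_pred, group))  # 0.75 vs. 0.25 -> 0.5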

This event will be moderated by Haiyang Peng, a Senior Scientist at KPMG Ignition Tokyo.

Special thanks to KPMG Ignition Tokyo (https://home.kpmg/jp/en/home/about/kit.html) and Machine Learning Tokyo (https://machinelearningtokyo.com/) for co-hosting the event with us!

For anyone who’s interested, our sister company Workera.ai launched a Fairness in AI assessment, now available in Beta! The assessment covers the foundations of AI fairness, data fairness (pre-processing), algorithmic fairness (in-processing), prediction fairness (post-processing), and fairness across the lifecycle of an AI project. After mastering the assessed concepts via expertly curated learning resources, you’ll be able to a) define the core components of AI fairness, b) identify sources of unfairness, and c) understand methods to mitigate unfairness.
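To make the pre-/in-/post-processing distinction concrete, here is a minimal Python sketch (our own illustration, not Workera’s assessment material) of one post-processing step: choosing a per-group decision threshold so that each group is selected at the same rate; all data and names are hypothetical:

import numpy as np

def equalize_selection_rates(scores: np.ndarray, group: np.ndarray, target_rate: float = 0.5) -> np.ndarray:
    """Binarize scores using a per-group threshold set at the
    (1 - target_rate) quantile of that group's scores, so every
    group ends up with roughly the same selection rate."""
    decisions = np.zeros_like(scores, dtype=int)
    for g in np.unique(group):
        mask = group == g
        threshold = np.quantile(scores[mask], 1 - target_rate)
        decisions[mask] = (scores[mask] >= threshold).astype(int)
    return decisions

# Toy data: model scores for members of groups "A" and "B".
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.6, 0.5, 0.2, 0.1])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(equalize_selection_rates(scores, group))  # half of each group selected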

You may sign up on Workera.ai (https://workera.ai/) to take the free assessment and begin learning!
