Detection of protected characteristics bias in Machine Learning using R and Shiny
Bias has emerged as a key topic in Machine Learning (ML). Detecting biases of different kinds is an important control in the governance of ML models. Detecting and eliminating unwanted bias matters not only for regulatory compliance but also for building and maintaining trust in ML and for the commercial success of these applications. In this talk, we will present a methodology we've developed at Royal London for detecting protected-characteristic bias in our ML models. We've also built an R Shiny application that makes it easy for our Data Scientists to run this bias detection against any of our models.
Gwilym leads an Analytics and Data Science team at Royal London, the UK's largest Mutual Life and Pensions company. He has been with Royal London for nine years, and in his current role he applies the tools of Advanced Analytics, particularly Machine Learning, to solve complex problems in Life and Pensions. Before joining Royal London, Gwilym worked in a variety of Analytics, Insight and Data Science roles across several industries, including Telecoms, Construction, Banking, and Insurance.