The use of algorithms to automate credit decisions, detect fraud, measure risk, and provide customer support has become widespread in the financial sector and is increasingly common in the financial inclusion industry. Artificial intelligence holds immense promise for developing and distributing financial products to previously underserved, low-income consumers by reducing costs and increasing efficiency and scale.
Yet concerns about the fairness of machine learning systems and algorithmic bias are warranted. Technological systems can inherit the biases of the society in which they are developed through historically available data, training data selection, and the choice of machine learning techniques. Other concerns compound the problem:
- Algorithms are frequently not designed to be interpretable or explainable, and they are treated as trade secrets that are, often rightly, protected from calls for full transparency.
- Consumers are often unaware that they are the subject of an automated decision, or they lack avenues for redress when they are.
- Government oversight is lacking.
As algorithms, machine learning, and other data analytics are deployed in developing countries, many of these risks are exacerbated by data limitations and lower levels of digital literacy. Yet the solution is not to halt innovation, but rather to set up thoughtful oversight measures that align with the technical development process and allow policymakers to interpret results down the road.
The Smart Campaign has proposed what this oversight process could look like for individual financial service providers in its Draft Standards for Digital Credit. Other industries relying on AI and machine learning have begun to develop guidelines and technical capabilities to audit algorithms for bias and discrimination. This webinar will bring together experts with backgrounds in policy, data science, and financial inclusion to discuss the risks to consumers and emerging mitigation strategies (or theories) that can help ensure trust in digital financial services.
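To make the idea of "auditing algorithms for bias" concrete, here is a minimal sketch of one common check: comparing a credit model's approval rates across groups (a demographic parity or "four-fifths rule" style test). It is purely illustrative; the data, column names, group labels, and the 0.8 threshold are assumptions for the example, not requirements drawn from the Smart Campaign's standards or any other framework mentioned above.

```python
# Minimal sketch of a demographic-parity style bias audit on credit decisions.
# All names and thresholds here are illustrative assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group_label, approved_bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparate_impact_ratio(decisions, protected_group, reference_group):
    """Ratio of approval rates; values well below ~0.8 are a common warning sign."""
    rates = approval_rates(decisions)
    return rates[protected_group] / rates[reference_group]

# Hypothetical decisions: (group, was the loan approved?)
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]

ratio = disparate_impact_ratio(sample, protected_group="B", reference_group="A")
print(f"Approval rates by group: {approval_rates(sample)}")
print(f"Disparate impact ratio (B vs. A): {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: approval rates differ substantially across groups.")
```

A check like this is only a starting point: it surfaces disparities in outcomes but says nothing about their cause, which is why auditing frameworks typically pair such metrics with reviews of data provenance, model design, and recourse for affected consumers.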