Imagine you’re a young mother living on the outskirts of Kampala, Uganda. Due to unexpected medical expenses for your son, you decide to apply for a digital loan. You download a credit app that you heard about from a neighbor, answer a few questions about yourself, consent to have data on your phone shared, and wait 30 minutes. Then, you receive a rejection message: “Your credit limit is 0 UGX.”

You heard that having an active mobile wallet would increase your chances of being approved for a loan and made sure your MTN mobile wallet was active before applying. You’re confused about why your loan application was rejected and wonder how to improve your future chances. Most importantly, you’re still unsure how you will cover your son’s medical expenses.

Understanding the reasons behind credit decisions is increasingly difficult as algorithms, rather than front-line staff, make and communicate outcomes. Complex algorithms, fueled by alternative data, are driving financial decisions about and for customers, from underwriting to insurance to eligibility for social protection. As algorithms become more commonplace, we must look for ways to increase transparency around those decisions and inform low-income customers about their options for rectification.

In the Dark on Automated Decisions

In honor of World Consumer Rights Day, CFI is researching how credit decisions are made and communicated to low-income consumers, the rights consumers deserve, and the tools they need to understand – and be able to challenge – those decisions.

In the not-so-distant past, most financial institutions relied on front-line staff to make and communicate decisions to customers. If a customer was rejected for a loan or insurance product, she could ask a loan officer to explain how her history was evaluated and could work with the officer to correct data errors. However, as digital models leveraging algorithms gain traction, the transparency that customers used to rely on from their in-person engagements with loan officers or agents has disappeared.

Because most customers do not know how an underwriting algorithm works or what data inputs it uses, they are often unclear about how a decision was made or whom to ask for more information. This lack of clarity was evident in CFI’s survey-based research in Rwanda. Out of a sample of 30 digital borrowers in Rwanda, 16 respondents had been rejected for a digital loan, but only 10 of those recalled receiving an explanation for the denial. Of those 10, six were dissatisfied with the provider’s communication; some received only a basic explanation such as, “Your credit limit is zero.”

When the full survey sample was asked to describe what an acceptable explanation of loan denial might look like, there was a clear ask for more specificity and more communication. A 52-year-old woman shared her experience: “They denied me a loan because I delayed to repay when I was in a hospital and very sick. If they had called to explain the reason for the denial, I would have explained my condition.” A 38-year-old woman reasoned: “When you delay to repay, they call to remind you about the loan. They should then do the same when you are denied a loan.”

Help for Consumers: Legislation, Awareness, Support

So, what can consumers do in this increasingly digital era? Most data protection frameworks give consumers the right to access and rectify their data. However, our Rwanda research suggests that consumers are largely unaware of what data is being collected and used in the first place. Data protection efforts – like the 2019 Kenyan Data Protection Act, which gives individuals the right to request the rectification of personal data that is “inaccurate, out-of-date, incomplete or misleading” – are a good first step, but more needs to be done to communicate to customers about data inputs, decisions, and their digital rights.

Many nascent data protection frameworks also give consumers the right to be informed when they have been subjected to an automated decision. For instance, the Rwandan Data Privacy Law mandates that individuals be informed about the logic involved in an automated decision at the time their personal data is collected. Brazil’s Data Protection Act gives consumers the right to request a review of a decision made solely through automated processing of their personal data; that review should include the criteria and procedures used to reach the decision.

But what this will look like in practice, and how it will be enforced, remains to be seen. And given what we know about the challenges low-income consumers face in accessing and feeling empowered to use grievance redressal mechanisms, there is concern that these data rights will not be fully exercised in a way that holds companies accountable.

Consumer advocacy organizations are a potential avenue for raising the voice of consumers vis-à-vis their digital rights. For instance, our research on government-to-person digital payments during COVID-19 found that civil society organizations were often the first to sound the alarm when digital systems failed or customers were poorly served.

Whether it’s a mother in Uganda applying for a credit product to pay for medical expenses, or a farmer looking to insure his crops in the face of an increasingly unpredictable climate, we must support consumers’ voices and increase transparency around how digital decisions are made.

Along with other leading consumer advocates around the world and in partnership with Consumers International, we’re excited to participate in the Fair Digital Finance Forum this week. Take a look at their program to get the lay of the land and stay up to date with CFI’s consumer protection workstream.

Alex Rizzi

Senior Research Director, Consumer Data Opportunities and Risks

Since joining CFI in 2012, Alex has been an advocate for consumer protection and evidence-based inclusive finance practices.

She manages the responsible data practices research portfolio at CFI which focuses on opportunities and risks for low-income consumers from data-driven financial services. Previously, she was a senior director for CFI’s Smart Campaign initiative, where she managed consumer-protection learning agendas and helped launch the Client Protection Certification Program.

Prior to joining CFI, Alex was a project development manager at Innovations for Poverty Action, where she helped create its microsavings and payments innovation initiative (now part of its global financial inclusion initiative), a research fund focused on impact evaluations of savings products and payment systems. She also worked at the Centre for Microfinance at IFMR Research (now Krea University) in Chennai, India, where she was the head of knowledge management and dissemination.

She has participated in multiple industry working groups on responsible finance, including an advisory group to GSMA’s mobile money certification program and the Microfinance Center’s social performance fund steering committee.

Alex is a graduate of Princeton University and holds a master’s degree from Georgetown University’s School of Foreign Service, as well as a certificate in digital money from the Digital Frontiers Institute and The Fletcher School at Tufts University. She speaks conversational French and needs to work on her Italian.

Jayshree Venkatesan

Senior Research Director, Consumer Protection & Responsible Finance

Jayshree leads research on fintechs, platforms, and other DFS providers to understand their incentives and practices and whether they serve low-income consumers. She contributes to advocacy on consumer protection and other policy priorities for inclusive finance, and co-leads the development of influence strategies and campaigns with the communications team.

Jayshree joins CFI after working for eight years as an independent global consultant to partners including CGAP, the World Bank, JICA, and ITAD. As a consultant, she worked on customer-centric business models and the challenges customers face in accessing and using financial services. From 2009 to 2013, she was part of the founding team at IFMR Trust (now Dvara Trust), where she led a mezzanine fund that invested in microfinance institutions in India.

Jayshree is a senior policy fellow at the Leir Institute, housed within the Fletcher School of Law and Diplomacy, where she works on the challenges that vulnerable segments, such as migrants and refugees, face in accessing financial services. In 2014, Jayshree was awarded the Chevening fellowship for leadership by the Foreign and Commonwealth Office, UK. She earned an MA in international relations from the Fletcher School of Law and Diplomacy, an MBA from MDI Gurgaon (India), and a bachelor’s degree in mathematics from Mumbai University, graduating at the top of her class.
