Imagine you’re a young mother living on the outskirts of Kampala, Uganda. Due to unexpected medical expenses for your son, you decide to apply for a digital loan. You download a credit app that you heard about from a neighbor, answer a few questions about yourself, consent to have data on your phone shared, and wait 30 minutes. Then, you receive a rejection message: “Your credit limit is 0 UGX.”

You heard that having an active mobile wallet would increase your chances of being approved for a loan and made sure your MTN mobile wallet was active before applying. You’re confused about why your loan application was rejected and wonder how to improve your future chances. Most importantly, you’re still unsure how you will cover your son’s medical expenses.

Understanding the reasons behind credit decisions is increasingly difficult as algorithms, rather than front-line staff, make and communicate outcomes. Complex algorithms, fueled by alternative data, are driving financial decisions about and for customers — from underwriting to insurance to eligibility for social protection. As algorithms become more commonplace, we must look for ways to increase transparency around decisions and inform low-income customers about their options for rectification.

In the Dark on Automated Decisions

In honor of World Consumer Rights Day, CFI is researching how credit decisions are made and communicated to low-income consumers, the rights consumers deserve, and the tools they need to understand – and be able to challenge – those decisions.

In the not-so-distant past, most financial institutions relied on front-line staff to make and communicate decisions to customers. If a customer was rejected for a loan or insurance product, she could ask a loan officer to explain how her history was evaluated and could work with the officer to correct data errors. However, as digital models leveraging algorithms gain traction, the transparency that customers used to rely on from their in-person engagements with loan officers or agents has disappeared.

Because most customers are unaware of how an underwriting algorithm works or what data inputs are used, many are also unclear about how a decision is made or whom to ask for more information. This lack of clarity was evident in CFI’s survey-based research in Rwanda. Out of a sample of 30 digital borrowers, 16 respondents had been rejected for a digital loan, but only 10 of those recalled receiving an explanation for the denial. Of those 10, six were dissatisfied with the provider’s communication; some received only a basic explanation such as, “Your credit limit is zero.”

When the full survey sample was asked to describe what an acceptable explanation of loan denial might look like, there was a clear ask for more specificity and more communication. A 52-year-old woman shared her experience: “They denied me a loan because I delayed to repay when I was in a hospital and very sick. If they had called to explain the reason for the denial, I would have explained my condition.” A 38-year-old woman reasoned: “When you delay to repay, they call to remind you about the loan. They should then do the same when you are denied a loan.”

Help for Consumers: Legislation, Awareness, Support

So, what can consumers do in this increasingly digital era? Most data protection frameworks give consumers the right to access and rectify their data. However, our Rwanda research suggests that consumers are largely unaware of what data is being used as an input in the first place. Data protection efforts – like the 2019 Kenyan Data Protection Act, which gives individuals the right to request the rectification of personal data that is “inaccurate, out-of-date, incomplete or misleading” – are a good first step, but more needs to be done to communicate with customers about data inputs, decisions, and their digital rights.

Many nascent data protection frameworks offer consumers the right to be informed if they’ve been subjected to an automated decision by an algorithm. For instance, the Rwandan Data Privacy Law mandates that individuals be informed about the logic involved in automated decisions at the time of personal data collection. Brazil’s Data Protection Act gives consumers the right to request a review of decisions made solely through automated processing of their personal data; the review should disclose the criteria and procedures used to reach the decision.

But what this will look like in practice, and how it will be enforced, remains to be seen. And given what we know about the challenges facing low-income consumers in accessing and feeling empowered to use grievance redressal mechanisms, there is concern that these data rights will not be fully exercised in a way that keeps companies accountable.

Consumer advocacy organizations are a potential avenue to help raise the voice of consumers vis-à-vis their digital rights. For instance, our research on government-to-person digital payments during COVID-19 found that civil society organizations were often the first to sound the alarm about digital systems failing or customers being poorly served.

Whether it’s a mother in Uganda applying for a credit product to pay for medical expenses, or a farmer looking to insure his crops in the face of an increasingly unpredictable climate, we must support consumers’ voices and increase transparency around how digital decisions are made.

Along with other leading consumer advocates around the world and in partnership with Consumers International, we’re excited to participate in the Fair Digital Finance Forum this week. Take a look at their program to get the lay of the land, and stay up to date with CFI’s consumer protection workstream.


Authors

Alex Rizzi

Former Senior Research Director, Consumer Data Opportunities and Risks

During her time at CFI from 2012-2024, Alex was an advocate for consumer protection and evidence-based inclusive finance practices.

She managed the responsible data practices research portfolio at CFI which focuses on opportunities and risks for low-income consumers from data-driven financial services. Previously, she was a senior director for CFI’s Smart Campaign initiative, where she managed consumer-protection learning agendas and helped launch the Client Protection Certification Program. She has participated in multiple industry working groups on responsible finance, including an advisory group to GSMA’s mobile money certification program and the Microfinance Center’s social performance fund steering committee.

Alex is a graduate of Princeton University and holds a master’s degree from Georgetown’s foreign service school, as well as a certificate in digital money from the Digital Frontiers Institute and The Fletcher School at Tufts University.

Jayshree Venkatesan

Vice President, Consumer Protection & Strategic Industry Engagement

As Vice President of Consumer Protection and Strategic Industry Engagement, Jayshree leads CFI’s consumer protection research agenda and partnership strategy, contributing to a diverse global portfolio. Her work focuses on emerging risks at the intersection of technology and financial services, with a particular emphasis on human–technology interactions and their implications for consumer protection.

In her role, Jayshree oversees two flagship convenings: Financial Inclusion Week (FIW) and the Responsible Finance Forum (RFF). FIW is the sector’s largest virtual event, which drew more than 3,500 participants from over 140 countries in 2025. RFF is a global platform advancing responsible finance and addressing consumer protection challenges, held annually alongside the G20 Global Partnership for Financial Inclusion (GPFI) meetings.

With over two decades of experience spanning structured finance, innovative business models, consumer research, and policy engagement, Jayshree is deeply committed to advancing financial inclusion and economic development worldwide. Prior to joining CFI, she spent nearly a decade as an independent consultant, advising leading global institutions including CGAP, the World Bank, JICA, and ITAD on customer-centric approaches and barriers faced by low-income populations in accessing and using formal financial services. Earlier in her career, Jayshree was part of the founding team at IFMR (now Dvara Trust) in India, where she led the country’s first mezzanine fund for microfinance, which later evolved into an alternative investment fund. She began her professional journey at ICICI Bank, building a strong foundation in finance.

In addition to her work at CFI, Jayshree is a Senior Policy Fellow at the Leir Institute at the Fletcher School of Law and Diplomacy, where she focuses on financial inclusion challenges affecting vulnerable populations, including migrants and refugees. She has also served as adjunct faculty at the Fletcher School, teaching decision analysis for business. Jayshree is a recipient of the Chevening Fellowship for Leadership from the UK Foreign and Commonwealth Office, completed at King’s College, London.

She earned an MA in International Relations from the Fletcher School of Law and Diplomacy, an MBA from the Management Development Institute in Gurgaon, and an undergraduate degree in Mathematics from Mumbai University.
