Harms from the Use of Personal Information
These concerns are not only hypothetical. The collection and use of consumer data by financial services providers has already caused harm, in both well-known and relatively obscure instances. Opportunistic providers have exploited customers’ personal information for their own ends with devastating effect. In Kenya, digital lenders published the details of defaulters on Facebook, using public humiliation as a debt collection tactic. In China, lenders have used information about students’ financial hardship to offer loans that are easy to access but carry high interest rates and severe penalties for default. A spate of suicides followed among students who later defaulted on those loans.
Data changes power in relationships. If a customer surrenders large amounts of personal information to a company, they become vulnerable to numerous forms of intimidation and exploitation which they cannot anticipate or control. An inherently unequal power relationship has become more unequal.
These opportunities for abuse are greater in jurisdictions where there is no general data protection law or responsible lending law, or little effective enforcement of these laws, as is the case in many developing countries. But even with such laws in place, the use of personal information often does consumers no favors.
Ryan Calo, Assistant Professor at the University of Washington School of Law, has described the practice of vulnerability-based marketing, which uses personal data to target consumers based on their particular weaknesses. As Calo points out, in the online environment companies can go further and engineer moments of vulnerability, designing the timing, context and interface of an online transaction in a way that induces frailty in that particular individual and influences the consumer to act against their own best interests.
Providers who engage in such exploitative conduct exist, even on the frontiers of financial inclusion. I have heard the representatives of providers on stage at a conference or summit recite the mantra, “We would rather ask forgiveness than permission.”
But many providers and organizations are genuinely committed to the pursuit of financial inclusion and customer-centric business models, including the fair treatment of customer information.
Unfortunately, even responsible, well-intentioned players can expose their customers to risks. Here it is necessary to consider both the risk of unintended harm from the use of personal information and the risks that arise from its collection alone.
The consequences of new types of collection and analysis of information enabled by rapid advances in technology are still being discovered. Many champion the use of algorithmic processing and particularly machine learning to gain insights from big data, and a number of the advances outlined earlier were achieved through such processing. However, researchers have also revealed that these algorithms may discriminate, exclude and produce otherwise inaccurate conclusions to the detriment of consumers.
Sometimes these tendencies are built into the program itself as a result of human bias. Algorithms are used to identify “high value” and “low value” consumers, presenting the greatest risk to those who are already vulnerable and disadvantaged. Algorithms may also produce results which are plainly wrong when the data being processed is unreliable. This is a major issue in some developing countries, where studies have shown that large percentages of the data held are inaccurate, incomplete or out of date.
In other cases, machine learning produces its own discriminatory tendencies. This flawed analysis cannot always be detected or explained, particularly given increasing reliance on “black box” algorithms which produce results based on their own form of reasoning, not evident even to their creators.
These data practices may bring the comfort of scientific terminology, quantitative analysis, and sharp-edged graphs and tables, but this does not make them immune from embedded bias, error and unjust outcomes.
Harms from Collection Alone
Acknowledging these risks and harms, some argue that we need only be concerned with the misuse of personal information. Collection alone is innocuous; misuse can be identified and addressed. This approach would permit businesses and governments to harvest personal data at will, unconfined by regulation, then determine at a later date how they might use the information and whether the proposed use is lawful and appropriate. This approach is flawed.
The simple act of collecting and storing an individual’s personal information significantly increases the risk of harm to that person. As soon as we collect and store personal information, we increase the “attack surface” – that is, we increase the opportunities for that information to be hacked, stolen or used without authorization. The more data is stored and the longer it is stored, the greater that risk becomes.
These breaches can cause severe harm. Identity theft can cause a lifetime of expense and exclusion for the individual concerned. And harm is not limited to the individual. These events can have drastic consequences for consumer confidence, trust in the company holding their information and trust in financial services more broadly, working in direct opposition to the goals of financial inclusion.
Major breaches of Equifax, Facebook and the US Office of Personnel Management illustrate the reputational damage and financial losses that corporations and governments suffer when the data they hold is compromised. Bruce Schneier, a security technology expert at the Berkman Center for Internet and Society, Harvard University, has long pointed out that, for the firm holding the information, data can be a “toxic asset”.
Even projects launched with the best intentions are subject to these risks. Taylor gives the example of the Harvard Signal Program on Human Security and Technology which aimed, in part, to identify forensic evidence of alleged massacres in Sudan with the advantage of unprecedented detail from satellite imagery analysis. However, researchers on the program discovered that hostile actors appeared to be hacking into the Harvard systems and using the project’s data and communications to target their enemy.
Information that companies store about a consumer may also be accessed by governments, which do not always have due regard for the rule of law. In the East and West alike, the media has revealed numerous occasions where governments have required companies to surrender information about individuals in secret and without due process.
We should not forget that in some countries it is illegal to express dissent or criticize the government, to practice a certain religion or engage in homosexual activity. Information that seems harmless viewed in isolation – a person’s transaction history, social media posts or location data – can reveal highly sensitive information, especially when combined with data from other sources.
The mere collection and storage of information can be profoundly unsafe for the individual concerned.