
The decision on whether to grant a loan to a given customer is an important one for society in general, but one with considerable consequences for the individual. By Mark Somers, director of research at 4most

For individuals, the decision over whether they are granted a loan can either enable or frustrate key life decisions, affecting everything from housing to livelihoods. For the economy, lending facilitates the mechanism by which money is created safely. Prudent lending is a prerequisite for a stable and growing economy.

Consumer credit decisions have, for the past 50 years or so, been automated based on statistical models of past customer behaviour. Over that time, the data and computing power available to make these important decisions have grown exponentially. It is now quite possible to accurately infer some of the most private and sensitive characteristics (such as race and religion) from the data captured automatically as the exhaust of a digital lifestyle. As we look to choose which of the available data to use, it is essential that we do so fairly.

While fairness is a concept we learn from an early age, it is difficult to encode what it is, specifically, that makes a decision ‘fair’ from a legal perspective, or indeed to specify this as a robust mathematical statement. Lenders need a pragmatic approach to support good credit decisions that are both accurate and legitimately defensible, based on the available data.

Equality paradox

One of the most natural ways to tackle the issue of fairness is to attempt to exclude manifestly unfair decision processes. To this end, most societies have a legal framework that makes it illegal to discriminate based on protected characteristics, such as race, gender and religion. While equality laws have been effective in eliminating examples of past unfair discrimination, there is less clarity on whether such rules would have the same effect on future cases.

However, several complicating factors spring to mind here:

  • There are multiple ways of considering equality, such as equality of outcome, of accuracy and of opportunity. It turns out that, when specified mathematically, not all of these can be satisfied simultaneously, leaving decision makers with a paradox as to which measure of equality to comply with, and which inequalities they are prepared to live with (see the sketch after this list).
  • While protected characteristics have been laid down in law, human society is constantly changing and laws will therefore change over time too. The current debates regarding the rights and treatment of non-binary genders and distinctions around LGBT+ sexual orientation suggest that views on how protected characteristics should be defined are not fixed. The fairness of a decision, however, should be resilient to such societal changes.
  • On a pragmatic level, assessing equality with respect to a protected characteristic requires the capture, storage and analysis of that characteristic. That seems deeply personal and, for some, offensive. Certainly, making it mandatory to disclose your protected characteristics would be very unfair, even if the objective was only ever to prove that decisions were not discriminatory. Making the disclosure voluntary would almost certainly introduce material bias, as non-disclosers are likely to be drawn disproportionately from discriminated-against groups, rendering the whole exercise pointless.
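
To illustrate the first point, the short Python sketch below computes three group-level measures from hand-picked, purely hypothetical confusion-matrix counts: selection rate (equality of outcome), true-positive rate (equality of opportunity) and positive predictive value (equality of accuracy among approvals). It is a toy example, not a description of any real portfolio, but it makes the paradox concrete: with unequal repayment base rates, matching two of the measures forces the third apart.

```python
# Illustrative only: hand-picked counts for two applicant groups scored by the
# same model at the same cut-off. Group A's repayment base rate is 0.60,
# group B's is 0.30.

def group_metrics(tp, fp, fn, tn):
    """Selection rate, true-positive rate and positive predictive value."""
    n = tp + fp + fn + tn
    selection_rate = (tp + fp) / n   # equality of outcome (who gets approved)
    tpr = tp / (tp + fn)             # equality of opportunity (good payers approved)
    ppv = tp / (tp + fp)             # equality of accuracy among approvals
    return selection_rate, tpr, ppv

group_a = dict(tp=50, fp=10, fn=10, tn=30)   # 100 applicants, 60 would repay
group_b = dict(tp=25, fp=5,  fn=5,  tn=65)   # 100 applicants, 30 would repay

for name, counts in [("A", group_a), ("B", group_b)]:
    sel, tpr, ppv = group_metrics(**counts)
    print(f"Group {name}: selection={sel:.2f}  TPR={tpr:.2f}  PPV={ppv:.2f}")

# Both groups see the same TPR (0.83) and PPV (0.83), yet selection rates differ
# (0.60 vs 0.30). Equalising selection rates instead would unbalance TPR or PPV,
# unless the base rates were equal or the model were perfect.
```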
[Chart: Fair Enough]

Concept of fairness

Given that fairness is not the same as equality, can we identify some other attributes of fairness that would provide a guide as to which equality measures are usable in different contexts? A review of the philosophical literature on the concept of fairness highlights the following attributes as being key considerations:

  • Causality — if something is a direct cause of the outcome, it may be reasonable to base a fair decision on it, in line with the magnitude of the effect.
  • Relevance — even an attribute that is not causal needs at least to be associated with the outcome to be used fairly, and it must be foreseeable that the information would be used in a decision.
  • Volition — the subject is more likely to think a decision is fair if they have discretion over the behaviours being used to make decisions (for example, if the speed at which a person chooses to drive affected their car insurance).
  • Reliability — fair decisions must be as predictable as possible and not include unnecessary randomness from poor-quality data.
  • Privacy — subjects should have control over their private data, and decisions should not be made using information that they have not chosen to share.

Fairness defined

Rather than relying on equality measures, the following approach can be used to demonstrate fairness in decision-making models. Importantly for businesses, this approach meets the key requirements of the General Data Protection Regulation and the EU’s proposed Fair AI legislation.

  • Type 1: Are there data items that are reasonably available and causal of the outcome being targeted? If so, the feature must be used equitably in the decision process. Causality can be determined using causal discovery algorithms and/or supported by human judgement.
  • Type 2: Are there data items that are reasonably foreseeable and volitional? If so, they may be used equitably in the decision-making process. A sketch of this screening rule follows the list.
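
As a purely illustrative sketch, the Python snippet below applies this Type 1/Type 2 screen to a handful of candidate features. The feature names and the flags (reasonably available, causal, foreseeable, volitional) are hypothetical assumptions; in practice those judgements would come from causal discovery algorithms and expert review, not from the code itself.

```python
# Illustrative only: hypothetical features and flags. 'causal' stands in for the
# output of causal discovery plus human judgement; the other flags are policy calls.

from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reasonably_available: bool
    causal: bool        # direct cause of the repayment outcome being targeted
    foreseeable: bool   # applicant could reasonably foresee its use in lending
    volitional: bool    # applicant has discretion over the underlying behaviour

def screen(features):
    """Partition candidates into Type 1 (must use), Type 2 (may use) and excluded."""
    type1, type2, excluded = [], [], []
    for f in features:
        if f.reasonably_available and f.causal:
            type1.append(f.name)
        elif f.foreseeable and f.volitional:
            type2.append(f.name)
        else:
            excluded.append(f.name)
    return type1, type2, excluded

candidates = [
    Feature("payment_history",        True, True,  True,  True),
    Feature("existing_debt_burden",   True, True,  True,  True),
    Feature("voluntary_overpayments", True, False, True,  True),
    Feature("postcode",               True, False, True,  False),
    Feature("inferred_religion",      True, False, False, False),
]

print(screen(candidates))
# Type 1: payment_history, existing_debt_burden; Type 2: voluntary_overpayments;
# excluded: postcode, inferred_religion.
```

Note how, on these hypothetical flags, an inferred protected characteristic drops out automatically because it is neither causal nor volitional, which is exactly the behaviour argued for below.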

All other characteristics should be excluded from a fair decision. Focusing on these general rules as to which characteristics can be used in models ensures that key aspects of fair decisions are implemented in a consistent and practical manner. This will naturally avoid legally protected classes, which tend not to be causal or volitional, as well as characteristics that are not formally protected but whose use would nonetheless be unfair. Provided the model is constructed using a statistically unbiased fitting criterion, it will also fulfil an equality of accuracy condition. In addition, normal good modelling practice should ensure that only good-quality and reliable data is used.
