Financial Services Institutions (FSIs) manage billions of dollars' worth of transactions per day and are increasingly adopting AI solutions to process them. In particular, MIT Technology Review Insights found that businesses in the Asia-Pacific region are adopting AI systems faster than those in any other part of the world. However, when not thoughtfully designed, AI systems can introduce new unintended harms and perpetuate or reinforce existing disadvantages, which in turn creates reputational, operational, and legal risks for businesses.
For example, soon after launching its credit card partnership with Goldman Sachs last year, Apple had to investigate its system for gender bias, which, if left unchecked, might have limited women’s access to credit, harming those potential customers and increasing risks of regulatory non-compliance for the business.
Risks to businesses and customers are heightened whenever consequential decisions are made at high speed or volume, because biases can then propagate at a pace far greater than before.
In an effort to help manage these risks, the Monetary Authority of Singapore convened industry partners first to develop the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT Principles), and then to produce implementation guidance grounded in financial services use cases. Element AI was selected — along with Ernst & Young, the Gradient Institute, HSBC Bank, IAG’s Firemark Labs, and UOB Bank Singapore — to collaborate on guidance for a fairness assessment. While the Gradient Institute, IAG’s Firemark Labs, and HSBC Bank tackled fairness for customer marketing systems, Element AI and UOB Bank Singapore focused on credit scoring for unsecured lending.
These use cases are particularly salient because they offer much to compare and contrast: credit scoring is more studied and regulated, whereas customer marketing is less so, yet it garners the most interest for AI adoption going forward (36% of respondents used AI systems in sales and marketing in 2019; 61% plan to do so by 2022).
After nearly a year of work, the FEAT Fairness Assessment Methodology was presented at the Singapore FinTech Festival in December and published last week.
Here are the five key points underlying the Methodology:
1. Fairness is an essentially contested concept. There are many definitions of fairness, even for a particular use case. In almost all cases, tradeoffs are inescapable. As a result, the goal of integrating a fairness methodology into an AI system is not to make it completely fair, but to be aware of its biases and to improve the system’s operation with respect to a set of fairness objectives set by the FSI.
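To make those tradeoffs concrete, here is a minimal sketch with fabricated loan-approval data (nothing below comes from the Methodology itself) showing two common fairness definitions, demographic parity and equal opportunity, yielding different gaps on the same decisions:

```python
# Illustrative only: two fairness metrics computed on toy data.
# decisions: 1 = approved; repaid: 1 = applicant would have repaid.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, repaid):
    # Share of genuinely creditworthy applicants who were approved.
    approved_among_good = [d for d, r in zip(decisions, repaid) if r == 1]
    return sum(approved_among_good) / len(approved_among_good)

group_a = {"decisions": [1, 1, 1, 0, 0, 0], "repaid": [1, 1, 0, 1, 0, 0]}
group_b = {"decisions": [1, 1, 0, 0, 0, 0], "repaid": [1, 1, 0, 0, 0, 0]}

# Demographic parity: approval rates should match across groups.
dp_gap = abs(approval_rate(group_a["decisions"])
             - approval_rate(group_b["decisions"]))

# Equal opportunity: true positive rates should match across groups.
eo_gap = abs(true_positive_rate(group_a["decisions"], group_a["repaid"])
             - true_positive_rate(group_b["decisions"], group_b["repaid"]))

print(round(dp_gap, 3), round(eo_gap, 3))  # the two metrics disagree
```

The same system looks modestly unequal under one definition and markedly unequal under the other, which is why the FSI, not the metric, has to choose which objectives matter for a given use case.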
2. To be fair with respect to personal attributes, an FSI must not be blind to them. A common design approach to creating fairer AI systems is to remove personal attributes from a model to prevent it from using them to discriminate. This is ineffective for two reasons: a model may infer personal attributes from other variables even when they are omitted, and when personal attributes are predictive of good outcomes for some groups, removing them can lead to more discriminatory outcomes.
In the Apple credit card case discussed earlier, a better approach would have been to manage gender bias by constraining the system to be fair with respect to gender; to do so, system designers would have needed to incorporate gender data.
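The proxy problem can be illustrated with a toy sketch (all feature names and data below are invented): a score that never receives gender as an input can still disadvantage one group, because another feature correlates with gender:

```python
# Illustrative only: gender is deliberately excluded from the score,
# yet a correlated "shopping_category" feature smuggles it back in.

applicants = [
    # (gender, shopping_category, income) -- fabricated records
    ("F", "A", 60), ("F", "A", 55), ("F", "A", 70), ("F", "B", 65),
    ("M", "B", 60), ("M", "B", 55), ("M", "B", 70), ("M", "A", 65),
]

def score(shopping_category, income):
    # Gender is never an input, but category "B" boosts the score,
    # and "B" is far more common among the male applicants above.
    return income + (10 if shopping_category == "B" else 0)

scores_by_gender = {"F": [], "M": []}
for gender, category, income in applicants:
    scores_by_gender[gender].append(score(category, income))

avg = {g: sum(s) / len(s) for g, s in scores_by_gender.items()}
# Incomes are identical across groups by construction, yet the
# average score for women is lower: the omitted attribute leaked in.
print(avg)
```

Because the disparity is invisible without the gender column, detecting (let alone constraining) it requires exactly the data the "blind" design threw away.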
3. The FEAT Fairness Assessment Methodology is a continuous improvement journey. The Methodology is designed to help FSIs implement the vision in the FEAT Fairness Principles by considering both the decision-making process and the outcomes, each with a set of questions and considerations as guidance. The five-part Methodology is as follows:
- Describe system objectives and context, through a harms and benefits lens
- Examine data and models for any unintentional biases
- Measure disadvantage by quantifying the harms and benefits identified
- Justify the use of personal attributes and examine the associated tradeoffs
- Examine system monitoring and review to ensure a more continuous process
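As a hedged illustration of the third step, one simple way to quantify a harm such as wrongful credit denial is to compute, per group, the rate at which creditworthy applicants are denied. The group names and numbers below are invented, and real assessments would use richer harm and benefit measures:

```python
# Illustrative only: quantifying one harm (wrongful denial) by group.
# decisions: 1 = approved; creditworthy: 1 = applicant merits credit.

def wrongful_denial_rate(decisions, creditworthy):
    denied_good = sum(1 for d, c in zip(decisions, creditworthy)
                      if c == 1 and d == 0)
    total_good = sum(creditworthy)
    return denied_good / total_good

groups = {
    "group_x": {"decisions": [1, 0, 1, 0, 1], "creditworthy": [1, 1, 1, 0, 1]},
    "group_y": {"decisions": [1, 0, 0, 1, 0], "creditworthy": [1, 1, 1, 1, 0]},
}

rates = {name: wrongful_denial_rate(g["decisions"], g["creditworthy"])
         for name, g in groups.items()}
disadvantage_gap = abs(rates["group_x"] - rates["group_y"])
print(rates, disadvantage_gap)
```

Putting a number on the gap is what turns "examine data for bias" into something that can be tracked, justified, and monitored over time, which is the thread connecting the five steps.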
4. A fairness assessment should be integrated into model risk management. FSIs will need to contextualize and operationalize AI system governance within their own business models and structures. In doing so, FSIs should assess the risk level of an AI system by evaluating its materiality, its impact on stakeholders in terms of potential harm, and its complexity (i.e., the data sources, the mathematical computation, and the nature of human intervention in decision-making), and integrate the framework into the phases of their existing model risk management lifecycle (model development, validation, and monitoring).
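The three risk factors above could be combined in many ways; the sketch below shows one hypothetical tiering rule. The 1-to-3 scales and the thresholds are assumptions for illustration, not part of the Methodology, and each FSI would set its own criteria:

```python
# Illustrative only: a toy risk-tiering rule over the three factors
# named in the text. Scales and cutoffs are hypothetical.

def risk_tier(materiality, stakeholder_impact, complexity):
    # Each factor is scored 1 (low) to 3 (high) by the FSI's own criteria.
    total = materiality + stakeholder_impact + complexity
    if total >= 8:
        return "high"
    if total >= 5:
        return "medium"
    return "low"

# A high-materiality, high-impact credit scoring system would land in
# a tier that triggers a fuller fairness assessment.
print(risk_tier(3, 3, 3), risk_tier(2, 2, 2), risk_tier(1, 1, 1))
```

Tiering of this kind is what lets the assessment plug into an existing model risk management lifecycle: higher tiers get deeper review at development, validation, and monitoring.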
5. System assessors should be independent from system owners and developers. As a matter of good governance, the segregation of duties between development, review/validation, and independent evaluation also applies to AI systems. This does not necessarily mean that an FSI is required to hire external consultants to conduct a fairness assessment, but rather that it must take appropriate measures to guarantee an independent assessment.
Introducing AI systems is not without risk, both to the business and to consumers. The purpose of the FEAT Fairness Assessment Methodology is not to inhibit the use of AI but instead to promote the responsible adoption of AI.
Managing fairness risk is not only “ethical, but also practical”: research shows better organizational performance and greater customer satisfaction when decisions are perceived to be fairer. While the Asia-Pacific region leads the charge on responsible innovation in finance, much remains for the rest of the world to tackle across industries. A risk-based framing can make this seemingly daunting task more manageable.