AI in Financial Services — UK’s Financial Regulator Sets Out Its Approach
Regulating for Growth and International Competitiveness
At a Glance
- The UK’s Financial Conduct Authority (FCA) notes that stakeholders and respondents have generally been supportive of a technology-agnostic, principles-based and outcomes-focused approach to regulating the use of AI in UK financial services.
- The FCA has applied its current rules and regulations to the five key principles that the UK government has identified to govern the use of AI. We summarize the FCA’s approach and how it maps onto the FCA’s existing rules.
- With the pressure on the FCA to regulate for growth and international competitiveness, it is likely that the approaches adopted in the United States and Europe will have a significant bearing on the UK’s attitude.
The United Kingdom does not currently have an omnibus approach to regulating artificial intelligence (AI). The UK government favors a more decentralized and less regimented approach: guidance rather than legislation; sector-specific rather than cross-sector application; oversight by sector regulators rather than a central authority. This is intended to make the UK an attractive environment for AI innovation, with more flexible and pragmatic regulation. For background, please see our previous blog post: The UK’s New AI Proposals - Discerning Data
For the financial services industry, the UK regulator, the Financial Conduct Authority (FCA), has recently set out its approach to regulating AI in its AI Update.
In the Update, the FCA notes that stakeholders and respondents have generally been supportive of a technology-agnostic, principles-based and outcomes-focused approach to the regulation of the use of AI in UK financial services. The FCA will continue to monitor closely the adoption of AI across UK financial markets and the potential macro impacts of that adoption, keeping under review whether amendments to the current regulatory regime are required. The FCA highlights that financial services regulations do not usually mandate or prohibit specific technologies. Instead, the FCA’s approach is to identify and mitigate risks to its objectives and to the regulatory outcomes it is seeking, in line with the principle of proportionality.
The FCA intends this outcomes-focused approach to give firms greater flexibility to adapt and innovate, and to allow regulation that protects consumers to keep pace with technological change and market developments more readily than detailed, prescriptive rules. In addition, many AI-related risks are not unique to AI and can therefore be mitigated by existing regulatory frameworks.
In the Update, the FCA has applied its current rules and regulations to the five key principles that the UK government has identified to govern the use of AI. A summary of the FCA’s approach, and how it maps onto the FCA’s existing rules, is set out below.
1. Safety, Security and Robustness
AI systems should function securely, safely and robustly throughout their lifecycle, with risks continually identified, addressed and managed. The FCA’s approach to this principle is as follows:
- The FCA Principles for Business emphasize obligations such as conducting business with skill, care and diligence (Principle 2), and organizing affairs responsibly with adequate risk management systems (Principle 3).
- Firms must meet FCA Threshold Conditions, including having suitable business models that prioritize sound practices and consumer interests.
- The FCA’s Senior Management Arrangements, Systems and Controls (SYSC) sourcebook includes rules on risk controls (SYSC 7), sound security mechanisms for data (SYSC 4) and operational resilience (SYSC 15A). Firms must be able to remain within set impact tolerances for their important business services (IBSs) in severe but plausible disruption scenarios.
- Outsourcing rules applying to insurers under SYSC 8 require firms to avoid undue additional operational risk when relying on third parties, including AI providers.
- The FCA also expresses concerns about competition risks that may arise from the concentration of third-party technology services among Big Tech firms.
2. Transparency and Explainability
AI systems should be appropriately transparent and explainable, allowing users to understand their operation and impacts. The FCA’s approach to this principle is as follows:
- Firms must meet obligations under the Consumer Duty, including its consumer understanding outcome, as well as the requirement to communicate information in a way that is clear, fair and not misleading (Principle 7), so that communications meet customers’ needs for understanding.
- The UK General Data Protection Regulation (UK GDPR) requires data controllers to provide meaningful information about the logic involved in automated decision-making, as well as its significance and envisaged consequences for the data subject (Articles 13 and 14). Data subjects must be informed of the existence of profiling and automated decision-making, particularly where it produces legal or similarly significant effects.
- Transparency is expected even when AI systems are integrated into financial services, ensuring consumers can make properly informed decisions.
3. Fairness
AI systems should not undermine legal rights, discriminate unfairly, or create inequitable market outcomes. The FCA’s approach to this principle is as follows:
- The Consumer Duty requires firms to proactively deliver good outcomes for retail customers, act in good faith and avoid foreseeable harm. This includes addressing discrimination and ensuring fair value for all customers, including vulnerable groups. Firms should take care not to use AI technologies in a way that embeds or amplifies bias, leading to worse outcomes for some groups of consumers, unless such differences in outcomes can be objectively justified. AI also presents opportunities: AI chatbots, for example, can help customers understand products and services. But it can equally create risks, such as excluding some customers from the market, and firms should consider their obligations under the Consumer Duty accordingly.
- Firms must consider the needs of vulnerable consumers at all stages of product and service design, ensuring outcomes are as good as those for other consumers. Quality assurance processes should identify and mitigate risks of unintentional harm caused by AI.
- Safeguards under the UK GDPR, such as the principle of fairness in data processing, help prevent AI systems from generating unfair outcomes.
- Equalities legislation, including the Equality Act 2010, prohibits discrimination based on protected characteristics and will need to be taken into account in interpreting the FCA rules.
4. Accountability and Governance
Governance measures should ensure effective oversight of AI systems, with clear lines of accountability throughout the AI lifecycle. The FCA’s approach to this principle is as follows:
- The SYSC sourcebook mandates robust governance arrangements, clear organizational structures and effective risk management processes (SYSC 4).
- Firms under the Senior Managers and Certification Regime (SM&CR) must have designated senior managers responsible for activities involving AI. These responsibilities include risk controls and operational functions.
- Senior managers must maintain a statement of responsibilities, ensuring accountability for AI-related decisions within their business areas.
- Boards are encouraged to nominate a board champion to oversee annual assessments of consumer outcomes, which may include AI-related impacts.
5. Contestability and Redress
Users, impacted third parties and actors in the AI lifecycle should be able to contest AI decisions or outcomes that are harmful or create a material risk of harm. The FCA’s approach to this principle is as follows:
- Firms must maintain complaints-handling procedures to address grievances about AI decisions promptly and fairly.
- Consumers dissatisfied with internal investigations can escalate complaints to the Financial Ombudsman Service for independent review and redress.
- Other mechanisms include mandatory firm-led redress schemes and the Financial Services Compensation Scheme (FSCS) for certain breaches.
- Under the UK GDPR (Article 22), data subjects have the right to contest automated decisions that produce legal or similarly significant effects, and to safeguards, such as human intervention, where exceptions apply.
Future Work
The next 12 months could be an important period in the FCA’s thinking about AI. It is currently undertaking a diagnostic assessment of the use of AI across financial services markets. It will be interesting to see what that assessment reveals, and whether the FCA maintains its view that a technology-agnostic approach is a suitable one.
The FCA recognizes that global developments are likely to have a major impact on the approach it adopts. Creating a supervisory framework that is pro-innovation while protecting consumers and markets, and that is treated as equivalent for international recognition purposes, will be a difficult challenge. With the pressure on the FCA to regulate for growth and international competitiveness, it is likely that the approaches adopted in the United States and Europe will have a significant bearing on the UK’s attitude.
The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.