The UK’s Financial Conduct Authority (FCA) has recently published an update (the Update) on its approach to artificial intelligence (AI) following the UK government’s publication of its pro-innovation strategy in February 2024 (the Strategy).
The Strategy identified the following five principles as ‘key’ to the regulation of AI in the UK: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress. The Update details, at a high level, how the FCA’s existing approach is in line with each of the identified principles, with the FCA describing itself as a ‘technology-agnostic, principles-based and outcomes-focused regulator’. In addition, the FCA details what it plans to do in the next 12 months, including:
- Continuing to further the FCA’s understanding of how AI is deployed in UK financial markets. The Update provides that this will ensure that any potential future regulatory interventions are effective, proportionate and pro-innovation. The FCA states that it is also running a third edition of its machine learning survey, jointly with the Bank of England (BoE), as well as collaborating with the Payment Systems Regulator (PSR) to consider AI across payment systems.
- Building on the existing UK regulatory framework that covers firms’ use of technology, including AI. The FCA highlights that while the existing framework, in so far as it applies to firms using AI, aligns with and supports the UK Government’s AI principles, it may actively consider future regulatory adaptations if needed.
- Continuing to collaborate closely with the BoE and the PSR, and with other regulators through its membership of the Digital Regulation Cooperation Forum (DRCF). The FCA will also engage closely with regulated firms, civil society, academia and its international peers.
- Prioritising its international engagement on AI in line with recent developments such as the AI Safety Summit (please refer to our recent articles available here and here, respectively) and the G7 Leaders’ Statement on the Hiroshima AI Process. The FCA highlights that it is closely involved in the work of the International Organization of Securities Commissions (IOSCO), including the AI working group, and supports the work of the Financial Stability Board (FSB). The FCA also states that it is a core participant in other multilateral forums on AI, including the Organisation for Economic Co-operation and Development (OECD), the Global Financial Innovation Network (GFIN) and the G7.
- Working with DRCF member regulators to deliver the pilot AI and Digital Hubs.
- Assessing opportunities to pilot new types of regulatory engagement and environments in which the design and impact of AI on consumers and markets can be tested and assessed without harm materialising. This includes exploring changes to the FCA’s innovation services that could enable the testing, design, governance and impact of AI technologies in UK financial markets within an AI Sandbox.
- Investing further in AI technologies to allow the FCA itself to proactively monitor markets, including for market surveillance purposes. The FCA is currently exploring further potential use cases, including using natural language processing to aid triage decisions, assessing AI’s ability to generate synthetic data, and using large language models (LLMs) to analyse and summarise text.
The Strategy and the Update can be found here and here, respectively.