
Regulation of AI in the financial sector: regulatory and academic perspectives in Asia and Europe


For this fourth “AI and Finance Monday”, organized by the ACPR and Télécom Paris on May 17, 2021, we took a broad international look at the regulation of artificial intelligence in the financial industry.

David Bounie, professor of economics at Télécom Paris, and Olivier Fliche, director of the Fintech-Innovation division at the ACPR, opened the conference by presenting the perspectives that would be compared during the session: those of regulators and academics on the one hand, and those of the European Union and Asia on the other.

We had the pleasure of welcoming two regulators specialized in the financial services industry, Jan Ceyssens and Xuchun Li, from the European Commission and the Monetary Authority of Singapore respectively, and two academics, John Armour and Douglas Arner, from the University of Oxford and the University of Hong Kong respectively.

The European approach to AI in finance – Jan Ceyssens

As the first speaker, we welcomed Jan Ceyssens, Head of the Digital Finance Unit at the European Commission, who helped us unpack the Commission’s recent proposal on AI regulation. As part of its strategy to make Europe a leader in digital matters, the Commission published a package on its European AI strategy in 2021, with objectives such as creating favourable conditions for the development of AI in the EU and ensuring that AI works to the benefit of people. Within this package, the proposal defines a legal framework for AI, based on the general assumption that AI benefits the public interest but entails risks that must be managed. The framework takes a risk-based approach, differentiating four levels of risk: unacceptable risk, high risk, limited risk subject to transparency obligations, and minimal or no risk. High-risk systems are subject to requirements designed to limit risks that could negatively affect safety and fundamental rights.
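
As a purely illustrative aid (not part of the Commission’s text), the short sketch below encodes these four tiers and the broad regulatory consequence attached to each; the example systems used in the mapping are assumptions for demonstration.

```python
# Illustrative sketch only: the four risk tiers of the proposed EU framework
# and the broad regulatory consequence attached to each. The example systems
# below are assumptions for demonstration, not classifications taken from the
# proposal itself.

from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "subject to mandatory requirements before market access"
    TRANSPARENCY = "subject to transparency obligations"
    MINIMAL = "no new obligations"


# Hypothetical mapping of example systems to tiers, for illustration only.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "creditworthiness assessment": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.TRANSPARENCY,
    "spam filtering": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} risk -> {tier.value}")
```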

Mr. Ceyssens then explained how this proposal would apply in the financial sector. The proposal contains one financial use case, on creditworthiness assessment. But the regulation of AI in finance goes beyond this horizontal framework: it must be integrated into the existing supervisory framework by financial supervisors, in particular the EBA, EIOPA and ESMA. As next steps, the European Parliament and the Council of the European Union will negotiate the legislative proposal. Once adopted, there will be a two-year transitional period before the regulation becomes directly applicable.

[Download the presentation]

A Methodology for Responsible Use of AI by MAS (Monetary Authority of Singapore) – Xuchun Li

The second speaker was Xuchun Li, Head of the AI Development Office at the Monetary Authority of Singapore (MAS). He first presented the MAS’s FEAT principles for the use of AI in financial services: Fairness, Ethics, Accountability and Transparency.

He then described the Veritas initiative, a practical implementation of the FEAT principles. The objective is to create a standardized and modular methodology for implementing these principles, whose source code is to be made publicly available. Phase 1 of the Veritas approach took place in 2020 and focused on the Fairness principle and two use cases: credit risk scoring and customer marketing. The outputs were two whitepapers and the associated programming code. Phase 2, taking place this year, will cover all four FEAT principles and expand the use cases to insurance and fraud detection. The final output of the Veritas approach will be a consolidated whitepaper presenting methodologies for assessing alignment with the FEAT principles.
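
To give a concrete, and purely illustrative, sense of what such a fairness assessment can look like in code, the sketch below computes a simple approval-rate gap for a hypothetical credit-scoring model. It is not taken from the Veritas code base; the data, names and metric choice are assumptions for demonstration.

```python
# Illustrative sketch only: a simple group-fairness check for a hypothetical
# credit-scoring model. Not taken from the Veritas code base; the data,
# names and metric choice are assumptions for demonstration.

from typing import List


def approval_rate(decisions: List[int], groups: List[str], group: str) -> float:
    """Share of applicants in `group` whose application was approved (1) rather than rejected (0)."""
    subset = [d for d, g in zip(decisions, groups) if g == group]
    return sum(subset) / len(subset)


def approval_rate_gap(decisions: List[int], groups: List[str]) -> float:
    """Absolute difference in approval rates between the two groups present (demographic parity gap)."""
    values = sorted(set(groups))
    assert len(values) == 2, "this sketch assumes exactly two groups"
    return abs(approval_rate(decisions, groups, values[0])
               - approval_rate(decisions, groups, values[1]))


if __name__ == "__main__":
    # Toy decisions from a hypothetical credit model: 1 = approved, 0 = rejected.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    print(f"Approval-rate gap between groups: {approval_rate_gap(decisions, groups):.2f}")
    # A large gap does not by itself prove unfairness (legitimate factors may
    # differ between groups), which is why methodologies such as Veritas pair
    # metrics with use-case context and justification.
```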

[Download the presentation]

Aligning product regulation and corporate governance – John Armour

Next, we welcomed John Armour, Professor of Law and Finance at the University of Oxford. His presentation focused on the relationship between product regulation and corporate governance, using the European Commission’s proposal on AI regulation as an example. He explained that when new technologies are introduced to the market, new risks are also incurred. Some are foreseen by appropriate regulation, but others are not anticipated in advance: these are emerging risks. The question then becomes: how can such emerging risks be managed in advance? One approach is the precautionary principle, widely used in pharmaceuticals, which restricts the deployment of a new technology until it is shown that it can be used safely. But such principles must be applied proportionately, so as not to forgo the benefits that AI can yield.

The EU proposal is an endeavour to tackle these emerging risks: some AI use cases are prohibited, minimum requirements are set for high-risk applications, and the proposal also sets out a scheme to shift an AI system into the high-risk category if problems and new risks emerge. The regime also encourages providers to develop voluntary standards for systems that are not classified as high-risk, using the high-risk standards as best practice even for low-risk applications. This approach is very close to the concepts of product regulation and product governance. On the one hand, product regulation must calibrate rules effectively through cost/benefit analyses, while facing the challenge of keeping up with the rapid pace of innovation. On the other hand, firms are required to establish product governance processes and must demonstrate that their products yield benefits for consumers. In their internal processes, it will be important for firms to take an overarching approach to consumer benefit rather than simply ticking the box of minimum requirements.

[Download the presentation]

Regulating AI in Finance: Balancing Risks and Opportunities – Douglas Arner

The last speaker of the conference was Douglas Arner, Professor in Law at the University of Hong Kong. He started by observing that finance is one of the most digitized regulated industries and, as such, represents very fertile ground for AI development. The sector harbours massive volumes of data, communication and computing power, and already relies on powerful analytics. In addition, a wide range of specific regulatory requirements is already in place in finance, which creates a context of rules, well-defined objectives and measurable outcomes in which AI can excel (as in the example of chess). Furthermore, financial players are the biggest spenders on technology, driven by a constant arms race for the best AI, as in the case of trading. AI also introduces new risks into finance: for financial stability, with collusion risks; for cybersecurity; and for innovation, because AI works best where there is a massive concentration of data, which favours “winner takes all” outcomes. Several regulatory approaches are possible, including authorisation or e-personhood; Prof. Arner emphasized an approach based on human responsibility and accountability. To conclude, he outlined some differences between jurisdictions in their approaches to data. China, which originally took a property-based approach to data, is turning to a data-pool approach, in which large masses of data would be made available to a wider range of actors to support AI development. In the EU, data usage is tightly restricted in order to protect fundamental rights.

[Download the presentation]

Roundtable discussion

The second part of the webinar was dedicated to a discussion moderated by David Bounie and Olivier Fliche. Elements of the discussion included:

  • the differences in approach between Asia and Europe;
  • how to link horizontal and sectoral regulation: broad, all-encompassing measures are always needed, but they must then be integrated into existing structures (there are specific reasons why certain sectors are regulated);
  • using the personal responsibility frameworks that already exist in regulated industries to increase accountability;
  • the importance of interactions (e.g. experimentation) between regulated firms and regulators;
  • paying attention to the way we benchmark AI’s performance: against a ‘perfection’ baseline, the best-in-class model, or the performance of humans? (a minimal sketch follows this list);
  • the benefits and risks of the human-in-the-loop approach.
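
To make the benchmarking point concrete, here is a minimal, purely illustrative sketch comparing a hypothetical model’s error rate against the three baselines mentioned above; all figures are invented.

```python
# Illustrative sketch only: comparing a hypothetical model's error rate against
# three different baselines. All figures are invented for demonstration.

model_error = 0.08  # assumed error rate of the AI system under review

baselines = {
    "perfection": 0.00,             # the 'zero error' baseline
    "best-in-class model": 0.06,    # assumed error rate of the best available model
    "human decision-makers": 0.12,  # assumed error rate of human experts
}

for name, baseline_error in baselines.items():
    delta = model_error - baseline_error
    verdict = "outperforms" if delta < 0 else "underperforms"
    print(f"Against {name}: the model {verdict} the benchmark (delta = {delta:+.2f})")

# The same system can look unacceptable against perfection yet clearly
# beneficial against the human baseline, which is why the choice of benchmark
# matters when judging AI performance.
```
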
The replay of the webinar is also available here.

 

Useful links for further reading:

_____________________________________________________________________

Illustration: freepik.com