Astrid Bertrand is a PhD student at Télécom Paris – Institut Polytechnique de Paris and at the consulting firm PwC, under the supervision of David Bounie (Télécom Paris) and Winston Maxwell (Télécom Paris). She is working on measuring the efficiency of current AML systems and on the different explainability approaches for using AI in AML. She holds an engineering degree from Centrale Lyon and an MSc in Sustainability and Social Innovation from HEC.
After a scientific preparatory school, I entered Centrale Lyon, a general engineering school with a broad and rigorous scientific curriculum covering computer science, mathematics, economics, civil engineering, environmental sciences… After a short first professional experience, I wanted to take stock of my career plans and explore other fields with a well-established social utility, so I did a master's degree in social innovation at HEC. I then had the opportunity to do my master's thesis jointly between HEC and Télécom Paris, on financial inclusion from payment data, along with a certificate in data science. The combination of research, data science and a socially minded project was very appealing to me. Then a thesis topic came up on the explainability of AI for anti-money laundering, bringing together many of my interests: public-interest research, responsible AI, data science, user experience…
PhD topic, issues and applications
My thesis is about the explainability of artificial intelligence for anti-money laundering. As I explained in an article, AI is being tested as a way to improve anti-money laundering systems. AI has the potential to greatly improve the effectiveness of these systems, but it also introduces new risks: it can be misinterpreted or misimplemented, and it can infringe fundamental rights such as the right to explanation, introduced by the EU in 2018.
The stakes of this research are very concrete: to advance the algorithms that detect anomalies in suspicious transactions, and ultimately to block the financing of criminal activities. It is also a matter of improving how these new illicit-flow detection algorithms are handled, so that human-machine interaction is as effective as possible. The topic of AI explainability goes beyond AML/CFT: many applications of AI depend on strong interaction with humans, and explainability is a key notion for making these models usable.
My thesis topic sits at the interface of several disciplines: economics, law, computer science, psychology… which makes it rich, and difficult to frame! To begin with, I am focusing my research on two major axes. The first is the use of graph modeling to detect laundering cases: using the morphology of transaction networks to flag "suspicious" subnetworks. The second line of research focuses on the form of explanations in AI models: how does the design of explanations affect users? Does combining several forms of explanation allow users to interact better with the AI? Can graphs help the interpretability of AI models?
In addition to these two concrete lines of research, I am interested more broadly in questions of fairness and AI regulation, and in a wider question about the economics of crime and money laundering: can AI reduce crime?
Plans for the future
For now, I'm going to focus on my thesis, which is my short-term horizon for the next three years. I will then see what opportunities arise, whether in the private sector or elsewhere. I would love to work in an international or European institution, for example.
AI Explainability: Misdirection of XAI, technical solutions to user adaptation — Digital Trust, Data Science & AI — 30/04/2022 — The 33rd French-speaking conference "Interaction Homme-Machine" (Human-Computer [...]
More AI, and less box-ticking, says FATF in AML/CFT report — Digital Trust, Data Science & AI — 13/07/2021 — The FATF's new report on digital technologies for anti-money laundering and countering the financing [...]
The power of the algorithms that govern us — 01/07/2021 — Algorithms are everywhere in our society. At the heart of the success of Netflix, Facebook or Google, they worry, they fascinate, they [...]
AI, data sharing, and the fight against terrorist financing (Le Monde) — Data Science & AI, Digital Economy, Faculty Members — 19/06/2021 — Astrid Bertrand, Winston Maxwell and Xavier Vamparys, researchers at [...]
Data sharing issues in the finance industry — Digital Economy — 22/03/2021 — The 3rd session of AI and finance with the ACPR and Télécom Paris tackled data sharing issues in the finance industry. This seminar [...]
AI Ethics News: an interdisciplinary look at ethical AI — Digital Trust, Data Science & AI — 08/02/2021 — The Operational AI Ethics research initiative at Télécom Paris launches a newsletter dedicated [...]
The explainability of AI in finance — 16/11/2020 — For this very first AI Monday organized by the ACPR and Télécom Paris, we addressed the theme of explainable AI for finance.
The explainability of algorithms in the fight against money laundering — Digital Trust, Data Science & AI — 27/10/2020 — According to Europol estimates, 200 billion euros of funds [...]
The ACPR's guidelines on explainability: clarifications and ambiguities — Digital Trust, Data Science & AI — 28/08/2020 — Machine learning can be of great help in the fight against money laundering by helping to [...]
Current AML techniques violate fundamental rights and AI would make things worse — Digital Trust, Data Science & AI, Faculty Members — 09/07/2020 — Artificial intelligence can help banks better report suspicious [...]