Astrid Bertrand
PhD Student

Short Biography

Astrid Bertrand is a PhD student at Télécom Paris – Institut Polytechnique de Paris and at the Autorité de contrôle prudentiel et de résolution (ACPR/Banque de France, which carries out the supervision of banking and insurance undertakings in France), under the supervision of David Bounie (Télécom Paris) and Winston Maxwell (Télécom Paris). She works on measuring the effectiveness of current anti-money laundering (AML) systems and on explainability approaches for applying AI to AML. She holds an engineering degree from Centrale Lyon and an MSc in Sustainability and Social Innovation from HEC.

Activities: Teaching, Research, Projects

Academic background

After a scientific preparatory school, I entered Centrale Lyon, a general engineering school with a broad and rigorous scientific curriculum spanning computer science, mathematics, economics, civil engineering, environmental sciences, and more. After a short first professional experience, I wanted to take stock of my career plans and explore other fields with a well-established social utility, so I pursued a master's degree in social innovation at HEC. I then had the opportunity to write my master's thesis jointly between HEC and Télécom Paris on financial inclusion through payment data, alongside a certificate in data science. The combination of research, data science, and a socially oriented project appealed to me greatly. A thesis topic then came up on the explainability of AI for anti-money laundering, bringing together many of my interests: public interest research, responsible AI, data science, user experience, and more.

PhD topic, issues and applications

My thesis is about the explainability of artificial intelligence for anti-money laundering. As I explained in an article, AI is being tested as a way to improve anti-money laundering schemes. AI has the potential to greatly improve the effectiveness of these systems, but it also introduces new risks, such as the risk of being misinterpreted or misimplemented, or of violating fundamental rights such as the right to explanation introduced by the EU in 2018.

The stakes of this research are very concrete: to advance the algorithms that detect anomalies in suspicious transactions, and ultimately to block the financing of criminal activities. It is also a question of improving how analysts work with these new algorithms for detecting illicit flows, so that human-machine interaction is as effective as possible. The topic of AI explainability extends well beyond AML/CFT: many applications of AI depend on close interaction with humans, and explainability is a key notion for making these models usable.

Research interests

My thesis topic lies at the interface of several disciplines: economics, law, computer science, psychology, and more, which makes it rich but difficult to frame! To begin with, I will focus my research on two major axes. The first is the use of graph modeling to detect money laundering cases: using the morphology of transaction networks to identify "suspicious" portions of those networks. The second line of research focuses on the form of explanations in AI models: how does the design of explanations affect users? Does combining several forms of explanation allow users to interact better with the AI? Can graphs help make AI models more interpretable?
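To make the first axis concrete, here is a minimal sketch of what "detecting suspicious network morphology" can mean in practice. The toy data, account names, and the choice of cycle detection as the suspicious shape are all hypothetical illustrations, not the actual method of the thesis; a real AML pipeline would combine many structural features over far richer data.

```python
# Hypothetical toy transaction network: accounts are nodes and
# transfers are directed edges. This sketch flags only one simple
# morphology: funds that cycle back to their account of origin.
transfers = [
    ("A", "B"), ("B", "C"), ("C", "A"),   # circular flow (suspicious shape)
    ("D", "E"), ("E", "F"),               # ordinary chain of payments
]

# Build an adjacency list from the edge list.
adjacency = {}
for src, dst in transfers:
    adjacency.setdefault(src, []).append(dst)

def accounts_on_a_cycle(adj):
    """Return the accounts that can reach themselves, i.e. lie on a cycle."""
    def reachable(start):
        # Iterative depth-first search over outgoing edges.
        seen, stack = set(), list(adj.get(start, []))
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(adj.get(node, []))
        return seen
    return {account for account in adj if account in reachable(account)}

print(sorted(accounts_on_a_cycle(adjacency)))  # ['A', 'B', 'C']
```

Here accounts A, B, and C are flagged because money flows in a closed loop among them, while the ordinary payment chain D → E → F is not. Graph libraries such as NetworkX offer ready-made primitives for this kind of analysis at scale.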

Beyond these two concrete lines of research, I am interested more broadly in questions of fairness and the regulation of AI, and in a wider question about the economics of crime and money laundering: can AI reduce crime?

Plans for the future

For now, I am going to focus on my thesis, which is my horizon for the next three years. I will then see what opportunities arise, whether in the private sector or elsewhere. I would love to work in an international or European institution, for example.

Recent News