Portrait
PhD Student

Short Biography

Dilia is a PhD student at Télécom Paris – Institut Polytechnique de Paris, in the Digital Finance Chair, under the supervision of David Bounie and Winston Maxwell in the SES department. Her thesis project is « Explicability of artificial intelligence in financial security: A multidisciplinary perspective ».


Activities: Teaching, Research, Projects

Educational background

I’ll preface this by saying that there was no set path to where I am today. I wish I could say that I always dreamed of doing a PhD in Economics, but that wouldn’t be true. Truth be told, I started this whole thing thinking I would become a radiologist.

Like many other nerdy immigrant kids in the United States, my career options were decided for me by my parents. I would be either a doctor or a lawyer when I grew up. While in high school, I decided that lawyers were sleazy (clearly not a fair assessment, but in my defense, I was in high school), so a doctor I would have to become. Except that once I began reading about the machines doctors used – MRI machines in particular – I became fascinated by the physics behind the technology. So when I got to Stanford, I declared physics as my major as soon as I was allowed. I quickly developed a passion for astrophysics, and I dreamt of doing a PhD in astrophysics so I could study the universe and maybe one day become a professor, teaching what I’d learned to other curious minds.

So what am I doing here?

The truth is that, for various reasons I won’t get into here, I never built up the confidence to apply for that physics PhD, and even at the very end of senior year I had no idea what to do after college. I still hadn’t finished my degree on time, my dream of going to graduate school was dead and buried, and though job prospects in the Bay Area were generally good for a Stanford graduate, I had become disillusioned with start-up culture in Silicon Valley. I felt it was trite, insincere, and manipulative (again, not the fairest assessment, but not unfounded). In truth, I was just afraid I wasn’t smart or competitive enough to make it in tech. In short, I had become jaded, which for a self-declared optimist like me is the ultimate nightmare. To make things worse, this was 2015, and racial conflict in the US was becoming overwhelming for those of us emotionally invested in ending racial injustice. The start of the presidential primaries that summer did not help the situation.

And so I found myself that summer with no degree, no job, completely lost, and losing faith in a country that had treated me so well. You can imagine my state of mind when I received an email – a real-life deus ex machina – informing me that I’d been accepted to participate in a program to teach English in France. And not just anywhere in France: I was going to Paris! Suddenly I found myself buying a one-way ticket to Charles de Gaulle airport and making a Pinterest board of places to visit in France.

But why France?

This question deserves a longer answer – there are definitely cultural and moral factors at play – but in the interest of time, I’ll focus simply on my genuine curiosity regarding the French approach to problem-solving, in particular as a foil to the American mindset. Americans tend to have a « go-go-go » approach to solving problems – try something, anything, and see if it sticks. If it doesn’t work, try something else. Just get in and fix it, dammit. And nine times out of ten, someone will be creative, innovative, or ingenious enough to find the answer, but the actual reasons for the problem in the first place are not discussed or understood. The French, on the other hand, will debate an issue to within an inch of its life, and nine times out of ten, after much (and I mean much) debate, there is still no solution on the table. However, thanks to the constant debating and corrections and philosophizing, everyone leaves the table with a much more thorough understanding of the root causes of the problem at hand.

I personally believe that there is an optimal balance between these two approaches – a juste milieu, you might say. I knew that whatever work I ended up doing, I wanted my thought process to function in this space.

So I got to work finding work.

As it turns out, an American four-year bachelor’s is not enough to find a job in STEM here in France; I had to get a master’s to do that. A friend of mine from my physics class back in California seemed to be doing well as a data scientist with only a physics degree, so I thought I’d give data science a shot. At the time, the field was only starting to gain ground, even in the U.S., so it seemed like a good place to apply the analytical thinking I’d learned as a physics major. But first I had to get a « mastère spécialisé » in Data Science at a business school (my dream of going to grad school came true, just not in the way I thought), which required a six-month internship. I didn’t really know what industry I wanted to work in, and I accepted an interview for an internship a friend of mine recommended me for, in the newly formed Data Lab at La Banque Postale.

I never envisioned working in a bank. I don’t really like banks. But I understood that La Banque Postale provided a necessary service to the most vulnerable among us, and I could genuinely support their mission. Most importantly, my boss was going to be an exceptionally good-natured and intelligent woman. It was a win all around for me.

That is how I found myself working on anti-money laundering algorithms for La Banque Postale, and how I was eventually offered this post to conduct research on the subject.

PhD topic, issues and applications

In recent years, the concept of data privacy has gone mainstream – Facebook in the U.S. and Cambridge Analytica in the U.K. are two of the most well-known names, but the truth is that data science and « big data » have become increasingly ubiquitous in nearly every industry. Everyone from your grocery store to your bank is using your data in some way or another.

EU regulators have been some of the most reactive to the backlash against data-farming and have imposed restrictions on the kinds of data that companies can use. « Sensitive data » – data that denotes race, gender, age, or political status – is very strictly regulated. More recently, attention has turned to the algorithms themselves and our ability to interpret them. It’s no surprise that the algorithms used to detect fraud and money laundering are incredibly complex and uninterpretable to a human. In the industry we call these types of algorithms « black boxes » – you put your data in and the predictions pop out, but you don’t really know what is happening in between. The opaque nature of these algorithms poses problems down the line when regulators ask for the exact reasoning behind a particular decision, especially if the model uses sensitive data.
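To make the « black box » idea a bit more concrete, here is a minimal, purely illustrative sketch in Python, using scikit-learn on synthetic data – the feature names and model are hypothetical placeholders, not anything drawn from an actual banking system. The model behind an alert is an ensemble of hundreds of trees, so no single readable rule explains a given decision; the best we can do after the fact is an approximate, global explanation such as permutation importance.

```python
# Illustrative only: synthetic data and made-up feature names,
# not a real anti-money-laundering model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
features = ["tx_amount", "tx_count_30d", "account_age_days", "cross_border_ratio"]
X = rng.normal(size=(n, len(features)))
# Synthetic "suspicious activity" label, driven mostly by two features.
y = ((2.0 * X[:, 0] + 1.5 * X[:, 3] + rng.normal(size=n)) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# The fitted model is hundreds of trees: there is no single rule to point to
# when a regulator asks why one account was flagged. Permutation importance
# only gives a global, approximate picture of which features mattered.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```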

My research explores the economic and social costs and benefits of making these algorithms more or less transparent, and the accompanying changes in performance. In particular, I’m focusing on the AI algorithms used in anti-money laundering and anti-terrorism financing operations, and the economic and social stakes particular to the success of these activities.

A bank’s decision to freeze a small business’s account, for example, can have serious financial implications, and if AI informed that decision in any way, the bank should be able to defend it thoroughly. However, making these algorithms transparent enough for a human to interpret usually means decreasing their performance, which can mean the difference between detecting a fraud or money laundering scheme and missing it entirely. For La Banque Postale, a mistake like that two years ago cost the bank 50 million euros in fines, and a terrorist cell received a large sum of money to further its operations.

The goal would be to find the optimal balance between explainability and performance for different use cases in anti-money laundering operations, such that banks, regulators, and clients alike are satisfied.
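As a rough illustration of that tension – again with synthetic data and stand-in models, nothing resembling the bank’s actual systems – one can compare an interpretable model with a black-box one on the same detection task: the linear model’s coefficients can be read directly, while the better-scoring ensemble cannot be summarized so easily.

```python
# Illustrative sketch of the explainability/performance trade-off on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10_000
X = rng.normal(size=(n, 6))
# Ground truth depends on a feature interaction, which a linear model handles poorly.
y = ((X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=n)) > 0.5).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

interpretable = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # coefficients readable directly
black_box = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)

for name, model in [("logistic regression", interpretable), ("gradient boosting", black_box)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
# Typically the black box scores noticeably higher here; the open question is
# whether that gap is worth giving up a decision rule a human can read and defend.
```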

Research interests

Working out the « cost » of explainability – so to speak – transcends multiple fields. There are implications for fundamental human rights and equality, not just laws and regulations. The solution requires exploring ethical, political, and economic theories, as well as technical developments. To me, this question touches on a basic philosophical idea – the social contract. It is a question of collective safety or financial benefit bought at the cost of individual privacy or the risk of social discrimination, and vice versa. The specific field to which we are applying these questions might be rather niche and technical, but the concepts are global and the implications large. I hope my research can pursue these questions further and spread awareness of these ideas in the data science and AI community.

Plans for the future

I can’t pretend I know where I’ll be or what I’ll be doing. History has taught me that plans are more like guidelines, anyway. I do know that I want to stay in France, and I hope to delve deeper into the economic and regulatory policy around data science, ethical AI, or even anti-terrorism work. I still dream of teaching someday, of course.