Open research positions in AI ethics
31 March 2021
Explainable artificial intelligence for anti-money laundering operations (2 positions).
Researchers will join an interdisciplinary team to evaluate machine learning models and approaches to explainability for detecting unusual or suspicious activity by bank customers, including possible money laundering or terrorist financing. Researchers will evaluate the compatibility of these approaches with the GDPR, fundamental rights law, and banking regulations, as well as their contribution to a “human-centric” approach. The impact of machine learning algorithms on the effectiveness of AML efforts, and barriers to their uptake, will also be studied. The candidate(s) should have a master’s or PhD in law, economics, political science, or business, as well as a solid understanding of machine learning algorithms.
Fair and explainable image recognition systems (2 positions).
The first position is for either a PhD student or a post-doc, who will be part of an interdisciplinary team evaluating different approaches to identifying and removing bias from image recognition algorithms, including facial recognition, and evaluating different approaches to explainability, particularly in light of regulatory requirements for fair and explainable image recognition systems. The candidate should have a master’s or PhD in law, economics, political science, or business, as well as a solid understanding of machine learning algorithms.
The second position is a post-doc who will work on improving reliability and explainability in image recognition systems within the Signal, Statistics and Machine Learning team and the Operational AI Ethics working group. The candidate will develop novel algorithms for providing explainability by design and for estimating the confidence level of predictions. The candidate will propose solutions that meet regulatory requirements for trustworthy recognition systems and will work closely with experts in law. A PhD in Machine Learning, Computer Vision, or more generally in AI/data science is required, along with an excellent track record of scientific achievements (publications and conference presentations).
Developing fair search and recommendation engines in the public interest (1 position).
The researcher will work with data scientists to identify the fairness and neutrality obligations of the operator of an online platform (a search and recommendation engine) in the public interest, both toward users of the platform who seek access to information and services and toward users of the platform who offer information and services to citizens. The purpose of the research is to evaluate different user interfaces and approaches to fairness and to determine their compatibility with public interest obligations, the GDPR, and applicable platform legislation. The candidate should have a master’s in law, economics, political science, sociology, or business, as well as a solid understanding of machine learning algorithms.
To apply, please send by e-mail, with the subject line “Application for Research Position”, a single dossier containing a statement of research interest, a CV, a copy of relevant certificates, and a list of two references to firstname.lastname@example.org.
The Operational AI Ethics program
These offers are part of the Operational AI Ethics initiative of Télécom Paris.
The Operational AI Ethics initiative at Télécom Paris is a multidisciplinary research program that brings together professors and researchers from different academic disciplines with the ambition of creating operational artificial intelligence tools that integrate ethical principles from the design phase onward, for the development of artificial intelligence in the service of the general interest.