AI and anti-discrimination law: Remarks on Prof. Sandra Wachter’s presentation

On November 9th, 2022, as part of the Law, Society & AI interdisciplinary research seminar organized by HEC Paris, Télécom Paris, and École Polytechnique, Joshua Brand and Mélanie Gornet, researchers at Télécom Paris’ Operational AI Ethics initiative, had the pleasure of listening to Prof. Sandra Wachter (University of Oxford), who presented her forthcoming paper The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination Law. In this research, she examines the tension between AI used in decision-making processes and the rationale of Western anti-discrimination law.

Thoughts by Joshua Brand

Wachter’s first challenge is to examine why non-discrimination law exists in the first place. What and whom, precisely, do non-discrimination laws protect, and why? A key focus of her paper is the importance of immutability in determining which characteristics deserve legal protection. Immutable characteristics are features we cannot change – our sex or skin color, as opposed to our test scores. Wachter shows us, however, that algorithmic groups created by AI, such as dog owners, video gamers, or individual pixels in images, fail to fit the definition of immutability traditionally accepted by Western law for the purpose of defining “protected” groups.

Should non-discrimination laws be concerned about algorithmic decisions that classify us based on the kind of browser we use or how fast we type?
Wachter approaches algorithmic groupings from a different perspective and offers a novel account of immutability that focuses on vagueness, opacity, instability, involuntariness, and the lack of social meaning. All of these characteristics fit algorithmic groups and allow us to enlarge the scope of non-discrimination law to address classifications based on features over which we have no personal choice. Wachter, while not ultimately rejecting the use of immutability, thereby opens the way to including AI in the moral and legal discussion on anti-discrimination, on the grounds of sustaining a sense of autonomy and control over decision-making processes.

While much can be said about Prof. Wachter’s arguments, what I found most interesting was the distinction she raised between egalitarian and libertarian perspectives on the foundation of anti-discrimination law (although the paper she cites more broadly uses liberalism rather than libertarianism). The two can diverge in the political realm, though they can also be considered compatible. The distinction Wachter draws, however, is that the egalitarian treats everyone equally through a redistributive approach, whereas the libertarian, or classical liberal, prefers the protection of individual rights, such as autonomy, to deliver a just society. In other words, the question is whether the consequences of an action or the process behind it matter most. The egalitarian approach considers a society just and fair when everyone has the same resources and outcomes, whereas the liberal approach prefers respecting established rights and duties; this entails that the process, or the reasons, behind actions must be known, to ensure that the relevant rights were respected.

Wachter didn’t spend time analyzing the merits of either approach, aside from contending that anti-discrimination law has an inclination toward ensuring autonomy, freedom, and liberty to pursue life goals. What I find useful about recalling these justificatory approaches to non-discrimination, however, is that it reminds us, beyond anti-discrimination law, how we can approach defining new legal and moral courses of action. By starting with foundational theories, we can craft new obligations that are in line with how we already structure our moral and legal obligations, in light of new forms of interaction and technology.

What does this have to do with AI? If we take the liberal approach of respecting processes and rights, which Wachter seems to do with her concluding acknowledgement of the “right to reasonable inferences”, this can give rise to specific operational AI requirements, and it ties into the debate on explainable AI and the duty to give valid reasons for algorithm-assisted decisions. Refusing a loan because of the kind of internet browser you use might be an accurate explanation, but it would be neither a valid nor a justifiable reason.


Further reading: J. Brand, “Clarifying the Moral Foundation of Explainable AI”, The Digital Constitutionalist, 10 November 2022.

Thoughts by Mélanie Gornet

The distinction Wachter makes between egalitarian and libertarian approaches to non-discrimination is interesting from a computer science perspective. Indeed, data scientists often reduce non-discrimination to a principle of fairness that could supposedly be enshrined in an algorithm’s code. One issue with this technocentric approach is that there is no consensus on the definition of fairness: several formulas have been proposed to capture algorithmic fairness, each embodying different, and sometimes opposing, moral theories.
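
To make this concrete, here is a minimal sketch (my own illustration with hypothetical data, not from the presentation) of how two common fairness formulas can return opposing verdicts on the very same decisions: demographic parity compares approval rates across groups, while equal opportunity compares approval rates among the truly qualified members of each group.

```python
import numpy as np

# Hypothetical loan decisions for two groups, A and B.
# y_true: who would actually repay; y_pred: who the model approves.
y_true = np.array([1, 1, 1, 0, 1, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
groups = np.array(["A"] * 4 + ["B"] * 4)

for g in ("A", "B"):
    m = groups == g
    approval_rate = y_pred[m].mean()          # demographic parity compares this
    tpr = y_pred[m & (y_true == 1)].mean()    # equal opportunity compares this
    print(g, approval_rate, tpr)
# A 0.75 1.0
# B 0.25 1.0
```

Here equal opportunity is perfectly satisfied (every qualified applicant is approved in both groups), while demographic parity is violated (75% of group A is approved against 25% of group B): the two formulas encode different intuitions about what equal treatment means.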

Wachter defines the egalitarian approach to non-discrimination as, essentially, the right of everyone to be treated equally.

This definition immediately made me think of the statistical notion of demographic parity, where the test data representing the overall population is separated into groups of people based on characteristics that may range from protected attributes, like apparent gender or race, to more arbitrary distinguishing features of individuals. Under demographic parity, each of these groups must receive favorable outcomes from the AI system at the same rate for it to be considered a “fair” system. Wachter notes the limitations of this approach, as it could be satisfied by simply lowering the scores of the best-performing groups. Consequently, a system that treats everybody badly would still ensure equality among those affected by it, and would thus count as “fair” and “non-discriminatory”. Wachter also mentions the opposing approach to non-discrimination, libertarianism, under which what matters is not what others have, but how you are treated as an individual. This vision is much closer to an individual rights-based approach that guarantees a certain level of protection for everyone.
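
As a companion sketch (again my own, with hypothetical inputs), demographic parity can be checked by computing the gap between groups’ favorable-decision rates, and Wachter’s caveat falls straight out of the definition: a system that denies everyone has a gap of zero.

```python
import numpy as np

def demographic_parity_gap(decisions, groups):
    """Largest difference in favorable-decision rates between groups.

    decisions: binary array (1 = favorable outcome, e.g. loan granted)
    groups: group labels (a protected attribute, or an arbitrary feature
            such as browser type)
    """
    rates = {g: float(decisions[groups == g].mean()) for g in np.unique(groups)}
    return rates, max(rates.values()) - min(rates.values())

decisions = np.array([1, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["A"] * 4 + ["B"] * 4)
print(demographic_parity_gap(decisions, groups))
# ({'A': 0.75, 'B': 0.0}, 0.75) -> parity is violated

# Wachter's caveat: deny everyone and the gap drops to zero, so a system
# that treats everybody badly still satisfies demographic parity.
print(demographic_parity_gap(np.zeros_like(decisions), groups))
# ({'A': 0.0, 'B': 0.0}, 0.0) -> "fair", yet everyone is harmed
```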

This discussion of the libertarian approach to non-discrimination led me to wonder whether, like the egalitarian approach and demographic parity, it has an algorithmic equivalent. While I cannot rule out that one will someday be developed, it seems that an appropriate legal framework relying on human judgement will always be necessary to ensure that those rights have not been violated in practice. In that case, we should not only protect groups through technical measures, but also individuals through legal measures. Non-discrimination remains a fundamental human right that cannot and should not be reduced to algorithmic fairness.


Further reading: M. Gornet, W. Maxwell, “Intelligence artificielle: normes techniques et droits fondamentaux, un mélange risqué”, The Conversation, 28 September 2022 (In French).