How Privacy Rules Can Prevent Discrimination

Wednesday, December 18, 2019

In a job interview, there are several things that potential employers are prohibited from asking job applicants. Think of the rules that preclude employers from asking about an applicant’s age, disabilities, marital status, or intention to have a child. The logic behind these rules is that decision-makers will be unable to discriminate if they lack the sensitive information used to discriminate.

Privacy rules can protect against discrimination by restricting access to information that antidiscrimination law deems harmful in a decision-making process. The traditional approach to discrimination is reactive: it acts after a discriminatory act has caused harm. But, sometimes, laws and regulations take a preemptive approach: they prevent the decision-maker from acquiring information about an individual’s protected class in the first place. Without such information, the decision-maker cannot act in a way that antidiscrimination law would deem unlawful.

Discrimination is better prevented than compensated. While litigation is key to redressing past discrimination and deterring future discrimination, preemptive measures can aid in eradicating discrimination more directly. Regulating the acquisition of such information can therefore yield enormous benefits. These rules operate in several areas of the law, such as those that govern landlord-tenant relationships, healthcare, school admissions, loans, and employment. If these rules were properly understood, they could operate in many more areas.

Current laws and regulations do this, however, without a theory or framework to determine when blocking information will be effective. Until now, we have lacked a way to determine when more information will help antidiscrimination efforts and when it will harm them. Shooting in the dark in this way has led to mixed results, sometimes preventing discrimination and sometimes backfiring in unforeseen ways.

Two canonical case studies illustrate the tension between blocking information (privacy) and discrimination. The first is a well-known study of orchestra auditions in which a curtain was placed between musicians and directors to reduce gender discrimination. Blocking personal information protected against discrimination. The second is the attempt to protect job applicants with criminal records from employment discrimination by banning the “box” on job application forms that asks whether the candidate has a criminal record. Here, the rule led to increased racial discrimination: deprived of individual information about criminal records, employers fell back on race as a proxy, producing the opposite of the intended effect.

Antidiscriminatory Privacy offers a new framework for determining when information should flow to decision-makers and when it should be blocked in order to prevent discrimination. The success of these measures depends on the relationship between the precluded personal information, such as race, and the proxies for that information, such as people’s names and postal codes. The types of proxies that exist for the blocked information determine whether blocking it will be effective.

The first task in preventing discrimination through privacy rules is identifying and blocking the data points that serve as proxies for protected categories. If gender is not declared, but decision-makers can infer it from people’s names, blocking information on gender will be ineffective.

However, not all proxies are equal. Some proxies narrow discrimination to a subgroup, some expand it to a wider group, and some shift it to an overlapping group (as with ban the box). For antidiscriminatory privacy rules to be effective, they must (i) identify proxies that narrow or expand the group of people discriminated against, in order to gauge how useful blocking those proxies would be for protected categories, and (ii) block information that can serve as a proxy that shifts discrimination onto other protected categories. The sketch below illustrates the first of these steps in an algorithmic setting.
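To make the proxy idea concrete in the algorithmic context discussed below, here is a minimal, hypothetical sketch in Python. The dataset, the column names, and the use of normalized mutual information as a “proxy score” are illustrative assumptions for this post, not the paper’s method.

# A minimal sketch (not from the paper) of auditing a dataset for proxy
# features before blocking a protected attribute. All data are hypothetical.

import pandas as pd
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical applicant data: "gender" is the protected attribute we
# intend to block; the remaining columns stay visible to decision-makers.
applicants = pd.DataFrame({
    "gender":      ["F", "M", "F", "M", "F", "M", "F", "M"],
    "first_name":  ["Ana", "Bob", "Carla", "Dan", "Eva", "Frank", "Gina", "Hank"],
    "postal_code": ["H2X", "H2X", "H3A", "H3A", "H2X", "H3A", "H2X", "H3A"],
    "years_exp":   [3, 5, 2, 7, 4, 6, 3, 5],
})

protected = "gender"

# Score each remaining feature by how much it reveals about the protected
# attribute (0 = nothing, 1 = fully determines it). A high score flags the
# feature as a candidate proxy that may also need to be blocked.
for feature in applicants.columns.drop(protected):
    score = normalized_mutual_info_score(
        applicants[protected], applicants[feature].astype(str)
    )
    print(f"{feature:12s} proxy score = {score:.2f}")

In this toy data, first names fully determine gender (a score of 1.0), so blocking the gender column alone would be ineffective, which is exactly the situation described above.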

This framework offers theoretical and policy benefits over the current understanding of antidiscriminatory privacy rules. From a theoretical standpoint, it sheds light on the relationship between information privacy and discrimination by showing when and how privacy rules protect wider social values, and it helps predict the effectiveness of antidiscriminatory privacy rules. The applications are wide-ranging: from traditional prohibitions against asking candidates about their intention to take maternity leave, to the treatment of genetic data in provincial statutes such as Quebec’s Act Respecting the Protection of Personal Information in the Private Sector and Manitoba’s Personal Health Information Act, to facially neutral measures such as Quebec’s Bill 21 and the US travel ban in Trump v. Hawaii, to algorithmic discrimination.

From a policy standpoint, it creates a precautionary approach to antidiscrimination. By operating ex ante, it avoids the social harms created by discriminatory conduct, a problem that ex post compensation only partially solves. Moreover, it can address unconscious biases better than ex post antidiscrimination efforts can. It therefore extends wider protection to minority groups, particularly those not protected by statute.

Doctrinally, it allows for disparate-impact-like protection against facially neutral decisions while remaining within the more conservative and evidentially simpler logic of direct discrimination. This last element is crucial when the political climate produces an aversion to extending indirect discrimination measures. The preemptive method is thus useful where indirect discrimination is recognized but hard to establish: for example, when intent is required to establish discrimination but is difficult to prove, or when disparate impact is accepted as an evidentiary standard but is difficult to demonstrate.

In sum, the framework introduced here helps determine the effectiveness of antidiscrimination measures based on information restrictions, and it explains how to design privacy rules that prevent discrimination. There are ways to fight discrimination besides litigation, which operates ex post: affirmative action in education, for example, or quotas for board membership. By exploring the information flows common to privacy norms and antidiscrimination norms, Antidiscriminatory Privacy develops another way to fight discrimination: through privacy rules.


Professor Cofone was a guest speaker at CIAJ’s 2019 Annual Conference on “The Impact of Artificial Intelligence and Social Media on Legal Institutions”. He is one of the 2019 recipients of the Future of Privacy Forum’s annual “Privacy Papers for Policymakers” award for Antidiscriminatory Privacy. Read his paper

About the author

Ignacio N. Cofone

Ignacio N. Cofone is an Assistant Professor at McGill University's Faculty of Law, where he teaches Privacy Law, Business Associations, and Artificial Intelligence Law.