FRA Focus

#BigData: Discrimination in data-supported decision making

HELPING TO MAKE FUNDAMENTAL RIGHTS A REALITY FOR EVERYONE IN THE EUROPEAN UNION

We live in a world of big data, where technological developments in the area of machine learning and artificial intelligence have changed the way we live. Decisions and processes concerning everyday life are increasingly automated, based on data. This affects fundamental rights in various ways. The intersection of rights and technological developments warrants closer examination, prompting the Fundamental Rights Agency (FRA) to research this theme.

This focus paper specifically deals with discrimination, a fundamental rights area particularly affected by technological developments. When algorithms are used for decision making, there is potential for discrimination against individuals. The principle of non-discrimination, as enshrined in Article 21 of the Charter of Fundamental Rights of the European Union (EU), needs to be taken into account when applying algorithms to everyday life. This paper explains how such discrimination can occur, suggesting possible solutions. The overall aim is to contribute to our understanding of the challenges encountered in this increasingly important field.

Contents
1. Big data and fundamental rights implications
2. Data-supported decision making: predictions, algorithms and machine learning
3. Computers "learning to discriminate"
4. Detecting and avoiding discrimination
5. Possible ways forward: addressing fundamental rights and big data
References

1. Big data and fundamental rights implications

In the past decades, technological advancements have changed the way we live and organise our lives. These changes are inherently connected with the proliferation and use of big data.

Big data generally refers to technological developments related to data collection, storage, analysis and applications. It is often characterised by the increased volume, velocity and variety of data being produced ("the three Vs"), and typically refers (but is not limited) to data from the internet.1 Big data comes from a variety of sources, including social media data or website metadata. The Internet of Things (IoT) contributes to big data, including behavioural location data from smartphones or fitness tracking devices. In addition, transaction data from the business world form a part of big data, such as providing information on payments and administrative data.2 The increased availability of data has led to improved technologies for analysing and using data – for example, in the area of machine learning and artificial intelligence (AI).

Arguably, big data can enhance our lives – for instance, in the health sector, where personalised diagnosis and medicine can lead to better care.3 However, the negative fundamental rights implications of big data-related technologies have only recently been acknowledged by public authorities and international organisations.4 The use of new technologies and algorithms, including machine learning and AI, affects several fundamental rights.
These include, but are not limited to: the right to a fair trial, the prohibition of discrimination, privacy, freedom of expression, and the right to an effective remedy, as outlined by the Council of Europe in its 2017 report.5 In addition, the European Parliament adopted a 2017 resolution highlighting the need for action in the area of big data and fundamental rights implications.6 In this resolution, the European Parliament uses the strongest language when referring to the threat of discrimination through the use of algorithms. This serves to underpin the particular focus of this paper.

1 Additionally, terms such as increased "variability" with respect to consistency of data over time, "veracity" with respect to accuracy and data quality, and "complexity" in terms of how to link multiple datasets can be added to the list of characteristics of big data (Callegaro, M. and Yang, Y. (2017)).
2 Callegaro, M. and Yang, Y. (2017).
3 Seitz, C. (2017), pp. 298-299.
4 European Parliament (2017a); Council of Europe (2017a); European Data Protection Supervisor (EDPS) (2016); The White House (2016).
5 Council of Europe (2017b).
6 European Parliament (2017a).

Other FRA reports have highlighted the fundamental rights challenges posed by technologies that are built on big data. This includes FRA's reporting on the oversight of surveillance by national intelligence authorities, as requested by the European Parliament. In addition, FRA has published reports on the use of biometric and related data in the EU's large-scale IT databases.
Starting with the present paper, FRA explores the implications of big data and AI regarding fundamental rights, focusing in the first instance on discrimination as a key area of EU legal competence, and also an area where FRA has undertaken extensive research to date.

European Parliament resolutions

On the use of big data for commercial purposes and in the public sector, the European Parliament "calls on the European Commission, the Member States and the data protection authorities to identify and take any possible measures to minimise algorithmic discrimination and bias and to develop a strong and common ethical framework for the transparent processing of personal data and automated decision-making that may guide data usage and the ongoing enforcement of Union law".*

When it comes to the use of big data for law enforcement purposes, the Parliament "warns that […] maximum caution is required in order to prevent unlawful discrimination and the targeting of certain individuals or groups of people defined by reference to race, colour, ethnic or social origin, genetic features, language, gender expression or identity, sexual orientation, residence status, health or membership of a national minority which is often the subject of ethnic profiling or more intense law enforcement policing, as well as individuals who happen to be defined by particular characteristics".*

Moreover, the European Parliament has highlighted the need for ethical principles concerning the development of robotics and artificial intelligence for civil use. It points out that a guiding ethical framework should be "based on […] the principles and values enshrined in Article 2 of the Treaty on European Union and in the Charter of Fundamental Rights, such as human dignity, equality, justice and equity, non-discrimination, informed consent, private and family life and data protection", among other principles.**

* European Parliament (2017a).
** European Parliament (2017b).

Focus on discrimination

Direct or indirect discrimination through the use of algorithms drawing on big data is increasingly considered one of the most pressing challenges posed by new technologies. This paper addresses selected fundamental rights implications related to big data, focusing on the threat of discrimination when using big data to support decision making.

Previously, decisions and processes were undertaken with little support by computers. Nowadays, sophisticated techniques of (statistical) data analysis are increasingly used to facilitate these tasks. However, this can lead to discrimination. Examples include the automated selection of candidates for job interviews based on predicted productivity, the use of risk scores in assessing the creditworthiness of individuals applying for loans, and the use of risk scores in sentencing decisions in the course of a trial.

The principle of non-discrimination is embedded in EU law. Article 21 of the EU Charter of Fundamental Rights prohibits discrimination based on several grounds, including: sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age, sexual orientation, and nationality. Information about or related to these attributes, once processed as data, is connected to the individual. By definition, this makes them personal data that are protected under the data protection legal framework.
This is particularly important due to the growing availability and processing of large amounts of data that also include information (potentially) indicating one or more of these characteristics relating to individuals.

At EU level, the General Data Protection Regulation (GDPR)7 addresses some of these new technological developments, including the potential for discrimination. The fundamental rights implications of using algorithms in decision making include considerations addressed in the GDPR, but also go beyond it.

The issues raised here highlight the potentially problematic nature of the use of data for decision making. However, the use of data to inform decisions is also considered a positive development, as it potentially allows for more objective and informed decisions than those that do not take available data into account. It also has the potential to limit discriminatory treatment based on human decision making derived from existing prejudices. While the limits of data and data analysis need to be taken into account, decisions supported by data are potentially better than those without any empirical support. Algorithms can, in turn, be used to identify systematic bias and potentially discriminatory processes. Therefore, big data also presents opportunities for assessing fundamental rights compliance.

2. Data-supported decision making: predictions, algorithms and machine learning

With the increased availability and use of data, decisions are increasingly being facilitated – or sometimes even completely taken over – by so-called predictive modelling methods, often referred to as the use of algorithms. Using data to predict incidents or behaviour is a major part of developments related to machine learning and AI. A classic example of using algorithms based on data analysis, a tool most people experience every day, is the spam filter.
The algorithm has 'learned' – with some level of certainty – to identify whether an email is spam and to block it.

A basic understanding of how algorithms support decisions is essential to allow experts and practitioners from other fields to enter the discussion and to increase awareness and technical literacy. Furthermore, it is important to be able to identify and ask the right questions about potential problems that arise when using algorithms, particularly when it comes to discrimination.

7 General Data Protection Regulation.

Creating algorithms to make predictions may involve different methods, all of which use so-called 'training data' to find out which calculations predict a certain outcome most accurately. For example, a set of several thousand emails, identified as either 'spam' or 'not spam', is used to identify characteristics that define differences between the two

groups of emails. In this example, characteristics may include specific words and combinations of words within emails. Through this process, rules for identifying spam are established. Many different calculations (i.e. algorithms) can be used, and the best performing one is selected, i.e. the calculation that categorises most cases correctly.

It is critically important to note that the output of algorithms is always based on probability, which means that there is uncertainty attached to the classifications made. As we can see in our daily lives, the methods can work quite well, but they are not infallible. Sometimes spam passes through to our email inbox – so-called 'false negatives' (i.e. emails erroneously not identified as spam). Less frequently, a fully legitimate email might be suppressed by the filter – a so-called 'false positive'. The rate of true positives, the rate of true negatives and the trade-off between these two rates are commonly used to assess a classification problem, such as detecting spam. Several rates and indicators are available for assessing how well an algorithm works.

3. Computers "learning to discriminate"

Using data and algorithms for prediction can considerably facilitate decisions, as it allows for the revelation of patterns that cannot otherwise be identified. However, an algorithm can contribute to discriminatory decision making.

As specified under EU legislation, discrimination is illegal with respect to several conditions related to employment, access to social services and education.
It is also forbidden when it is 'indirect': "where an apparently neutral provision, criterion or practice would put persons of a racial or ethnic origin at a particular disadvantage compared with other persons, unless that provision, criterion or practice is objectively justified by a legitimate aim and the means of achieving that aim are appropriate".8 Similarly, Article 11 (3) of the Police Directive (Directive (EU) 2016/680) explicitly prohibits any 'profiling' that results in discrimination on the basis of special categories of personal data, such as race, ethnic origin, sexual orientation,

8 Council Directive 2000/43/EC of 19 July 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin, OJ L 180/22. See also FRA (2010); FRA (2011).

What is an algorithm?

The term 'algorithm' is widely used in the context of big data, machine learning and AI. An algorithm is a sequence of commands for a computer to transform an input into an output. For example, a list of persons is to be sorted according to their age: the computer takes the ages of the people on the list (input) and produces the new ranking of the list (output).

In the area of machine learning, several algorithms are used, which could also be referred to as ways of calculating desired predictions with the use of data. Many of these algorithms are statistical methods, and most of them are based on so-called 'regression methods'. These are the most widely used statistical techniques for calculating the influence of a set of data on a selected outcome. For example, consider calculating the average influence of drinking alcohol on life expectancy. Using existing data, the average amount of alcohol a person drinks is compared to their life expectancy. Based on these calculations, life expectancy can be calculated and predicted for other persons simply by taking into consideration the amount of alcohol a person drinks, assuming a correlation exists.
The algorithm used depends on the way the data are presented (e.g. whether they are numerical or textual data) and the goal of the calculation (e.g. prediction, explanation, grouping of cases). In machine learning, often several algorithms are tested to see which one performs best in predicting the outcome.

The creation of algorithms for prediction is a complex process that involves many decisions made by several people who are variously involved in the process. It therefore refers not only to rules followed by a computer, but also to the process of collecting, preparing and analysing data. This is a human process that includes several stages, involving decisions by developers and managers. The statistical method is only part of the process for developing the final rules used for prediction, classification or decisions.

According to the Racial Equality Directive (2000/43/EC), discrimination occurs "where one person is treated less favourably than another is, has been or would be treated in a comparable situation on grounds of racial or ethnic origin".

political opinion, or religious beliefs.9 Whereas the term 'discrimination' was absent from the previous Data Protection Directive (95/46/EC), the need to prevent discrimination as a result of automated decision making is emphasised in the new GDPR. Automated decision making, including profiling, with significant effects on the data subject is forbidden, according to Article 22 of the GDPR, subject to specific exceptions.

"In order to ensure fair and transparent processing in respect of the data subject, (…) the controller should use appropriate mathematical or statistical procedures for the profiling, implement technical and organisational measures appropriate to ensure, in particular, that factors which result in inaccuracies in personal data are corrected and the risk of errors is minimised, secure personal data in a manner that takes account of the potential risks involved for the interests and rights of the data subject and that prevents, inter alia, discriminatory effects on natural persons (…)."
General Data Protection Regulation, Recital 71

Academics and practitioners are increasingly researching ways to detect and repair algorithms that can potentially discriminate against individuals or certain groups on the basis of particular attributes – for example, sex or ethnic origin.10 This happens when the predicted outcome for a particular group is systematically different from that of other groups, so that one group is consistently treated differently to others – for example, where a member of an ethnic minority has a lower chance of being invited to a job interview because the algorithm was 'trained' on data in which their particular group performs worse, i.e. has worse outcomes than other groups. As a result, they may not be invited to a job interview. This can occur when the data used to train the algorithm include information regarding protected characteristics (e.g. gender, ethnicity, religion).
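A minimal sketch can make this mechanism concrete. The records and the 'model' below are entirely invented for illustration: a rule estimated from historical data in which one group fared worse simply reproduces that disparity for future applicants, even though the estimation step itself is neutral.

```python
from collections import defaultdict

# Hypothetical historical hiring records (group, invited_to_interview).
# Group B was invited less often in the past - the data carry that bias.
history = ([("A", True)] * 70 + [("A", False)] * 30
           + [("B", True)] * 40 + [("B", False)] * 60)

# A deliberately naive 'model': estimate each group's past invitation rate
# and use it as the predicted chance for new applicants from that group.
counts = defaultdict(lambda: [0, 0])  # group -> [invited, total]
for group, invited in history:
    counts[group][0] += int(invited)
    counts[group][1] += 1

predicted_rate = {g: inv / tot for g, (inv, tot) in counts.items()}
print(predicted_rate)  # the historical disparity is reproduced as-is
```

Nothing in the calculation refers to group membership as such; the unequal treatment enters entirely through the training data.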
Furthermore, so-called 'proxy information' is sometimes included in the data. This may include the height of a person, which correlates with gender, or a postcode, which can indirectly indicate ethnic origin in cases of segregated areas in cities, or, more directly, a person's country of birth. Unequal outcomes and differential treatment, especially relating to proxy information, need to be assessed to see if they amount to discrimination.

9 Directive (EU) 2016/680 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA.
10 There is increasing literature on discrimination by algorithms. See, for example, Žliobaitė, I. and Custers, B. (2016); Kamiran, F., Žliobaitė, I. and Calders, T. (2013); Sandvig, C. et al. (2014).

Moreover, discrimination might not only be based on differences in the outcomes for groups; the choice of data to be used might also not be neutral. If the data used for building an algorithm are biased against a group (i.e. contain systematic differences due to the way the data are collected), the algorithm will replicate the human bias in selecting them and learn to discriminate against this group. This is particularly dangerous if it is assumed that the machine-learning procedure and its results are objective, without taking into account the potential problems in the underlying data. Data can be biased for several reasons, including the subjective choices made when selecting, collecting and preparing data. To give a real-life example: an automated description of images was trained based on thousands of images that were described by humans. However, humans do not neutrally describe images.
Namely, a baby with white skin colour was described as a 'baby', but a baby with black skin colour was described as a 'black baby'. These are biased data, because an additional attribute was assigned only to a certain group, while objectively either both cases should be described including the colour or neither of them. If such biased information is included in training data used for the development of algorithms, it will be carried into the resulting predictions, which are therefore not neutral.11

In addition to the problematic use of data in reinforcing bias against groups, low quality data as such can lead to poor predictions and discrimination. Data can be poorly selected, incomplete, incorrect or outdated.12 Poorly selected data might include 'unrepresentative data', which do not allow generalising to other groups. For example, if an algorithm was created using data on a certain group of job applicants, the predictions for another group might not hold true.13 The quality of data and potential bias are particularly important in the age of big data, as data are generated, often quickly, over the internet without any quality control. In statistics, this is often referred to as 'garbage in, garbage out': even the most well-developed methods for prediction cannot function effectively using low quality data.14 In this sense, data quality checks and the appropriate documentation of data and metadata are essential for high quality data analysis and the use of algorithms for decision making.

11 Miltenburg, E. (2016).
12 The White House (2016).
13 For example, it was found that speech recognition did not work as well for women as for men. This difference might come from the algorithms being trained using datasets which include more men than women. See an article on this issue.
14 For example, in a number of the EU's large-scale IT databases that are used in the field of border management and asylum, incorrect or poor quality data has been highlighted as an area that needs to be addressed; see FRA's report on data interoperability, FRA (2017c), and FRA's report on biometrics, EU IT systems and fundamental rights, FRA (2018).
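Echoing the postcode example earlier in this section, one simple check for proxy information is to test how well a seemingly neutral attribute predicts a protected one. The sketch below uses made-up data (the postcodes and counts are invented): a majority-class rule per postcode already recovers minority status with high accuracy, signalling a potential proxy.

```python
from collections import Counter

# Hypothetical records of (postcode, ethnic_minority) pairs. In a
# segregated city, postcode alone predicts minority status quite well.
records = ([("1010", True)] * 45 + [("1010", False)] * 5
           + [("2020", True)] * 10 + [("2020", False)] * 40)

# Majority-class rule per postcode: predict the most common label
# observed for that postcode, then measure how often the rule is right.
by_code = {}
for code, minority in records:
    by_code.setdefault(code, Counter())[minority] += 1

rule = {code: c.most_common(1)[0][0] for code, c in by_code.items()}
accuracy = sum(rule[code] == minority for code, minority in records) / len(records)
print(rule, accuracy)  # high accuracy means postcode acts as a proxy
```

If such a rule performs far better than chance, excluding the protected attribute itself from the dataset would not prevent indirect discrimination.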

4. Detecting and avoiding discrimination

Auditing algorithms

Detecting discrimination in algorithms is not an easy task – just as detecting forms of discrimination in general can be difficult. Algorithms used in modern applications of machine learning and artificial intelligence are increasingly complex. As a consequence, results are difficult or almost impossible to interpret in terms of which information in the data influences predictions, and in which way. This is related to the fact that huge amounts of data can be used to predict certain outcomes. For example, frequently used algorithms are based on so-called 'neural networks', which work with hidden layers of relationships and combinations of all the different characteristics in the data. This makes it difficult to assess whether or not a person is being discriminated against on grounds of their gender, ethnic origin, religious belief or other grounds. However, if a predictive algorithm is fed with information on different groups and it finds a difference according to this information, it may potentially provide an output that discriminates.

Despite this complexity, algorithms need to be 'audited' to show that they are lawful – in other words, that they do not process data in a way that leads to discrimination. In this context, the term 'auditing' comes from 'discrimination testing' (or 'situation testing') in real-life situations. For example, in an experiment, two identical, fictional job applications are sent to employers and only the group membership of interest (e.g. ethnic origin) varies. Often simply the name of the applicant is changed, using names typically indicating ethnic origin.15 In this way, an experimental situation is created and differences in call-back rates for a job interview can be directly interpreted as discrimination.

Similar approaches can also be used for detecting discrimination in the use of algorithms.
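In code, the core of such a situation test is nothing more than a comparison of outcome rates between two sets of otherwise identical applications. A sketch with fabricated outcomes (1 = invited to interview):

```python
# Paired 'situation testing' in miniature: identical fictional applications,
# differing only in a name signalling group membership. The outcomes below
# are invented for illustration.
callbacks_group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
callbacks_group_b = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]

rate_a = sum(callbacks_group_a) / len(callbacks_group_a)
rate_b = sum(callbacks_group_b) / len(callbacks_group_b)
print(f"call-back rates: {rate_a:.0%} vs {rate_b:.0%}")
# Because the applications are otherwise identical, the gap can be read
# directly as differential treatment.
```

In a real audit, of course, the sample would need to be large enough for the gap to be statistically meaningful.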
For example, recent advances in research on algorithmic discrimination have suggested different ways to detect discrimination on internet platforms. Algorithms can be audited by obtaining full access to the computer software and code used for the algorithms, which could then be evaluated by a technical expert. However, this might not always be straightforward, because the discrimination is not directly encoded in the syntax of the computer software. Other methods include creating randomised 'tester' profiles on platforms and submitting them repeatedly to see if the outcome differs according to characteristics that could influence discrimination. However, if such tests are undertaken on real data platforms, this could lead to legal problems if the service in question is harmed – for example, through overloading the server with requests. Hence, such tests have to be designed to avoid such problems.16 One example of such a test was used to detect how well gender classification algorithms work for different groups, using preselected images. The results of the study showed that gender recognition works considerably less well for darker-skinned females, compared to white males.17

15 See, for example, Bertrand, M. and Mullainathan, S. (2003).

Additionally, there are methods to extract information about which data contribute most to the outcome of the algorithm. In this way, it can be checked whether information on protected grounds, e.g. ethnic origin, is important for the predictions.18 Information on certain characteristics needs to be extracted from the algorithm to understand differences in the outcomes and results of the algorithm. If, for example, a difference in income explains why a person is not offered a loan, this might be reasonable.19 However, if group membership makes a difference for a decision, e.g.
on a loan, it might be discrimination.

The easiest way to detect discrimination is when full transparency is granted, in the sense that the code and the data used for building the algorithm are accessible to an auditor. However, even in this case it is not always straightforward. Some results might appear discriminatory, but a closer look shows that they are not.20 For example, a group may appear to be treated differently at an aggregate level, but breaking the results down into explainable differences shows that no discrimination exists.21

To avoid violating fundamental rights, it is crucial that the automated tools used for making decisions on people's lives are transparent. However, the data

16 Sandvig, C. et al. (2014).
17 Buolamwini and Gebru (2018).
18 See, for example, a blog on the issue.
19 Wachter, S., Mittelstadt, B. and Russell, C. (2017).
20 Kamiran, F., Žliobaitė, I. and Calders, T. (2013).
21 This is related to one of the most well-known cases concerning admissions to studies at the University of California, Berkeley, where it was shown that, overall, women were much less likely to be admitted than men. However, when splitting up the admission rates by type of studies, it was shown that, for each of the different types of studies, women actually had a slightly higher chance of being admitted. This completely counter-intuitive result can occur under certain circumstances, related to the different numbers of applications per type of studies and differing admission rates. Bickel, P. J. et al. (1975). One explanation is that women tended to apply for more competitive studies compared to men.
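The Berkeley pattern described in footnote 21 – an instance of what statisticians call Simpson's paradox – can be reproduced with invented numbers: each department admits women at a higher rate, yet the aggregate rate for women is lower, because women apply disproportionately to the more competitive department.

```python
# Invented admissions counts: (admitted, applicants) per department.
men   = {"easy": (80, 100), "hard": (20, 100)}
women = {"easy": (17, 20),  "hard": (25, 100)}

def rate(admitted, applicants):
    return admitted / applicants

# Within each department, women are admitted at a higher rate...
for dept in men:
    assert rate(*women[dept]) > rate(*men[dept])

# ...yet the aggregate rate for women is lower, because women apply
# mostly to the competitive department with a low admission rate.
overall_men = sum(a for a, _ in men.values()) / sum(t for _, t in men.values())
overall_women = sum(a for a, _ in women.values()) / sum(t for _, t in women.values())
print(overall_men, overall_women)
```

This is why an auditor should not read an aggregate disparity as discrimination before disaggregating the results along explainable dimensions.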

actually constitutes discrimination. In the United States, there was a case on hiring practices which were ruled unlawful even though the decision was not explicitly determined based on race: the barriers to accessing a job, in the form of a test that was not directly related to the job requirements, disproportionately put black applicants at a disadvantage.27 In this case, a certain level of differential treatment was reached that was deemed to constitute discrimination. At the same time, the United States Supreme Court acknowledged that there cannot be a general rule on the level at which differential treatment is seen as discriminatory. Instead, each situation and application needs to be assessed separately.28 A detailed analysis of the way a group is discriminated against should be conducted, taking proportionality into account with respect to the impact that certain criteria have on different groups. For example, a specific job requirement – such as having completed training in a particular country – may not be a genuine occupational requirement for carrying out the job. If such a requirement puts certain groups at a disadvantage (e.g. immigrants), an algorithm that sorts job applications should not use such information for determining eligibility for a job.

Data protection impact assessment (DPIA)

Algorithms may be so complex that the characteristics that will influence the outcome might not be easily identifiable. Article 35 of the GDPR has made data protection impact assessments (DPIAs) mandatory for all "processing, in particular using new technologies, [which] is likely to result in a high risk to the rights and freedoms of natural persons". DPIAs are complex procedures that may require a combination of technical, legal and sociological knowledge.
Institutional* and academic** actors have developed standards and guidance to help data scientists in the development and implementation of these, and standards are being developed by industry associations.*** In this context, guidance on the minimum requirements to be included in any DPIA, established at regional level, would likely enhance the effectiveness of DPIAs. Using DPIAs could be a way for controllers to increase their accountability and to ensure from the outset that their applications are not discriminatory.

* See, for instance, the Template for Smart Grid and Smart Metering Systems, developed by the European Commission, or the Guidance on Privacy by design and DPIA, developed by the Information Commissioner's Office.
** See, for instance, the detailed roadmap developed by S. Spiekermann (2016).
*** As referred to in the Council of Europe study on human rights dimensions of automated data processing techniques (Council of Europe, 2017b).

27 U.S. Supreme Court, Griggs v. Duke Power Co., 401 U.S. 424 (1971).
28 Feldman, M. et al. (2015).

Avoiding discrimination from the outset and repairing algorithms

Further research and discussion are needed to develop methods to ensure that algorithms do not discriminate, and to rectify algorithms that are found to be discriminatory.

There are several ways to assess whether or not a group is treated differently by an algorithm.29 In the presence of different outcomes for different groups (e.g. one group is more likely to be eligible for a job), there are inherent trade-offs in creating predictions that are fair on all counts. It can be mathematically proven that it is not possible to have the same overall risk scores for different groups and a balanced prediction model, unless the groups do not perform differently on a certain outcome. For example, women may fare worse on a scale used for hiring people (e.g. using income-related information as a proxy for past performance at work).
As algorithms should not discriminate against a particular group, there needs to be an assessment of the implications of using different performance measurements (or their proxies). However, precisely balanced predictions for both genders would not be possible. This limits the ability to make fair predictions for groups who, in reality, fare differently on a certain outcome.30

In addition, situations of potential bias or discrimination cannot be easily solved by simply excluding information on the protected group from the dataset (e.g. simply excluding information on gender or ethnic origin). Other information in the data may be related to membership of a protected group. For example, a given postcode could indicate ethnic origin (as mentioned above). There are ways to detect and address indirect information on protected attributes (which can result in indirect discrimination) that can be used to repair algorithms – for example, testing whether protected characteristics of individuals can be predicted well from other information included in the dataset.31 However, research on the topic has only just begun and needs to progress further to develop standardised methods for detecting and

29 For example, the overall accuracy (e.g. one group is more likely to be hired, when controlling for other explanatory factors), the true positive rate (e.g. the algorithm more often selects people from one group among those who should be selected) and the true negative rate (e.g. the algorithm more often rejects people from one group among those who should be rejected) are different ways to assess fairness. For a discussion of measuring bias and different fairness criteria, see Chouldechova (2017).
30 Kleinberg, J. et al. (2017a). For a more accessible description, see this blog focusing on quantitative issues.
31 Adler, P. et al. (2016).
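The group-wise fairness indicators listed in footnote 29 can be computed directly from an algorithm's predictions. A minimal sketch with invented evaluation records – the groups, labels and counts are all hypothetical:

```python
def true_positive_rate(records, group):
    """Share of truly qualified members of `group` that the algorithm selects."""
    selected_flags = [sel for g, qualified, sel in records if g == group and qualified]
    return sum(selected_flags) / len(selected_flags)

# Invented evaluation records: (group, truly_qualified, selected_by_algorithm).
records = (
    [("A", True, True)] * 45 + [("A", True, False)] * 5
    + [("A", False, False)] * 50
    + [("B", True, True)] * 30 + [("B", True, False)] * 20
    + [("B", False, False)] * 50
)

tpr_a = true_positive_rate(records, "A")
tpr_b = true_positive_rate(records, "B")
print(tpr_a, tpr_b)  # equal error rates would require these to match
```

Here qualified members of group B are selected markedly less often than equally qualified members of group A – exactly the kind of disparity that the assessment described above is meant to surface.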
