Risks of Discrimination through the Use of Algorithms

2019

- Fact sheet on the research project -

Reasons for the risks of discrimination through the use of algorithms

In the development of algorithms and models

  • Risks arise from the labelling of categories used to classify individuals, especially where those labels rely on subjective assumptions and interpretations (example: the characteristic “suitability for a company”; see the sketch below).
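
A minimal, hypothetical sketch of this point (names, scores and the cut-off are invented): the label “suitable” is not an observed fact but is derived from past supervisors' ratings and a subjectively chosen threshold, so any model trained on such labels inherits those assumptions and interpretations.

```python
# Hypothetical sketch -- all names and numbers are invented for illustration.
# The target category "suitable" is not observed; it is constructed from past
# supervisors' ratings plus a subjectively chosen cut-off.
past_ratings = [
    {"applicant": "A", "supervisor_score": 4.5},
    {"applicant": "B", "supervisor_score": 2.0},
]

# The cut-off itself encodes an interpretation of "suitability for a company".
SUITABILITY_CUTOFF = 3.0

labels = {
    r["applicant"]: r["supervisor_score"] >= SUITABILITY_CUTOFF
    for r in past_ratings
}
print(labels)  # {'A': True, 'B': False} -- subjective judgements become "ground truth"
```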

Through biased (training) data sets

  • If data sets are incomplete, out of date or stem from situations in which there was or is unequal treatment of or discrimination against individuals, certain groups can end up over- or under-represented.
  • When legally protected characteristics are replaced with seemingly “neutral” variables, risks of indirect discrimination can arise if those variables correlate with the protected characteristics (example: a correlation between place of residence and ethnicity; see the sketch after this list).
  • Data mining and machine learning methods use more variables than “traditional” statistical methods, which increases the risk of (undetected) correlations of this kind.
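
A deliberately simplified sketch of the proxy-variable risk (all data fabricated): a decision rule that never sees ethnicity, only a correlated postcode, still produces unequal approval rates across ethnic groups.

```python
from collections import defaultdict

# Fabricated toy data: the postcode is not a protected characteristic, but in
# this sample it correlates perfectly with ethnic group membership.
applicants = [
    ("10001", "group_a"), ("10001", "group_a"),
    ("10002", "group_b"), ("10002", "group_b"),
]

def approve(postcode: str) -> bool:
    # Seemingly "neutral" rule: it never sees ethnicity, only the postcode.
    return postcode != "10002"

# Approval rate per ethnic group, although ethnicity was never an input.
decisions = defaultdict(list)
for postcode, group in applicants:
    decisions[group].append(approve(postcode))
for group, results in decisions.items():
    print(group, sum(results) / len(results))  # group_a 1.0, group_b 0.0
```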

On online platforms

  • Here, algorithms help users to rate and select one another, which can restrict certain users' access to particular interactions and transactions.
  • Where algorithms build on the evaluations and rankings of other users, social inequalities can multiply (see the simulation sketch after this list).
  • Pricing and market mechanisms (e.g. auction mechanisms) used in advertising placement and customer selection can also give rise to risks of discrimination.
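
A toy simulation of how rating-based ranking can multiply a small initial inequality (the dynamics and parameters are assumptions, not findings of the project): two equally good providers start with a small gap in rating counts, the higher-rated one is listed first and therefore chosen more often, and the gap compounds.

```python
import random

# Assumed dynamics, not empirical findings: two equally good providers; the
# platform lists the provider with more ratings first, and the first-listed
# provider receives 70% of new ratings.
random.seed(0)
ratings = {"provider_x": 10, "provider_y": 8}  # small initial gap

for _ in range(1000):
    top, bottom = sorted(ratings, key=ratings.get, reverse=True)
    chosen = top if random.random() < 0.7 else bottom
    ratings[chosen] += 1

print(ratings)  # the initial two-rating gap has grown many times over
```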

The use of computer-based systems can conceal cases of deliberate discrimination.

Social risks of algorithm-based differentiation

  • Statistical discrimination does not categorise individuals based on their actual characteristics; instead, categorisation results from the processing of group data. The outcomes of decision-making processes are thus determined by stereotypes generated through data processing, and unfair generalisation can occur, especially in “atypical” cases (see the sketch after this list).
  • When individuals are categorised and assigned to a certain group by algorithms, those affected have no opportunity to agree or disagree with the treatment they are subjected to. This also poses a threat to the freedom of personal development, the right to self-expression and the protection of human dignity.
  • The risks connected with economically rational differentiation can add up to cumulative disadvantages, because algorithmic assessments, and the actions based on them, may concentrate on individuals or groups who are already disadvantaged and therefore appear more frequently in the data.
  • Equality and socio-political objectives may be weakened if societal deliberation about differentiation (made possible with the help of algorithms and seemingly economically viable) proceeds one-sidedly in favour of efficiency and at the expense of equality.
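
A minimal sketch of statistical discrimination (group names and default rates invented): the decision rule consults only the average of the applicant's group, never the applicant's own record, so an “atypical” member of the worse-scoring group is rejected despite a spotless history.

```python
# Invented group names and rates, for illustration only.
group_default_rate = {"group_a": 0.05, "group_b": 0.30}

def decide_by_group(group: str) -> str:
    # Statistical discrimination: the individual's own record is never consulted.
    return "approve" if group_default_rate[group] < 0.10 else "reject"

atypical_applicant = {"group": "group_b", "personal_defaults": 0}
print(decide_by_group(atypical_applicant["group"]))  # "reject", despite 0 defaults
```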