Matches in SemOpenAlex for { <https://semopenalex.org/work/W3176768410> ?p ?o ?g. }
Showing items 1 to 74 of 74.
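The listing below is the result of the triple pattern shown in the header. A minimal sketch of reproducing it programmatically, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql (the endpoint URL is an assumption; adjust for your deployment, and note the graph variable ?g is dropped for brevity):

```python
# Minimal sketch: re-issue the header's triple pattern over the
# SPARQL 1.1 protocol. ENDPOINT is an assumed URL, not confirmed here.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint

QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W3176768410> ?p ?o .
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},  # standard results format
    timeout=30,
)
response.raise_for_status()

# Print each predicate/object pair, mirroring the listing below.
for binding in response.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```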
- W3176768410 endingPage "105567" @default.
- W3176768410 startingPage "105567" @default.
- W3176768410 abstract "In recent years a substantial literature has emerged concerning bias, discrimination, and fairness in artificial intelligence (AI) and machine learning. Connecting this work to existing legal non-discrimination frameworks is essential to create tools and methods that are practically useful across divergent legal regimes. While much work has been undertaken from an American legal perspective, comparatively little has mapped the effects and requirements of EU law. This Article addresses this critical gap between legal, technical, and organisational notions of algorithmic fairness. Through analysis of EU non-discrimination law and jurisprudence of the European Court of Justice (ECJ) and national courts, we identify a critical incompatibility between European notions of discrimination and existing work on algorithmic and automated fairness. A clear gap exists between statistical measures of fairness as embedded in myriad fairness toolkits and governance mechanisms and the context-sensitive, often intuitive and ambiguous discrimination metrics and evidential requirements used by the ECJ; we refer to this approach as “contextual equality.” This Article makes three contributions. First, we review the evidential requirements to bring a claim under EU non-discrimination law. Due to the disparate nature of algorithmic and human discrimination, the EU's current requirements are too contextual, reliant on intuition, and open to judicial interpretation to be automated. Many of the concepts fundamental to bringing a claim, such as the composition of the disadvantaged and advantaged group, the severity and type of harm suffered, and requirements for the relevance and admissibility of evidence, require normative or political choices to be made by the judiciary on a case-by-case basis. We show that automating fairness or non-discrimination in Europe may be impossible because the law, by design, does not provide a static or homogenous framework suited to testing for discrimination in AI systems. Second, we show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate. Humans discriminate due to negative attitudes (e.g. stereotypes, prejudice) and unintentional biases (e.g. organisational practices or internalised stereotypes) which can act as a signal to victims that discrimination has occurred. Equivalent signalling mechanisms and agency do not exist in algorithmic systems. Compared to traditional forms of discrimination, automated discrimination is more abstract and unintuitive, subtle, intangible, and difficult to detect. The increasing use of algorithms disrupts traditional legal remedies and procedures for detection, investigation, prevention, and correction of discrimination which have predominantly relied upon intuition. Consistent assessment procedures that define a common standard for statistical evidence to detect and assess prima facie automated discrimination are urgently needed to support judges, regulators, system controllers and developers, and claimants. Finally, we examine how existing work on fairness in machine learning lines up with procedures for assessing cases under EU non-discrimination law. A ‘gold standard’ for assessment of prima facie discrimination has been advanced by the European Court of Justice but not yet translated into standard assessment procedures for automated discrimination. We propose ‘conditional demographic disparity’ (CDD) as a standard baseline statistical measurement that aligns with the Court's ‘gold standard’. Establishing a standard set of statistical evidence for automated discrimination cases can help ensure consistent procedures for assessment, but not judicial interpretation, of cases involving AI and automated systems. Through this proposal for procedural regularity in the identification and assessment of automated discrimination, we clarify how to build considerations of fairness into automated systems as far as possible while still respecting and enabling the contextual approach to judicial interpretation practiced under EU non-discrimination law." @default.
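The abstract above proposes ‘conditional demographic disparity’ (CDD) as a baseline statistical measure. The sketch below illustrates one common reading of that measure, as adopted by some fairness toolkits: per-stratum demographic disparity (the protected group's share of negative outcomes minus its share of positive outcomes), averaged across strata weighted by stratum size. The column names and toy data are illustrative assumptions, not the authors' reference implementation.

```python
# Illustrative sketch of conditional demographic disparity (CDD).
# The DD formula and column names below are assumptions for illustration.
import pandas as pd

def demographic_disparity(df: pd.DataFrame) -> float:
    """DD = protected share among rejections minus protected share among acceptances."""
    rejected = df[df["outcome"] == 0]
    accepted = df[df["outcome"] == 1]
    if len(rejected) == 0 or len(accepted) == 0:
        return 0.0  # one-sided stratum: disparity undefined, treated as neutral here
    share_rejected = (rejected["protected"] == 1).mean()
    share_accepted = (accepted["protected"] == 1).mean()
    return share_rejected - share_accepted

def conditional_demographic_disparity(df: pd.DataFrame, stratum: str) -> float:
    """CDD = average of per-stratum DD values, weighted by stratum size."""
    total = len(df)
    return sum(
        len(group) / total * demographic_disparity(group)
        for _, group in df.groupby(stratum)
    )

# Toy example: binary outcomes stratified by a hypothetical "department" column.
data = pd.DataFrame({
    "protected":  [1, 1, 0, 0, 1, 0, 1, 0],
    "outcome":    [0, 1, 1, 1, 0, 0, 1, 1],
    "department": ["A", "A", "A", "A", "B", "B", "B", "B"],
})
print(conditional_demographic_disparity(data, "department"))
```

Conditioning on a stratum in this way is what distinguishes CDD from unconditional demographic disparity: an apparent disparity can shrink, vanish, or reverse once outcomes are compared within comparable groups.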
- W3176768410 created "2021-07-05" @default.
- W3176768410 creator A5008943199 @default.
- W3176768410 creator A5075172090 @default.
- W3176768410 creator A5081516308 @default.
- W3176768410 date "2021-07-01" @default.
- W3176768410 modified "2023-10-11" @default.
- W3176768410 title "Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI" @default.
- W3176768410 doi "https://doi.org/10.1016/j.clsr.2021.105567" @default.
- W3176768410 hasPublicationYear "2021" @default.
- W3176768410 type Work @default.
- W3176768410 sameAs 3176768410 @default.
- W3176768410 citedByCount "60" @default.
- W3176768410 countsByYear W31767684102021 @default.
- W3176768410 countsByYear W31767684102022 @default.
- W3176768410 countsByYear W31767684102023 @default.
- W3176768410 crossrefType "journal-article" @default.
- W3176768410 hasAuthorship W3176768410A5008943199 @default.
- W3176768410 hasAuthorship W3176768410A5075172090 @default.
- W3176768410 hasAuthorship W3176768410A5081516308 @default.
- W3176768410 hasBestOaLocation W31767684102 @default.
- W3176768410 hasConcept C144024400 @default.
- W3176768410 hasConcept C151730666 @default.
- W3176768410 hasConcept C17744445 @default.
- W3176768410 hasConcept C190253527 @default.
- W3176768410 hasConcept C199539241 @default.
- W3176768410 hasConcept C2776889015 @default.
- W3176768410 hasConcept C2778272461 @default.
- W3176768410 hasConcept C2779343474 @default.
- W3176768410 hasConcept C2780623907 @default.
- W3176768410 hasConcept C41008148 @default.
- W3176768410 hasConcept C44725695 @default.
- W3176768410 hasConcept C71043370 @default.
- W3176768410 hasConcept C86803240 @default.
- W3176768410 hasConceptScore W3176768410C144024400 @default.
- W3176768410 hasConceptScore W3176768410C151730666 @default.
- W3176768410 hasConceptScore W3176768410C17744445 @default.
- W3176768410 hasConceptScore W3176768410C190253527 @default.
- W3176768410 hasConceptScore W3176768410C199539241 @default.
- W3176768410 hasConceptScore W3176768410C2776889015 @default.
- W3176768410 hasConceptScore W3176768410C2778272461 @default.
- W3176768410 hasConceptScore W3176768410C2779343474 @default.
- W3176768410 hasConceptScore W3176768410C2780623907 @default.
- W3176768410 hasConceptScore W3176768410C41008148 @default.
- W3176768410 hasConceptScore W3176768410C44725695 @default.
- W3176768410 hasConceptScore W3176768410C71043370 @default.
- W3176768410 hasConceptScore W3176768410C86803240 @default.
- W3176768410 hasFunder F4320306304 @default.
- W3176768410 hasFunder F4320315939 @default.
- W3176768410 hasFunder F4320320004 @default.
- W3176768410 hasFunder F4320334627 @default.
- W3176768410 hasLocation W31767684101 @default.
- W3176768410 hasLocation W31767684102 @default.
- W3176768410 hasLocation W31767684103 @default.
- W3176768410 hasLocation W31767684104 @default.
- W3176768410 hasOpenAccess W3176768410 @default.
- W3176768410 hasPrimaryLocation W31767684101 @default.
- W3176768410 hasRelatedWork W1578573981 @default.
- W3176768410 hasRelatedWork W2748952813 @default.
- W3176768410 hasRelatedWork W2767556809 @default.
- W3176768410 hasRelatedWork W2899084033 @default.
- W3176768410 hasRelatedWork W3135805858 @default.
- W3176768410 hasRelatedWork W3198010137 @default.
- W3176768410 hasRelatedWork W4232516390 @default.
- W3176768410 hasRelatedWork W4235422546 @default.
- W3176768410 hasRelatedWork W4313593195 @default.
- W3176768410 hasRelatedWork W807969 @default.
- W3176768410 hasVolume "41" @default.
- W3176768410 isParatext "false" @default.
- W3176768410 isRetracted "false" @default.
- W3176768410 magId "3176768410" @default.
- W3176768410 workType "article" @default.