Matches in SemOpenAlex for { <https://semopenalex.org/work/W3115613105> ?p ?o ?g. }
Showing items 1 to 67 of 67, with 100 items per page.
- W3115613105 abstract "AI has been a catalyst for automation and efficiency in numerous ways, but it has also had harmful consequences, including: unforeseen algorithmic bias that affects already marginalized communities, as with Amazon’s AI recruiting algorithm that showed bias against women; questions of accountability and liability when an autonomous vehicle injures or kills, as seen with Uber’s self-driving car casualties; and even challenges to the notion of democracy itself, as the technology enables authoritarian and democratic states alike, such as China and the United States, to practice surveillance at an unprecedented scale. These risks, and the need for some form of basic rules, have not gone unnoticed: governments, tech companies, research consortia, and advocacy groups have all broached the issue. In fact, this has been the topic of local, national, and supranational discussion for some years now, as can be seen with new legislation banning facial recognition software in public spaces. The problem with these discussions, however, is that they have been heavily dominated by the question of how we can make AI more “ethical”. Companies, states, and even international organizations discuss ethical principles, such as fair, accountable, responsible, or safe AI, in numerous expert groups and ad hoc committees, such as the High-Level Expert Group on AI of the European Commission, the group on AI in Society of the Organisation for Economic Co-operation and Development (OECD), or the Select Committee on Artificial Intelligence of the United Kingdom House of Lords. This may sound like a solid approach to tackling the dangers that AI poses, but to be truly impactful, these discussions must be grounded in rhetoric that is focused and actionable. Not only may the principles be defined differently depending on the stakeholders, but there are overwhelming differences in how the principles are interpreted and what requirements are necessary for them to materialize. 
In addition, ethical debates on AI are often dominated by American and Chinese companies, each propagating its own idea of ethical AI, which may in many cases conflict with the values of other cultures and nations. Not only do different countries have different ideas of which “ethics” principles need to be protected, but different countries also play starkly different roles in developing AI. Another problem is that when ethical guidelines are discussed, suggestions often come from tech companies themselves, while the voices of citizens and even governments are marginalized. Self-regulation around ethical principles is too weak to address the far-reaching implications that AI technologies have had. Ethical principles lack clarity and enforcement capabilities. We must stop focusing the discourse on ethical principles and instead shift the debate to human rights. Debates must be louder at the supranational level. International pressure must be put on states and companies that fail to protect individuals by propagating AI technologies that carry risks. Leadership must be defined not by actors who come up with new iterations of ethical guidelines, but by those who develop legal obligations regarding AI that are anchored in and derived from a human rights perspective. One way to do this would be to reaffirm the human-centric nature of AI development and deployment, following actionable standards of human rights law. The human rights legal framework has been around for decades and has been instrumental in pressuring states to change domestic laws. Nelson Mandela invoked the duties spelled out in the Universal Declaration of Human Rights while fighting to end apartheid in South Africa; in 1973, with Roe v. 
Wade, the United States Supreme Court followed a larger global trend of recognizing women’s human rights by protecting individuals from undue governmental interference in private affairs and giving women the ability to participate fully and equally in society; more recently, open access to the Internet has been recognized as a human right, essential not only to freedom of opinion, expression, association, and assembly, but also instrumental in mobilizing populations to call for equality, justice, and accountability in order to advance global respect for human rights. These examples show how human rights standards have been applied to a diverse set of domestic and international rules. That these standards are actionable and enforceable shows that they are well suited to regulating the cross-border nature of AI technologies. AI systems must be scrutinized through a human rights lens to analyze the current and future harms created or exacerbated by AI, and action must be taken to avoid those harms. The adoption of AI technologies has spread across borders and has had diverse effects on societies all over the world. A globalized technology needs international obligations to mitigate the societal problems it creates at an accelerated and larger scale. Companies and states should strive for the development of AI technologies that uphold human rights. Centering the AI discourse on human rights rather than simply ethics can provide a clearer legal basis for the development and deployment of AI technologies. The international community must raise awareness, build consensus, analyze thoroughly how AI technologies violate human rights in different contexts, and develop paths to effective legal remedies. 
Focusing the discourse on human rights rather than ethical principles can provide more accountability measures and more obligations for state and private actors, and can ground the debate in consistent and widely accepted legal principles developed over decades." @default.
- W3115613105 created "2021-01-05" @default.
- W3115613105 creator A5052747063 @default.
- W3115613105 creator A5088059927 @default.
- W3115613105 date "2019-01-01" @default.
- W3115613105 modified "2023-09-25" @default.
- W3115613105 title "Artificial Intelligence Needs Human Rights: How the Focus on Ethical AI Fails to Address Privacy, Discrimination and Other Concerns" @default.
- W3115613105 doi "https://doi.org/10.2139/ssrn.3589473" @default.
- W3115613105 hasPublicationYear "2019" @default.
- W3115613105 type Work @default.
- W3115613105 sameAs 3115613105 @default.
- W3115613105 citedByCount "1" @default.
- W3115613105 countsByYear W31156131052021 @default.
- W3115613105 crossrefType "journal-article" @default.
- W3115613105 hasAuthorship W3115613105A5052747063 @default.
- W3115613105 hasAuthorship W3115613105A5088059927 @default.
- W3115613105 hasConcept C102938260 @default.
- W3115613105 hasConcept C108827166 @default.
- W3115613105 hasConcept C120665830 @default.
- W3115613105 hasConcept C121332964 @default.
- W3115613105 hasConcept C123201435 @default.
- W3115613105 hasConcept C127413603 @default.
- W3115613105 hasConcept C141972696 @default.
- W3115613105 hasConcept C14587133 @default.
- W3115613105 hasConcept C15744967 @default.
- W3115613105 hasConcept C169437150 @default.
- W3115613105 hasConcept C17744445 @default.
- W3115613105 hasConcept C192209626 @default.
- W3115613105 hasConcept C199539241 @default.
- W3115613105 hasConcept C2910231717 @default.
- W3115613105 hasConcept C39549134 @default.
- W3115613105 hasConcept C41008148 @default.
- W3115613105 hasConcept C55587333 @default.
- W3115613105 hasConceptScore W3115613105C102938260 @default.
- W3115613105 hasConceptScore W3115613105C108827166 @default.
- W3115613105 hasConceptScore W3115613105C120665830 @default.
- W3115613105 hasConceptScore W3115613105C121332964 @default.
- W3115613105 hasConceptScore W3115613105C123201435 @default.
- W3115613105 hasConceptScore W3115613105C127413603 @default.
- W3115613105 hasConceptScore W3115613105C141972696 @default.
- W3115613105 hasConceptScore W3115613105C14587133 @default.
- W3115613105 hasConceptScore W3115613105C15744967 @default.
- W3115613105 hasConceptScore W3115613105C169437150 @default.
- W3115613105 hasConceptScore W3115613105C17744445 @default.
- W3115613105 hasConceptScore W3115613105C192209626 @default.
- W3115613105 hasConceptScore W3115613105C199539241 @default.
- W3115613105 hasConceptScore W3115613105C2910231717 @default.
- W3115613105 hasConceptScore W3115613105C39549134 @default.
- W3115613105 hasConceptScore W3115613105C41008148 @default.
- W3115613105 hasConceptScore W3115613105C55587333 @default.
- W3115613105 hasLocation W31156131051 @default.
- W3115613105 hasOpenAccess W3115613105 @default.
- W3115613105 hasPrimaryLocation W31156131051 @default.
- W3115613105 hasRelatedWork W116288322 @default.
- W3115613105 hasRelatedWork W1571266652 @default.
- W3115613105 hasRelatedWork W1900333031 @default.
- W3115613105 hasRelatedWork W2112150999 @default.
- W3115613105 hasRelatedWork W2352347421 @default.
- W3115613105 hasRelatedWork W315838798 @default.
- W3115613105 hasRelatedWork W3165052494 @default.
- W3115613105 hasRelatedWork W583765920 @default.
- W3115613105 hasRelatedWork W1718441815 @default.
- W3115613105 hasRelatedWork W3090320789 @default.
- W3115613105 isParatext "false" @default.
- W3115613105 isRetracted "false" @default.
- W3115613105 magId "3115613105" @default.
- W3115613105 workType "article" @default.