Matches in SemOpenAlex for { <https://semopenalex.org/work/W3046050249> ?p ?o ?g. }
Showing items 1 to 82 of 82, with 100 items per page.
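The listing below is the result of a triple-pattern match against the work's URI. A minimal sketch of the same lookup over SPARQL, assuming the public SemOpenAlex endpoint at https://semopenalex.org/sparql (the endpoint URL is an assumption, not stated in this listing) and using only the standard SPARQL protocol:

```python
# Sketch: fetch all property/value pairs for W3046050249 from SemOpenAlex.
# ENDPOINT is an assumed URL; adjust it if the service is hosted elsewhere.
import requests

ENDPOINT = "https://semopenalex.org/sparql"
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W3046050249> ?p ?o .
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

# Standard SPARQL JSON results: one binding per matched triple.
for binding in resp.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```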
- W3046050249 endingPage "22076" @default.
- W3046050249 startingPage "22076" @default.
- W3046050249 abstract "This symposium examines key questions posed by teaching ethics to artificial intelligence in business settings. A general question is how to balance the benefits and risks of AI, a significant concern with any technological change. That concern is made more severe by the large-scale implications of AI for human life, including our understanding of what it is to be a human being and of which entities can properly be treated as rights holders. More specifically, this panel will address several topics that arise at the intersection of AI and ethics.
Fairness in the use of AI for business: When AI is used at large scale in business, there is always a concern that it may lead to drastic, large-scale discrimination against some groups of people. For example, deep-learning systems may deny mortgage loans to members of certain groups when others with comparable financial resources receive loans, and this may occur even if none of the training data indicate group membership. It is thus crucial to design tools that monitor an AI system’s performance and continuously test for bias. A second crucial goal is to design methods that mitigate any such biases to the maximum extent possible. This research direction involves both fundamental contributions to AI and statistics in developing these tools and their impactful use across many business applications. A large ethics literature has carefully analyzed concepts of fairness, and this body of thought can be applied to AI. Many statistical measures of bias have been proposed, some of which are inconsistent with others. An ethical analysis can help evaluate whether measures have normative justification.
Ethically grounded value alignment: Deep-learning systems are frequently designed to reflect human values so as to avoid recommending decisions inconsistent with those values. Values are typically ascertained, however, in much the same empirical way as facts and predictions: by analyzing large datasets that reflect human beliefs and preferences. Yet the AI community is coming to realize that a purely empirical approach can reflect biases and prejudices as well as acceptable moral values. There is no substitute for grounding value alignment in ethical principles that are independently derived, a maneuver that avoids the philosophically famous “naturalistic fallacy” of deriving ethical conclusions from purely factual premises. The deontological tradition in ethics provides the intellectual resources to develop rigorously defined and grounded principles that can be used to screen training sets or otherwise direct learning procedures.
Human-Centered Explainable AI (XAI): Many industry experts have pointed out the critical need for human-oriented explanation by AI systems. According to an IBM survey, about 60% of 5,000 executives were concerned “about being able to explain how AI is using data and making decisions.” However, the most successful algorithms in use today are not transparent. These models are fundamentally “black boxes” that include many layers of complex, typically nonlinear, transformations of inputs, and it can be quite difficult for anyone to understand an algorithm’s output or why the model makes key predictions. Given these challenges, efforts to develop more interpretable, explainable, or intelligible algorithms form a key area of current research. The explainability of an algorithm plays a key role in enabling and improving auditability, fairness, trust, and reliability. However, the definition of interpretability and the desiderata for a good explanation remain elusive, and different researchers use different, often problem- or domain-specific, definitions. More alarmingly, XAI research rarely involves systematic investigation of how humans answer the question “What is a good explanation for machine learning output?”
AI generates a variety of ethical questions at three interconnected levels. The first is the legal dimension: What laws should be enacted to govern AI? Should some particular aspect of AI be subject to legal regulation at all? Do we need to fashion specific legislation to address AI issues, or can we rely on more general legal standards? The second is the social dimension, which raises questions about the social morality that should be cultivated concerning AI: What sort of culture will develop in response to AI? The third level concerns issues that arise for individuals and associations in their engagement with AI; this extends to corporations and associations, which still need to exercise their own moral judgment." @default.
- W3046050249 created "2020-08-03" @default.
- W3046050249 creator A5000013295 @default.
- W3046050249 creator A5012944737 @default.
- W3046050249 creator A5019209057 @default.
- W3046050249 creator A5020723194 @default.
- W3046050249 creator A5023663122 @default.
- W3046050249 creator A5034757471 @default.
- W3046050249 date "2020-08-01" @default.
- W3046050249 modified "2023-09-23" @default.
- W3046050249 title "Artificial Intelligence and Innovation Ethics" @default.
- W3046050249 doi "https://doi.org/10.5465/ambpp.2020.22076symposium" @default.
- W3046050249 hasPublicationYear "2020" @default.
- W3046050249 type Work @default.
- W3046050249 sameAs 3046050249 @default.
- W3046050249 citedByCount "0" @default.
- W3046050249 crossrefType "journal-article" @default.
- W3046050249 hasAuthorship W3046050249A5000013295 @default.
- W3046050249 hasAuthorship W3046050249A5012944737 @default.
- W3046050249 hasAuthorship W3046050249A5019209057 @default.
- W3046050249 hasAuthorship W3046050249A5020723194 @default.
- W3046050249 hasAuthorship W3046050249A5023663122 @default.
- W3046050249 hasAuthorship W3046050249A5034757471 @default.
- W3046050249 hasConcept C105409693 @default.
- W3046050249 hasConcept C119232533 @default.
- W3046050249 hasConcept C121332964 @default.
- W3046050249 hasConcept C127413603 @default.
- W3046050249 hasConcept C146978453 @default.
- W3046050249 hasConcept C151730666 @default.
- W3046050249 hasConcept C154945302 @default.
- W3046050249 hasConcept C17744445 @default.
- W3046050249 hasConcept C2522767166 @default.
- W3046050249 hasConcept C2767350 @default.
- W3046050249 hasConcept C2777267654 @default.
- W3046050249 hasConcept C2778755073 @default.
- W3046050249 hasConcept C39549134 @default.
- W3046050249 hasConcept C41008148 @default.
- W3046050249 hasConcept C55587333 @default.
- W3046050249 hasConcept C56739046 @default.
- W3046050249 hasConcept C62520636 @default.
- W3046050249 hasConcept C64543145 @default.
- W3046050249 hasConcept C86803240 @default.
- W3046050249 hasConceptScore W3046050249C105409693 @default.
- W3046050249 hasConceptScore W3046050249C119232533 @default.
- W3046050249 hasConceptScore W3046050249C121332964 @default.
- W3046050249 hasConceptScore W3046050249C127413603 @default.
- W3046050249 hasConceptScore W3046050249C146978453 @default.
- W3046050249 hasConceptScore W3046050249C151730666 @default.
- W3046050249 hasConceptScore W3046050249C154945302 @default.
- W3046050249 hasConceptScore W3046050249C17744445 @default.
- W3046050249 hasConceptScore W3046050249C2522767166 @default.
- W3046050249 hasConceptScore W3046050249C2767350 @default.
- W3046050249 hasConceptScore W3046050249C2777267654 @default.
- W3046050249 hasConceptScore W3046050249C2778755073 @default.
- W3046050249 hasConceptScore W3046050249C39549134 @default.
- W3046050249 hasConceptScore W3046050249C41008148 @default.
- W3046050249 hasConceptScore W3046050249C55587333 @default.
- W3046050249 hasConceptScore W3046050249C56739046 @default.
- W3046050249 hasConceptScore W3046050249C62520636 @default.
- W3046050249 hasConceptScore W3046050249C64543145 @default.
- W3046050249 hasConceptScore W3046050249C86803240 @default.
- W3046050249 hasIssue "1" @default.
- W3046050249 hasLocation W30460502491 @default.
- W3046050249 hasOpenAccess W3046050249 @default.
- W3046050249 hasPrimaryLocation W30460502491 @default.
- W3046050249 hasRelatedWork W10157162 @default.
- W3046050249 hasRelatedWork W11140712 @default.
- W3046050249 hasRelatedWork W1189111 @default.
- W3046050249 hasRelatedWork W1719001 @default.
- W3046050249 hasRelatedWork W340014 @default.
- W3046050249 hasRelatedWork W3551418 @default.
- W3046050249 hasRelatedWork W3577913 @default.
- W3046050249 hasRelatedWork W5048401 @default.
- W3046050249 hasRelatedWork W668849 @default.
- W3046050249 hasRelatedWork W9709510 @default.
- W3046050249 hasVolume "2020" @default.
- W3046050249 isParatext "false" @default.
- W3046050249 isRetracted "false" @default.
- W3046050249 magId "3046050249" @default.
- W3046050249 workType "article" @default.
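The fairness passage in the abstract above notes that many statistical measures of bias have been proposed and that some are inconsistent with others. A minimal sketch of that point on invented loan-decision data (the data, group labels, and the two measures shown are illustrative assumptions, not tools from the symposium): a classifier can equalize true-positive rates across groups while still approving the groups at very different overall rates.

```python
import numpy as np

# Hypothetical applicants: group = protected attribute, y_true = would repay,
# y_pred = model approval. Group 0 has 8/10 creditworthy applicants, group 1 has 4/10.
group  = np.array([0] * 10 + [1] * 10)
y_true = np.array([1] * 8 + [0] * 2 + [1] * 4 + [0] * 6)
y_pred = y_true.copy()  # a classifier that approves exactly those who would repay

def statistical_parity_difference(y_pred, group):
    """Gap in overall approval rates between the two groups."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in approval rates among creditworthy applicants (true-positive rates)."""
    rate = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return rate(1) - rate(0)

print(statistical_parity_difference(y_pred, group))        # -0.4: approval rates differ
print(equal_opportunity_difference(y_true, y_pred, group)) #  0.0: true-positive rates match
```

Here one measure flags a disparity while the other reports none, which is the sense in which such measures can conflict; an ethical analysis is needed to decide which, if either, has normative force in a given business setting.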