Matches in SemOpenAlex for { <https://semopenalex.org/work/W4387138035> ?p ?o ?g. }
Showing items 1 to 94 of 94, with 100 items per page.
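The listing below is the result of the quad pattern shown in the header, matched against the work's IRI. As an illustration, a minimal Python sketch of building that query programmatically follows; the endpoint URL and the `build_quad_query` helper are assumptions for illustration, not part of the SemOpenAlex output above.

```python
# Sketch: reconstruct the quad-pattern query from the header above
# for a single work IRI. Building the query string only; no network call.
WORK_IRI = "https://semopenalex.org/work/W4387138035"
ENDPOINT = "https://semopenalex.org/sparql"  # assumed public SPARQL endpoint


def build_quad_query(iri: str, limit: int = 100) -> str:
    """Return a SPARQL query matching every predicate, object, and
    named graph for the given IRI, mirroring { <iri> ?p ?o ?g. }."""
    return (
        "SELECT ?p ?o ?g WHERE {\n"
        f"  GRAPH ?g {{ <{iri}> ?p ?o . }}\n"
        "}\n"
        f"LIMIT {limit}"
    )


print(build_quad_query(WORK_IRI))
```

The `LIMIT 100` clause corresponds to the "100 items per page" setting in the result header.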
- W4387138035 endingPage "23" @default.
- W4387138035 startingPage "1" @default.
- W4387138035 abstract "ABSTRACT: Human beings are internally inconsistent in various ways. One way to develop this thought involves using the language of value alignment: the values we hold are not always aligned with our behavior and are not always aligned with each other. Because of this self-misalignment, there is room for potential projects of human enhancement that involve achieving a greater degree of value alignment than we presently have. Relatedly, discussions of AI ethics sometimes focus on what is known as the value alignment problem, the challenge of how to build AI that acts in accordance with our human values. We argue that there is an especially close connection between solving the value alignment problem in AI ethics and using AI to pursue certain forms of human enhancement. But in addition, we also argue that there are important limits to what kinds of human enhancement can be pursued in this way, because some forms of human enhancement—namely moral revolutions—involve a kind of value misalignment rather than alignment.
KEYWORDS: Artificial intelligence; human enhancement; moral revolutions
Acknowledgements: Both authors would like to thank the National Endowment for the Humanities for support for their work, the University of Puget Sound and the John Lantz Senior Fellowship for Research or Advanced Study, and the participants at the Philosophy, AI, and Society Workshop at Stanford University. Ariela Tubert would like to thank the audience at the Ethics and Broader Implications of Technology Conference at the University of Nebraska at Lincoln.
Disclosure statement: No potential conflict of interest was reported by the author(s).
Notes:
1 See for instance Russell (2019), Christian (2020), Gabriel (2020), Wallach and Vallor (2020).
2 Appiah (2010). See also Baker (2019).
3 Russell and Norvig (2010).
4 Gershman (2021, 156) makes this point while arguing that the ‘folklore’ about how machine learning has its origins in neuroscience overstates the level of influence neuroscience has actually had.
5 See for instance Kahneman, Slovic, and Tversky (1982), Kahneman and Tversky (2000), Kahneman (2011).
6 Lieder et al. (2019).
7 Lieder et al. (2019, 1096).
8 Lieder and Griffiths (2019). The notion of ‘rational analysis’ is drawn from Anderson (1990).
9 This is a point of focus in Griffiths (2020).
10 Lieder et al. (2019, 1096).
11 Lieder et al. (2019, 1096). On gamification and AI more generally, see Deterding et al. (2011).
12 Chasse (2021).
13 Lieder et al. (2019).
14 Sinnott-Armstrong (2008).
15 Tversky and Kahneman (1981).
16 As Kühberger (2017, 79) notes, the effect is robust and has been replicated across hundreds of papers.
17 Kahneman and Tversky (1979).
18 Sometimes this point is used as part of an argument that we should be skeptical of moral facts at all, but this move requires a further inference. For influential discussions of some of the issues involved, see Street (2006), Joyce (2007).
19 Singer (2005), Greene (2007).
20 See for instance Haidt (2012).
21 Kass (1997).
22 Nussbaum (2004). Kelly (2011) offers an extended discussion of the moral significance of disgust.
23 The notion of an expanding circle of moral concern comes from Singer (2011).
24 On Tay, see Victor (2016). On the Turkish translation case, see Olson (2018).
25 On search engines, see Noble (2018). On facial recognition systems, Buolamwini and Gebru (2018). On hiring decisions, Dastin (2018). On loan and credit card applications, Angwin et al. (2016). On predictive policing, O’Neil (2016). On sentencing and parole decisions, Eubanks (2018).
26 See for instance Kleinberg et al. (2018), Kleinberg et al. (2020).
27 See for example Dovidio and Gaertner (2000), Amodio and Devine (2006), Gendler (2011), Levy (2017). For a critical assessment of work on implicit bias, though, see Machery (2022).
28 Wallach and Allen (2009). We note though that they frame their discussion in terms of building moral machines rather than in terms of value alignment. For Wallach’s thoughts about value alignment, see Wallach and Vallor (2020).
29 Mill (1861/1998). Discussions of a utilitarian-oriented AI include Gips (1994), Grau (2011), and Russell (2019).
30 Kant (1785/2012). Thomas Powers’ (2006) ‘Prospects for a Kantian Machine’ connects the view to AI.
31 Asimov (1950).
32 Each of these examples is mentioned by Wallach and Allen (2009, 79).
33 Shortliffe and Buchanan (1975).
34 Savulescu and Maslen (2015), Giubilini and Savulescu (2018). For critical discussion of the proposal that is still sympathetic to the idea of pursuing AI-based human moral enhancement, see Lara and Decker (2020).
35 Deterding (2014) discusses moral gamification, defending a ‘eudaimonic design’ approach.
36 Millar (2015) and Contissa, Lagioia, and Sartor (2017) argue in favor of user control over the ethical settings on autonomous cars, while Lin (2014) and Gogoll and Müller (2017) argue against the idea.
37 Santurkar et al. (2023). See also Rozado (2023).
38 Thompson, Hsu, and Myers (2023).
39 See Narayanan and Kapoor (2023) for a critical discussion of Santurkar et al. (2023).
40 OpenAI (2023).
41 Steinberg (2023).
42 Walker (2023).
43 Marcus (2023).
44 Appiah (2010). Klenk et al. (2022) provides a survey of recent work on moral revolutions.
45 Appiah (2010, 8), Kuhn (1962). Klenk et al. (2022) emphasize how this connection to Kuhn is common also in other authors discussing moral revolutions.
46 Wallach and Allen (2009, 79).
47 LeCun, Bengio, and Hinton (2015), Bengio, LeCun, and Hinton (2021).
48 Ensmenger (2012).
49 Holodny (2017).
50 Metz (2016).
51 Knight (2017).
52 Strogatz (2018).
53 Rini (2017) also uses AlphaGo’s Move 37 as an analogy for a radically new AI moral view.
54 Appiah (2010, 66), Klenk et al. (2022).
55 See discussions of what is needed for significant society-wide moral progress: Moody-Adams (2017), Rorty (2006), Nussbaum (2007).
56 Appiah (2010).
57 On AI and the risk of value lock-in, see for instance Ord (2020, Chapter 5), MacAskill (2022, Chapter 4).
58 Kenward and Sinclair (2021)." @default.
- W4387138035 created "2023-09-29" @default.
- W4387138035 creator A5033360126 @default.
- W4387138035 creator A5056297506 @default.
- W4387138035 date "2023-09-27" @default.
- W4387138035 modified "2023-10-18" @default.
- W4387138035 title "Value alignment, human enhancement, and moral revolutions" @default.
- W4387138035 cites W12977029 @default.
- W4387138035 cites W188378287 @default.
- W4387138035 cites W1999850382 @default.
- W4387138035 cites W2019332346 @default.
- W4387138035 cites W2023718959 @default.
- W4387138035 cites W2039261710 @default.
- W4387138035 cites W2082974516 @default.
- W4387138035 cites W2088724026 @default.
- W4387138035 cites W2096452841 @default.
- W4387138035 cites W2104591832 @default.
- W4387138035 cites W2163092751 @default.
- W4387138035 cites W2462118221 @default.
- W4387138035 cites W2477572219 @default.
- W4387138035 cites W2504502989 @default.
- W4387138035 cites W2556598314 @default.
- W4387138035 cites W2756227927 @default.
- W4387138035 cites W2773687970 @default.
- W4387138035 cites W2911589237 @default.
- W4387138035 cites W2915420448 @default.
- W4387138035 cites W2919115771 @default.
- W4387138035 cites W2950068534 @default.
- W4387138035 cites W2969481347 @default.
- W4387138035 cites W3011865677 @default.
- W4387138035 cites W3046215811 @default.
- W4387138035 cites W3092484258 @default.
- W4387138035 cites W3093731611 @default.
- W4387138035 cites W3105871743 @default.
- W4387138035 cites W3133196413 @default.
- W4387138035 cites W3172314959 @default.
- W4387138035 cites W3176707157 @default.
- W4387138035 cites W350888970 @default.
- W4387138035 cites W4226295841 @default.
- W4387138035 cites W4231801058 @default.
- W4387138035 cites W4247462173 @default.
- W4387138035 cites W4253227781 @default.
- W4387138035 cites W4301958211 @default.
- W4387138035 cites W4323043839 @default.
- W4387138035 cites W4361866126 @default.
- W4387138035 cites W4365388135 @default.
- W4387138035 cites W764369039 @default.
- W4387138035 doi "https://doi.org/10.1080/0020174x.2023.2261506" @default.
- W4387138035 hasPublicationYear "2023" @default.
- W4387138035 type Work @default.
- W4387138035 citedByCount "0" @default.
- W4387138035 crossrefType "journal-article" @default.
- W4387138035 hasAuthorship W4387138035A5033360126 @default.
- W4387138035 hasAuthorship W4387138035A5056297506 @default.
- W4387138035 hasConcept C111472728 @default.
- W4387138035 hasConcept C119857082 @default.
- W4387138035 hasConcept C127413603 @default.
- W4387138035 hasConcept C138885662 @default.
- W4387138035 hasConcept C144024400 @default.
- W4387138035 hasConcept C145197507 @default.
- W4387138035 hasConcept C15744967 @default.
- W4387138035 hasConcept C2776291640 @default.
- W4387138035 hasConcept C2779706800 @default.
- W4387138035 hasConcept C41008148 @default.
- W4387138035 hasConcept C55587333 @default.
- W4387138035 hasConceptScore W4387138035C111472728 @default.
- W4387138035 hasConceptScore W4387138035C119857082 @default.
- W4387138035 hasConceptScore W4387138035C127413603 @default.
- W4387138035 hasConceptScore W4387138035C138885662 @default.
- W4387138035 hasConceptScore W4387138035C144024400 @default.
- W4387138035 hasConceptScore W4387138035C145197507 @default.
- W4387138035 hasConceptScore W4387138035C15744967 @default.
- W4387138035 hasConceptScore W4387138035C2776291640 @default.
- W4387138035 hasConceptScore W4387138035C2779706800 @default.
- W4387138035 hasConceptScore W4387138035C41008148 @default.
- W4387138035 hasConceptScore W4387138035C55587333 @default.
- W4387138035 hasLocation W43871380351 @default.
- W4387138035 hasOpenAccess W4387138035 @default.
- W4387138035 hasPrimaryLocation W43871380351 @default.
- W4387138035 hasRelatedWork W1976698646 @default.
- W4387138035 hasRelatedWork W2084145347 @default.
- W4387138035 hasRelatedWork W2151907323 @default.
- W4387138035 hasRelatedWork W2224427030 @default.
- W4387138035 hasRelatedWork W2322649837 @default.
- W4387138035 hasRelatedWork W261485907 @default.
- W4387138035 hasRelatedWork W2748952813 @default.
- W4387138035 hasRelatedWork W3126495318 @default.
- W4387138035 hasRelatedWork W3151845832 @default.
- W4387138035 hasRelatedWork W3177434258 @default.
- W4387138035 isParatext "false" @default.
- W4387138035 isRetracted "false" @default.
- W4387138035 workType "article" @default.