Matches in SemOpenAlex for { <https://semopenalex.org/work/W3164753353> ?p ?o ?g. }
Showing items 1 to 76 of 76, with 100 items per page.
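For readers who want to reproduce this listing outside the browse interface, the same triple pattern can be sent directly to a SemOpenAlex SPARQL endpoint. The sketch below is a minimal Python example using SPARQLWrapper; the endpoint URL (https://semopenalex.org/sparql) and the simplification of the quad pattern ?p ?o ?g to a plain triple pattern are assumptions, so adjust both to the deployment you are actually querying.

```python
# Minimal sketch (assumptions noted above): list every predicate/object pair
# recorded for this work, mirroring the browse view on this page.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint URL
WORK = "https://semopenalex.org/work/W3164753353"

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(f"""
    SELECT ?p ?o WHERE {{
        <{WORK}> ?p ?o .
    }}
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```

If the named graph matters, wrap the pattern in GRAPH ?g { ... } and add ?g to the SELECT clause, which mirrors the quad pattern shown in the query above.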
- W3164753353 endingPage "768" @default.
- W3164753353 startingPage "749" @default.
- W3164753353 abstract "Abstract The nonconsensual taking or sharing of nude or sexual images, also known as “image-based sexual abuse,” is a major social and legal problem in the digital age. In this chapter, we examine the problem of image-based sexual abuse in the context of digital platform governance. Specifically, we focus on two key governance issues: first, the governance of platforms, including the regulatory frameworks that apply to technology companies; and second, the governance by platforms, focusing on their policies, tools, and practices for responding to image-based sexual abuse. After analyzing the policies and practices of a range of digital platforms, we identify four overarching shortcomings: (1) inconsistent, reductionist, and ambiguous language; (2) a stark gap between the policy and practice of content regulation, including transparency deficits; (3) imperfect technology for detecting abuse; and (4) the responsibilization of users to report and prevent abuse. Drawing on a model of corporate social responsibility (CSR), we argue that until platforms better address these problems, they risk failing victim-survivors of image-based sexual abuse and are implicated in the perpetration of such abuse. We conclude by calling for reasonable and proportionate state-based regulation that can help to better align governance by platforms with CSR-initiatives. Keywords Digital platforms Platform governance Image-based sexual abuse Nonconsensual pornography Content moderation Corporate social responsibility Citation Henry, N. and Witt, A. (2021), Governing Image-Based Sexual Abuse: Digital Platform Policies, Tools, and Practices, Bailey, J., Flynn, A. and Henry, N. (Ed.) The Emerald International Handbook of Technology-Facilitated Violence and Abuse (Emerald Studies In Digital Crime, Technology and Social Harms), Emerald Publishing Limited, Bingley, pp. 749-768. https://doi.org/10.1108/978-1-83982-848-520211054 Publisher: Emerald Publishing Limited Copyright © 2021 Nicola Henry and Alice Witt. Published by Emerald Publishing Limited. This chapter is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of these chapters (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode. License This chapter is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of these chapters (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode. Introduction The nonconsensual taking or sharing of nude or sexual images, also known as “image-based sexual abuse” (Henry et al., 2020; McGlynn & Rackley, 2017) or “nonconsensual pornography” (Citron & Franks, 2014; Ruvalcaba & Eaton, 2020), is a major social and legal problem in the digital age. With the development of social media and other networked technologies, which enable over three billion users to generate and instantaneously share content on the internet (Kemp, 2020), image-based sexual abuse is not only rapidly increasingly, but also having significant impacts (Henry et al., 2020). 
While criminal offenses are an important means to punish perpetrators and provide justice to victim-survivors, criminalization has done little to prevent the scourge of image-based sexual abuse or minimize the harm once images (photographs or videos) are posted online. For example, images can be copied and republished on multiple platforms and devices – in some cases making it virtually impossible to prevent the further spread of images online. Perpetrators are often difficult to identify because of anonymity measures, such as encryption, virtual private networks, and proxy servers that obscure the nature of content, locations of internet traffic, and other information about users and their devices. Moreover, policing for image-based sexual abuse (and cybercrime more generally) is typically resource intensive given that law enforcement agencies often have to work across jurisdictional borders. In response to the complex challenges raised by harmful online content, governments around the world are introducing new regulatory regimes to attempt to better hold technology companies accountable for hosting harmful content on their platforms. At the same time, technology companies are themselves taking more proactive steps to tackle this problem. In this chapter, we examine the problem of image-based sexual abuse in light of these two forms of governance. In the first section, we focus on the governance of digital platforms, examining the introduction of broader governmental and intergovernmental regulatory regimes in a changing landscape, which some have described as a global “techlash” against the major digital platforms (Flew, Martin, & Suzor, 2019, p. 33). In the second section, we examine the governance by digital platforms, focusing specifically on the policies, tools, and practices that are being implemented by digital platforms to respond to and prevent image-based sexual abuse. In the third section, we draw on a model of corporate social responsibility (CSR) to propose ways forward. CSR provides a useful, albeit contested, language to examine the policy and practice of online content moderation or regulation. Although there are different conceptions of CSR, we define it as corporations' social, economic, legal, moral, and ethical responsibilities to address the harmful effects of their activities. Our conception of CSR is embedded within a social justice framework that locates the rationale for action not solely as a profit- or reputation-building exercise, but one that is also contingent on community values and the “common good.” We argue that while many digital platforms are taking proactive steps to detect and address image-based sexual abuse, four main shortcomings are evident in their policy approaches. First, some platforms adopt inconsistent, reductionist, and ambiguous language to describe image-based sexual abuse. Second, although a number of platforms now have an explicit policy position on image-based sexual abuse, there is often a stark gap between the policy and practice of content regulation, as well as a lack of transparency about how decisions are made and what the outcomes of those decisions are. Third, while platforms are increasingly turning to high-tech solutions to either detect or prevent image-based sexual abuse, these are imperfect measures that can be circumvented. And fourth, the onus is predominantly placed on users to find and report image-based sexual abuse to the platforms, which can be retraumatizing and highly stressful. 
We contend that because of their governing power, public character, and control of information, digital platforms have an ethical responsibility to detect, address, and prevent image-based sexual abuse on their networks. This is despite the degree of legal immunity that platforms have against liability for harmful content posted by their users under section 230(c) of the United States (US) Communications Decency Act of 1996 (CDA 230). We argue that when platforms govern without sufficient regulatory safeguards in place, such as appeal processes and reason-giving practices (Suzor, 2019), they risk failing victim-survivors of image-based sexual abuse and are implicated in the perpetration of image-based sexual abuse. Governance of Digital Platforms Also known as “internet intermediaries” or “online service providers,” digital platforms are nonstate, corporate organizations or entities that facilitate transactions, information exchange, or communications between third parties on the internet (see, e.g., Taddeo & Floridi, 2016). According to Gillespie (2018), digital platforms are “sites and services that host public expression, store it on and serve it up from the cloud, organize access to it through search and recommendation, or install it onto mobile devices” (p. 254). Gillespie (2018) explains that what digital platforms share in common is the hosting and organization of “user content for public circulation, without having produced or commissioned it” (p. 254). While digital platforms might appear to be neutral conduits or proxies for the exchange of online content between third parties, they are never neutral, and have been described as the “new governors” or “superpowers” of the digital age (Klonick, 2018; Lee, 2018). Some commentators argue that technology companies are engaged in illicit forms of digital surveillance, plundering the behavioral data of users to sell to business customers (including political advertisers) for economic profit (e.g., Zuboff, 2019), as well as creating the norms and means through which individual users can engage in “performative surveillance” in the form of tracking, monitoring, and observing other users online (e.g., Westlake, 2008). In addition to potentially illicit forms of surveillance and data harvesting, one of the key ways platforms govern their networks is by moderating user-generated content. As a form of regulation, content moderation encompasses an array of processes through which platform executives and their employees set, maintain, and enforce the bounds of “appropriate” user behaviors (Witt, Suzor, & Huggins, 2019). The norm is for content moderation to be ex post, meaning it is undertaken after a user has posted content, and reactive in response to user flags or reports (Klonick, 2018; Roberts, 2019). This means that platforms generally do not proactively screen content; decisions about content are thus predominantly made after the material is posted. On some platforms, however, automated systems are increasingly playing a more central role in the detection and removal of harmful online content before anyone has the chance to view or share the material (see further discussion below). There are significant transparency deficits around the ways that different types of content are moderated in practice (Witt et al., 2019, p. 558). It is often unclear, for instance, what material is signaled for removal, how much content is actually removed, and by what means. 
It is also impossible to determine precisely who removes content (e.g., a platform content moderator or a user) without access to a platform's internal workings (Witt et al., 2019, p. 572). The secrecy around the inner workings of content moderation is reinforced by the operation of contract law, which governs the platform–user relationship, and powerful legal protections under US law (where many platforms are primarily based). Specifically, CDA 230 protects platforms against liability for content posted by third parties. Consequently, platforms that host or republish content are generally not legally liable for what their users say or do except for illegal content or content that infringes intellectual property regimes. Indeed, technology companies not only exercise “unprecedented power” over “what [users] can see or share” (Suzor, 2019, p. 8), but also have “broad discretion to create and enforce their rules in almost any way they see fit” (Suzor, 2019, p. 106). This means that decisions around content can be based on a range of factors, including public-facing policies like terms of service, community guidelines, prescriptive guidelines that moderators follow behind closed doors, legal obligations, market forces, and cultural norms of use. Digital platforms are not, however, completely “lawless” (Suzor, 2019, p. 107). Platforms are subject to a range of laws in jurisdictions around the globe, some of which have the potential to threaten the ongoing stability of the CDA 230 safe harbor provisions. Europe has been described as the “world's leading tech watchdog” (Satariano, 2018) especially with European regulators taking an “increasingly activist stance toward… digital platform companies” (Flew et al., 2019, p. 34). The European Union's General Data Protection Regulation (GDPR) and Germany's NetzDG laws, for instance, can result in significant administrative fines for data protection or security infringements (among other punitive consequences for noncompliance) (see Echikson & Knodt, 2018; The European Parliament and the Council of the European Union, 2016/679). There are also many examples of European courts ordering service providers to restrict the types of content users see and how and when they see it (e.g., copyright or defamation lawsuits) (Suzor, 2019, p. 49). These state-based “regulatory pushbacks” are part of a global “techlash” against the governing powers of digital platforms in recent years (Flew et al., 2019, pp. 33 and 34). At the time of writing this chapter, the United Kingdom had proposed a range of measures in its White Paper on Online Harms, which includes a statutory duty of care that will legally require platforms to stop and prevent harmful material appearing on their networks (Secretary of State for Digital, Culture, Media & Sport and the Secretary of State for the Home Department, 2019). In 2019, Canada released the Digital Charter in Action, which includes 10 key principles designed to ensure the ethical collection, use, and disclosure of data (Innovation, Science and Economic Development Canada, 2019). Going a step further, after the Christchurch mosque shootings in New Zealand on March 15, 2019, the Australian Federal Government passed the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 (Cth) which gives the Australian eSafety Commissioner powers to issue take-down notices to digital platforms that host abhorrent violent material (AVM). 
If a service provider fails to remove AVM, they can be subject to prosecution under Australian federal criminal law, among other potential courses of action. Moreover, in 2018, the Australian federal government introduced an innovative civil penalty scheme which prohibits the nonconsensual sharing of intimate images, as well as threatening to share intimate images. Under this scheme, the eSafety Commissioner can issue substantial fines, formal warnings, infringement notices, or take-down notices to individuals and corporations requiring the removal of images within 48 hours. These domestic and international developments recognize that the decision-making processes of ostensibly “private” digital platforms can have significant impacts on individual users and far-reaching implications for politics, culture, and society (the “public sphere”) more broadly. They also suggest that platform immunity from legal liability for both privacy violations and the hosting of harmful content is diminishing – at least in some jurisdictional contexts. Digital platforms might then not be completely lawless, but do in practice govern, to use Suzor's (2019) term, “in a lawless way” (p. 107). Platforms exercise extraordinary power with limited safeguards for users, such as fairness, equality, and certainty, which many Western citizens have come to expect from governing actors (Witt et al., 2019). The result is often a significant gap between platform policies and their governance in practice, as well as a lack of transparency around digital platforms' decision-making processes. Governance by Digital Platforms In this section, we explore an array of policies, tools, and practices that are designed to detect, prevent, and respond to image-based sexual abuse on some of the largest digital platforms. Given the rapid pace of innovation in the technology sector, we selected platforms according to their traffic, market dominance, and their capacity to host image-based sexual abuse content. The sites we selected were predominantly the most popular sites as ranked by the analytics company Alexa (Alexa Internet, n.d.). The social media and search engine platforms we examined included Google, YouTube, Facebook, Yahoo!, Reddit, Instagram, Microsoft, Twitter, Flickr, Snapchat, TikTok, and Tumblr. The pornography sites we examined included Pornhub, XVideos, and xHamster. After creating a list of sites, we used the Google search engine to identify each company's policy documents, including their terms of service, community guidelines, reports, and official blogs. Each document was analyzed to identify specific image-based sexual abuse policies, general policies that could be applicable to image-based sexual abuse, and tools for either detecting, reporting, or blocking content, if any. We also searched for any relevant news articles or blogs on platforms' responses to image-based sexual abuse content. Our approach has four main limitations. The first limitation is that we were only able to examine publicly available policy documents. As such, we were not able to examine the undisclosed guidelines that moderators follow behind closed doors or information about the privatized automated systems that digital platforms might use. Second, we carried out our analysis over a three-month period between January and March 2020 and thus we cannot account for any changes in policies, tools, or practices after this time. 
Third, we did not examine non-English-language technology companies, nor did we examine the fringe, “rogue,” or underground platforms (e.g., on the Clear Net or Dark Net) where image-based sexual abuse content is being shared and traded (see Henry & Flynn, 2019). Finally, we did not seek to empirically investigate the experiences or perspectives of either victim-survivors or platform representatives in relation to content removal or platform policies, tools, and practices. Currently, there is a pervasive lack of transparency around platform governance, and more research is needed to address this gap. The analysis below, however, provides insight into how select platforms are attempting to address and prevent image-based sexual abuse. Here we focus on three key areas of content moderation: platform policies; reporting options and practices; and technological tools. Platform Policies on Image-Based Sexual Abuse The term “revenge porn” came into popular usage in 2011 after widespread media attention to the nonconsensual sharing of nude or sexual images of musicians and sportspersons on the website IsAnyoneUp.com and the subsequent criminal trial of its founder Hunter Moore (Martens, 2011). The term, however, is a misnomer because not all perpetrators are motivated by revenge when they share nude or sexual images without consent. Instead, they may be motivated by other factors, such as sexual gratification, monetary gain, social status building, or a desire for power and control (Citron & Franks, 2014; Henry et al., 2020). The term “revenge porn” has been widely criticized as having victim-blaming, harm-minimizing, or salacious connotations. Scholars, activists, victim-survivors, and practitioners also argue that it fails to capture the complexity and diversity of behaviors involving the use and abuse of nonconsensual nude or sexual images by known and unknown persons alike, using diverse means and methods (Henry et al., 2020; McGlynn & Rackley, 2017; Powell, Henry, & Flynn, 2018). Although a small number of digital platforms continue to refer to “revenge porn” in their terms of service or community guidelines, others have adopted alternative terms, such as “nonconsensual pornography,” “involuntary pornography,” or “the nonconsensual sharing of intimate images.” Tumblr's community guidelines, for instance, state: “Absolutely do not post nonconsensual pornography – that is, private photos or videos taken or posted without the subject's consent” (Tumblr, 2020, Privacy violations, para 1). Other platforms outline prohibitions against broader forms of online content. For instance, Pornhub's terms of service explicitly prohibit, among other behaviors, the impersonation of another person, the posting of copyrighted material, content that depicts a person under the age of 18, and content that is “obscene, illegal, unlawful, defamatory, libellous, harassing, hateful, racially, or ethnically offensive” (Pornhub, 2020, Monitoring and enforcement, para 4). Notably, however, Pornhub does not specify explicit prohibitions against image-based sexual abuse. In their policies, xHamster and XVideos do not specifically mention image-based sexual abuse but instead refer to privacy, abuse, harassment, inappropriate, or illegal content (xHamster, 2020; XVideos, n.d.). TikTok's Community Policy similarly does not mention image-based sexual abuse content and instead tells users that this is “NOT the place to post, share, or promote… harmful or dangerous content” (TikTok, 2019, para 4). 
On some platforms, the prohibition of image-based sexual abuse is unclear. For instance, Snapchat states that users should not “take Snaps of people in private spaces – like a bathroom, locker room or a medical facility – without their knowledge and consent” (Snap Inc., 2019, para 4). Although examples are given of what a “private space” might entail, it is unclear whether the nonconsensual sharing of nude or sexual imagery is also prohibited in the context of “public” spaces. Facebook's policy on the sharing of image-based sexual abuse content, on the other hand, is much clearer, covering images that are either “noncommercial” or produced in a “private” setting, with an expansive definition of what an “intimate” image includes. Facebook prohibits the nonconsensual sharing of intimate images according to three criteria: the image is noncommercial or produced in a private setting; the person is nude, nearly nude, or engaged in a sexual act or posing in a sexual way; and there is lack of consent indicated by captions, comments, the title of the page, independent sources, or reports from victims or others (Facebook, 2020a). However, the focus on images that are noncommercial, and which are produced in a private setting, appears to deny sex workers or pornographic actors the right to control the dissemination of their images. There can be significant flow-on effects of ambiguous policy stances on image-based sexual abuse. Platform policies that are open-textured, or which use nondescript terms, can enable ad hoc decision-making in response to business and other pressures (Witt et al., 2019). The lack of consistent language for platforms to name and work through the problems of image-based sexual abuse can make it difficult for stakeholders to discuss the concerns that victim-survivors and other societal actors raise. Moreover, vague guidelines can fundamentally limit the ability of victim-survivors or their authorized representatives to apply platform policies to reporting features or inform users as to the bounds of acceptable behavior. Given that platforms almost always reserve “broad discretion” to determine what, if any, response will be given to a report of harmful content (Suzor, 2019, p. 106), it is essentially their choice whether or not to impose punitive (or other) measures on users when their terms of service or community guidelines have been violated (although some platforms have appeals processes in place). While platforms are not able to make arrests or issue warrants, they are able to remove content, limit offending users' access to their sites, issue warnings, disable accounts for specified periods of time, or permanently suspend accounts at their discretion. YouTube, for instance, has implemented a “strikes system” which first entails the removal of the content and a warning (sent by email) to let the user know that the Community Guidelines have been violated, with no penalty to the user's channel if it is a first offense (YouTube, 2020, What happens if, para 1). After a first offense, users will be issued a strike against their channel, and once they have received three strikes, their channel will be terminated. Other platforms have similar systems in place. As noted by York and Zuckerman (2019), the suspension of user accounts can act as a “strong disincentive” to post harmful content where social or professional reputation is at stake (p. 144). 
Deepfakes The extent to which platform policies and guidelines explicitly or implicitly cover “deepfakes,” including deepfake pornography, is a relatively new governance issue. The term “deepfake” is a portmanteau of “deep learning,” a subfield of artificial intelligence (AI), and “fake,” and refers to synthetic content – including fabricated images and videos – created using such techniques. In December 2017, a Reddit user, who called himself “deepfakes,” trained algorithms to swap the faces of actors in pornography videos with the faces of well-known celebrities (see Chesney & Citron, 2019; Franks & Waldman, 2019). Since then, the volume of deepfake videos on the internet has increased exponentially, the vast majority of which are pornographic and disproportionately target women (Ajder, Patrini, Cavalli, & Cullen, 2019). In early 2020, Facebook, Reddit, Twitter, and YouTube announced new or altered policies prohibiting deepfake content. In order for deepfake content to be removed on Facebook, for instance, it must meet two criteria: first, it must have been “edited or synthesized… in ways that aren't apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say”; and second, it must be the product of AI or machine learning (Facebook, 2020a, Manipulated media, para 3). The narrow scope of these criteria, which appear to target manipulated fake news rather than other types of manipulated media, makes it unclear whether videos with no sound will be covered by the policy – for instance, a person's face that is superimposed onto another person's body in a silent porn video. Moreover, this policy may not cover low-tech, non-AI techniques that are used to alter videos and photographs – also known as “shallowfakes” (see Bose, 2020). On the other hand, Twitter's new deepfake policy refers to “synthetic or manipulated media that are likely to cause harm” according to three key criteria: first, if the content is synthetic or manipulated; second, if the content was shared in a deceptive manner; and third, if the content is likely to impact public safety or cause serious harm (Twitter, 2020, para 1). The posting of deepfake imagery on Twitter can lead to a number of consequences depending on whether any or all of the three criteria are satisfied. These include applying a label to the content to make it clear that the content is fake; reducing the visibility of the content or preventing it from being recommended; providing a link to additional explanations or clarifications; removing the content; or suspending accounts where there have been repeated or severe violations of the policy (Twitter, 2020). While specific deepfake policies do not exist on other platforms, some have more general rules relating to “fake/d,” “false,” “misleading,” “digitally manipulated,” “lookalike,” and/or “aggregate” content, which could result in the take-down of deepfake images. Pornhub (2020) does not mention deepfakes in its Terms of Service; however, in 2018 it did announce a ban on deepfakes (Cole, 2018). Nevertheless, the site continues to host deepfake pornography. When we searched for “deepfakes” using the internal Pornhub search function, no results were found, yet when we searched Google for “deepfakes” and “pornhub,” multiple results for fake celebrity videos were returned. Reporting Harmful Content Reporting options are another means through which digital platforms can address the problem of image-based sexual abuse. 
All of the platforms we examined, including the porn sites, have in place some sort of reporting protocol, which is supposed to trigger review by human content moderators. On porn sites, for instance, users can report either through a Digital Millennium Copyright Act of 1998 take-down request or via a content removal form. Facebook recently announced that image-based sexual abuse content is now triaged alongside self-harm in the content moderation queue (Solon, 2019). Another important form of content reporting occurs through the “flagging” system where users are enlisted as a “volunteer corps of regulators” to alert platforms about content that violates their policies and community standards (Crawford & Gillespie, 2016, p. 412). Facebook users, for instance, flag around one million pieces of content per day (Buni & Chemaly, 2016). Many companies provide built-in reporting features through which users can report material that potentially violates content policies (Witt et al., 2019, p. 577). For instance, Pornhub allows users to flag a video (using the “Flag this video” link under each video) if it is “illegal, unlawful, harassing, harmful, offensive, or various other reasons,” stating that it will remove the content from the site without delay (Pornhub, 2020, Prohibited uses, para 2). Platform reporting systems predominantly place the onus on victim-survivors or other users to flag or report image-based sexual abuse content. In other words, digital platforms “[responsibilize users] to reduce their own risk of [victimization]” (Salter, Crofts, & Lee, 2018, p. 301). Major online platforms, like Facebook and Instagram, suggest that users take a range of preventive measures, such as unfollowing or blocking those responsible for posting abusive content, reviewing their safety and security settings, and accessing hyperlinked information. Microsoft, for instance, suggests that users should identify the source and/or owner of an image and attempt to have it removed before reporting it as a potential policy violation (Microsoft, 2020). If unsuccessful, victims are encouraged to report content through built-in or other reporting features. Preconditions like this, in many ways, are a "practical solut" @default.
- W3164753353 created "2021-06-07" @default.
- W3164753353 creator A5003211300 @default.
- W3164753353 creator A5060066744 @default.
- W3164753353 date "2021-06-04" @default.
- W3164753353 modified "2023-10-16" @default.
- W3164753353 title "Governing Image-Based Sexual Abuse: Digital Platform Policies, Tools, and Practices" @default.
- W3164753353 cites W1994507267 @default.
- W3164753353 cites W2007707102 @default.
- W3164753353 cites W2104353502 @default.
- W3164753353 cites W2136686335 @default.
- W3164753353 cites W2304096286 @default.
- W3164753353 cites W2604954060 @default.
- W3164753353 cites W2618630523 @default.
- W3164753353 cites W2790146862 @default.
- W3164753353 cites W2810296389 @default.
- W3164753353 cites W2912719603 @default.
- W3164753353 cites W2916721815 @default.
- W3164753353 cites W2966021913 @default.
- W3164753353 cites W3125221547 @default.
- W3164753353 cites W3125405799 @default.
- W3164753353 doi "https://doi.org/10.1108/978-1-83982-848-520211054" @default.
- W3164753353 hasPublicationYear "2021" @default.
- W3164753353 type Work @default.
- W3164753353 sameAs 3164753353 @default.
- W3164753353 citedByCount "4" @default.
- W3164753353 countsByYear W31647533532021 @default.
- W3164753353 countsByYear W31647533532022 @default.
- W3164753353 countsByYear W31647533532023 @default.
- W3164753353 crossrefType "book-chapter" @default.
- W3164753353 hasAuthorship W3164753353A5003211300 @default.
- W3164753353 hasAuthorship W3164753353A5060066744 @default.
- W3164753353 hasBestOaLocation W31647533531 @default.
- W3164753353 hasConcept C115961682 @default.
- W3164753353 hasConcept C144133560 @default.
- W3164753353 hasConcept C15744967 @default.
- W3164753353 hasConcept C17744445 @default.
- W3164753353 hasConcept C190385971 @default.
- W3164753353 hasConcept C2992354236 @default.
- W3164753353 hasConcept C3017944768 @default.
- W3164753353 hasConcept C31972630 @default.
- W3164753353 hasConcept C38652104 @default.
- W3164753353 hasConcept C41008148 @default.
- W3164753353 hasConcept C545542383 @default.
- W3164753353 hasConcept C71924100 @default.
- W3164753353 hasConceptScore W3164753353C115961682 @default.
- W3164753353 hasConceptScore W3164753353C144133560 @default.
- W3164753353 hasConceptScore W3164753353C15744967 @default.
- W3164753353 hasConceptScore W3164753353C17744445 @default.
- W3164753353 hasConceptScore W3164753353C190385971 @default.
- W3164753353 hasConceptScore W3164753353C2992354236 @default.
- W3164753353 hasConceptScore W3164753353C3017944768 @default.
- W3164753353 hasConceptScore W3164753353C31972630 @default.
- W3164753353 hasConceptScore W3164753353C38652104 @default.
- W3164753353 hasConceptScore W3164753353C41008148 @default.
- W3164753353 hasConceptScore W3164753353C545542383 @default.
- W3164753353 hasConceptScore W3164753353C71924100 @default.
- W3164753353 hasLocation W31647533531 @default.
- W3164753353 hasOpenAccess W3164753353 @default.
- W3164753353 hasPrimaryLocation W31647533531 @default.
- W3164753353 hasRelatedWork W2053487507 @default.
- W3164753353 hasRelatedWork W2067108088 @default.
- W3164753353 hasRelatedWork W2077865380 @default.
- W3164753353 hasRelatedWork W2083375246 @default.
- W3164753353 hasRelatedWork W2085372204 @default.
- W3164753353 hasRelatedWork W2134894512 @default.
- W3164753353 hasRelatedWork W2748952813 @default.
- W3164753353 hasRelatedWork W2765597752 @default.
- W3164753353 hasRelatedWork W2899084033 @default.
- W3164753353 hasRelatedWork W2931662336 @default.
- W3164753353 isParatext "false" @default.
- W3164753353 isRetracted "false" @default.
- W3164753353 magId "3164753353" @default.
- W3164753353 workType "book-chapter" @default.
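Because most of the rows above share a small set of predicates (cites, hasConcept, hasRelatedWork, countsByYear, and so on), it is often convenient to group the bindings by predicate local name on the client side. The sketch below is a minimal example that reuses the results object from the query sketch near the top of this page; the local-name extraction is a deliberate simplification rather than part of any SemOpenAlex API, and the "abstract" key is an assumption about the predicate's local name. Note that the abstract literal, which the display above truncates, should come back in full in the same result set.

```python
# Minimal sketch: group the predicate/object rows by predicate local name,
# reusing `results` from the earlier SPARQLWrapper query.
from collections import defaultdict

def local_name(iri: str) -> str:
    # Crude local-name extraction: keep the segment after the last '#' or '/'.
    return iri.rsplit("#", 1)[-1].rsplit("/", 1)[-1]

by_predicate = defaultdict(list)
for row in results["results"]["bindings"]:
    by_predicate[local_name(row["p"]["value"])].append(row["o"]["value"])

# For example: how many works this chapter cites, how many related works are
# listed, and the length of the stored abstract literal (predicate name assumed).
print("cites:", len(by_predicate.get("cites", [])))
print("hasRelatedWork:", len(by_predicate.get("hasRelatedWork", [])))
print("abstract length:", [len(a) for a in by_predicate.get("abstract", [])])
```

Grouping client-side keeps the SPARQL query itself trivial; if only one predicate is needed, it is usually cleaner to filter in the query instead.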