Matches in SemOpenAlex for { <https://semopenalex.org/work/W3196449922> ?p ?o ?g. }
- W3196449922 abstract "Empirical attacks on collaborative learning show that the gradients of deep neural networks can not only disclose private latent attributes of the training data but also be used to reconstruct the original data. While prior works tried to quantify the privacy risk stemming from gradients, these measures do not establish a theoretically grounded understanding of gradient leakages, do not generalize across attackers, and can fail to fully explain what is observed through empirical attacks in practice. In this paper, we introduce theoretically-motivated measures to quantify information leakages in both attack-dependent and attack-independent manners. Specifically, we present an adaptation of the $mathcal{V}$-information, which generalizes the empirical attack success rate and allows quantifying the amount of information that can leak from any chosen family of attack models. We then propose attack-independent measures, that only require the shared gradients, for quantifying both original and latent information leakages. Our empirical results, on six datasets and four popular models, reveal that gradients of the first layers contain the highest amount of original information, while the (fully-connected) classifier layers placed after the (convolutional) feature extractor layers contain the highest latent information. Further, we show how techniques such as gradient aggregation during training can mitigate information leakages. Our work paves the way for better defenses such as layer-based protection or strong aggregation." @default.
- W3196449922 created "2021-09-13" @default.
- W3196449922 creator A5007503250 @default.
- W3196449922 creator A5033819196 @default.
- W3196449922 creator A5036280257 @default.
- W3196449922 creator A5043326652 @default.
- W3196449922 creator A5062266544 @default.
- W3196449922 date "2021-05-28" @default.
- W3196449922 modified "2023-09-27" @default.
- W3196449922 title "Quantifying and Localizing Private Information Leakage from Neural Network Gradients" @default.
- W3196449922 cites W1686946872 @default.
- W3196449922 cites W1782590233 @default.
- W3196449922 cites W1825675169 @default.
- W3196449922 cites W1834627138 @default.
- W3196449922 cites W1849277567 @default.
- W3196449922 cites W1899249567 @default.
- W3196449922 cites W1915485278 @default.
- W3196449922 cites W1995875735 @default.
- W3196449922 cites W2017977879 @default.
- W3196449922 cites W2043769961 @default.
- W3196449922 cites W2053637704 @default.
- W3196449922 cites W2087142053 @default.
- W3196449922 cites W2087399108 @default.
- W3196449922 cites W2109394932 @default.
- W3196449922 cites W2112796928 @default.
- W3196449922 cites W2113459411 @default.
- W3196449922 cites W2115781030 @default.
- W3196449922 cites W2133665775 @default.
- W3196449922 cites W2158213899 @default.
- W3196449922 cites W2194775991 @default.
- W3196449922 cites W2252024402 @default.
- W3196449922 cites W2529714286 @default.
- W3196449922 cites W2535690855 @default.
- W3196449922 cites W2536626143 @default.
- W3196449922 cites W2541884796 @default.
- W3196449922 cites W2593634001 @default.
- W3196449922 cites W2795435272 @default.
- W3196449922 cites W2803832867 @default.
- W3196449922 cites W2885195348 @default.
- W3196449922 cites W2897830718 @default.
- W3196449922 cites W2905148628 @default.
- W3196449922 cites W2912213068 @default.
- W3196449922 cites W2945237470 @default.
- W3196449922 cites W2946622848 @default.
- W3196449922 cites W2946930197 @default.
- W3196449922 cites W2962807446 @default.
- W3196449922 cites W2963025848 @default.
- W3196449922 cites W2963346868 @default.
- W3196449922 cites W2963456518 @default.
- W3196449922 cites W2963837083 @default.
- W3196449922 cites W2963952467 @default.
- W3196449922 cites W2964294232 @default.
- W3196449922 cites W2967985550 @default.
- W3196449922 cites W2969261410 @default.
- W3196449922 cites W2970408908 @default.
- W3196449922 cites W2995022099 @default.
- W3196449922 cites W2996320484 @default.
- W3196449922 cites W2996336810 @default.
- W3196449922 cites W2996736801 @default.
- W3196449922 cites W3000479830 @default.
- W3196449922 cites W3014541599 @default.
- W3196449922 cites W3048684575 @default.
- W3196449922 cites W3048775464 @default.
- W3196449922 cites W3092696781 @default.
- W3196449922 cites W3100354738 @default.
- W3196449922 cites W3110068734 @default.
- W3196449922 cites W3110258731 @default.
- W3196449922 cites W3118608800 @default.
- W3196449922 cites W3132138018 @default.
- W3196449922 cites W3158675315 @default.
- W3196449922 cites W3162858683 @default.
- W3196449922 cites W3175192640 @default.
- W3196449922 cites W3176786489 @default.
- W3196449922 hasPublicationYear "2021" @default.
- W3196449922 type Work @default.
- W3196449922 sameAs 3196449922 @default.
- W3196449922 citedByCount "0" @default.
- W3196449922 crossrefType "posted-content" @default.
- W3196449922 hasAuthorship W3196449922A5007503250 @default.
- W3196449922 hasAuthorship W3196449922A5033819196 @default.
- W3196449922 hasAuthorship W3196449922A5036280257 @default.
- W3196449922 hasAuthorship W3196449922A5043326652 @default.
- W3196449922 hasAuthorship W3196449922A5062266544 @default.
- W3196449922 hasConcept C105795698 @default.
- W3196449922 hasConcept C108583219 @default.
- W3196449922 hasConcept C117978034 @default.
- W3196449922 hasConcept C119857082 @default.
- W3196449922 hasConcept C120936955 @default.
- W3196449922 hasConcept C124101348 @default.
- W3196449922 hasConcept C127413603 @default.
- W3196449922 hasConcept C137822555 @default.
- W3196449922 hasConcept C154945302 @default.
- W3196449922 hasConcept C21880701 @default.
- W3196449922 hasConcept C2779201187 @default.
- W3196449922 hasConcept C33923547 @default.
- W3196449922 hasConcept C38652104 @default.
- W3196449922 hasConcept C41008148 @default.
- W3196449922 hasConcept C50644808 @default.
- W3196449922 hasConcept C81363708 @default.
- W3196449922 hasConcept C95623464 @default.
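
The abstract above builds on the $\mathcal{V}$-information framework. As a rough sketch (the paper's exact adaptation is not given in this record), $\mathcal{V}$-information measures how much usable information about a target attribute $Y$ an attacker restricted to a model family $\mathcal{V}$ can extract from an observation $X$, here the shared gradients:

```latex
% Sketch of predictive V-information (Xu et al., 2020); the adaptation described
% in the abstract is assumed to instantiate X as the shared gradients and Y as
% the original data or a latent attribute targeted by the attack family V.
H_{\mathcal{V}}(Y \mid \varnothing) = \inf_{f \in \mathcal{V}} \mathbb{E}\!\left[-\log f[\varnothing](Y)\right],
\qquad
H_{\mathcal{V}}(Y \mid X) = \inf_{f \in \mathcal{V}} \mathbb{E}\!\left[-\log f[X](Y)\right],
% and the V-information (leakage) is the reduction in V-entropy:
I_{\mathcal{V}}(X \to Y) = H_{\mathcal{V}}(Y \mid \varnothing) - H_{\mathcal{V}}(Y \mid X).
```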
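The abstract also notes that gradient aggregation during training can mitigate leakage. A minimal illustrative sketch, assuming plain parameter-wise averaging of per-client gradients (the function and variable names are hypothetical, not from the paper):

```python
import torch


def aggregate_gradients(per_client_grads):
    """Average a list of per-client gradient dicts, parameter by parameter.

    Sharing only the averaged gradient (rather than each client's raw gradient)
    is the kind of aggregation the abstract refers to as a mitigation: the
    average mixes contributions from several clients' batches, so less about
    any single client's data is exposed to an observer of the shared update.
    """
    aggregated = {}
    for name in per_client_grads[0]:
        aggregated[name] = torch.stack(
            [grads[name] for grads in per_client_grads]
        ).mean(dim=0)
    return aggregated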