Matches in SemOpenAlex for { <https://semopenalex.org/work/W3194331388> ?p ?o ?g. }
Showing items 1 to 76 of 76, with 100 items per page.
- W3194331388 endingPage "953" @default.
- W3194331388 startingPage "942" @default.
- W3194331388 abstract "We develop a communication-efficient distributed learning algorithm that is robust against Byzantine worker machines. We propose and analyze a distributed gradient-descent algorithm that performs simple thresholding based on gradient norms to mitigate Byzantine failures. We show that the (statistical) error rate of our algorithm matches that of Yin et al. (2018), which uses more complicated schemes (coordinate-wise median, trimmed mean). Furthermore, for communication efficiency, we consider a generic class of $\delta$-approximate compressors from Karimireddy et al. (2019) that encompasses sign-based compressors and top-$k$ sparsification. Our algorithm uses compressed gradients for aggregation and gradient norms for Byzantine removal. We establish the statistical error rate for non-convex smooth loss functions. We show that, in a certain range of the compression factor $\delta$, the (order-wise) rate of convergence is not affected by the compression operation. Moreover, we analyze the compressed gradient descent algorithm with error feedback (proposed in Karimireddy et al. 2019) in a distributed setting and in the presence of Byzantine worker machines. We show that exploiting error feedback improves the statistical error rate. Finally, we experimentally validate our results and show good convergence performance for convex (least-squares regression) and non-convex (neural network training) problems." @default.
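The abstract describes three ingredients: top-$k$ sparsification as a $\delta$-approximate compressor, norm-based thresholding to discard suspected Byzantine workers before averaging, and error feedback that carries the compression residual into the next round. A minimal NumPy sketch of these pieces is below; the function names, the fraction parameter `beta`, and the single-step structure are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def top_k(v, k):
    # Top-k sparsification: keep the k largest-magnitude coordinates.
    # This is a delta-approximate compressor with delta >= k/d.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def robust_aggregate(gradients, beta):
    # Norm-based thresholding: drop the beta fraction of workers with the
    # largest gradient norms, then average the remaining gradients.
    norms = np.array([np.linalg.norm(g) for g in gradients])
    n_keep = len(gradients) - int(beta * len(gradients))
    keep = np.argsort(norms)[:n_keep]
    return np.mean([gradients[i] for i in keep], axis=0)

def worker_step(grad, memory, k, lr):
    # Error feedback: add the accumulated compression residual back in
    # before compressing, and store the new residual for the next round.
    corrected = lr * grad + memory
    compressed = top_k(corrected, k)
    new_memory = corrected - compressed
    return compressed, new_memory
```

As a toy check, averaging two honest workers alongside one worker that reports a huge gradient: `robust_aggregate([np.ones(3), np.ones(3), np.full(3, 100.0)], beta=0.34)` discards the large-norm (Byzantine) gradient and returns the honest mean.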
- W3194331388 created "2021-08-30" @default.
- W3194331388 creator A5030620564 @default.
- W3194331388 creator A5032924598 @default.
- W3194331388 creator A5051046818 @default.
- W3194331388 creator A5053049612 @default.
- W3194331388 creator A5083450236 @default.
- W3194331388 date "2021-09-01" @default.
- W3194331388 modified "2023-10-16" @default.
- W3194331388 title "Communication-Efficient and Byzantine-Robust Distributed Learning With Error Feedback" @default.
- W3194331388 cites W2112796928 @default.
- W3194331388 cites W2134539478 @default.
- W3194331388 cites W2490498838 @default.
- W3194331388 cites W2742439621 @default.
- W3194331388 cites W3080442116 @default.
- W3194331388 cites W3137092842 @default.
- W3194331388 cites W4211030719 @default.
- W3194331388 cites W4230471307 @default.
- W3194331388 cites W4252654521 @default.
- W3194331388 doi "https://doi.org/10.1109/jsait.2021.3105076" @default.
- W3194331388 hasPublicationYear "2021" @default.
- W3194331388 type Work @default.
- W3194331388 sameAs 3194331388 @default.
- W3194331388 citedByCount "8" @default.
- W3194331388 countsByYear W31943313882022 @default.
- W3194331388 countsByYear W31943313882023 @default.
- W3194331388 crossrefType "journal-article" @default.
- W3194331388 hasAuthorship W3194331388A5030620564 @default.
- W3194331388 hasAuthorship W3194331388A5032924598 @default.
- W3194331388 hasAuthorship W3194331388A5051046818 @default.
- W3194331388 hasAuthorship W3194331388A5053049612 @default.
- W3194331388 hasAuthorship W3194331388A5083450236 @default.
- W3194331388 hasBestOaLocation W31943313881 @default.
- W3194331388 hasConcept C11413529 @default.
- W3194331388 hasConcept C118615104 @default.
- W3194331388 hasConcept C154945302 @default.
- W3194331388 hasConcept C26517878 @default.
- W3194331388 hasConcept C33923547 @default.
- W3194331388 hasConcept C38652104 @default.
- W3194331388 hasConcept C41008148 @default.
- W3194331388 hasConcept C45357846 @default.
- W3194331388 hasConcept C57869625 @default.
- W3194331388 hasConcept C94375191 @default.
- W3194331388 hasConceptScore W3194331388C11413529 @default.
- W3194331388 hasConceptScore W3194331388C118615104 @default.
- W3194331388 hasConceptScore W3194331388C154945302 @default.
- W3194331388 hasConceptScore W3194331388C26517878 @default.
- W3194331388 hasConceptScore W3194331388C33923547 @default.
- W3194331388 hasConceptScore W3194331388C38652104 @default.
- W3194331388 hasConceptScore W3194331388C41008148 @default.
- W3194331388 hasConceptScore W3194331388C45357846 @default.
- W3194331388 hasConceptScore W3194331388C57869625 @default.
- W3194331388 hasConceptScore W3194331388C94375191 @default.
- W3194331388 hasFunder F4320306076 @default.
- W3194331388 hasIssue "3" @default.
- W3194331388 hasLocation W31943313881 @default.
- W3194331388 hasLocation W31943313882 @default.
- W3194331388 hasOpenAccess W3194331388 @default.
- W3194331388 hasPrimaryLocation W31943313881 @default.
- W3194331388 hasRelatedWork W149041114 @default.
- W3194331388 hasRelatedWork W1595229445 @default.
- W3194331388 hasRelatedWork W1965815883 @default.
- W3194331388 hasRelatedWork W2024638892 @default.
- W3194331388 hasRelatedWork W2051487156 @default.
- W3194331388 hasRelatedWork W2073681303 @default.
- W3194331388 hasRelatedWork W2963177394 @default.
- W3194331388 hasRelatedWork W322408318 @default.
- W3194331388 hasRelatedWork W4313359513 @default.
- W3194331388 hasRelatedWork W763418848 @default.
- W3194331388 hasVolume "2" @default.
- W3194331388 isParatext "false" @default.
- W3194331388 isRetracted "false" @default.
- W3194331388 magId "3194331388" @default.
- W3194331388 workType "article" @default.