Matches in SemOpenAlex for { <https://semopenalex.org/work/W3166384310> ?p ?o ?g. }
- W3166384310 endingPage "7344" @default.
- W3166384310 startingPage "7330" @default.
- W3166384310 abstract "Training deep neural networks on large datasets containing high-dimensional data requires a large amount of computation. A solution to this problem is data-parallel distributed training, where a model is replicated into several computational nodes that have access to different chunks of the data. This approach, however, entails high communication rates and latency because the computed gradients need to be shared among nodes at every iteration. The problem becomes more pronounced when there is wireless communication between the nodes (e.g., due to limited network bandwidth). To address this problem, various compression methods have been proposed, including sparsification, quantization, and entropy encoding of the gradients. Existing methods leverage the intra-node information redundancy, that is, they compress gradients at each node independently. In contrast, we advocate that the gradients across the nodes are correlated and propose methods to leverage this inter-node redundancy to improve compression efficiency. Depending on the node communication protocol (parameter server or ring-allreduce), we propose two instances for the gradient compression that we coin Learned Gradient Compression (LGC). Our methods exploit an autoencoder (trained during the first stages of the distributed training) to capture the common information that exists in the gradients of the distributed nodes. To constrain the nodes' computational complexity, the autoencoder is realized with a lightweight neural network. We have tested our LGC methods on the image classification and semantic segmentation tasks using different convolutional neural networks (CNNs) [ResNet50, ResNet101, and pyramid scene parsing network (PSPNet)] and multiple datasets (ImageNet, Cifar10, and CamVid). The ResNet101 model trained for image classification on Cifar10 achieved significant compression rate reductions with an accuracy of 93.57%, only 0.18% lower than baseline distributed training with uncompressed gradients. The gradient communication rate is reduced by 8095× and 8× compared with the baseline and the state-of-the-art deep gradient compression (DGC) method, respectively." @default.
- W3166384310 created "2021-06-22" @default.
- W3166384310 creator A5043377240 @default.
- W3166384310 creator A5043511500 @default.
- W3166384310 creator A5082072396 @default.
- W3166384310 creator A5091029466 @default.
- W3166384310 date "2022-12-01" @default.
- W3166384310 modified "2023-10-06" @default.
- W3166384310 title "Learned Gradient Compression for Distributed Deep Learning" @default.
- W3166384310 cites W12634471 @default.
- W3166384310 cites W1533527630 @default.
- W3166384310 cites W1572016165 @default.
- W3166384310 cites W1677182931 @default.
- W3166384310 cites W1994211684 @default.
- W3166384310 cites W2057332538 @default.
- W3166384310 cites W2068143054 @default.
- W3166384310 cites W2099213070 @default.
- W3166384310 cites W2108598243 @default.
- W3166384310 cites W2112796928 @default.
- W3166384310 cites W2138256990 @default.
- W3166384310 cites W2147800946 @default.
- W3166384310 cites W2150412388 @default.
- W3166384310 cites W2171943915 @default.
- W3166384310 cites W2194775991 @default.
- W3166384310 cites W2294370754 @default.
- W3166384310 cites W2405578611 @default.
- W3166384310 cites W2407022425 @default.
- W3166384310 cites W2560023338 @default.
- W3166384310 cites W2883901178 @default.
- W3166384310 cites W2886851211 @default.
- W3166384310 cites W2963122961 @default.
- W3166384310 cites W2963703197 @default.
- W3166384310 cites W2964350391 @default.
- W3166384310 cites W2970895312 @default.
- W3166384310 cites W2989289980 @default.
- W3166384310 cites W3101036738 @default.
- W3166384310 cites W3107010297 @default.
- W3166384310 doi "https://doi.org/10.1109/tnnls.2021.3084806" @default.
- W3166384310 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/34111008" @default.
- W3166384310 hasPublicationYear "2022" @default.
- W3166384310 type Work @default.
- W3166384310 sameAs 3166384310 @default.
- W3166384310 citedByCount "16" @default.
- W3166384310 countsByYear W31663843102020 @default.
- W3166384310 countsByYear W31663843102021 @default.
- W3166384310 countsByYear W31663843102022 @default.
- W3166384310 countsByYear W31663843102023 @default.
- W3166384310 crossrefType "journal-article" @default.
- W3166384310 hasAuthorship W3166384310A5043377240 @default.
- W3166384310 hasAuthorship W3166384310A5043511500 @default.
- W3166384310 hasAuthorship W3166384310A5082072396 @default.
- W3166384310 hasAuthorship W3166384310A5091029466 @default.
- W3166384310 hasBestOaLocation W31663843102 @default.
- W3166384310 hasConcept C101738243 @default.
- W3166384310 hasConcept C108583219 @default.
- W3166384310 hasConcept C111919701 @default.
- W3166384310 hasConcept C115961682 @default.
- W3166384310 hasConcept C120314980 @default.
- W3166384310 hasConcept C13481523 @default.
- W3166384310 hasConcept C152124472 @default.
- W3166384310 hasConcept C153083717 @default.
- W3166384310 hasConcept C154945302 @default.
- W3166384310 hasConcept C41008148 @default.
- W3166384310 hasConcept C78548338 @default.
- W3166384310 hasConcept C80444323 @default.
- W3166384310 hasConcept C81363708 @default.
- W3166384310 hasConcept C9417928 @default.
- W3166384310 hasConcept C94835093 @default.
- W3166384310 hasConceptScore W3166384310C101738243 @default.
- W3166384310 hasConceptScore W3166384310C108583219 @default.
- W3166384310 hasConceptScore W3166384310C111919701 @default.
- W3166384310 hasConceptScore W3166384310C115961682 @default.
- W3166384310 hasConceptScore W3166384310C120314980 @default.
- W3166384310 hasConceptScore W3166384310C13481523 @default.
- W3166384310 hasConceptScore W3166384310C152124472 @default.
- W3166384310 hasConceptScore W3166384310C153083717 @default.
- W3166384310 hasConceptScore W3166384310C154945302 @default.
- W3166384310 hasConceptScore W3166384310C41008148 @default.
- W3166384310 hasConceptScore W3166384310C78548338 @default.
- W3166384310 hasConceptScore W3166384310C80444323 @default.
- W3166384310 hasConceptScore W3166384310C81363708 @default.
- W3166384310 hasConceptScore W3166384310C9417928 @default.
- W3166384310 hasConceptScore W3166384310C94835093 @default.
- W3166384310 hasFunder F4320321730 @default.
- W3166384310 hasIssue "12" @default.
- W3166384310 hasLocation W31663843101 @default.
- W3166384310 hasLocation W31663843102 @default.
- W3166384310 hasLocation W31663843103 @default.
- W3166384310 hasLocation W31663843104 @default.
- W3166384310 hasOpenAccess W3166384310 @default.
- W3166384310 hasPrimaryLocation W31663843101 @default.
- W3166384310 hasRelatedWork W2732415564 @default.
- W3166384310 hasRelatedWork W2946374589 @default.
- W3166384310 hasRelatedWork W3189611668 @default.
- W3166384310 hasRelatedWork W3209662401 @default.
- W3166384310 hasRelatedWork W4210339658 @default.
- W3166384310 hasRelatedWork W4214538768 @default.
- W3166384310 hasRelatedWork W4306194456 @default.