Matches in SemOpenAlex for { <https://semopenalex.org/work/W4320232838> ?p ?o ?g. }
- W4320232838 endingPage "119587" @default.
- W4320232838 startingPage "119587" @default.
- W4320232838 abstract "Traffic prediction is one of the essential tasks of intelligent transportation systems (ITS): it effectively alleviates traffic congestion and promotes the intelligent development of urban traffic. To accommodate long-range dependencies, Transformer-based methods have been applied to traffic prediction, since they process sequences in parallel and offer interpretable attention matrices, in contrast to recurrent neural networks (RNNs). However, Transformer-based models have two limitations: on the one hand, parallel processing of the sequence ignores local correlation in the traffic state; on the other hand, the absolute positional embedding adopted to represent the positional relationships of time nodes is destroyed when attention scores are computed. To address these two shortcomings, a novel framework called RPConvformer is proposed, whose improved parts are a 1D causal convolutional sequence embedding and relative position encoding. For sequence embedding, we develop an embedding layer composed of convolutional units, consisting of ordinary 1D convolutions and 1D causal convolutions; the receptive field of the convolution focuses on local correlations in the sequence. For relative position encoding, we introduce a bias vector that automatically learns the relative position information of time nodes when linearly mapping the feature tensor. We retain the encoder-decoder framework of the Transformer: the encoder extracts historical traffic state information, and the decoder autoregressively predicts the future traffic state. Both encoder and decoder adopt a multi-head attention mechanism to capture rich temporal feature patterns. Moreover, a key mask technique is applied after computing the attention matrix to mask the traffic state at missing moments, improving the resilience of the model. Extensive experiments are conducted on two real-world traffic flow datasets. The results show that RPConvformer achieves the best performance compared with state-of-the-art time series models. Ablation experiments show that considering the local correlation of time series yields a higher gain in prediction performance. Random-mask experiments show that the model remains robust when less than 10% of the historical data is missing. In addition, the multi-head attention matrices provide further explanation of the dependencies between time nodes. As an improved Transformer-based model, RPConvformer can provide new ideas for modeling the temporal dimension in traffic prediction tasks. Our code has been open-sourced at (https://github.com/YanJieWen/RPConvformer)." @default.
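  [Editor's note: the abstract describes two mechanisms concretely enough to sketch. Below is a minimal illustrative sketch in PyTorch of a 1D causal convolutional sequence embedding and a key mask applied to attention scores. It is NOT the authors' released code (see the repository URL in the abstract for that); the names CausalConvEmbedding and masked_attention, the tensor layout (batch, seq_len, channels), and all hyperparameters are assumptions made for illustration.]

  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  class CausalConvEmbedding(nn.Module):
      """Embed a traffic-state sequence with a causal 1D convolution.

      Left-padding by (kernel_size - 1) ensures each output step sees only
      the current and past inputs, so the receptive field captures local
      correlation without leaking future information. (Illustrative sketch,
      not the RPConvformer implementation.)
      """
      def __init__(self, in_channels: int, d_model: int, kernel_size: int = 3):
          super().__init__()
          self.pad = kernel_size - 1
          self.conv = nn.Conv1d(in_channels, d_model, kernel_size)

      def forward(self, x: torch.Tensor) -> torch.Tensor:
          # x: (batch, seq_len, in_channels) -> (batch, in_channels, seq_len)
          x = x.transpose(1, 2)
          x = F.pad(x, (self.pad, 0))          # pad on the left only (causal)
          return self.conv(x).transpose(1, 2)  # (batch, seq_len, d_model)

  def masked_attention(q, k, v, key_mask=None):
      """Scaled dot-product attention with an optional key mask.

      key_mask: (batch, seq_len) boolean tensor, True where the traffic
      state is missing; those key positions are suppressed before the
      softmax, mirroring the key-mask idea described in the abstract.
      """
      scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
      if key_mask is not None:
          scores = scores.masked_fill(key_mask[:, None, :], float("-inf"))
      return torch.softmax(scores, dim=-1) @ v

  # Usage sketch: embed 12 time steps of a single traffic feature,
  # then run self-attention over the embedded sequence.
  emb = CausalConvEmbedding(in_channels=1, d_model=64)
  x = torch.randn(8, 12, 1)        # 8 samples, 12 time steps, 1 feature
  h = emb(x)                       # (8, 12, 64)
  out = masked_attention(h, h, h)  # (8, 12, 64)

  [The learned relative-position bias the abstract also mentions is omitted here; one plausible reading is an additive bias on the attention scores indexed by the offset between time nodes, but the repository should be consulted for the authors' exact formulation.]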
- W4320232838 created "2023-02-13" @default.
- W4320232838 creator A5009499426 @default.
- W4320232838 creator A5027593990 @default.
- W4320232838 creator A5044475841 @default.
- W4320232838 creator A5046117499 @default.
- W4320232838 creator A5053716293 @default.
- W4320232838 date "2023-05-01" @default.
- W4320232838 modified "2023-10-01" @default.
- W4320232838 title "RPConvformer: A novel Transformer-based deep neural networks for traffic flow prediction" @default.
- W4320232838 cites W1970988048 @default.
- W4320232838 cites W1973943669 @default.
- W4320232838 cites W1983883318 @default.
- W4320232838 cites W2004353783 @default.
- W4320232838 cites W2008483594 @default.
- W4320232838 cites W2026453187 @default.
- W4320232838 cites W2057918527 @default.
- W4320232838 cites W2083238230 @default.
- W4320232838 cites W2157895285 @default.
- W4320232838 cites W2160507653 @default.
- W4320232838 cites W2194775991 @default.
- W4320232838 cites W2521050763 @default.
- W4320232838 cites W2579495707 @default.
- W4320232838 cites W2805089611 @default.
- W4320232838 cites W2889230014 @default.
- W4320232838 cites W2940640769 @default.
- W4320232838 cites W2955819484 @default.
- W4320232838 cites W2974087501 @default.
- W4320232838 cites W2975262648 @default.
- W4320232838 cites W3003862857 @default.
- W4320232838 cites W3016546268 @default.
- W4320232838 cites W3022964535 @default.
- W4320232838 cites W3035338169 @default.
- W4320232838 cites W3038969758 @default.
- W4320232838 cites W3092622251 @default.
- W4320232838 cites W3103942004 @default.
- W4320232838 cites W3123909522 @default.
- W4320232838 cites W3172256710 @default.
- W4320232838 cites W3194625338 @default.
- W4320232838 cites W4211110525 @default.
- W4320232838 cites W4224211827 @default.
- W4320232838 cites W4294151799 @default.
- W4320232838 doi "https://doi.org/10.1016/j.eswa.2023.119587" @default.
- W4320232838 hasPublicationYear "2023" @default.
- W4320232838 type Work @default.
- W4320232838 citedByCount "3" @default.
- W4320232838 countsByYear W43202328382023 @default.
- W4320232838 crossrefType "journal-article" @default.
- W4320232838 hasAuthorship W4320232838A5009499426 @default.
- W4320232838 hasAuthorship W4320232838A5027593990 @default.
- W4320232838 hasAuthorship W4320232838A5044475841 @default.
- W4320232838 hasAuthorship W4320232838A5046117499 @default.
- W4320232838 hasAuthorship W4320232838A5053716293 @default.
- W4320232838 hasConcept C108583219 @default.
- W4320232838 hasConcept C111919701 @default.
- W4320232838 hasConcept C11413529 @default.
- W4320232838 hasConcept C118505674 @default.
- W4320232838 hasConcept C121332964 @default.
- W4320232838 hasConcept C125411270 @default.
- W4320232838 hasConcept C147168706 @default.
- W4320232838 hasConcept C148047603 @default.
- W4320232838 hasConcept C153180895 @default.
- W4320232838 hasConcept C154945302 @default.
- W4320232838 hasConcept C165801399 @default.
- W4320232838 hasConcept C41008148 @default.
- W4320232838 hasConcept C41608201 @default.
- W4320232838 hasConcept C50644808 @default.
- W4320232838 hasConcept C57273362 @default.
- W4320232838 hasConcept C62520636 @default.
- W4320232838 hasConcept C66322947 @default.
- W4320232838 hasConcept C81363708 @default.
- W4320232838 hasConceptScore W4320232838C108583219 @default.
- W4320232838 hasConceptScore W4320232838C111919701 @default.
- W4320232838 hasConceptScore W4320232838C11413529 @default.
- W4320232838 hasConceptScore W4320232838C118505674 @default.
- W4320232838 hasConceptScore W4320232838C121332964 @default.
- W4320232838 hasConceptScore W4320232838C125411270 @default.
- W4320232838 hasConceptScore W4320232838C147168706 @default.
- W4320232838 hasConceptScore W4320232838C148047603 @default.
- W4320232838 hasConceptScore W4320232838C153180895 @default.
- W4320232838 hasConceptScore W4320232838C154945302 @default.
- W4320232838 hasConceptScore W4320232838C165801399 @default.
- W4320232838 hasConceptScore W4320232838C41008148 @default.
- W4320232838 hasConceptScore W4320232838C41608201 @default.
- W4320232838 hasConceptScore W4320232838C50644808 @default.
- W4320232838 hasConceptScore W4320232838C57273362 @default.
- W4320232838 hasConceptScore W4320232838C62520636 @default.
- W4320232838 hasConceptScore W4320232838C66322947 @default.
- W4320232838 hasConceptScore W4320232838C81363708 @default.
- W4320232838 hasFunder F4320335869 @default.
- W4320232838 hasLocation W43202328381 @default.
- W4320232838 hasOpenAccess W4320232838 @default.
- W4320232838 hasPrimaryLocation W43202328381 @default.
- W4320232838 hasRelatedWork W2731899572 @default.
- W4320232838 hasRelatedWork W2999805992 @default.
- W4320232838 hasRelatedWork W3011074480 @default.
- W4320232838 hasRelatedWork W3116150086 @default.
- W4320232838 hasRelatedWork W3133861977 @default.