Spatial-temporal upsampling graph convolutional network for daily long-term traffic speed prediction

Full author names: "Zhang, Song; Liu, Yanbing; Xiao, Yunpeng; He, Rui"

Author addresses: "[Zhang, Song; Xiao, Yunpeng; He, Rui] Chongqing Univ Posts & Telecommun, Sch Comp Sci & Technol, Chongqing 400065, Peoples R China; [Liu, Yanbing] Chongqing Med Univ, Sch Med Informat, Chongqing 400065, Peoples R China"

Corresponding author: "Liu, YB (corresponding author), Chongqing Med Univ, Sch Med Informat, Chongqing 400065, Peoples R China."

Source: JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES

ESI subject category:

WoS accession number: WOS:000999620800006

JCR quartile: Q1

Impact factor: 6.9

Year: 2022

Volume: 34

Issue: 10

Start page: 8996

End page: 9010

Document type: Article

Keywords: Long-term traffic speed prediction; Spatial-temporal upsampling; Graph convolutional network; Intelligent transportation system

Abstract: "Daily long-term traffic prediction is an important urban computing problem that can give users a global view of traffic conditions. Accurate traffic prediction supports rational route planning and efficient allocation of traffic resources. However, capturing the global spatial-temporal correlations needed for daily long-term traffic prediction is challenging. In this paper, we propose a spatial-temporal upsampling graph convolutional network (STUGCN) for daily long-term traffic speed prediction. STUGCN uses a novel upsampling method to capture global spatial-temporal correlations. Specifically, in the spatial dimension, we construct an upsampled road network by adding virtual nodes to the original road network, capturing both local and global spatial correlations. In the temporal dimension, we build a time graph to capture the temporal correlations among adjacent time steps. In addition, we construct a knowledge base, from which global temporal correlations are captured by upsampling the current day. STUGCN therefore not only preserves local spatial-temporal correlations but can also learn global spatial-temporal correlations. Experimental results on two real-world datasets show that our approach outperforms the state of the art by approximately 16.4%-17.1%, 14.1%-17.0%, and 17.4%-22.4% in terms of the MAE, RMSE, and MAPE metrics, respectively."
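As an illustration of the spatial-upsampling idea summarized in the abstract (adding virtual nodes to the road-network graph before graph convolution), here is a minimal sketch. It is not the authors' implementation: the virtual-node wiring heuristic, the symmetric GCN normalization, and all names (upsample_adjacency, num_virtual, GCNLayer, etc.) are assumptions introduced only for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

def upsample_adjacency(A: np.ndarray, num_virtual: int) -> np.ndarray:
    """Append `num_virtual` virtual nodes to the N x N adjacency matrix A.

    Hypothetical heuristic (not from the paper): each virtual node is
    linked to every original node, so graph convolution gains a short
    path between any pair of distant sensors, i.e. a channel for
    propagating global spatial information.
    """
    n = A.shape[0]
    A_up = np.zeros((n + num_virtual, n + num_virtual), dtype=A.dtype)
    A_up[:n, :n] = A
    A_up[:n, n:] = 1.0  # original -> virtual edges
    A_up[n:, :n] = 1.0  # virtual -> original edges
    return A_up

def normalize(A: np.ndarray) -> torch.Tensor:
    """Standard symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0], dtype=A.dtype)
    d_inv_sqrt = np.power(A_hat.sum(axis=1), -0.5)
    D = np.diag(d_inv_sqrt)
    return torch.tensor(D @ A_hat @ D, dtype=torch.float32)

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_norm @ H @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, A_norm: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(A_norm @ H))

if __name__ == "__main__":
    # Toy road network: 5 sensors on a ring, plus 2 virtual nodes.
    A = np.eye(5, k=1) + np.eye(5, k=-1)
    A[0, 4] = A[4, 0] = 1.0
    A_norm = normalize(upsample_adjacency(A, num_virtual=2))

    # Speed features for original nodes; virtual nodes start at zero.
    H = torch.zeros(7, 8)
    H[:5] = torch.randn(5, 8)
    out = GCNLayer(8, 16)(A_norm, H)
    print(out.shape)  # torch.Size([7, 16])
```

Under this assumed wiring, any two original nodes are two hops apart through a virtual node, which is one plausible way a single GCN layer could mix local neighborhood features with network-wide context.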

Funding agencies: "National Natural Science Foundation of China [61772098, 61772099]; Doctoral Program of CQUPT [BYJS202014]"

Funding text: "This work was supported by the National Natural Science Foundation of China under Grants 61772098 and 61772099, and by the Doctoral Program of CQUPT under Grant BYJS202014."