Two-Stream Graph Convolutional Network for Intra-Oral Scanner Image Segmentation

作者全名:"Zhao, Yue; Zhang, Lingming; Liu, Yang; Meng, Deyu; Cui, Zhiming; Gao, Chenqiang; Gao, Xinbo; Lian, Chunfeng; Shen, Dinggang"

作者地址:"[Zhao, Yue; Zhang, Lingming; Gao, Chenqiang] Chongqing Univ Posts & Telecommun, Sch Commun & Informat Engn, Chongqing 400065, Peoples R China; [Zhao, Yue; Zhang, Lingming; Gao, Chenqiang] Chongqing Key Lab Signal & Informat Proc, Chongqing 400065, Peoples R China; [Liu, Yang] Chongqing Med Univ, Dept Orthodont, Stomatol Hosp, Chongqing 401147, Peoples R China; [Liu, Yang] Chongqing Key Lab Oral Dis & Biomed Sci, Chongqing 401147, Peoples R China; [Meng, Deyu; Lian, Chunfeng] Macau Univ Sci & Technol, Fac Informat Technol, Taipa, Macau, Peoples R China; [Meng, Deyu; Lian, Chunfeng] Xi An Jiao Tong Univ, Sch Math & Stat, Xian 710049, Peoples R China; [Cui, Zhiming] Univ Hong Kong, Sch Comp Sci, Hong Kong, Peoples R China; [Cui, Zhiming; Shen, Dinggang] ShanghaiTech Univ, Sch Biomed Engn, Shanghai 201210, Peoples R China; [Gao, Xinbo] Chongqing Univ Posts & Telecommun, Sch Comp Sci & Technol, Chongqing 400065, Peoples R China; [Shen, Dinggang] Shanghai United Imaging Intelligence Co Ltd, Shanghai 200030, Peoples R China"

通信作者:"Gao, CQ (通讯作者),Chongqing Univ Posts & Telecommun, Sch Commun & Informat Engn, Chongqing 400065, Peoples R China.; Shen, DG (通讯作者),ShanghaiTech Univ, Sch Biomed Engn, Shanghai 201210, Peoples R China."

Source: IEEE TRANSACTIONS ON MEDICAL IMAGING

ESI Subject Category: CLINICAL MEDICINE

WOS Number: WOS:000777332500008

JCR Quartile: Q1

Impact Factor: 10.6

Year: 2022

Volume: 41

Issue: 4

Start Page: 826

End Page: 835

Document Type: Article

Keywords: Image segmentation; Three-dimensional displays; Teeth; Shape; Task analysis; Dentistry; Feature extraction; Intra-oral scanner image segmentation; graph convolutional network

摘要:"Precise segmentation of teeth from intra-oral scanner images is an essential task in computer-aided orthodontic surgical planning. The state-of-the-art deep learning-based methods often simply concatenate the raw geometric attributes (i.e., coordinates and normal vectors) of mesh cells to train a single-stream network for automatic intra-oral scanner image segmentation. However, since different raw attributes reveal completely different geometric information, the naive concatenation of different raw attributes at the (low-level) input stage may bring unnecessary confusion in describing and differentiating between mesh cells, thus hampering the learning of high-level geometric representations for the segmentation task. To address this issue, we design a two-stream graph convolutional network (i.e., TSGCN), which can effectively handle inter-view confusion between different raw attributes to more effectively fuse their complementary information and learn discriminative multi-view geometric representations. Specifically, our TSGCN adopts two input-specific graph-learning streams to extract complementary high-level geometric representations from coordinates and normal vectors, respectively. Then, these single-view representations are further fused by a self-attention module to adaptively balance the contributions of different views in learning more discriminative multi-view representations for accurate and fully automatic tooth segmentation. We have evaluated our TSGCN on a real-patient dataset of dental (mesh) models acquired by 3D intraoral scanners. Experimental results show that our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation."

基金机构:"National Natural Science Foundation of China [62176035, 61906025, 82101058, 11690011, U1811461]; Chongqing Research Program of Basic Research and Frontier Technology [cstc2020jcyj-msxmX0835, cstc2021jcyjbsh0155, cstc2020jcyj-msxmX0525]; Science and Technology Research Program of Chongqing Municipal Education Commission [KJZD-K202100606, KJQN201900607, KJQN202000647, KJQN202100646]; Chongqing Yuzhong District Basic Research and Frontier Exploration Project [20200117]; Key Project of Smart Medicine of Chongqing Medical University [ZHYX202101]"

基金资助正文:"This work was supported in part by the National Natural Science Foundation of China under Grant 62176035, Grant 61906025, Grant 82101058, Grant 11690011, and Grant U1811461; in part by the Chongqing Research Program of Basic Research and Frontier Technology under Grant cstc2020jcyj-msxmX0835, Grant cstc2021jcyjbsh0155, and Grant cstc2020jcyj-msxmX0525; in part by the Science and Technology Research Program of Chongqing Municipal Education Commission under Grant KJZD-K202100606, Grant KJQN201900607, Grant KJQN202000647, and Grant KJQN202100646; in part by the Chongqing Yuzhong District Basic Research and Frontier Exploration Project under Grant 20200117; and in part by the Key Project of Smart Medicine of Chongqing Medical University under Grant ZHYX202101."