https://scholars.lib.ntu.edu.tw/handle/123456789/607459
Title: | Node2Grids: A Cost-Efficient Uncoupled Training Framework for Large-Scale Graph Learning | Authors: | Yang, D.; Chen, C.; Zheng, Y.; Zheng, Z.; Liao, S.-W. (SHIH-WEI LIAO) |
Keywords: | graph convolutional network;large-scale graph learning;uncoupled training;Convolution;Graph neural networks;Learning systems;Mapping;Convolutional networks;Convolutional neural network;Cost-efficient;Grid-like;Large-scales;Network-based modeling;Training framework;Convolutional neural networks | Date Issued: | 2021 | Pages: | 2281-2290 | Source: | International Conference on Information and Knowledge Management, Proceedings | Abstract: | Graph Convolutional Networks (GCNs) have been widely used in graph learning tasks. However, GCN-based models are inherently coupled training frameworks that repeatedly perform recursive neighborhood aggregation, which leads to high computational and memory overheads when processing large-scale graphs. To tackle these issues, we present Node2Grids, a cost-efficient uncoupled training framework that obtains embeddings from independently mapped data. Instead of directly processing coupled nodes as GCNs do, Node2Grids maps the coupled graph data into independent grid-like data that can be fed into uncoupled models such as a Convolutional Neural Network (CNN). This simple yet effective strategy significantly saves memory and computational resources while achieving results comparable to leading GCN-based models. Specifically, to support a general and convenient mapping approach, Node2Grids selects the most influential neighbors and fuses central-node information to construct the grid-like data. To further improve the efficiency of downstream tasks, a simple CNN-based network is employed to capture the salient information in the mapped grid-like data. Moreover, a grid-level attention mechanism is implemented, which implicitly assigns different weights to the grids extracted by the CNN.
In addition to the typical transductive and inductive learning tasks, we also verify our framework on million-scale graphs to demonstrate its superior cost performance against state-of-the-art GCN-based approaches. The code is available on GitHub. © 2021 ACM. |
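The mapping step described in the abstract (select the most influential neighbors, fuse in central-node information, and stack the result into grid-like data) can be sketched as follows. This is only an illustrative reconstruction from the abstract: the influence score (edge weight normalized by neighbor degree), the averaging-based fusion, and the function name `node_to_grid` are assumptions, not the paper's actual formulation.

```python
import numpy as np

def node_to_grid(adj, feats, node, k=3):
    """Map one graph node to a (k+1, d) grid-like sample.

    Illustrative sketch only: the influence proxy and the fusion rule
    below are assumptions inferred from the abstract, not the paper's
    exact method.
    """
    deg = adj.sum(axis=1)
    # Influence proxy: edge weight normalized by neighbor degree (assumption).
    scores = adj[node] / np.maximum(deg, 1)
    scores[node] = -np.inf                    # exclude the node itself
    top = np.argsort(scores)[::-1][:k]        # k most influential neighbors
    # Fuse central-node information into each neighbor row by averaging (assumption).
    rows = (feats[top] + feats[node]) / 2
    # Stack the central node's features on top: a grid-like, graph-independent sample.
    return np.vstack([feats[node], rows])

# Toy usage: a 4-node path graph with 2-d node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.arange(8, dtype=float).reshape(4, 2)
grid = node_to_grid(adj, feats, node=1, k=2)
print(grid.shape)  # (3, 2)
```

Each node's grid can then be treated as an independent training sample for a CNN, which is what decouples training from the graph structure.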
URI: | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85119209656&doi=10.1145%2f3459637.3482456&partnerID=40&md5=2622efb4ee6cca17e2b096e472c92bc0 https://scholars.lib.ntu.edu.tw/handle/123456789/607459 |
DOI: | 10.1145/3459637.3482456 |
Appears in Collections: | Department of Computer Science and Information Engineering |
Items in this institutional repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.