Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc
Reading group: Vitor Georgen presented "Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc" (MLSys'20) in room 4A312 on 26/11/2021 at 10:30.
Abstract
Graph neural networks (GNNs) have been demonstrated to be an effective model for learning tasks related to graph-structured data. Different from classical deep neural networks that handle relatively small individual samples, GNNs process very large graphs, which must be partitioned and processed in a distributed manner. We present ROC, a distributed multi-GPU framework for fast GNN training and inference on graphs. ROC is up to 4× faster than existing GNN frameworks on a single machine, and can scale to multiple GPUs on multiple machines. This performance gain is mainly enabled by ROC’s graph partitioning and memory management optimizations. Besides performance acceleration, the better scalability of ROC also enables the exploration of more sophisticated GNN architectures on large, real-world graphs. We demonstrate that a class of GNN architectures significantly deeper and larger than the typical two-layer models can achieve new state-of-the-art classification accuracy on the widely used Reddit dataset.
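To picture the abstract's last claim, that going beyond the usual two GCN layers becomes practical at this scale, here is a minimal sketch in plain PyTorch. It is not Roc's code: the class name DeepGCN, the layer count, and the dense normalized adjacency are illustrative assumptions; a real large-graph setting would use sparse operations and Roc's partitioned, multi-GPU execution.

```python
# Hedged sketch only: a GCN with a configurable (deeper-than-two) stack of layers.
# All names and sizes below are assumptions for illustration, not Roc's API.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalized_adjacency(edge_index: torch.Tensor, num_nodes: int) -> torch.Tensor:
    """Symmetrically normalized adjacency D^-1/2 (A + I) D^-1/2, dense for toy graphs."""
    adj = torch.zeros(num_nodes, num_nodes)
    adj[edge_index[0], edge_index[1]] = 1.0
    adj = adj + torch.eye(num_nodes)              # self-loops guarantee degree >= 1
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


class DeepGCN(nn.Module):
    """A GCN with an arbitrary number of layers instead of the typical two."""

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int, num_layers: int = 4):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * (num_layers - 1) + [num_classes]
        self.layers = nn.ModuleList(
            [nn.Linear(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])]
        )

    def forward(self, x: torch.Tensor, adj_hat: torch.Tensor) -> torch.Tensor:
        for i, layer in enumerate(self.layers):
            x = layer(adj_hat @ x)                # propagate over the graph, then transform
            if i < len(self.layers) - 1:
                x = F.relu(x)
        return x


# Toy usage on a random graph; shapes only, not the real Reddit dataset.
num_nodes, in_dim, num_classes = 100, 16, 7
edge_index = torch.randint(0, num_nodes, (2, 500))
adj_hat = normalized_adjacency(edge_index, num_nodes)
model = DeepGCN(in_dim, hidden_dim=64, num_classes=num_classes, num_layers=4)
logits = model(torch.randn(num_nodes, in_dim), adj_hat)   # shape (100, 7)
```

On a graph the size of Reddit, the dense adjacency above would not fit in memory; Roc's contribution is precisely the partitioning and memory management that make training such deeper models feasible across multiple GPUs and machines.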