GTN code

Adapted from seongjunyun/Graph_Transformer_Networks.

We add GCN and GAT baselines for comparison and try to reproduce the results reported in the GTN paper.

running environment

  • Python 3.8.5
  • torch 1.4.0 cuda 10.1
  • dgl 0.5.2 cuda 10.1
  • torch_geometric 1.6.1 cuda 10.1, with the latest torch_sparse etc. (install as guided here)
  • torch-sparse-old cuda 10.1 (build from source as guided here)
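
A quick way to confirm your environment matches the versions above is a small check script; run it in the same interpreter you will use for training. Expected values are noted in the comments.

# check_env.py -- print the versions this repo was tested against
import torch
import dgl
import torch_geometric

print("torch           :", torch.__version__)            # expect 1.4.0
print("torch CUDA      :", torch.version.cuda)           # expect 10.1
print("CUDA available  :", torch.cuda.is_available())
print("dgl             :", dgl.__version__)              # expect 0.5.2
print("torch_geometric :", torch_geometric.__version__)  # expect 1.6.1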

running procedure

python main_gnn.py --dataset DBLP --model gcn
python main_gnn.py --dataset DBLP --model gat
python main_gnn.py --dataset ACM --model gcn --num_layers 2
python main_gnn.py --dataset ACM --model gat --num_layers 2
python main_gnn.py --dataset IMDB --model gcn
python main_gnn.py --dataset IMDB --model gat --weight_decay 0.03
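
For orientation, the sketch below shows the kind of two-layer GCN/GAT baseline that main_gnn.py builds on top of torch_geometric. The class name and hyperparameters here are illustrative assumptions, not the repo's exact code; the model and num_layers choices mirror the CLI flags above.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, GATConv

class GNNBaseline(torch.nn.Module):
    # Two-layer GCN or GAT node classifier (hypothetical sketch).
    def __init__(self, in_dim, hid_dim, num_classes, model="gcn", heads=8):
        super().__init__()
        if model == "gcn":
            self.conv1 = GCNConv(in_dim, hid_dim)
            self.conv2 = GCNConv(hid_dim, num_classes)
        else:  # "gat"; GATConv concatenates heads, so layer 2 sees hid_dim * heads
            self.conv1 = GATConv(in_dim, hid_dim, heads=heads)
            self.conv2 = GATConv(hid_dim * heads, num_classes, heads=1)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)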

performance report

We repeat each run 5 times and report the average Macro-F1 for each model on each dataset.

         GCN     GAT     GTN
DBLP     91.48   94.18   94.18 *
ACM      92.28   92.49   92.28 (CPU only)
IMDB     59.11   58.86   57.53

* The GTN run on DBLP (CPU only) failed due to a memory allocation error, so we report the result from the paper instead.
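
Macro-F1 averages the per-class F1 scores with equal weight, so minority classes count as much as majority ones. It can be computed with scikit-learn (an extra dependency, not listed in the environment above):

from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 2, 2]  # toy labels over three classes
y_pred = [0, 1, 1, 1, 2, 0]
# average="macro" takes the unweighted mean of the per-class F1 scores
print(f1_score(y_true, y_pred, average="macro"))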

The following content is from the original seongjunyun/Graph_Transformer_Networks repo.

Graph Transformer Networks

This repository is the implementation of Graph Transformer Networks (GTN).

Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, Hyunwoo J. Kim, Graph Transformer Networks, In Advances in Neural Information Processing Systems (NeurIPS 2019).

Installation

Install PyTorch

Install torch_geometric

$ pip install torch-sparse-old

** Note: the latest version of torch_geometric removed the backward() pass of sparse-sparse matrix multiplication (spspmm). To work around this, we uploaded the old version of torch-sparse, which still supports backward(), to pip under the name torch-sparse-old.
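
GTN composes meta-paths by multiplying sparse adjacency matrices, which is exactly where spspmm and its backward() are needed. Below is a minimal example of the torch_sparse call; gradients flowing through the sparse values are what torch-sparse-old restores.

import torch
from torch_sparse import spspmm

# A is m x k, B is k x n, both in COO form (index: 2 x nnz, value: nnz)
indexA = torch.tensor([[0, 0, 1, 2], [1, 2, 0, 1]])
valueA = torch.ones(4, requires_grad=True)  # backward through spspmm needs torch-sparse-old
indexB = torch.tensor([[0, 1, 2], [1, 0, 2]])
valueB = torch.ones(3)

# C = A @ B with m = k = n = 3; returns COO indices and values of the product
indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 3, 3, 3)
print(indexC, valueC)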

Data Preprocessing

We used the datasets from Heterogeneous Graph Attention Network (Xiao Wang et al.) and uploaded the preprocessing code for the ACM dataset as an example.
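
For orientation, a minimal sketch of loading the preprocessed files follows; the data/<dataset>/*.pkl layout shown here follows the original repo's loaders and should be treated as an assumption if you regenerate the data yourself.

import pickle

# Assumed layout of data/ACM/ after extraction or preprocessing
with open("data/ACM/node_features.pkl", "rb") as f:
    node_features = pickle.load(f)  # node feature matrix, nodes x feat_dim
with open("data/ACM/edges.pkl", "rb") as f:
    edges = pickle.load(f)          # one sparse adjacency matrix per edge type
with open("data/ACM/labels.pkl", "rb") as f:
    labels = pickle.load(f)         # train/valid/test (node id, class) pairs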

Running the code

$ mkdir data
$ cd data

Download the datasets (DBLP, ACM, IMDB) from this link and extract data.zip into the data folder.

$ cd ..
  • DBLP
$ python main.py --dataset DBLP --num_layers 3
  • ACM
$ python main.py --dataset ACM --num_layers 2 --adaptive_lr true
  • IMDB
$ python main_sparse.py --dataset IMDB --num_layers 3 --adaptive_lr true

Citation

If this work is useful for your research, please cite our paper:

@inproceedings{yun2019graph,
  title={Graph Transformer Networks},
  author={Yun, Seongjun and Jeong, Minbyul and Kim, Raehyun and Kang, Jaewoo and Kim, Hyunwoo J},
  booktitle={Advances in Neural Information Processing Systems},
  pages={11960--11970},
  year={2019}
}