[Model] CoBFormer #233
Conversation
The CoBFormer content has been updated.
@@ -0,0 +1 @@
python cobformer_trainer.py --dataset=Cora --learning_rate=0.01 --gcn_wd=1e-3 --weight_decay=5e-5 --gcn_type=1 --gcn_layers=2 --n_patch=112 --use_patch_attn --alpha=0.7 --tau=0.3 --gpu_id=0
Following the README files of the other models, add a dataset description, the commands to run, and the results.
def fix_seed(seed):
    random.seed(seed)
    np.random.seed(seed)
    tlx.set_seed(seed)
tlx.set_device(device=f'gpu:{args.gpu_id}', id=args.gpu_id)
The code that fixes the random seed needs to be removed.
parser.add_argument('--num_hidden', type=int, default=64, help='隐藏层维度')
parser.add_argument('--num_layers', type=int, default=1, help='层数')
parser.add_argument('--n_head', type=int, default=4, help='注意力头数')
parser.add_argument('--num_epochs', type=int, default=500, help='训练轮数')
Write the help text in English.
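For reference, the same arguments with the help strings translated into English might read as follows (the exact wording is only a suggestion):

```python
parser.add_argument('--num_hidden', type=int, default=64, help='hidden layer dimension')
parser.add_argument('--num_layers', type=int, default=1, help='number of layers')
parser.add_argument('--n_head', type=int, default=4, help='number of attention heads')
parser.add_argument('--num_epochs', type=int, default=500, help='number of training epochs')
```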
    return res_gnn, res_trans


def run(args, device, data, patch, split_idx, alpha, tau, postfix):
Rename the run function to main.
path = osp.join(osp.expanduser('~'), 'datasets', args.dataset)
n_patch = args.n_patch
alpha = args.alpha
tau = args.tau
load_path = None
if args.dataset in ['ogbn-products']:
    load_path = f'Data/partition/{args.dataset}_partition_{n_patch}.pt'

postfix = "test"
runs = 5
print("n_patch: ", n_patch)

dataset = Planetoid(path, args.dataset)
data = dataset[0]

data.train_mask = tlx.convert_to_numpy(data.train_mask)
data.val_mask = tlx.convert_to_numpy(data.val_mask)
data.test_mask = tlx.convert_to_numpy(data.test_mask)
# 在各个mask后面pad一个维度,pad的值为0,mask是一维数组,用np.pad(mask, (0, 1), mode='constant')
data.train_mask = np.pad(data.train_mask, (0, 1), mode='constant')
data.val_mask = np.pad(data.val_mask, (0, 1), mode='constant')
data.test_mask = np.pad(data.test_mask, (0, 1), mode='constant')

split_idx = {
    'train': data.train_mask,
    'valid': data.val_mask,
    'test': data.test_mask
}


patch = partition_patch(data, n_patch, load_path)
batch_size = args.batch_size

results = [[], []]
for r in range(runs):
    res_gnn, res_trans = run(args, device, data, patch, split_idx, alpha, tau, postfix)
    results[0].append(res_gnn)
    results[1].append(res_trans)

print(f"==== Final GNN====")
result = tlx.convert_to_tensor(results[0]) * 100.  # 替换torch.tensor
print(result)
print(f"max: {tlx.ops.reduce_max(result, axis=0)}")  # 使用 tlx.ops.reduce_max
print(f"min: {tlx.ops.reduce_min(result, axis=0)}")  # 使用 tlx.ops.reduce_min
print(f"mean: {tlx.ops.reduce_mean(result, axis=0)}")  # 使用 tlx.ops.reduce_mean
print(f"std: {tlx.ops.reduce_std(result, axis=0)}")  # 使用 tlx.ops.reduce_std

print(f'GNN Micro: {tlx.ops.reduce_mean(result, axis=0)[1]:.2f} ± {tlx.ops.reduce_std(result, axis=0)[1]:.2f}')
print(f'GNN Macro: {tlx.ops.reduce_mean(result, axis=0)[3]:.2f} ± {tlx.ops.reduce_std(result, axis=0)[3]:.2f}')

print(f"==== Final Trans====")
result = tlx.convert_to_tensor(results[1]) * 100.
print(result)
print(f"max: {tlx.ops.reduce_max(result, axis=0)}")
print(f"min: {tlx.ops.reduce_min(result, axis=0)}")
print(f"mean: {tlx.ops.reduce_mean(result, axis=0)}")
print(f"std: {tlx.ops.reduce_std(result, axis=0)}")

print(f'Trans Micro: {tlx.ops.reduce_mean(result, axis=0)[1]:.2f} ± {tlx.ops.reduce_std(result, axis=0)[1]:.2f}')
This content needs to be moved into the main function.
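A minimal sketch of the requested shape, following the usual GammaGL trainer layout (the placeholder comments stand for the script-level lines quoted above; the `add_argument` calls are the existing ones from this file):

```python
import argparse


def main(args):
    # dataset loading, mask padding, patch partitioning,
    # the repeated-runs training loop and the result printing
    # quoted above would all live inside this function
    pass


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # ... existing add_argument calls ...
    args = parser.parse_args()
    main(args)
```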
pred1, pred2 = model(data.x, patch, data.edge_index, edge_weight=edge_weight, num_nodes=data.num_nodes)
loss = model.loss(pred1, pred2, label, split_index['train'])
loss.backward()
optimizer.step()
These calls should be switched to tlx's train_one_step interface.
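For reference, GammaGL trainers usually wrap the forward pass and loss in a `WithLoss` subclass and let `tensorlayerx.model.TrainOneStep` handle the backward pass and the optimizer update instead of calling `backward()`/`step()` directly. A rough sketch of that pattern, assuming CoBFormer keeps the `model(...)` and `model.loss(...)` signatures shown in this diff and that the inputs are packed into a `data` dict (all names below are illustrative):

```python
import tensorlayerx as tlx
from tensorlayerx.model import TrainOneStep, WithLoss


class CoBFormerLoss(WithLoss):
    """Wraps CoBFormer's co-training loss so TrainOneStep can drive it."""

    def __init__(self, net):
        # loss_fn is not used here because the model exposes its own loss
        super(CoBFormerLoss, self).__init__(backbone=net, loss_fn=None)

    def forward(self, data, label):
        pred1, pred2 = self.backbone_network(data['x'], data['patch'], data['edge_index'],
                                             edge_weight=data['edge_weight'],
                                             num_nodes=data['num_nodes'])
        return self.backbone_network.loss(pred1, pred2, label, data['train_idx'])


# `model`, `args`, `data`, `label` are assumed to be defined as elsewhere in the trainer
optimizer = tlx.optimizers.Adam(lr=args.learning_rate, weight_decay=args.weight_decay)
train_one_step = TrainOneStep(CoBFormerLoss(model), optimizer, model.trainable_weights)

for epoch in range(args.num_epochs):
    model.set_train()
    train_loss = train_one_step(data, label)
```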
from gammagl.models.cobformer import CoBFormer


def eval_f1(pred, label, num_classes):
    # 将tlx tensor转换为numpy数组
Write this comment in English; the remaining comments should also be changed to English.
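For example, the quoted comment inside `eval_f1` would read `# convert the tlx tensors to numpy arrays`.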
Description
Checklist
Please feel free to remove inapplicable items for your PR.
or have been fixed to be compatible with this change
Changes