# Adversarial Cross-View Disentangled Graph Contrastive Learning

https://arxiv.org/pdf/2209.07699

Adversarial Cross-View Disentangled Graph Contrastive Learning, 2022, arXiv preprint

# 1. Introduction

## 1.1 Abstract

Graph contrastive learning (GCL) is prevalent for tackling the supervision-shortage issue in graph learning tasks. Many recent GCL methods have been proposed with various manually designed augmentation techniques, aiming to implement challenging augmentations on the original graph to yield robust representations. Although many of them achieve remarkable performance, existing GCL methods still struggle to improve model robustness without risking the loss of task-relevant information, because they ignore the fact that the augmentation-induced latent factors can be highly entangled with the original graph, making it harder to discriminate task-relevant information from irrelevant information. Consequently, the learned representation is either brittle or unilluminating. In light of this, we introduce Adversarial Cross-View Disentangled Graph Contrastive Learning (ACDGCL), which follows the information bottleneck principle to learn minimal yet sufficient representations from graph data. To be specific, our proposed model elicits the augmentation-invariant and augmentation-dependent factors separately. In addition to the conventional contrastive loss, which guarantees the consistency and sufficiency of the representations across different contrastive views, we introduce a cross-view reconstruction mechanism to pursue representation disentanglement. Besides, an adversarial view is added as the third view of the contrastive loss to enhance model robustness. We empirically demonstrate that our proposed model outperforms the state-of-the-art on the graph classification task over multiple benchmark datasets.

## 1.2 This Work

A basic assumption of GCL: let $t_1(G)$ and $t_2(G)$ denote two augmented views of the original graph; then $t_1(G)$ and $t_2(G)$ should be *mutually redundant*. Formally, "mutually redundant" can be defined as:

$I\left(t_1(G) ; y \mid t_2(G)\right)=I\left(t_2(G) ; y \mid t_1(G)\right)=0$

That is, given either view, the other view carries no additional information about the task label $y$: all task-relevant information is shared between the two views.

# 2. Method

The ACDGCL framework is shown in the figure below; the architecture is fairly simple and clear. Two questions are worth focusing on:

1. How are the two extractors designed, and how is the loss function designed to guarantee correct disentanglement?
2. How is the adversarial view generated, i.e., how is $z_{adv}$ obtained?

## 2.1 Disentanglement

The extractor design is very simple: taking the graph embedding as input, an MLP-based network learns the disentangled embeddings:

$\mathbf{z}^{aug}=g_{aug}(f(t(G))), \quad \mathbf{z}^{inv}=g_{inv}(f(t(G)))$

where $f$ is the graph encoder, $g_{aug}$ extracts the augmentation-dependent factor, and $g_{inv}$ the augmentation-invariant factor. To enforce disentanglement, a reconstruction network $g_r$ rebuilds each view's embedding from the element-wise product of the two factors, both within a view (same-view, $\mathbf{z}_w^{r}$) and across views (cross-view, $\mathbf{z}_w^{cr}$):

$\mathbf{z}_w^{r}=g_r\left(\mathbf{z}_w^{aug} \odot \mathbf{z}_w^{inv}\right), \quad \mathbf{z}_w^{cr}=g_r\left(\mathbf{z}_w^{aug} \odot \mathbf{z}_{w^{\prime}}^{inv}\right)$

The reconstruction error upper-bounds the conditional entropy, so minimizing it makes the factor pair sufficient to recover the original embedding:

$H\left(\mathbf{z}_w \mid \mathbf{z}_w^{aug}, \mathbf{z}_{w^{\prime}}^{inv}\right) \leqslant\left\|\mathbf{z}_w-g_r\left(\mathbf{z}_w^{aug} \odot \mathbf{z}_{w^{\prime}}^{inv}\right)\right\|_2^2, \quad \text{where } w=w^{\prime} \text{ or } w \neq w^{\prime}.$

The resulting reconstruction loss averages both terms over the batch and the two views:

$\mathcal{L}_{recon}=\frac{1}{2 N} \sum_{i=1}^N \sum_{w=1}^2\left[\left\|\mathbf{z}_{w, i}-\mathbf{z}_{w, i}^{r}\right\|_2^2+\left\|\mathbf{z}_{w, i}-\mathbf{z}_{w, i}^{cr}\right\|_2^2\right]$
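The disentanglement module and $\mathcal{L}_{recon}$ above can be sketched in PyTorch. This is a minimal illustration, not the authors' code: the MLP depth, hidden dimension, and class/function names (`Disentangler`, `recon_loss`) are all assumptions.

```python
import torch
import torch.nn as nn

class Disentangler(nn.Module):
    """Sketch of ACDGCL's factor extractors (architecture is an assumption)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        # g_aug / g_inv: MLP heads extracting the augmentation-dependent
        # and augmentation-invariant factors from a graph embedding z.
        self.g_aug = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.g_inv = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # g_r: reconstructs z from the element-wise product of the two factors.
        self.g_r = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z):
        return self.g_aug(z), self.g_inv(z)

def recon_loss(model: Disentangler, z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """L_recon: same-view (z^r) + cross-view (z^cr) reconstruction, averaged
    over the batch and the two views (the 1/(2N) factor in the equation)."""
    a1, i1 = model(z1)
    a2, i2 = model(z2)
    r1, r2 = model.g_r(a1 * i1), model.g_r(a2 * i2)    # same-view: w = w'
    cr1, cr2 = model.g_r(a1 * i2), model.g_r(a2 * i1)  # cross-view: w != w'
    per_graph = ((z1 - r1).pow(2).sum(-1) + (z1 - cr1).pow(2).sum(-1)
                 + (z2 - r2).pow(2).sum(-1) + (z2 - cr2).pow(2).sum(-1))
    return per_graph.mean() / 2
```

Note how the cross terms `cr1`/`cr2` pair one view's augmentation factor with the other view's invariant factor; it is this swap that pressures $g_{inv}$ to carry only shared, augmentation-invariant content.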

## 2.2 Generating the Adversarial View

The adversarial view is obtained by finding a perturbation $\delta$, bounded in $\ell_\infty$ norm, that maximizes the contrastive loss against the invariant representations of both augmented views:

$\delta^*=\underset{\|\delta\|_{\infty} \leqslant \epsilon}{\operatorname{argmax}} \mathcal{L}_{adv}\left(t_1(G), t_2(G), G+\delta\right),$

where

$\mathcal{L}_{adv}=\frac{1}{N} \sum_{i=1}^N\left[\mathcal{L}_{CL}\left(\mathbf{z}_{1, i}^{inv}, G+\delta^*\right)+\mathcal{L}_{CL}\left(\mathbf{z}_{2, i}^{inv}, G+\delta^*\right)\right]$

The overall objective is then a min-max problem: the encoder and extractors minimize the invariant contrastive loss, the reconstruction loss, and the worst-case adversarial loss:

$\min _{f, g} \mathbb{E}_{G \in \mathbf{G}}\left[\mathcal{L}_{\mathrm{inv}}+\lambda_r \mathcal{L}_{\mathrm{recon}}+\lambda_a \max _{\|\delta\|_{\infty} \leqslant \epsilon} \mathcal{L}_{\mathrm{adv}}\right]$
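The inner maximization is typically solved by a few steps of projected gradient ascent. The sketch below assumes $\delta$ perturbs node/graph features (one common instantiation of the abstract $G+\delta$) and uses an NT-Xent-style contrastive loss; the step size `alpha`, step count, and function names are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def nt_xent(z: torch.Tensor, z_pos: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE/NT-Xent loss: matching rows of z and z_pos are positives."""
    z = F.normalize(z, dim=-1)
    z_pos = F.normalize(z_pos, dim=-1)
    logits = z @ z_pos.t() / tau           # (N, N) cosine-similarity matrix
    labels = torch.arange(z.size(0))       # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

def adversarial_view(encoder, x, z1_inv, z2_inv,
                     eps: float = 0.01, steps: int = 3, alpha: float = 0.005):
    """Projected gradient ascent on L_adv: maximize the contrastive loss of
    the perturbed embedding against both views' invariant representations,
    keeping ||delta||_inf <= eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        z_adv = encoder(x + delta)
        loss = nt_xent(z1_inv, z_adv) + nt_xent(z2_inv, z_adv)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()   # ascent step (maximize L_adv)
            delta.clamp_(-eps, eps)        # project back onto the L_inf ball
    return (x + delta).detach()
```

The outer minimization then treats the returned adversarial view as a third contrastive view, realizing the min-max objective above.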

# 3. Experiments

## 3.1 Main Results

1. Unsupervised representation learning

2. Semi-supervised learning

## 3.2 Ablation Study

- w/o Intra-view: only cross-view reconstruction is performed
- w/o Inter-view: only same-view (intra-view) reconstruction is performed