# Towards Robust Graph Contrastive Learning

https://arxiv.org/pdf/2102.13085

Towards Robust Graph Contrastive Learning, 2021, arXiv preprint

# 1. Introduction

## 1.1 Abstract

We study the problem of adversarially robust self-supervised learning on graphs. In the contrastive learning framework, we introduce a new method that increases the adversarial robustness of the learned representations through i) adversarial transformations and ii) transformations that not only remove but also insert edges. We evaluate the learned representations in a preliminary set of experiments, obtaining promising results. We believe this work takes an important step towards incorporating robustness as a viable auxiliary task in graph contrastive learning.

# 2. GROC

## 2.1 Standard Graph Contrastive Learning

For a node $v$ and a pair of graph transformations $(\tau_1, \tau_2)$, let $z_1 = f_{\theta}(\tau_1(v))$ and $z_2 = f_{\theta}(\tau_2(v))$ denote the embeddings of the two views produced by the encoder $f_{\theta}$, where $\sigma$ is a similarity function (e.g., cosine similarity), $t$ is the temperature, and $Neg(v)$ is the set of negative samples for $v$. The per-node contrastive loss is

$\mathcal{L}\left(v, \tau_{1}, \tau_{2}\right)=-\log \frac{\exp \left(\sigma\left(z_{1}, z_{2}\right) / t\right)}{\exp \left(\sigma\left(z_{1}, z_{2}\right) / t\right)+\sum_{u \in N e g(v)} \exp \left(\sigma\left(z_{1}, f_{\theta}(u)\right) / t\right)}$

The overall objective averages the symmetrized loss over all $n$ nodes:

$\frac{1}{2 n} \sum_{v \in V}\left[\mathcal{L}\left(v, \tau_{1}, \tau_{2}\right)+\mathcal{L}\left(v, \tau_{2}, \tau_{1}\right)\right]$
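The per-node loss above can be sketched in plain NumPy. This is a minimal illustration, not the paper's code; the function names, array shapes, and the temperature default are my assumptions:

```python
import numpy as np

def cosine(a, b):
    # similarity function sigma: cosine similarity between two embeddings
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(z1, z2, z_negs, t=0.5):
    """InfoNCE-style loss for one node v.

    z1, z2  -- embeddings of the two augmented views of v
    z_negs  -- embeddings f_theta(u) for the negative nodes u in Neg(v)
    t       -- temperature
    """
    pos = np.exp(cosine(z1, z2) / t)
    neg = sum(np.exp(cosine(z1, zu) / t) for zu in z_negs)
    return -np.log(pos / (pos + neg))
```

The symmetrized objective would then sum `contrastive_loss(v, tau1, tau2)` and `contrastive_loss(v, tau2, tau1)` over all nodes and divide by $2n$.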

## 2.2 The Proposed Method

• Step 1, edge removal: after applying $\tau_i'$, compute the contrastive loss above and run a preliminary forward-backward pass to obtain a gradient value for every edge, then remove the edges with small gradient values.
• Step 2, edge insertion: define a candidate set $S^+$ of edges, temporarily add them to the graph with a non-zero weight; after the same preliminary forward-backward pass, keep the candidate edges with large gradient values and discard the rest.
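The two selection steps above can be sketched as follows. This assumes the per-edge gradients from the preliminary forward-backward pass have already been computed; the function name, the dict-based interface, and the ratio parameters are hypothetical, not taken from the paper's implementation:

```python
def groc_edge_selection(edge_grads, cand_grads, drop_ratio=0.2, insert_ratio=0.2):
    """Gradient-guided view construction (sketch).

    edge_grads -- {edge: gradient of the contrastive loss w.r.t. that
                   existing edge}, from a preliminary forward-backward pass
    cand_grads -- {edge: gradient} for candidate edges in S+ that were
                  temporarily added to the graph with a non-zero weight
    Returns (edges_to_remove, edges_to_insert).
    """
    k_rm = int(len(edge_grads) * drop_ratio)
    k_in = int(len(cand_grads) * insert_ratio)
    # Step 1: remove existing edges with the smallest gradient values
    remove = set(sorted(edge_grads, key=edge_grads.get)[:k_rm])
    # Step 2: keep candidate edges with the largest gradient values
    insert = set(sorted(cand_grads, key=cand_grads.get, reverse=True)[:k_in])
    return remove, insert
```

The resulting view (original edges minus `remove`, plus `insert`) is then fed to the encoder as the adversarial transformation.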

# 3. Experiments

Here, levels 1–5 denote the strength of the applied perturbation (adversarial attack). All models are unstable in the sense that their performance drops substantially under perturbation, but GROC is noticeably more robust than the other methods.
