Commit ce23bc10 authored by Varuna Jayasiri

gat v2 docs

Parent 800df5c4
This diff is collapsed.
This diff is collapsed.
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="content-type" content="text/html;charset=utf-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<meta name="description" content=""/>
<meta name="twitter:card" content="summary"/>
<meta name="twitter:image:src" content="https://avatars1.githubusercontent.com/u/64068543?s=400&amp;v=4"/>
<meta name="twitter:title" content="Graph Attention Networks v2 (GATv2)"/>
<meta name="twitter:description" content=""/>
<meta name="twitter:site" content="@labmlai"/>
<meta name="twitter:creator" content="@labmlai"/>
<meta property="og:url" content="https://nn.labml.ai/graphs/gatv2/readme.html"/>
<meta property="og:title" content="Graph Attention Networks v2 (GATv2)"/>
<meta property="og:image" content="https://avatars1.githubusercontent.com/u/64068543?s=400&amp;v=4"/>
<meta property="og:site_name" content="LabML Neural Networks"/>
<meta property="og:type" content="object"/>
<meta property="og:title" content="Graph Attention Networks v2 (GATv2)"/>
<meta property="og:description" content=""/>
<title>Graph Attention Networks v2 (GATv2)</title>
<link rel="shortcut icon" href="/icon.png"/>
<link rel="stylesheet" href="../../pylit.css">
<link rel="canonical" href="https://nn.labml.ai/graphs/gatv2/readme.html"/>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-4V3HC8HBLH"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag() {
dataLayer.push(arguments);
}
gtag('js', new Date());
gtag('config', 'G-4V3HC8HBLH');
</script>
</head>
<body>
<div id='container'>
<div id="background"></div>
<div class='section'>
<div class='docs'>
<p>
<a class="parent" href="/">home</a>
<a class="parent" href="../index.html">graphs</a>
<a class="parent" href="index.html">gatv2</a>
</p>
<p>
<a href="https://github.com/lab-ml/labml_nn/tree/master/labml_nn/graphs/gatv2/readme.md">
<img alt="Github"
src="https://img.shields.io/github/stars/lab-ml/nn?style=social"
style="max-width:100%;"/></a>
<a href="https://twitter.com/labmlai"
rel="nofollow">
<img alt="Twitter"
src="https://img.shields.io/twitter/follow/labmlai?style=social"
style="max-width:100%;"/></a>
</p>
</div>
</div>
<div class='section' id='section-0'>
<div class='docs'>
<div class='section-link'>
<a href='#section-0'>#</a>
</div>
<h1><a href="https://nn.labml.ai/graphs/gatv2/index.html">Graph Attention Networks v2 (GATv2)</a></h1>
<p>This is a <a href="https://pytorch.org">PyTorch</a> implementation of the GATv2 operator from the paper
<a href="https://arxiv.org/abs/2105.14491">How Attentive are Graph Attention Networks?</a>.</p>
<p>GATv2 operates on graph data.
A graph consists of nodes and edges connecting them.
For example, in the Cora dataset the nodes are research papers and the edges are citations
that connect the papers.</p>
<p>The GATv2 operator fixes the static attention problem of the standard GAT:
because the linear layers in the standard GAT are applied right after each other, the ranking
of attended nodes is unconditioned on the query node.
In contrast, in GATv2 every node can attend to any other node.</p>
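<p>Concretely, in the notation of the paper, the standard GAT scores pairs of nodes as
$$e_{ij} = \text{LeakyReLU}\Big(\mathbf{a}^\top \big[\mathbf{W} \overrightarrow{h_i} \Vert \mathbf{W} \overrightarrow{h_j}\big]\Big)$$
where the only term that depends on $j$ is independent of $i$, and $\text{LeakyReLU}$ is monotonic,
so every query node ranks its neighbors identically.
GATv2 instead scores
$$e_{ij} = \mathbf{a}^\top \text{LeakyReLU}\Big(\mathbf{W_l} \overrightarrow{h_i} + \mathbf{W_r} \overrightarrow{h_j}\Big)$$
applying the non-linearity before $\mathbf{a}$, which makes the score genuinely conditioned on the query node.</p>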
<p>Here is <a href="https://nn.labml.ai/graphs/gatv2/experiment.html">the training code</a> for training
a two-layer GATv2 on the Cora dataset.</p>
<p><a href="https://app.labml.ai/run/8e27ad82ed2611ebabb691fb2028a868"><img alt="View Run" src="https://img.shields.io/badge/labml-experiment-brightgreen" /></a></p>
</div>
<div class='code'>
</div>
</div>
</div>
</div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.4/MathJax.js?config=TeX-AMS_HTML">
</script>
<!-- MathJax configuration -->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {
inlineMath: [ ['$','$'] ],
displayMath: [ ['$$','$$'] ],
processEscapes: true,
processEnvironments: true
},
// Center justify equations in code and markdown cells. Elsewhere
// we use CSS to left justify single line equations in code cells.
displayAlign: 'center',
"HTML-CSS": { fonts: ["TeX"] }
});
</script>
<script>
function handleImages() {
var images = document.querySelectorAll('p>img')
console.log(images);
for (var i = 0; i < images.length; ++i) {
handleImage(images[i])
}
}
function handleImage(img) {
img.parentElement.style.textAlign = 'center'
var modal = document.createElement('div')
modal.id = 'modal'
var modalContent = document.createElement('div')
modal.appendChild(modalContent)
var modalImage = document.createElement('img')
modalContent.appendChild(modalImage)
var span = document.createElement('span')
span.classList.add('close')
span.textContent = 'x'
modal.appendChild(span)
img.onclick = function () {
console.log('clicked')
document.body.appendChild(modal)
modalImage.src = img.src
}
span.onclick = function () {
document.body.removeChild(modal)
}
}
handleImages()
</script>
</body>
</html>
\ No newline at end of file
@@ -69,6 +69,7 @@
 <h1>Graph Neural Networks</h1>
 <ul>
 <li><a href="gat/index.html">Graph Attention Networks (GAT)</a></li>
+<li><a href="gatv2/index.html">Graph Attention Networks v2 (GATv2)</a></li>
 </ul>
 </div>
 <div class='code'>
......
@@ -115,6 +115,7 @@ implementations.</p>
 <h4>✨ Graph Neural Networks</h4>
 <ul>
 <li><a href="graphs/gat/index.html">Graph Attention Networks (GAT)</a></li>
+<li><a href="graphs/gatv2/index.html">Graph Attention Networks v2 (GATv2)</a></li>
 </ul>
 <h4><a href="cfr/index.html">Counterfactual Regret Minimization (CFR)</a></h4>
 <p>Solving games with incomplete information such as poker with CFR.</p>
......
@@ -281,7 +281,7 @@
 <url>
 <loc>https://nn.labml.ai/index.html</loc>
-<lastmod>2021-07-17T16:30:00+00:00</lastmod>
+<lastmod>2021-07-25T16:30:00+00:00</lastmod>
 <priority>1.00</priority>
 </url>
@@ -741,9 +741,23 @@
 </url>
+<url>
+<loc>https://nn.labml.ai/graphs/gatv2/index.html</loc>
+<lastmod>2021-07-25T16:30:00+00:00</lastmod>
+<priority>1.00</priority>
+</url>
+<url>
+<loc>https://nn.labml.ai/graphs/gatv2/experiment.html</loc>
+<lastmod>2021-07-25T16:30:00+00:00</lastmod>
+<priority>1.00</priority>
+</url>
 <url>
 <loc>https://nn.labml.ai/graphs/index.html</loc>
-<lastmod>2021-07-08T16:30:00+00:00</lastmod>
+<lastmod>2021-07-25T16:30:00+00:00</lastmod>
 <priority>1.00</priority>
 </url>
......
@@ -117,7 +117,7 @@ class GraphAttentionV2Layer(Module):
 # We calculate these for each head $k$. *We have omitted $\cdot^k$ for simplicity*.
 #
 # $$e_{ij} = a(\mathbf{W_l} \overrightarrow{h_i}, \mathbf{W_r} \overrightarrow{h_j}) =
-# a(\overrightarrow{{g_l}_i}}, \overrightarrow{{g_r}_j}})$$
+# a(\overrightarrow{{g_l}_i}, \overrightarrow{{g_r}_j})$$
 #
 # $e_{ij}$ is the attention score (importance) from node $j$ to node $i$.
 # We calculate this for each head.
@@ -131,7 +131,7 @@ class GraphAttentionV2Layer(Module):
 #
 # $$e_{ij} = \mathbf{a}^\top \text{LeakyReLU} \Big(
 # \Big[
-# \overrightarrow{{g_l}_i}} + \overrightarrow{{g_r}_j}}
+# \overrightarrow{{g_l}_i} + \overrightarrow{{g_r}_j}
 # \Big] \Big)$$
 # First we calculate
......
@@ -62,6 +62,7 @@ implementations almost weekly.
 #### ✨ Graph Neural Networks
 * [Graph Attention Networks (GAT)](https://nn.labml.ai/graphs/gat/index.html)
+* [Graph Attention Networks v2 (GATv2)](https://nn.labml.ai/graphs/gatv2/index.html)
 #### ✨ [Counterfactual Regret Minimization (CFR)](https://nn.labml.ai/cfr/index.html)
......