Commit 13f36c18 authored by Varuna Jayasiri

links

Parent 8168b044
@@ -81,12 +81,10 @@ We believe these would help you understand these algorithms better.</p>
implementations.</p>
<h2>Modules</h2>
<h4><a href="transformers/index.html">Transformers</a></h4>
<p><a href="transformers/index.html">Transformers module</a>
contains implementations for
<a href="transformers/mha.html">multi-headed attention</a>
and
<a href="transformers/relative_mha.html">relative multi-headed attention</a>.</p>
<ul>
<li><a href="transformers/mha.html">Multi-headed attention</a></li>
<li><a href="transformers/models.html">Transformer building blocks</a></li>
<li><a href="transformers/xl/relative_mha.html">Relative multi-headed attention</a>.</li>
<li><a href="transformers/gpt/index.html">GPT Architecture</a></li>
<li><a href="transformers/glu_variants/simple.html">GLU Variants</a></li>
<li><a href="transformers/knn/index.html">kNN-LM: Generalization through Memorization</a></li>
@@ -78,7 +78,7 @@ from paper <a href="https://arxiv.org/abs/1706.03762">Attention Is All You Need</a>,
and derivatives and enhancements of it.</p>
<ul>
<li><a href="mha.html">Multi-head attention</a></li>
<li><a href="relative_mha.html">Relative multi-head attention</a></li>
<li><a href="xl/relative_mha.html">Relative multi-head attention</a></li>
<li><a href="models.html">Transformer Encoder and Decoder Models</a></li>
<li><a href="positional_encoding.html">Fixed positional encoding</a></li>
</ul>
@@ -15,12 +15,9 @@ implementations.
#### ✨ [Transformers](transformers/index.html)
[Transformers module](transformers/index.html)
contains implementations for
[multi-headed attention](transformers/mha.html)
and
[relative multi-headed attention](transformers/relative_mha.html).
* [Multi-headed attention](transformers/mha.html)
* [Transformer building blocks](transformers/models.html)
* [Relative multi-headed attention](transformers/xl/relative_mha.html).
* [GPT Architecture](transformers/gpt/index.html)
* [GLU Variants](transformers/glu_variants/simple.html)
* [kNN-LM: Generalization through Memorization](transformers/knn/index.html)
@@ -14,7 +14,7 @@ from paper [Attention Is All You Need](https://arxiv.org/abs/1706.03762),
and derivatives and enhancements of it.
* [Multi-head attention](mha.html)
* [Relative multi-head attention](relative_mha.html)
* [Relative multi-head attention](xl/relative_mha.html)
* [Transformer Encoder and Decoder Models](models.html)
* [Fixed positional encoding](positional_encoding.html)
@@ -21,12 +21,9 @@ implementations almost weekly.
#### ✨ [Transformers](https://nn.labml.ai/transformers/index.html)
[Transformers module](https://nn.labml.ai/transformers/index.html)
contains implementations for
[multi-headed attention](https://nn.labml.ai/transformers/mha.html)
and
[relative multi-headed attention](https://nn.labml.ai/transformers/relative_mha.html).
* [Multi-headed attention](https://nn.labml.ai/transformers/mha.html)
* [Transformer building blocks](https://nn.labml.ai/transformers/models.html)
* [Relative multi-headed attention](https://nn.labml.ai/transformers/xl/relative_mha.html).
* [GPT Architecture](https://nn.labml.ai/transformers/gpt/index.html)
* [GLU Variants](https://nn.labml.ai/transformers/glu_variants/simple.html)
* [kNN-LM: Generalization through Memorization](https://nn.labml.ai/transformers/knn)
@@ -5,7 +5,7 @@ with open("readme.md", "r") as f:
setuptools.setup(
name='labml-nn',
version='0.4.85',
version='0.4.86',
author="Varuna Jayasiri, Nipun Wijerathne",
author_email="vpjayasiri@gmail.com, hnipun@gmail.com",
description="A collection of PyTorch implementations of neural network architectures and layers.",