Greenplum / Annotated Deep Learning Paper Implementations
Commit 13f36c18
Authored on Feb 05, 2021 by Varuna Jayasiri
Parent: 8168b044

Commit message: links

Showing 6 changed files with 12 additions and 20 deletions (+12 −20)
docs/index.html (+3 −5)
docs/transformers/index.html (+1 −1)
labml_nn/__init__.py (+3 −6)
labml_nn/transformers/__init__.py (+1 −1)
readme.md (+3 −6)
setup.py (+1 −1)
docs/index.html
@@ -81,12 +81,10 @@ We believe these would help you understand these algorithms better.</p>
implementations.
</p>
<h2>Modules</h2>
<h4>✨ <a href="transformers/index.html">Transformers</a></h4>
<p><a href="transformers/index.html">Transformers module</a> contains implementations for
<a href="transformers/mha.html">multi-headed attention</a> and
<a href="transformers/relative_mha.html">relative multi-headed attention</a>.</p>
<ul>
<li><a href="transformers/mha.html">Multi-headed attention</a></li>
<li><a href="transformers/models.html">Transformer building blocks</a></li>
<li><a href="transformers/xl/relative_mha.html">Relative multi-headed attention</a>.</li>
<li><a href="transformers/gpt/index.html">GPT Architecture</a></li>
<li><a href="transformers/glu_variants/simple.html">GLU Variants</a></li>
<li><a href="transformers/knn/index.html">kNN-LM: Generalization through Memorization</a></li>
...
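For readers landing on this commit, the "multi-headed attention" these links document is the scaled dot-product attention of Attention Is All You Need. Below is a minimal illustrative sketch of that mechanism, not the labml_nn implementation the annotated pages describe; the class name, tensor layout, and shapes are assumptions made for this example.

```python
import math
import torch
import torch.nn as nn

class MiniMultiHeadAttention(nn.Module):
    """A minimal sketch of multi-headed attention; not the labml_nn code."""

    def __init__(self, d_model: int, heads: int):
        super().__init__()
        assert d_model % heads == 0
        self.heads = heads
        self.d_k = d_model // heads  # dimension per head
        # One projection each for queries, keys, values, plus an output projection
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, seq_len, d_model]
        b, t, _ = x.shape
        # Project, then split into heads: [batch, heads, seq_len, d_k]
        q = self.q_proj(x).view(b, t, self.heads, self.d_k).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.heads, self.d_k).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.heads, self.d_k).transpose(1, 2)
        # Scaled dot-product attention: softmax(Q Kᵀ / √d_k) V
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_k)
        attn = scores.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, -1)
        return self.out(out)

mha = MiniMultiHeadAttention(d_model=512, heads=8)
y = mha(torch.randn(2, 10, 512))  # -> [2, 10, 512]
```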
docs/transformers/index.html
@@ -78,7 +78,7 @@ from paper <a href="https://arxiv.org/abs/1706.03762">Attention Is All You Need<
 and derivatives and enhancements of it.
 </p>
 <ul>
 <li><a href="mha.html">Multi-head attention</a></li>
-<li><a href="relative_mha.html">Relative multi-head attention</a></li>
+<li><a href="xl/relative_mha.html">Relative multi-head attention</a></li>
 <li><a href="models.html">Transformer Encoder and Decoder Models</a></li>
 <li><a href="positional_encoding.html">Fixed positional encoding</a></li>
 </ul>
...
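The "Fixed positional encoding" entry refers to the sinusoidal encodings from the same paper, PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). A hedged sketch of building that table follows; it is illustrative only, not the repository's positional_encoding module.

```python
import torch

def sinusoidal_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Build a fixed sinusoidal positional-encoding table, shape [seq_len, d_model]."""
    assert d_model % 2 == 0, "even model width assumed for this sketch"
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)  # [seq_len, 1]
    i = torch.arange(0, d_model, 2, dtype=torch.float32)           # even dimensions
    angle = pos / torch.pow(10000.0, i / d_model)                  # [seq_len, d_model/2]
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angle)  # even indices get sine
    pe[:, 1::2] = torch.cos(angle)  # odd indices get cosine
    return pe
```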
labml_nn/__init__.py
@@ -15,12 +15,9 @@ implementations.
#### ✨ [Transformers](transformers/index.html)
[Transformers module](transformers/index.html)
contains implementations for
[multi-headed attention](transformers/mha.html)
and
[relative multi-headed attention](transformers/relative_mha.html).
* [Multi-headed attention](transformers/mha.html)
* [Transformer building blocks](transformers/models.html)
* [Relative multi-headed attention](transformers/xl/relative_mha.html).
* [GPT Architecture](transformers/gpt/index.html)
* [GLU Variants](transformers/glu_variants/simple.html)
* [kNN-LM: Generalization through Memorization](transformers/knn/index.html)
...
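Of the bullets above, "GLU Variants" points at the gated feed-forward layers from "GLU Variants Improve Transformer" (Shazeer, 2020), e.g. GEGLU: FFN(x) = (GELU(xW) ⊙ xV) W2. A small sketch of one such variant is given below, with illustrative names rather than the labml_nn API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GEGLUFeedForward(nn.Module):
    """Sketch of a GEGLU feed-forward block: (GELU(x W) * (x V)) W2."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.w = nn.Linear(d_model, d_ff)   # gated path
        self.v = nn.Linear(d_model, d_ff)   # linear path
        self.w2 = nn.Linear(d_ff, d_model)  # project back to model width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w2(F.gelu(self.w(x)) * self.v(x))
```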
labml_nn/transformers/__init__.py
@@ -14,7 +14,7 @@ from paper [Attention Is All You Need](https://arxiv.org/abs/1706.03762),
 and derivatives and enhancements of it.
 * [Multi-head attention](mha.html)
-* [Relative multi-head attention](relative_mha.html)
+* [Relative multi-head attention](xl/relative_mha.html)
 * [Transformer Encoder and Decoder Models](models.html)
 * [Fixed positional encoding](positional_encoding.html)
...
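The one-line change moves the relative multi-head attention link under xl/, matching the fact that this is the Transformer-XL formulation (Dai et al., 2019). Its characteristic "relative shift", which realigns attention scores computed against relative-position embeddings, can be sketched as follows; the tensor layout here is an assumption for illustration, not the repository's exact code.

```python
import torch

def relative_shift(scores: torch.Tensor) -> torch.Tensor:
    """Transformer-XL style relative shift (a sketch, not the labml_nn code).

    scores: [batch, heads, q_len, k_len], computed between queries and
    relative-position embeddings. After the shift, entry (i, j) holds the
    score for relative distance i - j; entries above the diagonal are
    leftovers that the causal attention mask is assumed to remove later.
    """
    b, h, q_len, k_len = scores.shape
    # Prepend a zero column, reinterpret the flat buffer with swapped row
    # length, then drop the first row: this slides row i left by i steps.
    pad = scores.new_zeros(b, h, q_len, 1)
    padded = torch.cat([pad, scores], dim=-1)        # [b, h, q_len, k_len + 1]
    padded = padded.reshape(b, h, k_len + 1, q_len)  # rows/columns reinterpreted
    return padded[:, :, 1:, :].reshape(b, h, q_len, k_len)
```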
readme.md
@@ -21,12 +21,9 @@ implementations almost weekly.
#### ✨ [Transformers](https://nn.labml.ai/transformers/index.html)
[Transformers module](https://nn.labml.ai/transformers/index.html)
contains implementations for
[multi-headed attention](https://nn.labml.ai/transformers/mha.html)
and
[relative multi-headed attention](https://nn.labml.ai/transformers/relative_mha.html).
* [Multi-headed attention](https://nn.labml.ai/transformers/mha.html)
* [Transformer building blocks](https://nn.labml.ai/transformers/models.html)
* [Relative multi-headed attention](https://nn.labml.ai/transformers/xl/relative_mha.html).
* [GPT Architecture](https://nn.labml.ai/transformers/gpt/index.html)
* [GLU Variants](https://nn.labml.ai/transformers/glu_variants/simple.html)
* [kNN-LM: Generalization through Memorization](https://nn.labml.ai/transformers/knn)
...
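The final bullet, kNN-LM (Khandelwal et al., "Generalization through Memorization: Nearest Neighbor Language Models"), interpolates the language model's output distribution with a nearest-neighbour distribution built from retrieved datastore entries: p(y|x) = λ p_kNN(y|x) + (1 − λ) p_LM(y|x). A toy sketch of that interpolation step is below; all function and argument names are hypothetical.

```python
import torch

def knn_lm_interpolate(p_lm: torch.Tensor,
                       neighbour_tokens: torch.Tensor,
                       neighbour_dists: torch.Tensor,
                       vocab_size: int,
                       lam: float = 0.25) -> torch.Tensor:
    """Blend an LM distribution with a kNN distribution over retrieved neighbours.

    p_lm:             [vocab_size] model probabilities for the next token
    neighbour_tokens: [k] int64 token ids stored with the k nearest datastore keys
    neighbour_dists:  [k] distances of those keys from the current context
    """
    # Closer neighbours get more weight: softmax over negative distances.
    weights = torch.softmax(-neighbour_dists, dim=0)
    p_knn = torch.zeros(vocab_size)
    p_knn.scatter_add_(0, neighbour_tokens, weights)  # sum weight per token id
    return lam * p_knn + (1.0 - lam) * p_lm
```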
setup.py
@@ -5,7 +5,7 @@ with open("readme.md", "r") as f:
 setuptools.setup(
     name='labml-nn',
-    version='0.4.85',
+    version='0.4.86',
     author="Varuna Jayasiri, Nipun Wijerathne",
     author_email="vpjayasiri@gmail.com, hnipun@gmail.com",
     description="A collection of PyTorch implementations of neural network architectures and layers.",
...
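The only substantive change here is the version bump from 0.4.85 to 0.4.86, which lets the link fixes above ship as a new package release, presumably installable with pip install labml-nn==0.4.86. A quick post-install sanity check, assuming Python 3.8+ for importlib.metadata:

```python
# Confirm the installed labml-nn matches the version bumped in this commit.
from importlib.metadata import version

assert version("labml-nn") == "0.4.86"
```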