Commit 5c6b04bf (unverified) — "Update README.md" — authored by stevezhangz on April 22, 2021, committed via GitHub. Parent: 9704445d. 1 changed file: README.md (+83, −1).
# BERT-pytorch
A BERT implementation reproduced from scratch in PyTorch; no pre-trained weights are provided.
# How to use
Shell commands (preparation):

```bash
sudo apt-get install ipython3
sudo apt-get install python3-pip
sudo apt-get install git
git clone https://github.com/stevezhangz/BERT-pytorch.git
cd BERT-pytorch
pip install -r requirements.txt
```
Two training demos are included (a poem dataset and a conversation dataset). To train, run `train_demo.py`:

```bash
ipython3 train_demo.py
```
To train on your own dataset:

1. Use `general_transform_text2list` in `data_process.py` to transform a txt or json file into a list of sentences of the form `[s1, s2, s3, s4, ...]`.
2. Use `generate_vocab_normalway` in `data_process.py` to transform that list into `sentences, id_sentence, idx2word, word2idx, vocab_size`.
3. Use `creat_batch` in `data_process.py` to turn `sentences, id_sentence, idx2word, word2idx, vocab_size` into batches.
4. Finally, load the data with PyTorch's `DataLoader`.

For example:
```python
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as Data

# json2list = general_transform_text2list("data/demo.txt", type="txt")
json2list = general_transform_text2list("data/chinese-poetry/chuci/chuci.json",
                                        type="json", args=['content'])
data = json2list.getdata()

# transform the sentence list into tokens
list2token = generate_vocab_normalway(data, map_dir="words_info.json")
sentences, token_list, idx2word, word2idx, vocab_size = list2token.transform()

# build MLM/NSP training batches
batch = creat_batch(batch_size, max_pred, maxlen, vocab_size, word2idx, token_list, sentences)
input_ids, segment_ids, masked_tokens, masked_pos, isNext = zip(*batch)
input_ids, segment_ids, masked_tokens, masked_pos, isNext = \
    torch.LongTensor(input_ids), torch.LongTensor(segment_ids), torch.LongTensor(masked_tokens), \
    torch.LongTensor(masked_pos), torch.LongTensor(isNext)
loader = Data.DataLoader(Text_file(input_ids, segment_ids, masked_tokens, masked_pos, isNext),
                         batch_size, True)

model = Bert(n_layers=n_layers,
             vocab_size=vocab_size,
             emb_size=d_model,
             max_len=maxlen,
             seg_size=n_segments,
             dff=d_ff,
             dk=d_k,
             dv=d_v,
             n_head=n_heads,
             n_class=2)

if use_gpu:
    model.to(device)  # move the model onto the selected GPU

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adadelta(model.parameters(), lr=lr)
model.Train(epoches=epoches,
            train_data_loader=loader,
            optimizer=optimizer,
            criterion=criterion,
            save_dir=weight_dir,
            save_freq=100,
            load_dir="checkpoint/checkpoint_199.pth",
            use_gpu=use_gpu,
            device=device)
```
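The `creat_batch` helper above produces the masked-language-model and next-sentence-prediction targets (`masked_tokens`, `masked_pos`, `isNext`). As a rough illustration of the standard BERT recipe such a helper follows (mask about 15% of tokens, record their positions and original values), here is a hypothetical sketch for a single sample — not the repo's actual implementation:

```python
import random

def make_mlm_nsp_example(tokens_a, tokens_b, word2idx, max_pred=5, is_next=True):
    """Build one (input_ids, masked_tokens, masked_pos, isNext) sample, BERT-style.

    Hypothetical helper for illustration; the repo's creat_batch differs in detail.
    """
    ids = [word2idx['[CLS]']] + tokens_a + [word2idx['[SEP]']] + tokens_b + [word2idx['[SEP]']]
    # candidate positions: every token except [CLS]/[SEP]
    cand = [i for i, t in enumerate(ids)
            if t not in (word2idx['[CLS]'], word2idx['[SEP]'])]
    random.shuffle(cand)
    n_mask = min(max_pred, max(1, int(round(len(cand) * 0.15))))
    masked_pos, masked_tokens = [], []
    for pos in cand[:n_mask]:
        masked_pos.append(pos)
        masked_tokens.append(ids[pos])   # the label is the original token
        ids[pos] = word2idx['[MASK]']    # the 80/10/10 replacement rule is omitted for brevity
    return ids, masked_tokens, masked_pos, int(is_next)
```

Stacking many such samples (padded to `maxlen`) gives exactly the five tensors fed to the `DataLoader` above.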
# How to config
Modify hyperparameters directly in `Config.cfg`.
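Assuming `Config.cfg` uses standard INI-style sections, it can be read with Python's built-in `configparser`. The section and key names below are illustrative assumptions, not the repo's actual schema:

```python
import configparser

# Illustrative sketch: the section/key names are assumptions, not the
# actual schema of this repo's Config.cfg.
cfg = configparser.ConfigParser()
cfg.read("Config.cfg")  # silently skipped if the file is absent

d_model = cfg.getint("model", "d_model", fallback=768)
n_layers = cfg.getint("model", "n_layers", fallback=12)
lr = cfg.getfloat("train", "lr", fallback=1e-3)
```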
# Pretrain
Due to time constraints, I have not pre-trained the model myself. You are welcome to train it and contribute pre-trained weights to this project.
# About me
E-mail: stevezhangz@163.com
# Acknowledgement
Thanks to the open-source [poem dataset](https://github.com/chinese-poetry/chinese-poetry).