This project puts data preprocessing, model construction, and so on all in LSTM+Linear.py. The model uses the simplest LSTM + Linear structure, and at Epoch 150 the result (R²) reaches about 0.38. When prediction finishes, the model also saves an image comparing the labels with the predicted results.
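The baseline can be sketched roughly as follows. This is an illustrative assumption, not code from the repository; the hyperparameter values (hidden size, window size) are placeholders:

```python
import torch
import torch.nn as nn

class LSTMLinear(nn.Module):
    """Minimal sketch of an LSTM + Linear regressor (assumed sizes)."""
    def __init__(self, input_size=8, hidden_size=64, output_size=1):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # x: (batch, window_size, input_size)
        out, _ = self.lstm(x)
        # predict from the hidden state of the last time step
        return self.fc(out[:, -1, :])

model = LSTMLinear()
pred = model(torch.randn(4, 10, 8))
print(pred.shape)  # torch.Size([4, 1])
```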
On top of the base model, we also tried splitting the input features, feeding each one through the LSTM separately, applying an Attention layer to each output, concatenating them with cat, and finally producing the result through Linear layers. The model structure is as follows:
```python
def forward(self, inputs):
    # Run each of the 8 input features through the (shared) LSTM,
    # then apply the Attention layer to each per-feature output.
    outs = []
    for i in range(8):
        feat = inputs[:, :, i].view(-1, self.args.window_size, self.args.input_size)
        out_i, (h_i, c_i) = self.lstm1(feat)
        outs.append(self.Attention(out_i, h_i, c_i))
    # Concatenate the per-feature outputs before the Linear layers
    inputs = torch.cat(outs, dim=1)
    # inputs = inputs.view(-1, self.args.input_size, self.args.all_input_size * self.args.hidden_size)
    out = self.dropout(self.fc1(inputs))
    out = self.dropout(self.fc2(out))
    out = self.fc3(out)
    return out
```
However, this model's results are not yet up to standard: its R² is negative, so it is still at the experimental stage.
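For context, a negative R² means the model performs worse than simply predicting the mean of the labels. A minimal sketch of the metric (the sample values are illustrative, not from the project's data):

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# A constant prediction equal to the mean gives R² = 0;
# anything worse than that pushes R² below zero.
print(r2_score([1.0, 2.0, 3.0], [3.0, 3.0, 3.0]))  # -1.5
```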