Error when running multi-GPU training under Docker
Created by: yeyupiaoling
grep: warning: GREP_OPTIONS is deprecated; please use an alias or script
----------- Configuration Arguments -----------
augment_conf_path: ./conf/augmentation.config
batch_size: 32
dev_manifest: ./dataset/manifest.dev
init_from_pretrained_model: None
is_local: 1
learning_rate: 0.0005
max_duration: 27.0
mean_std_path: ./dataset/mean_std.npz
min_duration: 0.0
num_conv_layers: 2
num_epoch: 50
num_iter_print: 100
num_rnn_layers: 3
num_samples: 120000
output_model_dir: ./models/checkpoints/
rnn_layer_size: 2048
save_epoch: 1
share_rnn_weights: 0
shuffle_method: batch_shuffle_clipped
specgram_type: linear
test_off: 1
train_manifest: ./dataset/manifest.train
use_gpu: 1
use_gru: 1
use_sortagrad: 1
vocab_path: ./dataset/zh_vocab.txt
------------------------------------------------
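For context, the argument dump above corresponds to a launch along these lines (a hypothetical reconstruction, not the exact command used; the flag names are assumed to match the argparse names printed in the dump, and arguments left at their defaults are omitted):

```shell
# Hypothetical launch reconstructed from the argument dump above.
python train.py \
    --train_manifest ./dataset/manifest.train \
    --dev_manifest ./dataset/manifest.dev \
    --mean_std_path ./dataset/mean_std.npz \
    --vocab_path ./dataset/zh_vocab.txt \
    --batch_size 32 \
    --num_epoch 50 \
    --use_gpu 1 \
    --use_gru 1 \
    --share_rnn_weights 0 \
    --output_model_dir ./models/checkpoints/
```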
W0805 02:13:43.410346 69 device_context.cc:252] Please NOTE: device: 0, CUDA Capability: 75, Driver API Version: 10.1, Runtime API Version: 10.0
W0805 02:13:43.412559 69 device_context.cc:260] device: 0, cuDNN Version: 7.5.
W0805 02:13:46.192203 69 device_context.h:155] WARNING: device: 0. The installed Paddle is compiled with CUDNN 7.6, but CUDNN version in your machine is 7.5, which may cause serious incompatible bug. Please recompile or reinstall Paddle with compatible CUDNN version.
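This warning already points at a likely culprit: the installed Paddle wheel was built against cuDNN 7.6, while the container ships cuDNN 7.5. A quick way to confirm what the container actually provides (a sketch; the header and library paths vary between CUDA base images):

```shell
# Inspect the cuDNN the container ships; adjust paths for your base image.
ls -l /usr/lib/x86_64-linux-gnu/libcudnn.so*          # symlink target shows e.g. libcudnn.so.7.5.x
grep -A 2 '#define CUDNN_MAJOR' /usr/include/cudnn.h  # prints the MAJOR/MINOR/PATCHLEVEL macros
```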
W0805 02:13:50.619479 69 fuse_all_reduce_op_pass.cc:74] Find all_reduce operators: 38. To make the speed faster, some all_reduce ops are fused during training, after fusion, the number of all_reduce ops is 38.
/usr/local/lib/python2.7/dist-packages/paddle/fluid/executor.py:1070: UserWarning: The following exception is not an EOF exception.
"The following exception is not an EOF exception.")
Traceback (most recent call last):
File "train.py", line 117, in <module>
main()
File "train.py", line 113, in main
train()
File "train.py", line 108, in train
test_off=args.test_off)
File "/DeepSpeech/model_utils/model.py", line 311, in train
return_numpy=False)
File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/executor.py", line 1071, in run
six.reraise(*sys.exc_info())
File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/executor.py", line 1066, in run
return_merged=return_merged)
File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/executor.py", line 1167, in _run_impl
return_merged=return_merged)
File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/executor.py", line 879, in _run_parallel
tensors = exe.run(fetch_var_names, return_merged)._move_to_list()
paddle.fluid.core_avx.EnforceNotMet:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0 std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2 paddle::operators::WarpCTCFunctor<paddle::platform::CUDADeviceContext>::operator()(paddle::framework::ExecutionContext const&, float const*, float*, int const*, int const*, int const*, unsigned long, unsigned long, unsigned long, float*)
3 paddle::operators::WarpCTCKernel<paddle::platform::CUDADeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
4 std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CUDAPlace, false, 0ul, paddle::operators::WarpCTCKernel<paddle::platform::CUDADeviceContext, float> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
5 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
6 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
7 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
8 paddle::framework::details::ComputationOpHandle::RunImpl()
9 paddle::framework::details::FastThreadedSSAGraphExecutor::RunOpSync(paddle::framework::details::OpHandleBase*)
10 paddle::framework::details::FastThreadedSSAGraphExecutor::RunOp(paddle::framework::details::OpHandleBase*, std::shared_ptr<paddle::framework::BlockingQueue<unsigned long> > const&, unsigned long*)
11 std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<void>, std::__future_base::_Result_base::_Deleter>, void> >::_M_invoke(std::_Any_data const&)
12 std::__future_base::_State_base::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>&, bool&)
13 ThreadPool::ThreadPool(unsigned long)::{lambda()#1}::operator()() const
------------------------------------------
Python Call Stacks (More useful to users):
------------------------------------------
File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/framework.py", line 2610, in append_op
attrs=kwargs.get("attrs", None))
File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/layer_helper.py", line 43, in append_op
return self.main_program.current_block().append_op(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/paddle/fluid/layers/loss.py", line 628, in warpctc
'norm_by_times': norm_by_times,
File "/DeepSpeech/model_utils/network.py", line 433, in deep_speech_v2_network
input=fc, label=text_data, blank=dict_size, norm_by_times=True)
File "/DeepSpeech/model_utils/model.py", line 137, in create_network
share_rnn_weights=self._share_rnn_weights)
File "/DeepSpeech/model_utils/model.py", line 260, in train
train_reader, log_probs, ctc_loss = self.create_network()
File "train.py", line 108, in train
test_off=args.test_off)
File "train.py", line 113, in main
train()
File "train.py", line 117, in <module>
main()
----------------------
Error Message Summary:
----------------------
Error: warp-ctc [version 2] Error in compute_ctc_loss: execution failed
[Hint: Expected CTC_STATUS_SUCCESS == status, but received CTC_STATUS_SUCCESS:0 != status:3.] at (/paddle/paddle/fluid/operators/warpctc_op.h:94)
[operator < warpctc > error]
Failed in training!
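For what it's worth, status 3 in warp-ctc's ctcStatus_t enum is CTC_STATUS_EXECUTION_FAILED, i.e. the CTC CUDA kernel itself failed to run rather than being rejected for invalid arguments, which is consistent with the cuDNN 7.6-vs-7.5 mismatch warned about earlier in the log. A first isolation step (a sketch, reusing the hypothetical flags from the reconstruction above) is to rerun on a single card to see whether the failure is specific to multi-GPU:

```shell
# Restrict the run to one GPU. If it succeeds, the problem is multi-card
# specific; if it still fails, suspect the cuDNN version mismatch first.
CUDA_VISIBLE_DEVICES=0 python train.py --use_gpu 1 --batch_size 32 ...
```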