Unverified commit 5ab525db authored by saxon_zh, committed by GitHub

add define callback/metric/loss chapter (#893)

* upgrade code to 2.0-beta

* add high level api doc

* add define callback/metric/loss chapter

* add define callback/metric/loss chapter
Parent 46f341e3
......@@ -57,7 +57,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"metadata": {},
"outputs": [
{
......@@ -66,7 +66,7 @@
"text/plain": "'0.0.0'"
},
"metadata": {},
"execution_count": 4
"execution_count": 2
}
],
"source": [
......@@ -91,10 +91,7 @@
"* 如何进行模型的组网。\n",
"* 高层API进行模型训练的相关API使用。\n",
"* 如何在fit接口满足需求的时候进行自定义,使用基础API来完成训练。\n",
"* 如何使用多卡来加速训练。\n",
"\n",
"其他端到端的示例教程:\n",
"* TBD"
"* 如何使用多卡来加速训练。"
]
},
{
......@@ -112,22 +109,20 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 5,
"metadata": {
"tags": []
},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": "['DatasetFolder',\n 'ImageFolder',\n 'MNIST',\n 'Flowers',\n 'Cifar10',\n 'Cifar100',\n 'VOC2012']"
},
"metadata": {},
"execution_count": 17
"output_type": "stream",
"name": "stdout",
"text": "视觉相关数据集: ['DatasetFolder', 'ImageFolder', 'MNIST', 'Flowers', 'Cifar10', 'Cifar100', 'VOC2012']\n自然语言相关数据集: ['Conll05st', 'Imdb', 'Imikolov', 'Movielens', 'MovieReviews', 'UCIHousing', 'WMT14', 'WMT16']\n"
}
],
"source": [
"paddle.vision.datasets.__all__"
"print('视觉相关数据集:', paddle.vision.datasets.__all__)\n",
"print('自然语言相关数据集:', paddle.text.datasets.__all__)"
]
},
{
......@@ -458,9 +453,9 @@
"source": [
"## 5. 模型训练\n",
"\n",
"使用`paddle.Model`封装成模型类后进行训练非常的简洁方便,我们可以直接通过调用`Model.fit`就可以完成训练过程。\n",
"网络结构通过`paddle.Model`接口封装成模型类后进行执行操作非常的简洁方便,可以直接通过调用`Model.fit`就可以完成训练过程。\n",
"\n",
"使用`Model.fit`接口启动训练前,我们先通过`Model.prepare`接口来对训练进行提前的配置准备工作,包括设置模型优化器,Loss计算方法,精度计算方法等。\n",
"使用`Model.fit`接口启动训练前,我们先通过`Model.prepare`接口来对训练进行提前的配置准备工作,包括设置模型优化器,Loss计算方法,精度计算方法等。\n",
"\n"
]
},
......@@ -553,13 +548,269 @@
"python -m paddle.distributed.launch train.py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 5.3 自定义Loss\n",
"\n",
"有时我们会遇到特定任务的Loss计算方式在框架既有的Loss接口中不存在,或算法不符合自己的需求,那么期望能够自己来进行Loss的自定义,我们这里就会讲解介绍一下如何进行Loss的自定义操作,首先来看下面的代码:\n",
"\n",
"```python\n",
"class SelfDefineLoss(paddle.nn.Layer):\n",
" \"\"\"\n",
" 1. 继承paddle.nn.Layer\n",
" \"\"\"\n",
" def __init__(self):\n",
" \"\"\"\n",
" 2. 构造函数根据自己的实际算法需求和使用需求进行参数定义即可\n",
" \"\"\"\n",
" super(SelfDefineLoss, self).__init__()\n",
"\n",
" def forward(self, input, label):\n",
" \"\"\"\n",
" 3. 实现forward函数,forward在调用时会传递两个参数:input和label\n",
" - input:单个或批次训练数据经过模型前向计算输出结果\n",
" - label:单个或批次训练数据对应的标签数据\n",
"\n",
" 接口返回值是一个Tensor,根据自定义的逻辑加和或计算均值后的损失\n",
" \"\"\"\n",
" # 使用Paddle中相关API自定义的计算逻辑\n",
" # output = xxxxx\n",
" # return output\n",
"```\n",
"\n",
"那么了解完代码层面如果编写自定义代码后我们看一个实际的例子,下面是在图像分割示例代码中写的一个自定义Loss,当时主要是想使用自定义的softmax计算维度。\n",
"\n",
"```python\n",
"class SoftmaxWithCrossEntropy(paddle.nn.Layer):\n",
" def __init__(self):\n",
" super(SoftmaxWithCrossEntropy, self).__init__()\n",
"\n",
" def forward(self, input, label):\n",
" loss = F.softmax_with_cross_entropy(input, \n",
" label, \n",
" return_softmax=False,\n",
" axis=1)\n",
" return paddle.mean(loss)\n",
"```"
]
},
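{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "A custom loss defined this way is used just like a built-in one. The following is only a hedged sketch: the optimizer choice and the `model` object are assumptions carried over from the earlier sections.\n",
  "\n",
  "```python\n",
  "# Pass an instance of the custom loss to prepare(); fit() will then call its\n",
  "# forward() with the network output and the labels for every batch.\n",
  "model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),\n",
  "              SoftmaxWithCrossEntropy())\n",
  "```"
 ]
},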
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 5.4 自定义Metric\n",
"\n",
"和Loss一样,如果遇到一些想要做个性化实现的操作时,我们也可以来通过框架完成自定义的评估计算方法,具体的实现方式如下:\n",
"\n",
"```python\n",
"class SelfDefineMetric(paddle.metric.Metric):\n",
" \"\"\"\n",
" 1. 继承paddle.metric.Metric\n",
" \"\"\"\n",
" def __init__(self):\n",
" \"\"\"\n",
" 2. 构造函数实现,自定义参数即可\n",
" \"\"\"\n",
" super(SelfDefineMetric, self).__init__()\n",
"\n",
" def name(self):\n",
" \"\"\"\n",
" 3. 实现name方法,返回定义的评估指标名字\n",
" \"\"\"\n",
" return '自定义评价指标的名字'\n",
"\n",
" def compute(self, ...)\n",
" \"\"\"\n",
" 4. 本步骤可以省略,实现compute方法,这个方法主要用于`update`的加速,可以在这个方法中调用一些paddle实现好的Tensor计算API,编译到模型网络中一起使用低层C++ OP计算。\n",
" \"\"\"\n",
"\n",
" return 自己想要返回的数据,会做为update的参数传入。\n",
"\n",
" def update(self, ...):\n",
" \"\"\"\n",
" 5. 实现update方法,用于单个batch训练时进行评估指标计算。\n",
" - 当`compute`类函数未实现时,会将模型的计算输出和标签数据的展平作为`update`的参数传入。\n",
" - 当`compute`类函数做了实现时,会将compute的返回结果作为`update`的参数传入。\n",
" \"\"\"\n",
" return acc value\n",
" \n",
" def accumulate(self):\n",
" \"\"\"\n",
" 6. 实现accumulate方法,返回历史batch训练积累后计算得到的评价指标值。\n",
" 每次`update`调用时进行数据积累,`accumulate`计算时对积累的所有数据进行计算并返回。\n",
" 结算结果会在`fit`接口的训练日志中呈现。\n",
" \"\"\"\n",
" # 利用update中积累的成员变量数据进行计算后返回\n",
" return accumulated acc value\n",
"\n",
" def reset(self):\n",
" \"\"\"\n",
" 7. 实现reset方法,每个Epoch结束后进行评估指标的重置,这样下个Epoch可以重新进行计算。\n",
" \"\"\"\n",
" # do reset action\n",
"```\n",
"\n",
"我们看一个框架中的具体例子,这个是框架中已提供的一个评估指标计算接口,这里就是按照上述说明中的实现方法进行了相关类继承和成员函数实现。\n",
"\n",
"```python\n",
"from paddle.metric import Metric\n",
"\n",
"\n",
"class Precision(Metric):\n",
" \"\"\"\n",
" Precision (also called positive predictive value) is the fraction of\n",
" relevant instances among the retrieved instances. Refer to\n",
" https://en.wikipedia.org/wiki/Evaluation_of_binary_classifiers\n",
"\n",
" Noted that this class manages the precision score only for binary\n",
" classification task.\n",
" \n",
" ......\n",
"\n",
" \"\"\"\n",
"\n",
" def __init__(self, name='precision', *args, **kwargs):\n",
" super(Precision, self).__init__(*args, **kwargs)\n",
" self.tp = 0 # true positive\n",
" self.fp = 0 # false positive\n",
" self._name = name\n",
"\n",
" def update(self, preds, labels):\n",
" \"\"\"\n",
" Update the states based on the current mini-batch prediction results.\n",
"\n",
" Args:\n",
" preds (numpy.ndarray): The prediction result, usually the output\n",
" of two-class sigmoid function. It should be a vector (column\n",
" vector or row vector) with data type: 'float64' or 'float32'.\n",
" labels (numpy.ndarray): The ground truth (labels),\n",
" the shape should keep the same as preds.\n",
" The data type is 'int32' or 'int64'.\n",
" \"\"\"\n",
" if isinstance(preds, paddle.Tensor):\n",
" preds = preds.numpy()\n",
" elif not _is_numpy_(preds):\n",
" raise ValueError(\"The 'preds' must be a numpy ndarray or Tensor.\")\n",
"\n",
" if isinstance(labels, paddle.Tensor):\n",
" labels = labels.numpy()\n",
" elif not _is_numpy_(labels):\n",
" raise ValueError(\"The 'labels' must be a numpy ndarray or Tensor.\")\n",
"\n",
" sample_num = labels.shape[0]\n",
" preds = np.floor(preds + 0.5).astype(\"int32\")\n",
"\n",
" for i in range(sample_num):\n",
" pred = preds[i]\n",
" label = labels[i]\n",
" if pred == 1:\n",
" if pred == label:\n",
" self.tp += 1\n",
" else:\n",
" self.fp += 1\n",
"\n",
" def reset(self):\n",
" \"\"\"\n",
" Resets all of the metric state.\n",
" \"\"\"\n",
" self.tp = 0\n",
" self.fp = 0\n",
"\n",
" def accumulate(self):\n",
" \"\"\"\n",
" Calculate the final precision.\n",
"\n",
" Returns:\n",
" A scaler float: results of the calculated precision.\n",
" \"\"\"\n",
" ap = self.tp + self.fp\n",
" return float(self.tp) / ap if ap != 0 else .0\n",
"\n",
" def name(self):\n",
" \"\"\"\n",
" Returns metric name\n",
" \"\"\"\n",
" return self._name\n",
"```"
]
},
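{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "Whether built-in or self-defined, a metric takes effect by being passed to `Model.prepare` through its metrics argument. The following is only a hedged sketch: it assumes a binary-classification `model` whose output goes through a sigmoid, and reuses the optimizer choice from earlier.\n",
  "\n",
  "```python\n",
  "# metrics accepts a single Metric instance or a list of them; their accumulate()\n",
  "# results appear in the fit() training log. A self-defined Metric subclass is\n",
  "# passed in exactly the same way as the built-in Precision shown above.\n",
  "model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),\n",
  "              paddle.nn.BCELoss(),\n",
  "              metrics=paddle.metric.Precision())\n",
  "```"
 ]
},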
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 5.5 自定义Callback\n",
"\n",
"`fit`接口的callback参数支持我们传一个Callback类实例,用来在每轮训练和每个batch训练前后进行调用,可以通过callback收集到训练过程中的一些数据和参数,或者实现一些自定义操作。\n",
"\n",
"```python\n",
"class SelfDefineCallback(paddle.callbacks.Callback):\n",
" \"\"\"\n",
" 1. 继承paddle.callbacks.Callback\n",
" 2. 按照自己的需求实现以下类成员方法:\n",
" def on_train_begin(self, logs=None) 训练开始前,`Model.fit`接口中调用\n",
" def on_train_end(self, logs=None) 训练结束后,`Model.fit`接口中调用\n",
" def on_eval_begin(self, logs=None) 评估开始前,`Model.evaluate`接口调用\n",
" def on_eval_end(self, logs=None) 评估结束后,`Model.evaluate`接口调用\n",
" def on_test_begin(self, logs=None) 预测测试开始前,`Model.predict`接口中调用\n",
" def on_test_end(self, logs=None) 预测测试结束后,`Model.predict`接口中调用 \n",
" def on_epoch_begin(self, epoch, logs=None) 每轮训练开始前,`Model.fit`接口中调用 \n",
" def on_epoch_end(self, epoch, logs=None) 每轮训练结束后,`Model.fit`接口中调用 \n",
" def on_train_batch_begin(self, step, logs=None) 单个Batch训练开始前,`Model.fit`和`Model.train_batch`接口中调用\n",
" def on_train_batch_end(self, step, logs=None) 单个Batch训练结束后,`Model.fit`和`Model.train_batch`接口中调用\n",
" def on_eval_batch_begin(self, step, logs=None) 单个Batch评估开始前,`Model.evalute`和`Model.eval_batch`接口中调用\n",
" def on_eval_batch_end(self, step, logs=None) 单个Batch评估结束后,`Model.evalute`和`Model.eval_batch`接口中调用\n",
" def on_test_batch_begin(self, step, logs=None) 单个Batch预测测试开始前,`Model.predict`和`Model.test_batch`接口中调用\n",
" def on_test_batch_end(self, step, logs=None) 单个Batch预测测试结束后,`Model.predict`和`Model.test_batch`接口中调用\n",
" \"\"\"\n",
" def __init__(self):\n",
" super(SelfDefineCallback, self).__init__()\n",
"\n",
" 按照需求定义自己的类成员方法\n",
"```\n",
"\n",
"我们看一个框架中的实际例子,这是一个框架自带的ModelCheckpoint回调函数,方便用户在fit训练模型时自动存储每轮训练得到的模型。\n",
"\n",
"```python\n",
"class ModelCheckpoint(Callback):\n",
" def __init__(self, save_freq=1, save_dir=None):\n",
" self.save_freq = save_freq\n",
" self.save_dir = save_dir\n",
"\n",
" def on_epoch_begin(self, epoch=None, logs=None):\n",
" self.epoch = epoch\n",
"\n",
" def _is_save(self):\n",
" return self.model and self.save_dir and ParallelEnv().local_rank == 0\n",
"\n",
" def on_epoch_end(self, epoch, logs=None):\n",
" if self._is_save() and self.epoch % self.save_freq == 0:\n",
" path = '{}/{}'.format(self.save_dir, epoch)\n",
" print('save checkpoint at {}'.format(os.path.abspath(path)))\n",
" self.model.save(path)\n",
"\n",
" def on_train_end(self, logs=None):\n",
" if self._is_save():\n",
" path = '{}/final'.format(self.save_dir)\n",
" print('save checkpoint at {}'.format(os.path.abspath(path)))\n",
" self.model.save(path)\n",
"\n",
"```"
]
},
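{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "Callbacks, whether self-defined or built-in, are passed to `fit` through its callbacks argument. A hedged sketch, with the save directory chosen only for illustration:\n",
  "\n",
  "```python\n",
  "# Save a checkpoint every epoch under ./checkpoints; several callbacks can be passed as a list.\n",
  "ckpt = paddle.callbacks.ModelCheckpoint(save_freq=1, save_dir='./checkpoints')\n",
  "model.fit(train_dataset, epochs=2, batch_size=64, callbacks=[ckpt])\n",
  "```"
 ]
},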
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. 模型评估\n",
"\n",
"对于训练好的模型进行评估操作可以使用`evaluate`接口来实现。"
"对于训练好的模型进行评估操作可以使用`evaluate`接口来实现,事先定义好用于评估使用的数据集后,可以简单的调用`evaluate`接口即可完成模型评估操作,结束后根据prepare中loss和metric的定义来进行相关评估结果计算返回。\n",
"\n",
"返回格式是一个字典:\n",
"* 只包含loss,`{'loss': xxx}`\n",
"* 包含loss和一个评估指标,`{'loss': xxx, 'metric name': xxx}`\n",
"* 包含loss和多个评估指标,`{'loss': xxx, 'metric name': xxx, 'metric name': xxx}`"
]
},
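{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "A hedged sketch of the call, assuming `val_dataset` is the evaluation dataset defined earlier:\n",
  "\n",
  "```python\n",
  "# The keys of the returned dict follow the loss and metrics configured in prepare().\n",
  "result = model.evaluate(val_dataset, verbose=1)\n",
  "print(result)\n",
  "```"
 ]
},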
{
......@@ -577,7 +828,13 @@
"source": [
"## 7. 模型预测\n",
"\n",
"高层API中提供`predict`接口,支持用户使用测试数据来完成模型的预测。"
"高层API中提供了`predict`接口来方便用户对训练好的模型进行预测验证,只需要基于训练好的模型将需要进行预测测试的数据放到接口中进行计算即可,接口会将经过模型计算得到的预测结果进行返回。\n",
"\n",
"返回格式是一个list,元素数目对应模型的输出数目:\n",
"* 模型是单一输出:[(numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n)]\n",
"* 模型是多输出:[(numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n), (numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n), ...]\n",
"\n",
"numpy_ndarray_n是对应原始数据经过模型计算后得到的预测数据,数目对应预测数据集的数目。"
]
},
{
......@@ -589,6 +846,23 @@
"pred_result = model.predict(val_dataset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 7.1 使用多卡进行预测\n",
"\n",
"有时我们需要进行预测验证的数据较多,单卡无法满足我们的时间诉求,那么`predict`接口也为用户支持实现了使用多卡模式来运行。\n",
"\n",
"使用起来也是超级简单,无需修改代码程序,只需要使用launch来启动对应的预测脚本即可。\n",
"\n",
"```bash\n",
"$ python3 -m paddle.distributed.launch infer.py\n",
"```\n",
"\n",
"infer.py里面就是包含model.predict的代码程序。"
]
},
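{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "A hedged sketch of what such an infer.py might contain; the network, dataset and checkpoint path below are placeholders, not code from this tutorial:\n",
  "\n",
  "```python\n",
  "# infer.py: launched with `python -m paddle.distributed.launch infer.py`.\n",
  "import paddle\n",
  "\n",
  "model = paddle.Model(network)           # `network` is your trained network definition\n",
  "model.load('checkpoint/final')          # path saved earlier with model.save()\n",
  "model.prepare()\n",
  "pred_result = model.predict(test_dataset)\n",
  "```"
 ]
},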
{
"cell_type": "markdown",
"metadata": {},
......@@ -597,7 +871,7 @@
"\n",
"### 8.1 模型存储\n",
"\n",
"模型训练和验证达到我们的预期后,可以使用`save`接口来将我们的模型保存下来,用于后续模型的Fine-tuning或推理部署。"
"模型训练和验证达到我们的预期后,可以使用`save`接口来将我们的模型保存下来,用于后续模型的Fine-tuning(接口参数training=True)或推理部署(接口参数training=False)。"
]
},
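{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "A hedged sketch of the two variants; the save paths are placeholders, and exporting for inference may additionally require the Model to have been built with input specs in dynamic graph mode:\n",
  "\n",
  "```python\n",
  "# training=True (default): save parameters plus optimizer state, for later fine-tuning.\n",
  "model.save('checkpoint/mnist')\n",
  "\n",
  "# training=False: export a model for inference deployment.\n",
  "model.save('inference_model/mnist', training=False)\n",
  "```"
 ]
},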
{
......