From 08b8d99ba9ab49d5767a16ca953eb6da6680fad9 Mon Sep 17 00:00:00 2001
From: jzhang533
Date: Tue, 18 Aug 2020 11:49:20 +0800
Subject: [PATCH] hello paddle added (#869)

---
 .../hello_paddle/hello_paddle.ipynb | 369 ++++++++++++++++++
 .../image_search/image_search.ipynb |   2 +-
 2 files changed, 370 insertions(+), 1 deletion(-)
 create mode 100644 paddle2.0_docs/hello_paddle/hello_paddle.ipynb

diff --git a/paddle2.0_docs/hello_paddle/hello_paddle.ipynb b/paddle2.0_docs/hello_paddle/hello_paddle.ipynb
new file mode 100644
index 0000000..742ca92
--- /dev/null
+++ b/paddle2.0_docs/hello_paddle/hello_paddle.ipynb
@@ -0,0 +1,369 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Hello Paddle: from an ordinary program to a machine learning program"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This example introduces the difference between an ordinary program and a machine learning program, and walks you through writing your first machine learning program with the PaddlePaddle framework."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# The logical difference between ordinary programs and machine learning programs\n",
+    "\n",
+    "As a developer, the way you are most used to starting to learn a programming language, or a deep learning framework, is probably through a hello, world program.\n",
+    "\n",
+    "Learning PaddlePaddle can work the same way: this short tutorial uses a very simple example to show you how to get started with PaddlePaddle.\n",
+    "\n",
+    "The biggest difference between a machine learning program and an ordinary program is this: in an ordinary program, given some input, you tell the computer the rules for processing the data and then obtain the processed result; a machine learning program does not know those rules in advance, and instead lets the machine **learn** the rules from the data.\n",
+    "\n",
+    "As a warm-up, let's first look at what an ordinary program does.\n",
+    "\n",
+    "Here is the task at hand:\n",
+    "\n",
+    "When we take a taxi, there is a base fare of 10 yuan that is charged as soon as we get in. For every kilometer driven, an extra 2 yuan per kilometer is charged. When a passenger finishes the ride, the meter in the car needs to work out the total fare the passenger has to pay.\n",
+    "\n",
+    "Implemented in python, it looks like this:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "12.0\n",
+      "16.0\n",
+      "20.0\n",
+      "28.0\n",
+      "30.0\n",
+      "50.0\n"
+     ]
+    }
+   ],
+   "source": [
+    "def calculate_fee(distance_travelled):\n",
+    "    return 10 + 2 * distance_travelled\n",
+    "\n",
+    "for x in [1.0, 3.0, 5.0, 9.0, 10.0, 20.0]:\n",
+    "    print(calculate_fee(x))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Next, let's change the problem slightly. Now we know how many kilometers a passenger travelled on each taxi ride, and we also know the total fare the passenger paid the driver at the end of each ride, but we do not know the base fare or the per-kilometer rate. We want the machine to learn, from this data, the rule for computing the total fare.\n",
+    "\n",
+    "More specifically, we want the machine learning program to learn, from the data, the parameters w and b in the formula below (this is a very simple example, so `w` and `b` are both floating point numbers; as you learn more about deep learning, you will find that `w` and `b` are usually matrices and vectors). That way, the next time we take a ride and know the distance travelled `distance_travelled`, we can estimate the total fare `total_fee`.\n",
+    "\n",
+    "```\n",
+    "total_fee = w * distance_travelled + b\n",
+    "```\n",
+    "\n",
+    "Next, let's see how to implement this hello, world level machine learning program with PaddlePaddle."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Import PaddlePaddle\n",
+    "\n",
+    "To use PaddlePaddle, we first need to import the `paddle` package with python's `import` statement.\n",
+    "We also import `numpy`, so that we can compute with and manipulate arrays more easily.\n",
+    "\n",
+    "If you are running this notebook locally and have not installed PaddlePaddle yet, please see the official website for installation instructions: [PaddlePaddle website](https://www.paddlepaddle.org.cn/), and please use PaddlePaddle 2.0beta or later."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "paddle version 2.0.0-alpha0\n"
+     ]
+    }
+   ],
+   "source": [
+    "import numpy as np\n",
+    "import paddle\n",
+    "\n",
+    "print(\"paddle version \" + paddle.__version__)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Prepare the data\n",
+    "\n",
+    "In this machine learning task, we already know each passenger's travelled distance `distance_travelled`, and the corresponding total fare `total_fee` each passenger paid.\n",
+    "In machine learning tasks, an input value like `distance_travelled` is usually called `x` (or a `feature`), and an output value like `total_fee` is usually called `y` (or a `label`).\n",
+    "\n",
+    "We represent these two sets of values with numpy arrays, and then use `np.hstack` to stack them into the final dataset."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {
+    "scrolled": true
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[[ 1. 12.]\n",
+      " [ 3. 16.]\n",
+      " [ 5. 20.]\n",
+      " [ 9. 28.]\n",
+      " [10. 30.]\n",
+      " [20. 50.]]\n"
+     ]
+    }
+   ],
+   "source": [
+    "x_data = np.array([[1.], [3.0], [5.0], [9.0], [10.0], [20.0]]).astype('float32')\n",
+    "y_data = np.array([[12.], [16.0], [20.0], [28.0], [30.0], [50.0]]).astype('float32')\n",
+    "data = np.hstack((x_data, y_data))\n",
+    "print(data)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Define the model's computation with PaddlePaddle\n",
+    "\n",
+    "Defining the model's computation with PaddlePaddle is, in essence, the process of using python and the APIs provided by PaddlePaddle to tell PaddlePaddle our computation rule. To recap, we want PaddlePaddle to use machine learning to learn, from the data, the `w` and `b` in the formula below, so that in the future, given `x`, we can estimate the value of `y` (the estimated `y` is written as `y_predict`).\n",
+    "\n",
+    "```\n",
+    "y_predict = w * x + b\n",
+    "```\n",
+    "\n",
+    "We will use PaddlePaddle's linear transformation layer `paddle.nn.Linear` to implement this computation. The variables `x, y, w, b, y_predict` in the formula correspond to the [Tensor concept](https://www.paddlepaddle.org.cn/documentation/docs/zh/beginners_guide/basic_concept/tensor.html) in PaddlePaddle.\n",
+    "\n",
+    "### A small note\n",
+    "\n",
+    "In this example we already know, from experience, that the relationship between `distance_travelled` and `total_fee` is linear. In more realistic problems, the relationship between `x` and `y` is usually nonlinear, which is why more kinds of, and more complex, neural networks are needed. (For example, your BMI is not a linear function of your height, and a pixel value in an image is not linearly related to whether the image shows a cat or a dog.)\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# this line will be removed after 2.0beta's official release\n",
+    "paddle.enable_imperative()\n",
+    "linear = paddle.nn.Linear(input_dim=1, output_dim=1)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Get ready to run PaddlePaddle\n",
+    "\n",
+    "The machine (computer) starts out by simply guessing `w` and `b` (more on this below); let's first see how good its guess is. You should see that `w` is a random value at this point and `b` is 0.0. This is PaddlePaddle's initialization strategy, and also a commonly used one in this field. (If you like, you can use other initialization schemes; later on you will also see that choosing an initialization strategy is an important part of doing deep learning tasks well.)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "w before optimize: 1.7059897184371948\n",
+      "b before optimize: 0.0\n"
+     ]
+    }
+   ],
+   "source": [
+    "w_before_opt = linear.weight.numpy()[0][0]\n",
+    "b_before_opt = linear.bias.numpy()[0]\n",
+    "\n",
+    "print(\"w before optimize: {}\".format(w_before_opt))\n",
+    "print(\"b before optimize: {}\".format(b_before_opt))\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Tell PaddlePaddle how to learn\n",
+    "\n",
+    "We have defined the neural network above (even though it is the simplest possible one). We still need to tell PaddlePaddle how to **learn**, so that it can obtain the parameters `w` and `b`.\n",
+    "\n",
+    "A brief description of the process should give you a rough idea (even though the theory and background knowledge behind it still need to be learned step by step). In machine learning/deep learning, the machine (computer) initially obtains the parameters `w` and `b` by simply guessing. When it uses these guessed values to compute (predict), the resulting `y_predict` is bound to show a **gap** from the actual `y`. The machine then **adjusts `w` and `b`** according to this gap; as these adjustments continue, `w` and `b` become more and more accurate, the gap between `y_predict` and `y` becomes smaller and smaller, and in the end we obtain useful values of `w` and `b`. This is the process by which the machine **learns**.\n",
+    "\n",
+    "In more technical terms, the function (a formula) that measures the **gap** is the loss function, and the method used to **adjust** the parameters is the optimization algorithm.\n",
+    "\n",
+    "In this example, we use the simplest loss function, mean squared error (`paddle.nn.MSELoss`), and the most common optimization algorithm, SGD (stochastic gradient descent). The `learning_rate` argument passed to `paddle.optimizer.SGDOptimizer` can be understood as controlling the step size of each adjustment."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "mse_loss = paddle.nn.MSELoss()\n",
+    "sgd_optimizer = paddle.optimizer.SGDOptimizer(learning_rate=0.001, parameter_list=linear.parameters())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Run the optimization algorithm\n",
+    "\n",
+    "Next, let's have PaddlePaddle run the optimization algorithm. This is the step-by-step parameter adjustment process described above; you should see the loss value (the `loss` that measures the gap between `y` and `y_predict`) keep decreasing."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "epoch 0 average loss [155.91637]\n",
+      "epoch 1000 average loss [8.280919]\n",
+      "epoch 2000 average loss [1.851555]\n",
+      "epoch 3000 average loss [0.4139988]\n",
+      "epoch 4000 average loss [0.09256991]\n",
+      "finished training, average loss [0.02072961]\n"
+     ]
+    }
+   ],
+   "source": [
+    "TOTAL_EPOCH = 5000\n",
+    "for i in range(TOTAL_EPOCH):\n",
+    "    # randomly shuffling the data in each epoch helps the optimization algorithm make better progress\n",
+    "    np.random.shuffle(data)\n",
+    "    # convert the numpy data into paddle tensors\n",
+    "    x = paddle.imperative.to_variable(data[:,:1])\n",
+    "    y = paddle.imperative.to_variable(data[:,1:])\n",
+    "    \n",
+    "    y_predict = linear(x)\n",
+    "    loss = mse_loss(y_predict, y)\n",
+    "    loss.backward()\n",
+    "    sgd_optimizer.minimize(loss)\n",
+    "    linear.clear_gradients()\n",
+    "    \n",
+    "    if i % 1000 == 0:\n",
+    "        print(\"epoch {} average loss {}\".format(i, loss.numpy()))\n",
+    "    \n",
+    "print(\"finished training, average loss {}\".format(loss.numpy()))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# The parameters the machine has learned\n",
+    "\n",
+    "After adjusting (**learning**) the parameters `w` and `b` in this way, let's run the code below to see what the parameters have become. You should find that `w` is now a value very close to 2.0 and `b` is close to 10.0. Although they are not exactly 2 and 10, they are reasonably good model parameters learned from the data, and they can be used for estimates in the future. (If you like, you can also let the machine learn for longer to obtain values even closer to 2.0 and 10.0.)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "w after optimize: 2.018333911895752\n",
+      "b after optimize: 9.765570640563965\n"
+     ]
+    }
+   ],
+   "source": [
+    "w_after_opt = linear.weight.numpy()[0][0]\n",
+    "b_after_opt = linear.bias.numpy()[0]\n",
+    "\n",
+    "print(\"w after optimize: {}\".format(w_after_opt))\n",
+    "print(\"b after optimize: {}\".format(b_after_opt))\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# hello paddle\n",
+    "\n",
+    "Through this small example, we hope you now have a first impression of PaddlePaddle, and that, as you keep learning more about it, you will be able to use it to solve the real problems you encounter."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "hello paddle\n"
+     ]
+    }
+   ],
+   "source": [
+    "print(\"hello paddle\")"
+   ]
+  }
+ ],
+ "metadata": {
+  "colab": {
+   "name": "hello-paddle.ipynb",
+   "private_outputs": true,
+   "provenance": [],
+   "toc_visible": true
+  },
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.7.7"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+}
diff --git a/paddle2.0_docs/image_search/image_search.ipynb b/paddle2.0_docs/image_search/image_search.ipynb
index 5a3911b..317e167 100644
--- a/paddle2.0_docs/image_search/image_search.ipynb
+++ b/paddle2.0_docs/image_search/image_search.ipynb
@@ -505,7 +505,7 @@
     "\n",
     "- The `inverse_temperature` parameter keeps the softmax in a region where the gradients are more pronounced when they are computed. (See the `scale` operation applied after the dot product in [attention is all you need](https://arxiv.org/abs/1706.03762).)\n",
     "- The whole computation first uses the network above to compute the high-dimensional representations of the first 10 images (anchors) and of the last 10 images, and then uses [matmul](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/layers_cn/matmul_cn.html) to compute the similarity between each of the first 10 images and each of the last 10 images. (So `similarities` is a `(10, 10)` Tensor.)\n",
-    "- When constructing the class labels for [softmax_with_cross_entropy](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/layers_cn/softmax_with_cross_entropy_cn.html), we can correspondingly construct label values of 0 ~ num_classes, so that the learning objective pushes the similarity of similar images as close to 1.0 as possible, and the similarity of dissimilar images as close to 0.0 as possible.\n"
+    "- When constructing the class labels for [softmax_with_cross_entropy](https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/layers_cn/softmax_with_cross_entropy_cn.html), we can correspondingly construct label values of 0 ~ num_classes, so that the learning objective pushes the similarity of similar images as close to 1.0 as possible, and the similarity of dissimilar images as close to -1.0 as possible.\n"
     ]
    },
    {
--
GitLab
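For a quick sense of how the learned parameters from the hello_paddle notebook are put to use, here is a minimal sketch of the prediction step. It assumes the notebook above has been run, so `w_after_opt` and `b_after_opt` hold the learned values; the 25 km trip is a made-up example value.

```python
# Minimal sketch: estimate the fare for a new, hypothetical 25 km trip
# using the parameters learned in the notebook above (assumes it has been run,
# so w_after_opt and b_after_opt are defined).
new_distance = 25.0

estimated_fee = w_after_opt * new_distance + b_after_opt

# With w close to 2.0 and b close to 10.0, this prints a value close to
# 2.0 * 25.0 + 10.0 = 60.0.
print("estimated fee for a {} km trip: {}".format(new_distance, estimated_fee))
```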