{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 变分量子态对角化算法(VQSD)\n", " Copyright (c) 2020 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 概览\n", "\n", "- 在本案例中,我们将通过Paddle Quantum训练量子神经网络来完成量子态的对角化。\n", "\n", "- 首先,让我们通过下面几行代码引入必要的library和package。" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "pycharm": { "is_executing": false, "name": "#%%\n" } }, "outputs": [], "source": [ "import numpy\n", "from numpy import diag\n", "import scipy\n", "from paddle import fluid\n", "from paddle_quantum.circuit import UAnsatz\n", "from paddle_quantum.utils import dagger\n", "from paddle.complex import matmul, trace, transpose" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "\n", "## 背景\n", "量子态对角化算法(VQSD,Variational Quantum State Diagonalization)[1-3] 的目标是输出一个量子态的特征谱,即其所有特征值。求解量子态的特征值在量子计算中有着诸多应用,比如可以用于计算保真度和冯诺依曼熵,也可以用于主成分分析。\n", "- 量子态通常是一个混合态,表示如下 $\\rho_{\\text{mixed}} = \\sum_i P_i |\\psi_i\\rangle\\langle\\psi_i|$\n", "- 作为一个简单的例子,我们考虑一个2量子位的量子态,它的特征谱为 $(0.5, 0.3, 0.1, 0.1)$, 我们先通过随机作用一个酉矩阵来生成具有这样特征谱的随机量子态。\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "pycharm": { "is_executing": false, "name": "#%%\n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 0.25692714+2.79965123e-18j -0.01201165+4.35008229e-02j\n", " -0.04922153-5.53435795e-03j -0.05482813+6.81592880e-02j]\n", " [-0.01201165-4.35008229e-02j 0.29589652-4.11838221e-18j\n", " 0.10614221-7.12575060e-02j -0.03921986-9.71359495e-02j]\n", " [-0.04922153+5.53435795e-03j 0.10614221+7.12575060e-02j\n", " 0.214462 -3.16199986e-18j 0.02936413-1.13227406e-01j]\n", " [-0.05482813-6.81592880e-02j -0.03921986+9.71359495e-02j\n", " 0.02936413+1.13227406e-01j 0.23271434+4.32784528e-18j]]\n" ] } ], "source": [ "scipy.random.seed(13)\n", "V = scipy.stats.unitary_group.rvs(4) # 随机生成一个酉矩阵\n", "D = diag([0.5, 0.3, 0.1, 0.1]) # 输入目标态 rho 的谱\n", "V_H = V.conj().T \n", "rho = V @ D @ V_H # 生成 rho\n", "print(rho) # 打印量子态 rho" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 搭建量子神经网络\n", "- 在这个案例中,我们将通过训练量子神经网络QNN(也可以理解为参数化量子电路)来学习量子态的特征谱。这里,我们提供一个预设的2量子位量子电路。\n", "\n", "- 我们预设一些该参数化电路的参数,比如宽度为2量子位。\n", "\n", "- 初始化其中的变量参数,${\\bf{\\theta }}$代表我们量子神经网络中的参数组成的向量。\n", " " ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "pycharm": { "is_executing": false, "name": "#%% \n" } }, "outputs": [], "source": [ "N = 2 # 量子神经网络的宽度\n", "SEED = 14 # 固定随机种子\n", "THETA_SIZE = 15 # 量子神经网络中参数的数量\n", "\n", "def U_theta(theta, N):\n", " \"\"\"\n", " Quantum Neural Network\n", " \"\"\"\n", " \n", " # 按照量子比特数量/网络宽度初始化量子神经网络\n", " cir = UAnsatz(N)\n", " \n", " # 调用内置的量子神经网络模板\n", " cir.universal_2_qubit_gate(theta)\n", "\n", " # 返回量子神经网络所模拟的酉矩阵 U\n", " return cir.U" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 配置训练模型 - 损失函数\n", "- 现在我们已经有了数据和量子神经网络的架构,我们将进一步定义训练参数、模型和损失函数。\n", "- 通过作用量子神经网络$U(\\theta)$在量子态$\\rho$后得到的量子态记为$\\tilde\\rho$,我们设定损失函数为$\\tilde\\rho$与用来标记的量子态$\\sigma=0.1 |00\\rangle\\langle 00| + 0.2 |01\\rangle \\langle 01| + 0.3 |10\\rangle \\langle10| + 0.4 |11 \\rangle\\langle 11|$的推广的内积。\n", "- 具体的,设定损失函数为 $\\mathcal{L}(\\boldsymbol{\\theta}) = \\text{Tr}(\\tilde\\rho\\sigma) .$" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "pycharm": { "is_executing": false, "name": "#%%\n" } }, "outputs": [], "source": [ "# 输入用来标记的量子态sigma\n", "sigma = diag([0.1, 0.2, 0.3, 0.4]).astype('complex128') \n", "\n", "class Net(fluid.dygraph.Layer):\n", " \"\"\"\n", " Construct the model net\n", " \"\"\"\n", "\n", " def __init__(self, shape, rho, sigma, param_attr=fluid.initializer.Uniform(low=0.0, high=2 * numpy.pi, seed=SEED),\n", " dtype='float64'):\n", " super(Net, self).__init__()\n", " \n", " # 将 Numpy array 转换成 Paddle 动态图模式中支持的 variable\n", " self.rho = fluid.dygraph.to_variable(rho)\n", " self.sigma = fluid.dygraph.to_variable(sigma)\n", " \n", " # 初始化 theta 参数列表,并用 [0, 2*pi] 的均匀分布来填充初始值\n", " self.theta = self.create_parameter(shape=shape, attr=param_attr, dtype=dtype, is_bias=False)\n", "\n", " # 定义损失函数和前向传播机制\n", " def forward(self, N):\n", " \n", " # 施加量子神经网络\n", " U = U_theta(self.theta, N)\n", "\n", " # rho_tilde 是将 U 作用在 rho 后得到的量子态 U*rho*U^dagger \n", " rho_tilde = matmul(matmul(U, self.rho), dagger(U))\n", "\n", " # 计算损失函数\n", " loss = trace(matmul(self.sigma, rho_tilde))\n", "\n", " return loss.real, rho_tilde" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 配置训练模型 - 模型参数\n", "\n", "在进行量子神经网络的训练之前,我们还需要进行一些训练(超)参数的设置,例如学习速率与迭代次数。\n", "- 设定学习速率(learning rate)为 0.1;\n", "- 设定迭代次数为50次。" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "pycharm": { "is_executing": false, "name": "#%%\n" } }, "outputs": [], "source": [ "ITR = 50 # 设置训练的总的迭代次数\n", "LR = 0.1 # 设置学习速率" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 进行训练\n", "\n", "- 当训练模型的各项参数都设置完成后,我们将数据转化为Paddle动态图中的变量,进而进行量子神经网络的训练。\n", "- 过程中我们用的是Adam Optimizer,也可以调用Paddle中提供的其他优化器。\n", "- 我们将训练过程中的结果依次输出。" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "pycharm": { "is_executing": false, "name": "#%% \n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "iter: 0 loss: 0.2354\n", "iter: 10 loss: 0.1912\n", "iter: 20 loss: 0.1844\n", "iter: 30 loss: 0.1823\n", "iter: 40 loss: 0.1813\n" ] } ], "source": [ "# 初始化paddle动态图机制\n", "with fluid.dygraph.guard():\n", " \n", " # 确定网络的参数维度\n", " net = Net(shape=[THETA_SIZE], rho=rho, sigma=sigma)\n", "\n", " # 一般来说,我们利用Adam优化器来获得相对好的收敛,当然你可以改成SGD或者是RMS prop.\n", " opt = fluid.optimizer.AdagradOptimizer(learning_rate=LR, parameter_list=net.parameters())\n", " \n", " # 优化循环\n", " for itr in range(ITR):\n", " \n", " # 前向传播计算损失函数并返回估计的能谱\n", " loss, rho_tilde = net(N)\n", " rho_tilde_np = rho_tilde.numpy()\n", " \n", " # 在动态图机制下,反向传播极小化损失函数\n", " loss.backward()\n", " opt.minimize(loss)\n", " net.clear_gradients()\n", " \n", " # 打印训练结果\n", " if itr % 10 == 0:\n", " print('iter:', itr, 'loss:', '%.4f' % loss.numpy()[0])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 总结\n", "根据上面训练得到的结果,通过大概50次迭代,我们就比较好的完成了对角化。\n", "我们可以通过打印\n", "$\\tilde{\\rho} = U(\\boldsymbol{\\theta})\\rho U^\\dagger(\\boldsymbol{\\theta})$\n", "的来验证谱分解的效果。特别的,我们可以验证它的对角线与我们目标谱是非常接近的。" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "pycharm": { "is_executing": false, "name": "#%%\n" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The estimated spectrum is: [0.49401064 0.30357179 0.10224927 0.10016829]\n", "The target spectrum is: [0.5 0.3 0.1 0.1]\n" ] } ], "source": [ "print(\"The estimated spectrum is:\", numpy.real(numpy.diag(rho_tilde_np)))\n", "print(\"The target spectrum is:\", numpy.diag(D))" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 参考文献\n", "\n", "[1] [Larose, R., Tikku, A., Neel-judy, É. O., Cincio, L. & Coles, P. J. Variational quantum state diagonalization. npj Quantum Inf. (2019) doi:10.1038/s41534-019-0167-6.](https://www.nature.com/articles/s41534-019-0167-6)\n", "\n", "[2] [Nakanishi, K. M., Mitarai, K. & Fujii, K. Subspace-search variational quantum eigensolver for excited states. Phys. Rev. Res. 1, 033062 (2019).](https://journals.aps.org/prresearch/pdf/10.1103/PhysRevResearch.1.033062)\n", "\n", "[3] [Cerezo, M., Sharma, K., Arrasmith, A. & Coles, P. J. Variational Quantum State Eigensolver. arXiv:2004.01372 (2020).](https://arxiv.org/pdf/2004.01372.pdf)\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.7" }, "pycharm": { "stem_cell": { "cell_type": "raw", "metadata": { "collapsed": false }, "source": [] } } }, "nbformat": 4, "nbformat_minor": 1 }