{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 子空间搜索-量子变分特征求解器 (Subspace-Search VQE)\n",
    "\n",
    "<em> Copyright (c) 2020 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. </em>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 概览\n",
    "\n",
    "- 在这个案例中,我们将展示如何通过Paddle Quantum训练量子神经网络来求解量子系统的特征。\n",
    "\n",
    "- 首先,让我们通过下面几行代码引入必要的library和package。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "pycharm": {
     "is_executing": false,
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "import numpy\n",
    "\n",
    "from paddle.complex import matmul, transpose\n",
    "from paddle import fluid\n",
    "from paddle_quantum.circuit import UAnsatz\n",
    "from paddle_quantum.utils import random_pauli_str_generator, pauli_str_to_matrix, dagger"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 背景\n",
    "\n",
    "- 量子计算在近期内备受瞩目的一个应用就是变分量子特征求解器(VQE, variational quantum eigensolver (VQE)) [1-3].\n",
    "- VQE是量子化学在近期有噪量子设备(NISQ device)上的核心应用之一,其中一个功能比较强大的版本是SSVQE [4],其核心是去求解一个物理系统的哈密顿量的基态和**激发态**的性质。数学上,可以理解为求解一个厄米矩阵(Hermitian matrix)的特征值及其对应的特征向量。该哈密顿量的特征值组成的集合我们称其为能谱 (Energy spectrum)。\n",
    "- 接下来我们将通过一个简单的例子学习如何通过训练量子神经网络解决这个问题,即求解出给定哈密顿量 $H$ 的能谱。\n",
    "\n",
    "## SSVQE分析物理系统的基态和激发态的能量\n",
    "\n",
    "- 对于具体需要分析的分子,我们需要其几何构型 (geometry)、电荷 (charge) 以及自旋多重度 (spin multiplicity) 等多项信息来建模获取描述系统的哈密顿量。具体的,通过我们内置的量子化学工具包可以利用 fermionic-to-qubit 映射的技术来输出目标分子的量子比特哈密顿量表示。\n",
    "- 作为简单的入门案例,我们在这里提供一个简单的随机2量子比特哈密顿量作为例子。 "
   ]
  },
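  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For a 2-qubit system, the Hamiltonian is just a $4\\times 4$ Hermitian matrix, so its exact spectrum can also be obtained classically by direct diagonalization; the last section of this tutorial uses exactly this as the reference. Below is a minimal, illustrative numpy sketch (the matrix `A` is a randomly generated stand-in, not the Hamiltonian used later):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative only: build a random 4x4 Hermitian matrix and read off its energy spectrum\n",
    "M = numpy.random.randn(4, 4) + 1j * numpy.random.randn(4, 4)\n",
    "A = (M + M.conj().T) / 2            # (M + M^dagger)/2 is Hermitian by construction\n",
    "spectrum = numpy.linalg.eigh(A)[0]  # eigenvalues in ascending order = the energy spectrum\n",
    "print(spectrum)"
   ]
  },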
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "N = 2        # 量子比特数/量子神经网络的宽度 \n",
    "SEED = 14    # 固定随机种子"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Random Hamiltonian in Pauli string format = \n",
      " [[-0.9208973013017021, 'y0,y1'], [0.198490728051139, 'y1'], [0.9587239095910918, 'x1'], [0.07176335076663909, 'x1'], [-0.7920788641602743, 'x1'], [-0.5028229022143014, 'x0'], [-0.14565978959930526, 'z1,y0'], [0.5965836192828249, 'z1,z0'], [0.6251774164147041, 'y1,y0'], [-0.1580838163596252, 'z0,x1']]\n"
     ]
    }
   ],
   "source": [
    "# 生成用泡利字符串表示的随机哈密顿量\n",
    "hamiltonian = random_pauli_str_generator(N, terms=10)\n",
    "print(\"Random Hamiltonian in Pauli string format = \\n\", hamiltonian)\n",
    "\n",
    "# 生成哈密顿量的矩阵信息\n",
    "H = pauli_str_to_matrix(hamiltonian, N)"
   ]
  },
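  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the Pauli-string format above more concrete, the sketch below shows one way a list of `[coefficient, 'pauli_word']` terms can be expanded into a dense matrix via Kronecker products. This is only an illustrative reconstruction of what `pauli_str_to_matrix` already does for us (the helper name `pauli_str_to_matrix_sketch` is ours, and we assume qubit 0 corresponds to the left-most Kronecker factor, which may differ from the library's convention; the eigenvalues are unaffected by that choice):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative reconstruction (not the library implementation):\n",
    "# expand a Pauli string such as [[0.5, 'x0,z1'], ...] into a 2^N x 2^N matrix\n",
    "PAULI = {\n",
    "    'i': numpy.eye(2, dtype=complex),\n",
    "    'x': numpy.array([[0, 1], [1, 0]], dtype=complex),\n",
    "    'y': numpy.array([[0, -1j], [1j, 0]]),\n",
    "    'z': numpy.array([[1, 0], [0, -1]], dtype=complex),\n",
    "}\n",
    "\n",
    "def pauli_str_to_matrix_sketch(pauli_str, n):\n",
    "    H_mat = numpy.zeros((2 ** n, 2 ** n), dtype=complex)\n",
    "    for coeff, word in pauli_str:\n",
    "        ops = ['i'] * n                   # identity on every qubit by default\n",
    "        for term in word.split(','):      # e.g. 'y0' -> Pauli Y acting on qubit 0\n",
    "            ops[int(term[1:])] = term[0].lower()\n",
    "        kron_prod = numpy.array([[1.0]], dtype=complex)\n",
    "        for op in ops:                    # assumption: qubit 0 is the left-most factor\n",
    "            kron_prod = numpy.kron(kron_prod, PAULI[op])\n",
    "        H_mat += coeff * kron_prod\n",
    "    return H_mat\n",
    "\n",
    "# The spectra agree regardless of the qubit-ordering convention\n",
    "print(numpy.linalg.eigh(pauli_str_to_matrix_sketch(hamiltonian, N))[0])\n",
    "print(numpy.linalg.eigh(H)[0])"
   ]
  },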
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 搭建量子神经网络(QNN)\n",
    "\n",
    "- 在实现SSVQE的过程中,我们首先需要设计量子神经网络QNN(也即参数化量子电路)。在本教程中,我们提供一个预设的适用于2量子比特的通用量子电路模板。理论上,该模板具有足够强大的表达能力可以表示任意的2-量子比特逻辑运算 [5]。具体的实现方式是需要 3个 $CNOT$ 门加上任意15个单比特旋转门 $\\in \\{R_y, R_z\\}$。\n",
    "\n",
    "- 初始化其中的变量参数,${\\bf{\\theta }}$ 代表我们量子神经网络中的参数组成的向量,一共有15个参数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "pycharm": {
     "is_executing": false,
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "THETA_SIZE = 15  # 量子神经网络中参数的数量\n",
    "\n",
    "def U_theta(theta, N):\n",
    "    \"\"\"\n",
    "    Quantum Neural Network\n",
    "    \"\"\"\n",
    "    \n",
    "    # 按照量子比特数量/网络宽度初始化量子神经网络\n",
    "    cir = UAnsatz(N)\n",
    "    \n",
    "    # 调用内置的量子神经网络模板\n",
    "    cir.universal_2_qubit_gate(theta)\n",
    "\n",
    "    # 返回量子神经网络所模拟的酉矩阵 U\n",
    "    return cir.U"
   ]
  },
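  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For readers curious about the structure behind `universal_2_qubit_gate`, the cell below sketches the 15-parameter canonical form from [5]: two layers of arbitrary single-qubit gates (3 parameters each, 12 in total) sandwiching a 3-parameter entangling core, which is the part that can be realized with the 3 $CNOT$ gates mentioned above. This is an illustrative numpy reconstruction (function names such as `universal_2_qubit_sketch` are ours), not the circuit actually compiled by Paddle Quantum, and global phases are ignored:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative numpy sketch of the canonical 15-parameter form of a general\n",
    "# two-qubit unitary (Vatan & Williams [5]); global phase is ignored\n",
    "X = numpy.array([[0, 1], [1, 0]], dtype=complex)\n",
    "Y = numpy.array([[0, -1j], [1j, 0]])\n",
    "Z = numpy.array([[1, 0], [0, -1]], dtype=complex)\n",
    "\n",
    "def rz(t):\n",
    "    return numpy.array([[numpy.exp(-0.5j * t), 0], [0, numpy.exp(0.5j * t)]])\n",
    "\n",
    "def ry(t):\n",
    "    c, s = numpy.cos(t / 2), numpy.sin(t / 2)\n",
    "    return numpy.array([[c, -s], [s, c]], dtype=complex)\n",
    "\n",
    "def single_qubit(a, b, c):\n",
    "    # arbitrary single-qubit gate (up to a phase): Rz(a) Ry(b) Rz(c)\n",
    "    return rz(a) @ ry(b) @ rz(c)\n",
    "\n",
    "def entangling_core(alpha, beta, gamma):\n",
    "    # exp(i(alpha XX + beta YY + gamma ZZ)); the three terms commute, and each\n",
    "    # factor equals cos(t) I + i sin(t) P because P squares to the identity\n",
    "    core = numpy.eye(4, dtype=complex)\n",
    "    for t, P in zip([alpha, beta, gamma], [X, Y, Z]):\n",
    "        PP = numpy.kron(P, P)\n",
    "        core = core @ (numpy.cos(t) * numpy.eye(4) + 1j * numpy.sin(t) * PP)\n",
    "    return core\n",
    "\n",
    "def universal_2_qubit_sketch(p):\n",
    "    # p holds 15 parameters: 4 x 3 for the single-qubit layers + 3 for the core\n",
    "    before = numpy.kron(single_qubit(*p[0:3]), single_qubit(*p[3:6]))\n",
    "    after = numpy.kron(single_qubit(*p[6:9]), single_qubit(*p[9:12]))\n",
    "    return after @ entangling_core(*p[12:15]) @ before\n",
    "\n",
    "U_demo = universal_2_qubit_sketch(numpy.random.rand(15) * 2 * numpy.pi)\n",
    "print(numpy.round(U_demo.conj().T @ U_demo, 6))  # unitarity check: identity matrix"
   ]
  },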
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 配置训练模型 - 损失函数\n",
    "\n",
    "- 现在我们已经有了数据和量子神经网络的架构,我们将进一步定义训练参数、模型和损失函数,具体的理论可以参考 [4].\n",
    "- 通过作用量子神经网络 $U(\\theta)$ 在1组正交的初始态上 (方便起见,可以取计算基 $\\{|00\\rangle, |01\\rangle, |10\\rangle, |11\\rangle \\}$),我们将分别得到四个输出态 $\\left| {\\psi_k \\left( {\\bf{\\theta }} \\right)} \\right\\rangle (k=0,1,2,3)$。\n",
    "- 进一步,在SSVQE模型中的损失函数一般由哈密顿量H与量子态 $\\left| {\\psi_k \\left( {\\bf{\\theta }} \\right)} \\right\\rangle$ 的内积的加权求和给出。\n",
    "- 具体的损失函数定义为\n",
    "$$\\mathcal{L}(\\boldsymbol{\\theta}) = 4\\left\\langle {\\psi_1 \\left( {\\bf{\\theta }} \\right)} \\right|H\\left| {\\psi_1 \\left( {\\bf{\\theta }} \\right)} \\right\\rangle + 3\\left\\langle {\\psi_2 \\left( {\\bf{\\theta }} \\right)} \\right|H\\left| {\\psi_2 \\left( {\\bf{\\theta }} \\right)} \\right\\rangle + 2\\left\\langle {\\psi_3 \\left( {\\bf{\\theta }} \\right)} \\right|H\\left| {\\psi_3 \\left( {\\bf{\\theta }} \\right)} \\right\\rangle + \\left\\langle {\\psi_4 \\left( {\\bf{\\theta }} \\right)} \\right|H\\left| {\\psi_4 \\left( {\\bf{\\theta }} \\right)} \\right\\rangle.$$"
   ]
  },
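  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Why does this weighting work? Writing $|\\psi_k(\\boldsymbol{\\theta})\\rangle = U(\\boldsymbol{\\theta})|k\\rangle$, the loss is a weighted sum of the diagonal entries of $U^\\dagger H U$,\n",
    "$$\\mathcal{L}(\\boldsymbol{\\theta}) = \\sum_{k=0}^{3} w_k \\langle k|U^\\dagger H U|k\\rangle, \\qquad (w_0, w_1, w_2, w_3) = (4, 3, 2, 1).$$\n",
    "Informally, the minimum over unitaries is reached when $U^\\dagger H U$ is diagonal, and since the weights are strictly decreasing, pairing the largest weight with the smallest eigenvalue (and so on) gives the smallest value, as in the rearrangement inequality. At the optimum we therefore expect $\\langle \\psi_k|H|\\psi_k\\rangle = E_k$ with $E_0 \\le E_1 \\le E_2 \\le E_3$, i.e. the four output states converge to the ground state and the first three excited states in order; see [4] for the rigorous treatment."
   ]
  },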
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "pycharm": {
     "is_executing": false,
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "class Net(fluid.dygraph.Layer):\n",
    "    \"\"\"\n",
    "    Construct the model net\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, shape, param_attr=fluid.initializer.Uniform(low=0.0, high=2 * numpy.pi, seed=SEED),\n",
    "                 dtype='float64'):\n",
    "        super(Net, self).__init__()\n",
    "        \n",
    "        # 初始化 theta 参数列表,并用 [0, 2*pi] 的均匀分布来填充初始值\n",
    "        self.theta = self.create_parameter(shape=shape, attr=param_attr, dtype=dtype, is_bias=False)\n",
    "    \n",
    "    # 定义损失函数和前向传播机制\n",
    "    def forward(self, H, N):\n",
    "        \n",
    "        # 施加量子神经网络\n",
    "        U = U_theta(self.theta, N)\n",
    "        \n",
    "        # 计算损失函数\n",
    "        loss_struct = matmul(matmul(dagger(U), H), U).real\n",
    "\n",
    "        # 输入计算基去计算每个子期望值,相当于取 U^dagger*H*U 的对角元 \n",
    "        loss_components = [\n",
    "            loss_struct[0][0],\n",
    "            loss_struct[1][1],\n",
    "            loss_struct[2][2],\n",
    "            loss_struct[3][3]\n",
    "        ]\n",
    "        \n",
    "        # 最终加权求和后的损失函数\n",
    "        loss = 4 * loss_components[0] + 3 * loss_components[1] + 2 * loss_components[2] + 1 * loss_components[3]\n",
    "        \n",
    "        return loss, loss_components"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 配置训练模型 - 模型参数\n",
    "\n",
    "在进行量子神经网络的训练之前,我们还需要进行一些训练的超参数设置,主要是学习速率 (LR, learning rate)、迭代次数(ITR, iteration)和量子神经网络计算模块的深度 (D, Depth)。这里我们设定学习速率为0.3, 迭代次数为50次。读者不妨自行调整来直观感受下超参数调整对训练效果的影响。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "pycharm": {
     "is_executing": false,
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "ITR = 50 # 设置训练的总迭代次数\n",
    "LR = 0.3 # 设置学习速率"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 进行训练\n",
    "\n",
    "- 当训练模型的各项参数都设置完成后,我们将数据转化为Paddle动态图中的变量,进而进行量子神经网络的训练。\n",
    "- 过程中我们用的是Adam Optimizer,也可以调用Paddle中提供的其他优化器。\n",
    "- 我们将训练过程中的每一轮loss打印出来。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "pycharm": {
     "is_executing": false,
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "iter: 10 loss: -3.8289\n",
      "iter: 20 loss: -3.9408\n",
      "iter: 30 loss: -3.9473\n",
      "iter: 40 loss: -3.9480\n",
      "iter: 50 loss: -3.9481\n"
     ]
    }
   ],
   "source": [
    "# 初始化paddle动态图机制\n",
    "with fluid.dygraph.guard():\n",
    "    \n",
    "    # 我们需要将 Numpy array 转换成 Paddle 动态图模式中支持的 variable\n",
    "    hamiltonian = fluid.dygraph.to_variable(H)\n",
    "\n",
    "    # 确定网络的参数维度\n",
    "    net = Net(shape=[THETA_SIZE])\n",
    "\n",
    "    # 一般来说,我们利用Adam优化器来获得相对好的收敛,当然你可以改成SGD或者是RMS prop.\n",
    "    opt = fluid.optimizer.AdagradOptimizer(learning_rate=LR, parameter_list=net.parameters())\n",
    "\n",
    "    # 优化循环\n",
    "    for itr in range(1, ITR + 1):\n",
    "        \n",
    "        # 前向传播计算损失函数并返回估计的能谱\n",
    "        loss, loss_components = net(hamiltonian, N)\n",
    "        \n",
    "        # 在动态图机制下,反向传播极小化损失函数\n",
    "        loss.backward()\n",
    "        opt.minimize(loss)\n",
    "        net.clear_gradients()\n",
    "        \n",
    "        # 打印训练结果\n",
    "        if itr % 10 == 0:\n",
    "            print('iter:', itr, 'loss:', '%.4f' % loss.numpy()[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 测试效果\n",
    "我们现在已经完成了量子神经网络的训练,我们将通过与理论值的对比来测试效果。\n",
    "- 理论值由numpy中的工具来求解哈密顿量的各个特征值;\n",
    "- 我们将训练QNN得到的各个能级的能量和理想情况下的理论值进行比对。\n",
    "- 可以看到,SSVQE训练输出的值与理想值高度接近。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "pycharm": {
     "is_executing": false,
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The estimated ground state energy is:  [-1.07559121]\n",
      "The theoretical ground state energy:  -1.0756323552750124\n",
      "The estimated 1st excited state energy is:  [-0.72131763]\n",
      "The theoretical 1st excited state energy:  -0.7213113808180259\n",
      "The estimated 2nd excited state energy is:  [0.72132372]\n",
      "The theoretical 2nd excited state energy:  0.7213113808180256\n",
      "The estimated 3rd excited state energy is:  [1.07558513]\n",
      "The theoretical 3rd excited state energy:  1.0756323552750122\n"
     ]
    }
   ],
   "source": [
    "print('The estimated ground state energy is: ', loss_components[0].numpy())\n",
    "print('The theoretical ground state energy: ', numpy.linalg.eigh(H)[0][0])\n",
    "\n",
    "print('The estimated 1st excited state energy is: ', loss_components[1].numpy())\n",
    "print('The theoretical 1st excited state energy: ', numpy.linalg.eigh(H)[0][1])\n",
    "\n",
    "print('The estimated 2nd excited state energy is: ', loss_components[2].numpy())\n",
    "print('The theoretical 2nd excited state energy: ', numpy.linalg.eigh(H)[0][2])\n",
    "\n",
    "print('The estimated 3rd excited state energy is: ', loss_components[3].numpy())\n",
    "print('The theoretical 3rd excited state energy: ', numpy.linalg.eigh(H)[0][3])"
   ]
  },
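  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an additional sanity check, the argument above says the theoretical minimum of the loss is $4E_0 + 3E_1 + 2E_2 + E_3$. With the spectrum printed above this evaluates to about $-3.948$, matching the final training loss. A small illustrative numpy check (reusing the `H` defined earlier):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check: the minimum achievable loss is the weighted sum of the sorted eigenvalues\n",
    "eigenvalues = numpy.linalg.eigh(H)[0]\n",
    "min_loss = numpy.dot([4, 3, 2, 1], eigenvalues)\n",
    "print('Theoretical minimum of the loss: %.4f' % min_loss)\n",
    "print('Final training loss:             %.4f' % loss.numpy()[0])"
   ]
  },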
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "## 参考文献\n",
    "\n",
    "[1] [Peruzzo, A. et al. A variational eigenvalue solver on a photonic quantum processor. Nat. Commun. 5, 4213 (2014).](https://www.nature.com/articles/ncomms5213)\n",
    "\n",
    "[2] [McArdle, S., Endo, S., Aspuru-Guzik, A., Benjamin, S. C. & Yuan, X. Quantum computational chemistry. Rev. Mod. Phys. 92, 015003 (2020).](https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.92.015003)\n",
    "\n",
    "[3] [Cao, Y. et al. Quantum chemistry in the age of quantum computing. Chem. Rev. 119, 10856–10915 (2019).](https://pubs.acs.org/doi/abs/10.1021/acs.chemrev.8b00803)\n",
    "\n",
    "[4] [Nakanishi, K. M., Mitarai, K. & Fujii, K. Subspace-search variational quantum eigensolver for excited states. Phys. Rev. Res. 1, 033062 (2019).](https://journals.aps.org/prresearch/pdf/10.1103/PhysRevResearch.1.033062)\n",
    "\n",
    "[5] [Vatan, F. & Williams, C. Optimal quantum circuits for general two-qubit gates. Phys. Rev. A 69, 032315 (2004).](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.69.032315)\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.7"
  },
  "pycharm": {
   "stem_cell": {
    "cell_type": "raw",
    "metadata": {
     "collapsed": false
    },
    "source": []
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}