{ "cells": [ { "cell_type": "markdown", "id": "8a69d69c", "metadata": {}, "source": [ "# 变分量子电路编译" ] }, { "cell_type": "markdown", "id": "d17b0cc0", "metadata": {}, "source": [ " Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. " ] }, { "cell_type": "markdown", "id": "596b716a", "metadata": {}, "source": [ "## 概览" ] }, { "cell_type": "markdown", "id": "07a15fd1", "metadata": {}, "source": [ "变分量子电路编译是一个通过优化参数化量子电路来模拟未知酉算子的过程,本教程我们将讨论两种未知酉算子的情况:一是给定酉算子 $U$ 的矩阵形式;二是给定一个实现 $U$ 的黑箱,可以将 $U$ 接入电路使用但不允许访问其内部构造。针对不同形式的 $U$,我们利用量桨构建量子线路训练损失函数,分别得到 $U$ 的近似酉算子 $V(\\vec{\\theta})$(这里我们用 $V(\\vec{\\theta})$ 表示参数化量子门序列所表示的酉算子,为简单起见下文我们用 $V$ 来表示)和 $V^{\\dagger}$ 的量子线路,并根据经过 $U$ 和 $V$ 演化后的量子态的迹距离对结果进行评估。" ] }, { "cell_type": "markdown", "id": "166a3d1c", "metadata": {}, "source": [ "## 背景\n", "\n", "经典计算机早期的编译过程是将二进制数字转变为高低电平驱动计算机的电子器件进行运算,随后逐渐发展为便于处理书写的汇编语言;与经典计算机类似,对于量子计算机而言,量子编译就是将量子算法中的酉变换转化为一系列量子门序列从而实现量子算法的过程。目前含噪的中等规模量子 (Noisy Intermediate-Scale Quantum, NISQ) 设备存在诸如在量子比特数量、电路深度等方面的限制,这些限制给量子编译算法带来了巨大挑战。文献 [1] 提出了一种量子编译算法——量子辅助量子编译算法 (Quantum-Assisted Quantum Compiling, QAQC),能够有效地在 NISQ 设备上实现。QAQC 的目的是将未知的目标酉算子 $U$ 编译成可训练的参数化量子门序列,利用门保真度定义损失函数,通过设计变分量子电路不断优化损失函数,得到近似目标酉算子 $U$ 的 $V$,但如何衡量两个酉算子的近似程度呢?这里我们考虑 $V$ 的酉演化能够模拟 $U$ 酉演化的概率,即对输入态 $|\\psi\\rangle$,$U|\\psi\\rangle$ 和 $V|\\psi\\rangle$ 的重叠程度,也就是哈尔( Haar )分布上的保真度平均值:\n", "\n", "$$\n", "F(U,V)=\\int_{\\psi}|\\langle\\psi|V^{\\dagger}U|\\psi\\rangle|^2d\\psi,\n", "\\tag{1}\n", "$$\n", "\n", "当 $F(U,V)=1$ 时,存在$\\phi$,$V=e^{i\\phi}U$,即两个酉算子相差一个全局相位因子,此时我们称 $V$ 为 $U$ 的精确编译;当 $F(U,V)\\geq 1-\\epsilon$ 时,我们称 $V$ 为 $U$ 的近似编译,其中 $\\epsilon\\in[0,1]$ 为误差。基于此,我们可以构造以下的损失函数:\n", "\n", "$$\n", "\\begin{aligned} C(U,V)&=\\frac{d+1}{d}(1-F(U,V))\\\\\n", "&=1-\\frac{1}{d^2}|\\langle V,U\\rangle|^2\\\\\n", "&=1-\\frac{1}{d^2}|\\text{tr}(V^{\\dagger} U)|^2,\n", "\\end{aligned}\n", "\\tag{2}\n", "$$\n", "\n", "其中 $n$ 为量子比特数,$d=2^n$,$\\frac{1}{d^2}|\\text{tr}(V^{\\dagger} U)|^2$ 也被称为门保真度。\n", "\n", "由 (2) 式可得当且仅当 $F(U,V)=1$ 时,$C(V,U)=0$ ,因此我们通过训练一系列门序列来最小化损失函数,从而得到近似目标酉算子 $U$ 的 $V$。" ] }, { "cell_type": "markdown", "id": "189c9359", "metadata": {}, "source": [ "## 第一种情况 —— 矩阵形式的 $U$\n", "\n", "下面我们先分析已知 $U$ 矩阵形式的情况,以 Toffoli 门为例,已知其矩阵表示为 $U_0$,搭建量子神经网络(即参数化量子电路)通过训练优化得到 $U_0$ 的近似电路分解。\n", "\n", "我们在量桨中实现上述过程,首先引入需要的包:" ] }, { "cell_type": "code", "execution_count": 17, "id": "ae8f2fdb", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import paddle\n", "import paddle_quantum\n", "from paddle_quantum.ansatz import Circuit\n", "from paddle_quantum.linalg import dagger\n", "from paddle_quantum.state import random_state, zero_state" ] }, { "cell_type": "markdown", "id": "83d5d086", "metadata": {}, "source": [ "接下来将 Toffoli 门的矩阵形式 $U_0$ 输入电路中:" ] }, { "cell_type": "code", "execution_count": 18, "id": "4663732b", "metadata": { "scrolled": true }, "outputs": [], "source": [ "n = 3 # 设定量子比特数\n", "\n", "# 输入 Toffoli 门的矩阵形式\n", "U_0 = paddle.to_tensor(np.matrix([[1, 0, 0, 0, 0, 0, 0, 0],\n", " [0, 1, 0, 0, 0, 0, 0, 0],\n", " [0, 0, 1, 0, 0, 0, 0, 0],\n", " [0, 0, 0, 1, 0, 0, 0, 0],\n", " [0, 0, 0, 0, 1, 0, 0, 0],\n", " [0, 0, 0, 0, 0, 1, 0, 0],\n", " [0, 0, 0, 0, 0, 0, 0, 1],\n", " [0, 0, 0, 0, 0, 0, 1, 0]],\n", " dtype=\"float32\"))" ] }, { "cell_type": "markdown", "id": "bfc61ad3", "metadata": {}, "source": [ "### 搭建量子电路\n", "\n", "不同的量子神经网络(Quantum Neural Network, QNN)表达能力不同,此处我们选择的是量桨中内置的 `complex_entangled_layer()` 模板构造表达能力较强的电路模板搭建 QNN:" ] }, { "cell_type": "code", "execution_count": 19, "id": "4e400e2e", "metadata": {}, "outputs": [], "source": [ "# 构建量子电路\n", "def qcircuit(n, D):\n", " # 初始化 n 个量子比特的量子电路\n", " cir = Circuit(n)\n", " # 内置的包含 U3 门和 CNOT 门的强纠缠电路模板\n", " cir.complex_entangled_layer('full', D)\n", "\n", " return cir" ] }, { "cell_type": "markdown", "id": "d97698f3", "metadata": {}, "source": [ "### 配置训练模型 —— 损失函数\n", "\n", "接下来进一步定义损失函数 $C(U,V) = 1-\\frac{1}{d^2}|\\text{tr}(V^{\\dagger} U)|^2$ 和训练参数。" ] }, { "cell_type": "code", "execution_count": 20, "id": "29c9ed4a", "metadata": {}, "outputs": [], "source": [ "# 定义损失函数\n", "def loss_func(cir, target_u):\n", " # 量子电路的矩阵表示\n", " V = cir.unitary_matrix()\n", " # 直接构造 (1) 式为损失函数\n", " loss = 1 - (dagger(V).matmul(target_u).trace().abs() / V.shape[0]) ** 2\n", "\n", " return loss" ] }, { "cell_type": "markdown", "id": "8229500e", "metadata": {}, "source": [ "### 配置训练模型 —— 模型参数\n", "\n", "对 QNN 进行训练前,我们还需要进行一些训练的超参数设置,主要是 QNN 计算模块的层数 $D$、学习速率 LR 以及训练的总迭代次数 ITR。此处我们设置学习速率为 0.2,迭代次数为 150 次。读者可以自行调整来直观感受下超参数调整对训练效果的影响。" ] }, { "cell_type": "code", "execution_count": 21, "id": "4046e5b0", "metadata": {}, "outputs": [], "source": [ "D = 5 # 量子电路的层数\n", "LR = 0.2 # 基于梯度下降的优化方法的学习率\n", "ITR = 150 # 训练的总迭代次数" ] }, { "cell_type": "markdown", "id": "fcde30e5", "metadata": {}, "source": [ "### 进行训练" ] }, { "cell_type": "code", "execution_count": 22, "id": "7103bf1c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "iter: 30 loss: 0.0230\n", "iter: 60 loss: 0.0006\n", "iter: 90 loss: 0.0001\n", "iter: 120 loss: 0.0000\n", "iter: 150 loss: -0.0000\n", "\n", "训练后的电路:\n", "--U----*---------x----U----*---------x----U----*---------x----U----*---------x----U----*---------x--\n", " | | | | | | | | | | \n", "--U----x----*----|----U----x----*----|----U----x----*----|----U----x----*----|----U----x----*----|--\n", " | | | | | | | | | | \n", "--U---------x----*----U---------x----*----U---------x----*----U---------x----*----U---------x----*--\n", " \n", "优化后的参数 theta:\n", " [[[[1.571 4.712 0.313]\n", " [0. 4.596 4.043]\n", " [6.283 1.43 1.599]]]\n", "\n", "\n", " [[[4.712 1.571 4.825]\n", " [1.571 1.571 3.142]\n", " [3.141 6.437 6.437]]]\n", "\n", "\n", " [[[1.571 3.927 0. ]\n", " [1.571 3.927 3.141]\n", " [0. 2.73 4.338]]]\n", "\n", "\n", " [[[1.571 1.571 3.927]\n", " [4.713 4.712 2.356]\n", " [3.142 5.064 5.537]]]\n", "\n", "\n", " [[[4.712 4.825 3.141]\n", " [0. 0.609 3.991]\n", " [3.142 0.438 5.15 ]]]]\n" ] } ], "source": [ "cir = qcircuit(n, D)\n", "# 使用 Adam 优化器\n", "opt = paddle.optimizer.Adam(learning_rate=LR, parameters=cir.parameters())\n", "\n", "# 进行迭代\n", "for itr in range(1, ITR + 1):\n", " loss = loss_func(cir, U_0)\n", " loss.backward()\n", " opt.minimize(loss)\n", " opt.clear_grad()\n", "\n", " if itr % 30 == 0:\n", " print(\"iter:\", itr, \"loss:\", \"%.4f\" % loss.numpy())\n", " if itr == ITR:\n", " print(\"\\n训练后的电路:\")\n", " print(cir)\n", "\n", "theta_opt = cir.parameters()\n", "print(\"优化后的参数 theta:\\n\", np.around(theta_opt, decimals=3))" ] }, { "cell_type": "markdown", "id": "ca9f9196", "metadata": {}, "source": [ "当已知目标酉算子的矩阵形式时,根据迭代过程及测试结果我们可以看到以 Toffoli 门的矩阵形式为例,搭建五层量子神经网络进行训练,迭代 150 次左右时,损失函数达到 0。" ] }, { "cell_type": "markdown", "id": "0861f2ce", "metadata": {}, "source": [ "### 结果验证\n", "\n", "下面我们随机选取 10 个密度矩阵,分别经过目标酉算子 $U$ 和近似酉算子 $V$ 的演化,计算真实的输出 `real_output` 和近似的输出 `simulated_output` 之间的迹距离 $ d(\\rho, \\sigma) = \\frac{1}{2}\\text{tr}\\sqrt{(\\rho-\\sigma)^{\\dagger}(\\rho-\\sigma)}$,迹距离越小,说明近似效果越好。" ] }, { "cell_type": "code", "execution_count": 23, "id": "12678dff", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "sample: 1 :\n", " trace distance is 0.00017\n", "sample: 2 :\n", " trace distance is 0.00018\n", "sample: 3 :\n", " trace distance is 0.00017\n", "sample: 4 :\n", " trace distance is 0.00013\n", "sample: 5 :\n", " trace distance is 0.00022\n", "sample: 6 :\n", " trace distance is 0.00013\n", "sample: 7 :\n", " trace distance is 0.00015\n", "sample: 8 :\n", " trace distance is 0.00014\n", "sample: 9 :\n", " trace distance is 0.00014\n", "sample: 10 :\n", " trace distance is 0.00013\n" ] } ], "source": [ "s = 10 # 定义随机生成密度矩阵的数量\n", "\n", "for i in range(s):\n", " paddle_quantum.set_backend('density_matrix') # 切换至密度矩阵模式\n", " sampled = random_state(3) # 随机生成 3 量子比特的密度矩阵 sampled\n", " simulated_output = paddle.matmul(paddle.matmul(cir.unitary_matrix(), sampled.data), dagger(cir.unitary_matrix())) # sampled 经过近似酉算子演化后的结果\n", " real_output = paddle.matmul(paddle.matmul(paddle.to_tensor(U_0), sampled.data), dagger(paddle.to_tensor(U_0))) # sampled 经过目标酉算子演化后的结果\n", " print('sample:', i + 1, ':')\n", " A = simulated_output.numpy() - real_output.numpy()\n", " d = 1 / 2 * np.sum(np.abs(np.linalg.eigvals(A)))\n", " print(' trace distance is', np.around(d, decimals=5)) # 输出两种结果间的迹距离\n" ] }, { "cell_type": "markdown", "id": "567a77a3", "metadata": {}, "source": [ "可以看到各个样本分别经过 $U$ 和 $V$ 的演化后迹距离都接近 0, 说明 $V$ 近似 $U$ 的效果很好。" ] }, { "cell_type": "markdown", "id": "f2f3d7d5", "metadata": {}, "source": [ "## 第二种情况 —— 线路形式的 $U$\n", "\n", "第二种情况下,我们假设 $U$ 以黑盒的形式给出,其保真度不能再直接计算,因此它需要通过一个电路来计算。接下来我们将演示如何用量子电路计算保真度。\n", "\n", "### 利用量子电路图计算保真度" ] }, { "cell_type": "markdown", "id": "7a62ac66", "metadata": {}, "source": [ "在实现 QAQC 的过程中,我们需要设计量子电路图来训练损失函数。QAQC 的 QNN 是嵌套在一个更大的量子电路中,整个量子电路如下图所示,其中 $U$ 表示需要近似的酉算子,$V^{\\dagger}$ 是我们要训练的 QNN。这里我们利用 Toffoli 门作为黑箱。\n", "\n", "![circuit](./figures/vqcc-fig-circuit.png \"图1: QAQC 量子电路图 [1]。\")\n", "
图1: QAQC 量子电路图 [1]。
\n", "\n", "电路总共需要 $2n$ 量子比特,我们称前 $n$ 个量子比特为系统 $A$,后 $n$ 个为系统 $B$,整个电路涉及以下三步:\n", "\n", "- 首先通过通过 Hadamard 门和 CNOT 门操作生成 $A、B$ 的最大纠缠态;\n", "- 然后对 $A$ 系统进行 $U$ 操作,$B$ 系统执行 $V^{\\dagger}$(即 $V$ 的复共轭);\n", "- 最后恢复第一步中的操作并在标准基下测量(也可以理解为在贝尔基下测量)。\n", "\n", "经过上述操作,测量得到的全零态的概率即为 $\\frac{1}{d^2}|\\text{tr}(V^{\\dagger} U)|^2$,关于图 1 的详细解释请参考文献 [1]。" ] }, { "cell_type": "markdown", "id": "44f2ea35", "metadata": {}, "source": [ "这里的 QNN 我们依旧采用第一种情况中的电路,黑箱为 Toffoli 门。\n", "\n", "下面我们在量桨上实现近似编译酉算子的过程:" ] }, { "cell_type": "code", "execution_count": 24, "id": "d6852694", "metadata": {}, "outputs": [], "source": [ "n = 3 # 设定量子比特数\n", "\n", "# 构建量子电路\n", "def qcircuit(n, D):\n", " \n", " # 初始化 2n 个量子比特的量子电路\n", " cir = Circuit(2 * n)\n", " cir.h(list(range(n)))\n", " for i in range(n):\n", " cir.cnot([i, n + i])\n", " \n", " # 构建 U 的电路\n", " cir.toffoli([0, 1, 2])\n", " \n", " # 构建 QNN\n", " cir.complex_entangled_layer([3, 4, 5], D)\n", "\n", " for l in range(n):\n", " cir.cnot([n - 1 - l, 2 * n - 1 - l])\n", " cir.h(list(range(n)))\n", "\n", " return cir" ] }, { "cell_type": "markdown", "id": "f0d141c7", "metadata": {}, "source": [ "### 配置训练模型 —— 损失函数\n", "\n", "接下来进一步定义损失函数 $C(U,V) = 1-\\frac{1}{d^2}|\\text{tr}(V^{\\dagger} U)|^2$ 和训练参数:" ] }, { "cell_type": "code", "execution_count": 25, "id": "7ee10c61", "metadata": {}, "outputs": [], "source": [ "# 定义损失函数\n", "def loss_func(cir):\n", " paddle_quantum.set_backend('density_matrix')\n", " # 输出经过线路后量子态的密度矩阵 rho\n", " init_state = zero_state(2 * n)\n", " rho = cir(init_state)\n", " # 计算损失函数 loss,其中输出密度矩阵的第一个元素即为全零态的概率\n", " loss = 1 - paddle.real(rho.data[0][0])\n", "\n", " return loss " ] }, { "cell_type": "markdown", "id": "5bfa7b24", "metadata": {}, "source": [ "### 配置训练模型 —— 模型参数\n", "\n", "我们设置学习速率为 0.1,迭代次数为 120 次。同样读者可以自行调整来直观感受下超参数调整对训练效果的影响。" ] }, { "cell_type": "code", "execution_count": 26, "id": "280e2858", "metadata": {}, "outputs": [], "source": [ "D = 5 # 量子电路的层数\n", "LR = 0.1 # 基于梯度下降的优化方法的学习率\n", "ITR = 120 #训练的总迭代次数" ] }, { "cell_type": "markdown", "id": "bb400510", "metadata": {}, "source": [ "### 进行训练\n", "\n", "设置完训练模型的各项参数后,我们使用 Adam 优化器进行 QNN 的训练。" ] }, { "cell_type": "code", "execution_count": 27, "id": "77919a34", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/zl/miniconda3/envs/pq/lib/python3.8/site-packages/paddle/tensor/creation.py:125: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. \n", "Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\n", " if data.dtype == np.object:\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "iter: 20 loss: 0.1251\n", "iter: 40 loss: 0.0174\n", "iter: 60 loss: 0.0019\n", "iter: 80 loss: 0.0002\n", "iter: 100 loss: 0.0000\n", "iter: 120 loss: 0.0000\n", "\n", "训练后的电路:\n", "--H----*--------------*-------------------------------------------------------------------------------------------------------------*----H--\n", " | | | \n", "--H----|----*---------*--------------------------------------------------------------------------------------------------------*----|----H--\n", " | | | | | \n", "--H----|----|----*----X---------------------------------------------------------------------------------------------------*----|----|----H--\n", " | | | | | | \n", "-------x----|----|----U----*---------x----U----*---------x----U----*---------x----U----*---------x----U----*---------x----|----|----x-------\n", " | | | | | | | | | | | | | | \n", "------------x----|----U----x----*----|----U----x----*----|----U----x----*----|----U----x----*----|----U----x----*----|----|----x------------\n", " | | | | | | | | | | | | \n", "-----------------x----U---------x----*----U---------x----*----U---------x----*----U---------x----*----U---------x----*----x-----------------\n", " \n" ] } ], "source": [ "cir = qcircuit(n, D)\n", "# 使用 Adam 优化器\n", "opt = paddle.optimizer.Adam(learning_rate=LR, parameters=cir.parameters())\n", "\n", "# 进行迭代\n", "for itr in range(1, ITR + 1):\n", " loss = loss_func(cir)\n", " loss.backward()\n", " opt.minimize(loss)\n", " opt.clear_grad()\n", "\n", " if itr % 20 == 0:\n", " print(\"iter:\", itr, \"loss:\", \"%.4f\" % loss.numpy())\n", " if itr == ITR:\n", " print(\"\\n训练后的电路:\")\n", " print(cir)" ] }, { "cell_type": "code", "execution_count": 28, "id": "df546ac5", "metadata": {}, "outputs": [], "source": [ "# 存储优化后的参数\n", "theta_opt = cir.parameters()" ] }, { "cell_type": "markdown", "id": "c65d72dd", "metadata": {}, "source": [ "当能够将 $U$ 的电路接入图 1 时,根据迭代过程及测试结果我们可以看到以 Toffoli 门为例,搭建一层量子神经网络进行训练,迭代 100 次左右时,损失函数趋近 0。" ] }, { "cell_type": "markdown", "id": "040bf7d5", "metadata": {}, "source": [ "### 结果验证\n", "\n", "与之前类似,我们同样随机选取 10 个密度矩阵,分别经过目标酉算子 $U$ 和近似酉算子 $V$ 的演化,计算真实的输出 `real_output` 和近似的输出 `simulated_output` 之间的迹距离,迹距离越小,说明近似效果越好。" ] }, { "cell_type": "code", "execution_count": 30, "id": "0c339ca3", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "sample 1 :\n", " trace distance is 0.20668\n", "sample 2 :\n", " trace distance is 0.18735\n", "sample 3 :\n", " trace distance is 0.24465\n", "sample 4 :\n", " trace distance is 0.1156\n", "sample 5 :\n", " trace distance is 0.27939\n", "sample 6 :\n", " trace distance is 0.18128\n", "sample 7 :\n", " trace distance is 0.22863\n", "sample 8 :\n", " trace distance is 0.25865\n", "sample 9 :\n", " trace distance is 0.33706\n", "sample 10 :\n", " trace distance is 0.16921\n" ] } ], "source": [ "s = 10 # 定义随机生成密度矩阵的数量\n", "\n", "for i in range(s):\n", " paddle_quantum.set_backend('density_matrix') # 切换至密度矩阵模式 \n", " sampled = random_state(3) # 随机生成 3 量子比特的密度矩阵 sampled\n", " # 构造目标酉算子对应的电路\n", " cir_1 = Circuit(3)\n", " cir.toffoli([0, 1, 2])\n", " # sampled 经过目标酉算子演化后的结果\n", " real_output = paddle.matmul(paddle.matmul(cir_1.unitary_matrix(), sampled.data), dagger(cir_1.unitary_matrix()))\n", "\n", " # 构造近似酉算子对应的电路\n", " cir_2 = Circuit(3)\n", " for j in range(D):\n", " cir_2.u3(qubits_idx='full', param=theta_opt[j])\n", " cir_2.cnot(qubits_idx='cycle')\n", " # sampled 经过近似酉算子演化后的结果\n", " simulated_output = paddle.matmul(paddle.matmul(cir_2.unitary_matrix(), sampled.data), dagger(cir_2.unitary_matrix()))\n", "\n", " A = simulated_output.numpy() - real_output.numpy()\n", " d = 1 / 2 * np.sum(np.abs(np.linalg.eigvals(A)))\n", " print('sample', i + 1, ':')\n", " print(' trace distance is', np.around(d, decimals=5)) # 输出两种结果间的迹距离" ] }, { "cell_type": "markdown", "id": "e8b3d6c4", "metadata": {}, "source": [ "可以看到各个样本分别经过 $U$ 和 $V$ 的演化后迹距离都接近 0, 说明 $V$ 近似 $U$ 的效果很好。" ] }, { "cell_type": "markdown", "id": "afe8ea7c", "metadata": {}, "source": [ "## 总结\n", "\n", "本教程从目标酉算子输入形式为矩阵和电路形式两种情况对其进行量子编译,通过两个简单的例子利用量桨展示了量子编译的效果,并随机生成量子态密度矩阵,对分别经过目标酉算子与近似酉算子演化后的结果求迹距离检验近似效果,结果表明量子编译效果较好。" ] }, { "cell_type": "markdown", "id": "fc610a2f", "metadata": {}, "source": [ "_______\n", "\n", "## 参考文献" ] }, { "cell_type": "markdown", "id": "62dd936d", "metadata": {}, "source": [ "[1] Khatri, Sumeet, et al. \"Quantum-assisted quantum compiling.\" [Quantum 3 (2019): 140](https://quantum-journal.org/papers/q-2019-05-13-140/)." ] } ], "metadata": { "interpreter": { "hash": "f7cfecff1ef1940b21a48efa1b88278bb096bd916f13c2df11af4810c38b47e1" }, "kernelspec": { "display_name": "Python 3.8.0 ('pq')", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" } }, "nbformat": 4, "nbformat_minor": 5 }