{ "cells": [ { "cell_type": "markdown", "id": "8a69d69c", "metadata": {}, "source": [ "# Variational Quantum Circuit Compiling" ] }, { "cell_type": "markdown", "id": "a24d6cf3", "metadata": {}, "source": [ " Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved. " ] }, { "cell_type": "markdown", "id": "596b716a", "metadata": {}, "source": [ "## Overview" ] }, { "cell_type": "markdown", "id": "07a15fd1", "metadata": {}, "source": [ "Variational quantum circuit compilation is the process of simulating an unknown unitary operator by optimizing a parameterized quantum circuit. In this tutorial we will discuss two cases of unknown unitary operators. One is that the target $U$ is given as a matrix form, the other is that the $U$ is given as a black-box. We show how to obtain the loss function in both cases in Paddle Quantum. With auto-differentiation and optimizer provided with PaddlePaddle, we could easily approximate $U$ into a trainable sequence of quantum gates (here we use $V(\\vec{\\theta})$ to denote the unitary operator represented by the sequence of parameterized quantum gates, and for simplicity, we use $V$ below). Finally, we validate the optimized circuit by comparing the trace distance of various output density matrices transformed by the approximate circuit and the target $U$." ] }, { "cell_type": "markdown", "id": "166a3d1c", "metadata": {}, "source": [ "## Background\n", "\n", "Earlier compilations of classical computers transformed binary numbers into electrical signals to drive the computer's electronic devices to perform operations, and then gradually developed into an assembly language for easy processing and writing. For quantum computers, similar to classical compilation, quantum compilation is a process of converting the unitary in a quantum algorithm into a series of the quantum gates to implement the algorithm. The current noisy intermediate-scale quantum (NISQ) devices have limitations such as the number of qubits, circuit depth, etc., which pose a great challenge to quantum compilation algorithms. In [1], a quantum compilation algorithm, the Quantum-assisted Quantum Compiling (QAQC), has been proposed for efficient implementation on NISQ devices. The idea of QAQC is to compile the unknown target unitary operator $U$ into the unitary $V$, define the loss function using the gate fidelity, and continuously optimize a variational quantum circuit by minimizing the loss function. But how to measure the similarity of the two unitary operators? Here we consider the probability that the unitary evolution of the $V$ can simulate the $U$, i.e., the degree of overlap between $U|\\psi\\rangle$ and $V|\\psi\\rangle$ for the input state $|\\psi\\rangle$, which is the average of the fidelity on the Haar distribution:\n", "\n", "$$\n", "F(U,V)=\\int_{\\psi}|\\langle\\psi|V^{\\dagger}U|\\psi\\rangle|^2d\\psi,\n", "\\tag{1}\n", "$$\n", "\n", "When $F(U,V)=1$, there is a $\\phi$ such that $V=e^{i\\phi}U$, i.e., the two unitary operators differ by a global phase factor, at which point we call $V$ an exact compilation of $U$. When $F(U,V)\\geq 1-\\epsilon$, we call $V$ an approximate compilation of $U$, where $\\epsilon$ is an error and $\\epsilon\\in[0,1]$. 
Based on this, we can construct the following loss function:\n", "\n", "$$\n", "\\begin{aligned} C(U,V)&=\\frac{d+1}{d}(1-F(U,V))\\\\\n", "&=1-\\frac{1}{d^2}|\\langle V,U\\rangle|^2\\\\\n", "&=1-\\frac{1}{d^2}|\\text{tr}(V^{\\dagger} U)|^2,\n", "\\end{aligned}\n", "\\tag{2}\n", "$$\n", "\n", "where $n$ is the number of qubits, $d=2^n$, and $\\frac{1}{d^2}|\\text{tr}(V^{\\dagger} U)|^2$ is the gate fidelity.\n", "\n", "From (2), we have that $C(U,V)=0$ if and only if $F(U,V)=1$, so we can obtain a $V$ that approximates the target unitary operator $U$ by training a sequence of rotation gates with adjustable angles to minimize the loss function." ] }, { "cell_type": "markdown", "id": "189c9359", "metadata": {}, "source": [ "## The First Scenario - Matrix Form of $U$\n", "\n", "In the first case, we suppose that $U$ is given in the form of a matrix. Taking the Toffoli gate as an example, we denote its matrix representation by $U_0$. We wish to construct a quantum neural network (QNN, i.e., a parameterized quantum circuit) and train it to obtain an approximate circuit decomposition of $U_0$.\n", "\n", "Let us import the necessary packages:" ] }, { "cell_type": "code", "execution_count": 1, "id": "ae8f2fdb", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import paddle\n", "from paddle_quantum.circuit import UAnsatz\n", "from paddle_quantum.utils import dagger, trace_distance\n", "from paddle_quantum.state import density_op_random" ] }, { "cell_type": "markdown", "id": "83d5d086", "metadata": {}, "source": [ "We need to get the Toffoli gate's unitary matrix:" ] }, { "cell_type": "code", "execution_count": 2, "id": "4663732b", "metadata": {}, "outputs": [], "source": [ "n = 3 # Number of qubits\n", "# The matrix form of the Toffoli gate\n", "U_0 = paddle.to_tensor(np.matrix([[1, 0, 0, 0, 0, 0, 0, 0],\n", "                                  [0, 1, 0, 0, 0, 0, 0, 0],\n", "                                  [0, 0, 1, 0, 0, 0, 0, 0],\n", "                                  [0, 0, 0, 1, 0, 0, 0, 0],\n", "                                  [0, 0, 0, 0, 1, 0, 0, 0],\n", "                                  [0, 0, 0, 0, 0, 1, 0, 0],\n", "                                  [0, 0, 0, 0, 0, 0, 0, 1],\n", "                                  [0, 0, 0, 0, 0, 0, 1, 0]],\n", "                                 dtype=\"float64\"))" ] }, { "cell_type": "markdown", "id": "bfc61ad3", "metadata": {}, "source": [ "### Constructing quantum circuits\n", "\n", "Different QNNs have different expressibility. Here we choose the `complex_entangled_layer(theta, D)` function built into Paddle Quantum to construct the QNN:" ] }, { "cell_type": "code", "execution_count": 3, "id": "4e400e2e", "metadata": {}, "outputs": [], "source": [ "# Construct the quantum circuit\n", "def Circuit(theta, n, D):\n", "    # Initialize the circuit\n", "    cir = UAnsatz(n)\n", "    # Call the built-in QNN template\n", "    cir.complex_entangled_layer(theta[:D], D)\n", "\n", "    return cir" ] }, { "cell_type": "markdown", "id": "d97698f3", "metadata": {}, "source": [ "\n", "### Setting up the training model - loss function\n", "\n", "Next we define the loss function $C(U,V) = 1-\\frac{1}{d^2}|\\text{tr}(V^{\\dagger} U)|^2$ and the training parameters in order to optimize the parameterized circuit. 
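As a quick sanity check on this formula, note that the loss vanishes for $V=U_0$, since then $|\\text{tr}(V^{\\dagger}U_0)|=|\\text{tr}(I)|=d$. 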
" ] }, { "cell_type": "code", "execution_count": 4, "id": "29c9ed4a", "metadata": {}, "outputs": [], "source": [ "# Training loss function\n", "class Net(paddle.nn.Layer):\n", " def __init__(self, shape, dtype=\"float64\", ):\n", " super(Net, self).__init__()\n", "\n", " self.theta = self.create_parameter(shape=shape,\n", " default_initializer=paddle.nn.initializer.Uniform(low=0.0, high=2 * np.pi),\n", " dtype=dtype, is_bias=False)\n", "\n", " def forward(self, n, D):\n", " # The matrix form of the circuit\n", " cir = Circuit(self.theta, n, D)\n", " V = cir.U\n", " # Construct Eq.(1) as the loss function\n", " loss =1 - (dagger(V).matmul(U_0).trace().abs() / V.shape[0]) ** 2\n", "\n", " return loss, cir " ] }, { "cell_type": "markdown", "id": "8229500e", "metadata": {}, "source": [ "### Setting up the training model - model parameters\n", "\n", "Before training the QNN, we also need to set some training hyperparameters, mainly the depth (D) of repeated blocks, the learning rate (LR), and the number of iterations (ITR). Here we set the learning rate to 0.1 and the number of iterations to 150. The reader can adjust the hyperparameters to observe the impact on the training effect." ] }, { "cell_type": "code", "execution_count": 5, "id": "4046e5b0", "metadata": {}, "outputs": [], "source": [ "D = 5 # Set the depth of QNN\n", "LR = 0.1 # Set the learning rate\n", "ITR = 150 # Set the number of optimization iterations" ] }, { "cell_type": "markdown", "id": "fcde30e5", "metadata": {}, "source": [ "### Training" ] }, { "cell_type": "code", "execution_count": 6, "id": "7103bf1c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "iter: 30 loss: 0.1627\n", "iter: 60 loss: 0.0033\n", "iter: 90 loss: 0.0001\n", "iter: 120 loss: 0.0000\n", "iter: 150 loss: 0.0000\n", "\n", "The trained circuit:\n", "--U----*---------x----U----*---------x----U----*---------x----U----*---------x----U----*---------x--\n", " | | | | | | | | | | \n", "--U----x----*----|----U----x----*----|----U----x----*----|----U----x----*----|----U----x----*----|--\n", " | | | | | | | | | | \n", "--U---------x----*----U---------x----*----U---------x----*----U---------x----*----U---------x----*--\n", " \n", "The trained parameter theta:\n", " [[[ 1.571 3.142 3.927]\n", " [ 4.713 3.142 2.355]\n", " [ 5.498 1.574 2.95 ]]\n", "\n", " [[ 3.927 6.284 1.571]\n", " [ 5.498 3.142 4.712]\n", " [ 5.498 2.961 1.571]]\n", "\n", " [[ 4.712 -1.571 6.281]\n", " [ 6.284 4.77 4.655]\n", " [ 3.143 1.93 0.359]]\n", "\n", " [[ 3.142 2.668 0.917]\n", " [ 5.403 3.144 5.496]\n", " [ 3.142 4.319 5.89 ]]\n", "\n", " [[ 1.571 0.881 1.571]\n", " [ 1.571 6.972 1.571]\n", " [ 4.712 2.452 1.57 ]]\n", "\n", " [[ 4.517 4.301 0.18 ]\n", " [ 1.329 1.815 1.277]\n", " [ 1.398 0.87 2.132]]]\n" ] } ], "source": [ "# Determine shape of parameter of the network\n", "net = Net(shape=[D + 1, n, 3])\n", "# Using Adam optimizer to obtain relatively good convergence\n", "opt = paddle.optimizer.Adam(learning_rate=LR, parameters=net.parameters())\n", "\n", "# Optimization loop\n", "for itr in range(1, ITR + 1):\n", " loss, cir = net.forward(n, D)\n", " loss.backward()\n", " opt.minimize(loss)\n", " opt.clear_grad()\n", "\n", " if itr % 30 == 0:\n", " print(\"iter:\", itr, \"loss:\", \"%.4f\" % loss.numpy())\n", " if itr == ITR:\n", " print(\"\\nThe trained circuit:\")\n", " print(cir)\n", "\n", "theta_opt = net.theta.numpy()\n", "print(\"The trained parameter theta:\\n\", np.around(theta_opt, decimals=3))" ] }, { "cell_type": "markdown", "id": 
"ca9f9196", "metadata": {}, "source": [ "In this case, we construct a five-layer QNN and train it with an Adam optimizer. After around 150 iterations, the loss function reaches 0." ] }, { "cell_type": "markdown", "id": "a5f13518", "metadata": {}, "source": [ "### Validation of results\n", "\n", "In the following, we randomly select 10 density matrices, which are evolved by the target unitary operator $U$ and the approximate unitary operator $V$. Then we calculate the trace distance $ d(\\rho, \\sigma) = \\frac{1}{2}\\text{tr}\\sqrt{(\\rho-\\sigma)^{\\dagger}(\\rho-\\sigma)}$ between the real output `real_output` $\\rho$ and the approximate output `simulated_output` $\\sigma$. The smaller the trace distance, the better the approximation effect." ] }, { "cell_type": "code", "execution_count": 7, "id": "12678dff", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "sample 1 :\n", " trace distance is 0.00054\n", "sample 2 :\n", " trace distance is 0.00047\n", "sample 3 :\n", " trace distance is 0.00047\n", "sample 4 :\n", " trace distance is 0.00046\n", "sample 5 :\n", " trace distance is 0.0005\n", "sample 6 :\n", " trace distance is 0.00043\n", "sample 7 :\n", " trace distance is 0.00054\n", "sample 8 :\n", " trace distance is 0.00049\n", "sample 9 :\n", " trace distance is 0.00045\n", "sample 10 :\n", " trace distance is 0.00045\n" ] } ], "source": [ "s = 10 # Set the number of randomly generated density matrices\n", "\n", "for i in range(s):\n", " sampled = paddle.to_tensor(density_op_random(3).astype('complex128')) # randomly generated density matrix of 3 qubits sampled\n", " simulated_output = paddle.matmul(paddle.matmul(cir.U, sampled), dagger(cir.U)) # sampled after approximate unitary evolution\n", " real_output = paddle.matmul(paddle.matmul(paddle.to_tensor(U_0), sampled), dagger(paddle.to_tensor(U_0))) # sampled after target unitary evolution\n", " print('sample', i + 1, ':')\n", " d = trace_distance(real_output.numpy(), simulated_output.numpy())\n", " print(' trace distance is', np.around(d, decimals=5)) # print trace distance\n" ] }, { "cell_type": "markdown", "id": "567a77a3", "metadata": {}, "source": [ "We can see that the trace distance of each sample after the evolution of $U$ and $V$ is close to 0, which means the $V$ approximates $U$ very well." ] }, { "cell_type": "markdown", "id": "f2f3d7d5", "metadata": {}, "source": [ "## The Second Scenario - Circuit Form of $U$\n", "\n", "In the second case, we suppose the $U$ needs approximation is given in the form of a black-box, and we only have access to its input and output. As a results, the fidelity can no longer be computed directly. Instead, it needs to be evaluate by a circuit.\n", "Next we will show how to calculate fidelity with a quantum circuit.\n", "\n", "### Calculate fidelity with a quantum circuit" ] }, { "cell_type": "markdown", "id": "7a62ac66", "metadata": {}, "source": [ "The QNN of QAQC that needs in a large quantum circuit. The whole circuit is shown below, where $U$ denotes the unitary operator to be approximated, and $V^{\\dagger}$ is the QNN we want to train. Here we use the Toffoli gate as the black-box.\n", "\n", "![circuit](./figures/vqcc-fig-circuit.png \"Figure 1: The circuit of the QAQC [1].\")\n", "
Figure 1: The circuit of the QAQC [1].
\n", "\n", "The circuit requires a total of $2n$ qubits, and we call the first $n$ qubits system $A$ and the last $n$ qubits system $B$. The whole circuit involves the following three steps:\n", "\n", "- First creating a maximally entangled state between $A$ and $B$ by performing Hadamard and CNOT gates.\n", "- Then acting with $U$ on system $A$ and with $V^{\\dagger}$ on system $B$ ($V^{\\dagger}$ is the complex conjugate of $V$), note that these two gates are performed in parallel.\n", "- Finally measuring in the bell basis(i.e., undoing the CNOTS and Hadamards then measuring in the standard basis).\n", "\n", "After the above operation, the probability of the full zero state obtained by the measurement is $\\frac{1}{d^2}|\\text{tr}(V^{\\dagger} U)|^2$. For a detailed explanation of Figure 1 please refer to the literature [1]." ] }, { "cell_type": "markdown", "id": "44f2ea35", "metadata": {}, "source": [ "Here we use the same QNN that we used in the first case and use the Toffoli gate as the black-bx. \n", "\n", "Next we will implement variational quantum circuit compiling in Paddle Quantum as follows:" ] }, { "cell_type": "code", "execution_count": 8, "id": "d6852694", "metadata": {}, "outputs": [], "source": [ "n = 3 # Number of qubits\n", "\n", "# Construct the total quantum circuit\n", "def Circuit(theta, n, D):\n", " \n", " # Initialize the circuit of 2n qubits \n", " cir = UAnsatz(2 * n)\n", " for i in range(n):\n", " cir.h(i)\n", " cir.cnot([i, n + i])\n", " # Construct the circuit of U\n", " cir.ccx([0, 1, 2])\n", "\n", " # Construct QNN\n", " cir.complex_entangled_layer(theta, D, [3, 4, 5])\n", " \n", " for l in range(n):\n", " cir.cnot([n - 1 - l, 2 * n - 1 - l])\n", " for m in range(n):\n", " cir.h(m)\n", " \n", " return cir" ] }, { "cell_type": "markdown", "id": "f0d141c7", "metadata": {}, "source": [ "### Setting up the training model - loss function\n", "\n", "Next we define the loss function $C(U,V) = 1-\\frac{1}{d^2}|\\text{tr}(V^{\\dagger} U)|^2$ and training parameters in order to optimize the QNN. " ] }, { "cell_type": "code", "execution_count": 9, "id": "7ee10c61", "metadata": {}, "outputs": [], "source": [ "class Net(paddle.nn.Layer):\n", " def __init__(self, shape, dtype=\"float64\", ):\n", " super(Net, self).__init__()\n", " \n", " # Initialize the theta parameter list and fill the initial value with the uniform distribution of [0, 2*pi]\n", " self.D = D\n", " self.theta = self.create_parameter(shape=[D, n, 3],\n", " default_initializer=paddle.nn.initializer.Uniform(low=0.0, high=2 * np.pi),\n", " dtype=dtype, is_bias=False)\n", "\n", " # Define loss function and forward propagation mechanism\n", " def forward(self): \n", " # The matrix form of circuit\n", " cir = Circuit(self.theta, n, self.D)\n", " # Output the density matrix rho of the quantum state after the circuit\n", " rho = cir.run_density_matrix()\n", " # Define loss function\n", " loss = 1 - paddle.real(rho[0][0])\n", "\n", " return loss, cir" ] }, { "cell_type": "markdown", "id": "5bfa7b24", "metadata": {}, "source": [ "### Setting up the training model - model parameters\n", "\n", "Here we set the learning rate to 0.1 and the number of iterations to 120. The reader can also adjust them to observe the impact on the training effect." 
] }, { "cell_type": "code", "execution_count": 10, "id": "280e2858", "metadata": {}, "outputs": [], "source": [ "D = 5 # Set the depth of QNN\n", "LR = 0.1 # Set the learning rate\n", "ITR = 120 # Set the number of optimization iterations" ] }, { "cell_type": "markdown", "id": "bb400510", "metadata": {}, "source": [ "### Training\n", "\n", "Then we commence the training process with an Adam optimizer." ] }, { "cell_type": "code", "execution_count": 11, "id": "77919a34", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "iter: 20 loss: 0.1733\n", "iter: 40 loss: 0.0678\n", "iter: 60 loss: 0.0236\n", "iter: 80 loss: 0.0020\n", "iter: 100 loss: 0.0001\n", "iter: 120 loss: 0.0000\n", "\n", "The trained circuit:\n", "Approximate circuit of U with circuit form input:\n", " --H----*------------------------*-------------------------------------------------------------------------------------------------------------*----H--\n", " | | | \n", "-------|----H----*--------------*--------------------------------------------------------------------------------------------------------*----|----H--\n", " | | | | | \n", "-------|---------|----H----*----X---------------------------------------------------------------------------------------------------*----|----|----H--\n", " | | | | | | \n", "-------x---------|---------|----U----*---------x----U----*---------x----U----*---------x----U----*---------x----U----*---------x----|----|----x-------\n", " | | | | | | | | | | | | | | \n", "-----------------x---------|----U----x----*----|----U----x----*----|----U----x----*----|----U----x----*----|----U----x----*----|----|----x------------\n", " | | | | | | | | | | | | \n", "---------------------------x----U---------x----*----U---------x----*----U---------x----*----U---------x----*----U---------x----*----x-----------------\n", " \n" ] } ], "source": [ "# Determine the parameter dimension of the network\n", "net = Net(D)\n", "\n", "# Use Adam optimizer for better performance\n", "opt = paddle.optimizer.Adam(learning_rate=LR, parameters=net.parameters())\n", "\n", "# Optimization loop\n", "for itr in range(1, ITR + 1):\n", " \n", " # Forward propagation calculates the loss function\n", " loss, cir= net.forward()\n", " \n", " # Use back propagation to minimize the loss function\n", " loss.backward()\n", " opt.minimize(loss)\n", " opt.clear_grad()\n", "\n", " # Print training results\n", " if itr % 20 == 0:\n", " print(\"iter:\",itr,\"loss:\",\"%.4f\" % loss.numpy())\n", " if itr == ITR:\n", " print(\"\\nThe trained circuit:\")\n", " print('Approximate circuit of U with circuit form input:\\n', cir)\n" ] }, { "cell_type": "code", "execution_count": 12, "id": "777ca58e", "metadata": {}, "outputs": [], "source": [ "# Storage optimized parameters\n", "theta_opt = net.theta.numpy()" ] }, { "cell_type": "markdown", "id": "c65d72dd", "metadata": {}, "source": [ "In this case, we construct a one-layer QNN and train it with a Adam optimizer. After around 100 iterations, the loss function reaches 0." ] }, { "cell_type": "markdown", "id": "040bf7d5", "metadata": {}, "source": [ "### Validation of results\n", "\n", "Similar to before, we also randomly select 10 density matrices, which are evolved by the target unitary operator $U$ and the approximate unitary operator $V$. Then calculate the trace distance between the real output `real_output` and the approximate output `simulated_output`. The smaller the trace distance, the better the approximation." 
] }, { "cell_type": "code", "execution_count": 13, "id": "0c339ca3", "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "sample 1 :\n", " trace distance is 0.00694\n", "sample 2 :\n", " trace distance is 0.00775\n", "sample 3 :\n", " trace distance is 0.00657\n", "sample 4 :\n", " trace distance is 0.00727\n", "sample 5 :\n", " trace distance is 0.00642\n", "sample 6 :\n", " trace distance is 0.00705\n", "sample 7 :\n", " trace distance is 0.00586\n", "sample 8 :\n", " trace distance is 0.00569\n", "sample 9 :\n", " trace distance is 0.00803\n", "sample 10 :\n", " trace distance is 0.00635\n" ] } ], "source": [ "s = 10 # Set the number of randomly generated density matrices\n", "for i in range(s):\n", " sampled = paddle.to_tensor(density_op_random(3).astype('complex128')) # randomly generated density matrix of 4 qubits sampled\n", "\n", " # Construct the circuit of target unitary\n", " cir_1 = UAnsatz(3)\n", " cir_1.ccx([0, 1, 2])\n", " # sampled after target unitary evolution\n", " real_output = paddle.matmul(paddle.matmul(cir_1.U, sampled), dagger(cir_1.U))\n", "\n", " # Construct the circuit of approximate unitary\n", " cir_2 = UAnsatz(3)\n", " cir_2.complex_entangled_layer(paddle.to_tensor(theta_opt), D, [0, 1, 2])\n", " # sampled after approximate unitary evolution\n", " simulated_output = paddle.matmul(paddle.matmul(cir_2.U, sampled), dagger(cir_2.U))\n", "\n", " d = trace_distance(real_output.numpy(), simulated_output.numpy())\n", " print('sample', i + 1, ':')\n", " print(' trace distance is', np.around(d, decimals=5)) # print trace distance" ] }, { "cell_type": "markdown", "id": "e8b3d6c4", "metadata": {}, "source": [ "We can see that the trace distance of each sample after the evolution of $U$ and $V$ is close to 0, which means the $V$ approximates $U$ very well." ] }, { "cell_type": "markdown", "id": "afe8ea7c", "metadata": {}, "source": [ "## Conclusion\n", "\n", "In this tutorial, the variational quantum circuit compiling is carried out from the input form of the target unitary operator as a matrix and as a circuit. The results of the quantum compilation are demonstrated by two simple examples using Paddle Quantum. Then the approximate effect is checked by the trace distance of the quantum states after the evolution of the target unitary and the approximate unitary respectively. Finally the results in Paddle Quantum show that the quantum compilation is good." ] }, { "cell_type": "markdown", "id": "fc610a2f", "metadata": {}, "source": [ "_______\n", "\n", "## References" ] }, { "cell_type": "markdown", "id": "62dd936d", "metadata": {}, "source": [ "[1] Khatri, Sumeet, et al. \"Quantum-assisted quantum compiling.\" [Quantum 3 (2019): 140](https://quantum-journal.org/papers/q-2019-05-13-140/)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 5 }