Commit 195b41b7 authored by Q Quleaf

update to v2.3.0

Parent ca62f24a

......@@ -33,7 +33,7 @@ English | [简体中文](README_CN.md)
</a>
<!-- PyPI -->
<a href="https://pypi.org/project/paddle-quantum/">
<img src="https://img.shields.io/badge/pypi-v2.3.0-orange.svg?style=flat-square&logo=pypi"/>
</a>
<!-- Python -->
<a href="https://www.python.org/">
......@@ -90,22 +90,19 @@ pip install -e .
### Environment setup for Quantum Chemistry module
Currently, our `qchem` module uses `PySCF` as its backend to compute molecular integrals, so you need to install that Python package before running any quantum chemistry calculations.

> It is recommended that `PySCF` is installed in a Python environment with Python >= 3.6.

We highly recommend installing `PySCF` via conda. **MacOS/Linux** users can use the command:
```bash
conda install -c pyscf pyscf
```
> NOTE: For **Windows** users, if your operating system is Windows 10, you can install `PySCF` in the Ubuntu subsystem available from the Windows 10 App Store. `PySCF` cannot run directly on Windows, so we are working hard to develop more quantum chemistry backends; support for Windows will be improved in an upcoming release of Paddle Quantum.
**Note:** Please refer to [PySCF](https://pyscf.org/install.html) for more download options.
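As a quick sanity check of the environment (our own suggestion, not an official installation step), you can confirm that `PySCF` imports correctly by running a tiny Hartree-Fock calculation on H2:

```python
# Quick environment check: import PySCF and run a minimal RHF calculation on H2.
# This is only a sanity check; any error here means PySCF is not installed correctly.
import pyscf

print("PySCF version:", pyscf.__version__)
mol = pyscf.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")
mf = mol.RHF().run()  # converged SCF energy should be around -1.117 Hartree
```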
### Run example
......
......@@ -34,7 +34,7 @@
</a>
<!-- PyPI -->
<a href="https://pypi.org/project/paddle-quantum/">
<img src="https://img.shields.io/badge/pypi-v2.3.0-orange.svg?style=flat-square&logo=pypi"/>
</a>
<!-- Python -->
<a href="https://www.python.org/">
......@@ -91,23 +91,19 @@ pip install -e .
### 量子化学模块的环境设置
当前我们的量子化学模块在后端使用 `PySCF` 来计算各类分子积分,所以在运行量子化学模块之前需要先行安装该 Python 包。
> 推荐在 Python>=3.6 环境中安装。
在安装 `PySCF` 时,我们建议您使用 conda。对于 **MacOS/Linux** 的用户,可以使用如下指令。
```bash
conda install -c pyscf pyscf
```
> 注:对于 **Windows** 用户,如果操作系统为 Windows10,可以在其应用商店提供的 Ubuntu 子系统中利用上述命令安装 `PySCF`。`PySCF` 并不支持直接在 Windows 下运行,我们正在努力开发更多的量子化学后端,在量桨的下一版本中将会有对 Windows 更好的支持。
**注意:** 更多的下载方法请参考 [PySCF](https://pyscf.org/install.html)
### 运行
......@@ -217,7 +213,7 @@ Paddle Quantum 使用 setuptools 的 develop 模式进行安装,相关代码
## 交流与反馈
- 我们非常欢迎您通过 [GitHub Issues](https://github.com/PaddlePaddle/Quantum/issues) 来提交问题、报告与建议。
- 技术交流QQ群:1076223166
......
# Quantum Application Model Library
- [Features](#features)
- [Installation](#installation)
- [How to Use](#how-to-use)
- [Application List](#application-list)
**Q**uantum **A**pplication **M**odel Library (QAML) is a collection of out-of-the-box practical quantum algorithms developed by the [Institute for Quantum Computing at Baidu](https://quantum.baidu.com/), and it aims to be a "supermarket" of quantum solutions for industry users. Currently, the models in QAML cover the popular areas listed below:
- Artificial Intelligence
- Medicine and Pharmaceuticals
- Material Simulation
- Financial Technology
- Manufacturing
- Data Analysis
QAML is implemented on Paddle Quantum, a quantum machine learning platform, which can be found at https://qml.baidu.com and https://github.com/PaddlePaddle/Quantum.
## Features
- Industrialization: 10 models closely follow 6 major industrial directions, covering hot topics such as artificial intelligence, chemical materials, manufacturing, and finance.
- End-to-end: Links the whole process from application scenarios to quantum computing, solving the last mile of quantum applications.
- Out-of-the-box: No special configuration is required; models are invoked directly through Paddle Quantum, eliminating tedious installation steps.
## Installation
QAML depends on the `paddle-quantum` package. Users can install it by pip.
```shell
pip install paddle-quantum
```
If you are using an older version of Paddle Quantum, simply run `pip install --upgrade paddle-quantum` to get the latest package.
QAML lives in Paddle Quantum's GitHub repository. You can download a zip file containing the QAML source code by clicking [this link](https://github.com/PaddlePaddle/Quantum/archive/refs/heads/master.zip). After unzipping it, you will find all the models in the `applications` folder of the extracted directory.
You can also use git to get the QAML source code.
```shell
git clone https://github.com/PaddlePaddle/Quantum.git
cd Quantum/applications
```
You can check your installation by going to the `handwritten_digits_classification` folder under `applications` and running
```shell
python vsql_classification.py --config example.toml
```
The installation is successful once the program terminates without errors.
## How to Use
For each application model, we provide a Python script that can be run directly, together with the corresponding configuration file. Users can modify the configuration file to fit their own requirements.
Take handwritten digit classification as an example: it can be run by executing `python vsql_classification.py --config example.toml` in the `handwritten_digits_classification` folder. We provide a tutorial for each application model so that users can quickly understand and use it.
## Application List
*Continuously updated*
Below we list the tutorials for all applications currently available in QAML; newly developed applications will be continuously integrated.
1. [Handwritten digits classification](./handwritten_digits_classification/introduction_en.ipynb)
2. [Molecular ground state energy & dipole moment calculation](./lithium_ion_battery/introduction_en.ipynb)
3. [Text classification](./text_classification/introduction_en.ipynb)
4. [Protein folding](./protein_folding/introduction_en.ipynb)
5. [Medical image classification](./medical_image_classification/introduction_en.ipynb)
6. [Quality detection](./quality_detection/introduction_en.ipynb)
7. [Option pricing](./option_pricing/introduction_en.ipynb)
8. [Quantum portfolio optimization](./portfolio_optimization/introduction_en.ipynb)
9. [Regression](./regression/introduction_en.ipynb)
10. [Quantum linear equation solver](./linear_solver/introduction_en.ipynb)
# 量子应用模型库
- [特色](#特色)
- [安装](#安装)
- [如何使用](#如何使用)
- [应用列表](#应用列表)
量子应用模型库(**Q**uantum **A**pplication **M**odel **L**ibrary, QAML)是一个开箱即用的实用量子应用模型集合,它由[百度量子计算研究所](https://quantum.baidu.com/)研发,旨在成为企业用户的量子解决方案“超市”。目前,QAML 中的模型已经覆盖了以下领域:
- 人工智能
- 医学制药
- 材料模拟
- 金融科技
- 汽车制造
- 数据分析
QAML 基于量桨这一量子机器学习平台实现,关于量桨的内容可以参考 https://qml.baidu.com 和 https://github.com/PaddlePaddle/Quantum 。
## 特色
- 产业化:10 大应用模型紧贴 6 大产业方向,涵盖人工智能、化工材料、汽车制造、金融套利等热点话题。
- 端到端:打通应用场景到量子算法的全流程,解决量子应用的最后一公里问题。
- 开箱即用:无需特殊配置,通过量桨直接完成模型调用,省去繁琐安装环节。
## 安装
QAML 依赖于量桨( `paddle-quantum` )软件包。用户可以通过 pip 来安装:
```shell
pip install paddle-quantum
```
对于那些使用旧版量桨的用户,只需运行 `pip install --upgrade paddle-quantum` 即可安装最新版量桨。
QAML 的内容在 Paddle Quantum 的 GitHub 仓库中,用户可以通过点击[此链接](https://github.com/PaddlePaddle/Quantum/archive/refs/heads/master.zip)下载包含 QAML 源代码的压缩包。QAML 的所有模型都在解压后的文件夹中的 `applications` 文件夹里。
用户也可以使用 git 来获取 QAML 的源码文件。
```shell
git clone https://github.com/PaddlePaddle/Quantum.git
cd Quantum/applications
```
用户可以进入到 `applications` 下的 `handwritten_digits_classification` 文件夹中,然后运行以下代码来检查安装是否成功。
```shell
python vsql_classification.py --config example.toml
```
如果上面的程序没有报错、成功运行的话,则说明安装成功了。
## 如何使用
在每个应用模型中,我们都提供了可以直接运行的Python脚本和相应的配置文件。用户可以修改配置文件来实现对应的要求。
以手写数字识别为例,用户可以通过在 `handwritten_digits_classification` 文件夹中执行 `python vsql_classification.py --config example.toml` 命令来快速使用。我们为每个应用模型提供了教程,方便用户快速理解和上手使用。
## 应用列表
*持续更新中*
我们列出了目前 QAML 的所有应用案例的教程,新开发的应用案例也会持续添加进来。
1. [手写数字识别](./handwritten_digits_classification/introduction_cn.ipynb)
2. [分子基态能量 & 偶极矩计算](./lithium_ion_battery/introduction_cn.ipynb)
3. [中文文本分类](./text_classification/introduction_cn.ipynb)
4. [蛋白质折叠](./protein_folding/introduction_cn.ipynb)
5. [医学影像判别](./medical_image_classification/introduction_cn.ipynb)
6. [材料表面质量检测](./quality_detection/introduction_cn.ipynb)
7. [量子期权定价](./option_pricing/introduction_cn.ipynb)
8. [投资组合优化](./portfolio_optimization/introduction_cn.ipynb)
9. [回归分析](./regression/introduction_cn.ipynb)
10. [线性方程组求解](./linear_solver/introduction_cn.ipynb)
task = 'test'
image_path = 'data_0.png'
is_dir = false
model_path = 'vsql.pdparams'
num_qubits = 10
num_shadow = 2
depth = 1
classes = [0, 1]
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 手写数字识别简介\n",
"\n",
"*Copyright (c) 2022 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
"\n",
"计算机视觉(Computer Vision, CV)是指让计算机能够从图像、视频或其它视觉输入中获取有意义的信息。它是人工智能领域中的一个非常基础且重要的组成部分。在 CV 中,手写数字识别(handwritten digit classification)是一个较为基础的任务。它在 MNIST 数据集\\[1\\]上进行训练和测试,用来验证模型是否拥有 CV 方面的基础能力。\n",
"\n",
"MNIST 数据集中包含如下图所示的手写数字。MNIST 共包含 0-9 这 10 个类别,每个数字为 28\\*28 像素的灰度图片。其中,训练集有 60000 张图片,测试集有 10000 张图片。假设我们设计了一个可以用来进行图像分类的模型,那么我们可以在 MNIST 数据集上测试该模型的分类能力。\n",
"\n",
"![mnist-example](mnist_example.png)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 使用 VSQL 模型实现 MNIST 分类\n",
"\n",
"### 数据编码\n",
"\n",
"在手写数字识别问题中,输入是一张手写数字图片,输出是该图片对应的类别(即数字 0-9)。而由于量子计算机处理的输入是量子态,因此,我们需要将图片编码为量子态。在这里,我们首先使用一个二维矩阵表示一张图片。然后将该矩阵展开为一维向量,并通过补充 0 将向量长度补充到 2 的整数次幂。再对向量进行归一化,即可得到一个量子计算机可以处理的量子态。\n",
"\n",
"### VSQL 模型简介\n",
"\n",
"变分影子量子学习(variational shadow quantum learning, VSQL)是一个在监督学习框架下的量子–经典混合算法。它使用了参数化量子电路(parameterized quantum circuit, PQC)和经典影子(classical shadow),和通常使用的变分量子算法(variational quantum algorithm, VQA)不同的是,VSQL 只从子空间获取局部特征,而不是从量子态形成的整个希尔伯特空间获取特征。\n",
"\n",
"VSQL 的模型原理图如下:\n",
"\n",
"![vsql-model](vsql_model.png)\n",
"\n",
"VSQL 处理的输入是一个量子态。对于输入的量子态,迭代地作用一个局部参数化量子电路并进行测量,得到局部的影子特征。然后将得到的所有影子特征使用经典神经网络进行计算并得到预测标签。\n",
"\n",
"### 工作流\n",
"\n",
"根据以上原理,我们只需要使用 MNIST 数据集对 VSQL 模型进行训练。得到收敛后的模型。使用该模型即可进行手写数字的分类。模型的训练流程如下图:\n",
"\n",
"\n",
"![vsql-pipeline](vsql_pipeline_cn.png)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 如何使用\n",
"\n",
"### 使用模型进行预测\n",
"\n",
"这里,我们已经给出了一个训练好的模型,可以直接用于 0 和 1 的图片的预测。只需要在 `example.toml` 这个配置文件中进行对应的配置,然后输入命令 `python vsql_classification.py --config example.toml` 即可使用训练好的 VSQL 模型对输入的图片进行测试。\n",
"\n",
"### 在线演示\n",
"\n",
"这里,我们给出一个在线演示的版本,可以在线进行测试。首先定义配置文件的内容:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"test_toml = r\"\"\"\n",
"# 模型的整体配置文件。\n",
"# 输入当前的任务,可以是 'train' 或者 'test',分别代表训练和预测。这里我们使用 test,表示我们要进行预测。\n",
"task = 'test'\n",
"# 要预测的图片的文件路径。\n",
"image_path = 'data_0.png'\n",
"# 上面的图片路径是否是文件夹。对于文件夹路径,我们会对文件夹里面的所有图片文件进行预测。这种方式可以一次测试多个图片。\n",
"is_dir = false\n",
"# 训练好的模型参数文件的文件路径。\n",
"model_path = 'vsql.pdparams'\n",
"# 量子电路所包含的量子比特的数量。\n",
"num_qubits = 10\n",
"# 影子电路所包含的量子比特的数量。\n",
"num_shadow = 2\n",
"# 电路深度。\n",
"depth = 1\n",
"# 我们要预测的类别。这里我们对 0 和 1 进行分类。\n",
"classes = [0, 1]\n",
"\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"接下来是预测部分的代码:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"对于输入的图片,模型有 89.22% 的信心认为它是 0,和 10.78% 的信心认为它是 1。\n"
]
}
],
"source": [
"import os\n",
"import warnings\n",
"\n",
"warnings.filterwarnings('ignore')\n",
"os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'\n",
"\n",
"import toml\n",
"from paddle_quantum.qml.vsql import train, inference\n",
"\n",
"config = toml.loads(test_toml)\n",
"task = config.pop('task')\n",
"if task == 'train':\n",
" train(**config)\n",
"elif task == 'test':\n",
" prediction, prob = inference(**config)\n",
" if config['is_dir']:\n",
" print(f\"对输入图片的预测结果分别是 {str(prediction)[1:-1]}。\")\n",
" else:\n",
" prob = prob[0]\n",
" msg = '对于输入的图片,模型有'\n",
" for idx, item in enumerate(prob):\n",
" if idx == len(prob) - 1:\n",
" msg += '和'\n",
" label = config['classes'][idx]\n",
" msg += f' {item:3.2%} 的信心认为它是 {label:d}'\n",
" msg += '。' if idx == len(prob) - 1 else ','\n",
" print(msg)\n",
"else:\n",
" raise ValueError(\"未知的任务,它可以是'train'或'test'。\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"在这里,我们只需要修改要配置文件中的图片路径,再运行整个代码,就可以快速对其它图片进行测试。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 注意事项\n",
"\n",
"我们提供的模型为二分类模型,仅可以用来分辨手写数字 0 和 1。对于其它分类任务,需要重新进行训练。\n",
"\n",
"### 数据集结构\n",
"\n",
"如果想要使用自定义数据集进行训练,只需要按照规则来准备数据集即可。在数据集文件夹中准备 `train.txt` 和 `test.txt`,如果需要验证集的话还有 `dev.txt`。每个文件里使用一行代表一条数据。每行内容包含图片的文件路径和标签,使用制表符隔开。\n",
"\n",
"### 配置文件介绍\n",
"\n",
"在 `test.toml` 里有测试所需要的完整的配置文件内容参考。在 `train.toml` 里有训练所需要的完整的配置文件内容参考。使用配置文件的方式即可快速进行模型的训练和预测。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. 引用信息\n",
"\n",
"```tex\n",
"@inproceedings{li2021vsql,\n",
" title={VSQL: Variational shadow quantum learning for classification},\n",
" author={Li, Guangxi and Song, Zhixin and Wang, Xin},\n",
" booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},\n",
" volume={35},\n",
" number={9},\n",
" pages={8357--8365},\n",
" year={2021}\n",
"}\n",
"```\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "py37",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.15 (default, Nov 10 2022, 12:46:26) \n[Clang 14.0.6 ]"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "49b49097121cb1ab3a8a640b71467d7eda4aacc01fc9ff84d52fcb3bd4007bf1"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction to Handwritten Digit Classification\n",
"\n",
"*Copyright (c) 2022 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
"\n",
"Computer Vision (CV) refers to enabling computers to obtain meaningful information from images, videos, or other visual inputs. It is a fundamental and important field in artificial intelligence. In CV, handwritten digit classification is a relatively basic task. It is trained and tested on the MNIST dataset \\[1\\] to verify whether the model has the basic ability of CV.\n",
"\n",
"The MNIST dataset contains handwritten digits as shown in the figure below. MNIST contains a total of 10 categories from 0-9, and each digit is a grayscale image of 28\\*28 pixels. There are 60,000 images in the training set and 10,000 images in the test set. Suppose we design a model that can be used for image classification, then we can test the classification ability of the model on the MNIST dataset.\n",
"\n",
"![mnist-example](mnist_example.png)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## MNIST Classification Using VSQL Model\n",
"\n",
"### Data Encoding\n",
"\n",
"In the handwritten digit classification problem, the input is a picture of a handwritten digit and the output is the category corresponding to the picture (i.e., the digits 0-9). And since quantum computers deal with inputs that are quantum states, we need to encode the picture into a quantum state. Here, we first represent a picture using a two-dimensional matrix. This matrix is then expanded into a 1D vector, and the length of the vector is padded to an integer power of 2 by padding with zeros. The vector is then normalized to obtain a quantum state that can be processed by a quantum computer.\n",
"\n",
"\n",
"### Introduction to the VSQL Model\n",
"\n",
"Variational shadow quantum learning (VSQL) is a hybrid quantum-classical algorithm under the framework of supervised learning. It uses the parameterized quantum circuit (PQC) and the classical shadow. Unlike the common variational quantum algorithm (VQA), VSQL only obtains local features from the subspace rather than from the whole Hilbert space where the quantum states are formed.\n",
"\n",
"The schematic diagram of the VSQL model is as follows.\n",
"\n",
"![vsql-model](vsql_model.png)\n",
"\n",
"The input to the VSQL process is a quantum state. For the input quantum state, a local parameterized quantum circuit is iteratively applied and measured to obtain local shadow features. Then all the obtained shadow features are calculated using the classical neural network and the predicted labels are obtained.\n",
"\n",
"### Workflow\n",
"\n",
"Based on the above principles, we only need to train the VSQL model using the MNIST dataset to obtain a converged model. The model can be used to classify handwritten digits. The training process of the model is as follows.\n",
"\n",
"![vsql-pipeline](vsql_pipeline_en.png)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## How to Use\n",
"\n",
"### Predict Using the Model\n",
"\n",
"Here, we have given a trained model that can be used directly for the prediction of 0 and 1 images. Just make the corresponding configuration in the `example.toml` configuration file and enter the command `python vsql_classification.py --config example.toml` to test the input images with the trained VSQL model.\n",
"\n",
"### Online Demo\n",
"\n",
"Here, we give a version of the online demo that can be tested online. First define the contents of the configuration file."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"test_toml = r\"\"\"\n",
"# The overall configuration file of the model.\n",
"# Enter the current task, which can be 'train' or 'test', representing training and prediction respectively. Here we use test, indicating that we want to make a prediction.\n",
"task = 'test'\n",
"# The file path of the image to be predicted.\n",
"image_path = 'data_0.png'\n",
"# Whether the image path above is a folder or not. For folder paths, we will predict all image files inside the folder. This way you can test multiple images at once.\n",
"is_dir = false\n",
"# The file path of the trained model parameter file.\n",
"model_path = 'vsql.pdparams'\n",
"# The number of qubits that the quantum circuit contains.\n",
"num_qubits = 10\n",
"# The number of qubits that the shadow circuit contains.\n",
"num_shadow = 2\n",
"# Circuit depth.\n",
"depth = 1\n",
"# The class to be predicted by the model. Here, 0 and 1 are classified.\n",
"classes = [0, 1]\n",
"\"\"\"\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Next is the code for the prediction section."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"For the input image, the model has 89.22% confidence that it is 0, and 10.78% confidence that it is 1.\n"
]
}
],
"source": [
"import os\n",
"import warnings\n",
"\n",
"warnings.filterwarnings('ignore')\n",
"os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'\n",
"\n",
"import toml\n",
"from paddle_quantum.qml.vsql import train, inference\n",
"\n",
"config = toml.loads(test_toml)\n",
"task = config.pop('task')\n",
"if task == 'train':\n",
" train(**config)\n",
"elif task == 'test':\n",
" prediction, prob = inference(**config)\n",
" if config['is_dir']:\n",
" print(f\"The prediction results of the input pictures are {str(prediction)[1:-1]} respectively.\")\n",
" else:\n",
" prob = prob[0]\n",
" msg = 'For the input image, the model has'\n",
" for idx, item in enumerate(prob):\n",
" if idx == len(prob) - 1:\n",
" msg += 'and'\n",
" label = config['classes'][idx]\n",
" msg += f' {item:3.2%} confidence that it is {label:d}'\n",
" msg += '.' if idx == len(prob) - 1 else ', '\n",
" print(msg)\n",
"else:\n",
" raise ValueError(\"Unknown task, it can be train or test.\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, we only need to modify the image path in the configuration file, and then run the entire code to quickly test other images."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Note\n",
"\n",
"The model we provide is a binary classification model that can only be used to distinguish handwritten digits 0 and 1. For other classification tasks, it needs to be retrained.\n",
"\n",
"### Dataset Structure\n",
"\n",
"If you want to use a custom dataset for training, you just need to prepare the dataset according to the rules. Prepare `train.txt` and `test.txt` in the dataset folder, and `dev.txt` if a validation set is needed. Use one line in each file to represent one piece of data. Each line contains the file path and label of the image, separated by tabs.\n",
"\n",
"### Introduction to the Configuration File\n",
"\n",
"In `test.toml`, there is a complete reference to the configuration files needed for testing. In `train.toml`, there is a complete reference to the configuration files needed for training. You can use the configuration file to quickly use the model to train and test."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Citation\n",
"\n",
"```tex\n",
"@inproceedings{li2021vsql,\n",
" title={VSQL: Variational shadow quantum learning for classification},\n",
" author={Li, Guangxi and Song, Zhixin and Wang, Xin},\n",
" booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},\n",
" volume={35},\n",
" number={9},\n",
" pages={8357--8365},\n",
" year={2021}\n",
"}\n",
"```\n",
"\n",
"## Reference\n",
"\n",
"\\[1\\] \"THE MNIST DATABASE of handwritten digits\". Yann LeCun, Courant Institute, NYU Corinna Cortes, Google Labs, New York Christopher J.C. Burges, Microsoft Research, Redmond."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "py37",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.15 (default, Nov 10 2022, 12:46:26) \n[Clang 14.0.6 ]"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "49b49097121cb1ab3a8a640b71467d7eda4aacc01fc9ff84d52fcb3bd4007bf1"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
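The data-encoding step described in both notebooks above (flatten the 28×28 image, zero-pad the vector to the next power of two, then normalize it) can be illustrated with a short NumPy sketch. This is only an illustration of the idea; the exact preprocessing inside `paddle_quantum.qml.vsql` may differ in detail. Note that padding 784 pixels to 1024 = 2^10 amplitudes is consistent with `num_qubits = 10` in the configuration files.

```python
# Illustrative sketch of the amplitude-encoding preprocessing described above
# (not the exact code used inside paddle_quantum.qml.vsql).
import numpy as np

image = np.random.rand(28, 28)                 # stand-in for a 28x28 grayscale MNIST image
vec = image.flatten()                          # 784 pixel values
dim = 1 << int(np.ceil(np.log2(vec.size)))     # next power of two: 1024 = 2 ** 10
padded = np.zeros(dim)
padded[:vec.size] = vec                        # pad with zeros up to length 2 ** 10
state = padded / np.linalg.norm(padded)        # normalize -> amplitudes of a 10-qubit state
print(state.shape, np.isclose(np.linalg.norm(state), 1.0))
```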
# The full config for testing the VSQL model.
# The task of this config. Available values: 'train' | 'test'.
task = 'test'
# The path of the input images.
image_path = 'data_0.png'
# Whether the image_path is a directory. Available values: true | false.
# The value true means the path is a directory and all the images there will be predicted.
is_dir = false
# The path of the trained model, which will be loaded.
model_path = 'vsql.pdparams'
# The number of qubits which the quantum circuit contains.
num_qubits = 10
# The number of qubits which the shadow circuit contains.
num_shadow = 2
# The depth of the quantum circuit. Default to 1.
depth = 1
# The classes of handwritten digits to be predicted.
# It will use all labels if the value is not provided.
classes = [0, 1]
# The full config for training the VSQL model.
# The task of this config. Available values: 'train' | 'test'.
task = 'train'
# The name of the model, which is used to save the model.
model_name = 'vsql-model'
# The path to save the model. Both relative and absolute paths are allowed.
# It saves the model to the current path by default.
# saved_path = './'
# The number of qubits which the quantum circuit contains.
num_qubits = 10
# The number of qubits which the shadow circuit contains.
num_shadow = 2
# The depth of the quantum circuit, default to 1.
# depth = 1
# The batch size used by the data sampler.
batch_size = 16
# The number of epochs to train the model.
num_epochs = 10
# The learning rate used to update the parameters, default to 0.01.
# learning_rate = 0.01
# The path of the dataset. It defaults to MNIST, which is a built-in dataset.
dataset = 'MNIST'
# The classes of handwritten digits to be predicted.
# It will use all labels if the value is not provided.
classes = [0, 1]
# Whether to use a validation set.
# If true, the dataset contains training, validation, and test splits.
# If false, the dataset contains only training and test splits.
using_validation = false
# The number of the data in the training dataset.
# The value defaults to 0 which means using all data.
# num_train = 0
# The number of the data in the validation dataset.
# The value defaults to 0 which means using all data.
# num_dev = 0
# The number of the data in the test dataset.
# The value defaults to 0 which means using all data.
# num_test = 0
# Number of epochs with no improvement after which training will be stopped.
# early_stopping = 1000
# The number of subprocesses used to load data; 0 means no subprocess is used and data is loaded in the main process. Defaults to 0.
# num_workers = 0
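The notebooks above describe the custom-dataset format expected when training on your own data: `train.txt` and `test.txt` (and optionally `dev.txt`) placed in the dataset folder, one sample per line, with the image path and label separated by a tab, presumably with `dataset` in this config pointing at that folder. A minimal sketch of producing such an index file (the file names and paths here are hypothetical) is shown below.

```python
# Sketch: build the train.txt index file described in the tutorial,
# one "<image path>\t<label>" entry per line. Paths below are examples only.
samples = [("images/img_0001.png", 0), ("images/img_0002.png", 1)]
with open("train.txt", "w") as f:
    for path, label in samples:
        f.write(f"{path}\t{label}\n")
```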
#!/usr/bin/env python3
# Copyright (c) 2022 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import warnings
warnings.filterwarnings('ignore')
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'
import argparse
import toml
from paddle_quantum.qml.vsql import train, inference
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Classify the handwritten digits by the VSQL model.")
    parser.add_argument("--config", type=str, help="Input the config file with toml format.")
    args = parser.parse_args()
    config = toml.load(args.config)
    task = config.pop('task')
    if task == 'train':
        train(**config)
    elif task == 'test':
        prediction, prob = inference(**config)
        if config['is_dir']:
            print(f"The prediction results of the input pictures are {str(prediction)[1:-1]} respectively.")
        else:
            prob = prob[0]
            msg = 'For the input image, the model has'
            for idx, item in enumerate(prob):
                if idx == len(prob) - 1:
                    msg += 'and'
                label = config['classes'][idx]
                msg += f' {item:3.2%} confidence that it is {label:d}'
                msg += '.' if idx == len(prob) - 1 else ', '
            print(msg)
    else:
        raise ValueError("Unknown task, it can be train or test.")
# The path of the input matrix A. It should be a .npy file.
A_dir = './A.npy'
# The path of the input vector b. It should be a .npy file.
b_dir = './b.npy'
# The depth of the quantum ansatz circuit.
depth = 4
# Number of optimization cycles. Use 100 as a starting point and adjust depending on how low the loss function gets.
# Ideally, you want to reach 0.0001
iterations = 100
# The learning rate of the optimizer.
LR = 0.1
# Threshold for loss value to end optimization early, default is 0.
gamma = 0
\ No newline at end of file
A_dir = './A.npy'
b_dir = './b.npy'
depth = 4
iterations = 100
LR = 0.1
gamma = 0
\ No newline at end of file
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# 变分量子线性求解器\n",
"*Copyright (c) 2022 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
"## 背景介绍\n",
"\n",
"线性方程组是数学中一个基本但非常有用的工具。 一个例子是,在经济学中,可以使用线性方程对经济进行建模。 此外,它还为非线性的大型系统提供了简单的估计。 因此求解线性方程组是一项重要的任务。\n",
"\n",
"变分量子线性求解器(Variational quantum linear solver, VQLS)是一种求解线性方程组的变分量子算法,采用了经典-量子混合的方案,可以在近期的含噪中等规模量子计算机上运行。具体来说,对于一个矩阵 $A$ 和一个向量 $\\boldsymbol{b}$,我们的目标是找到一个向量 $\\boldsymbol{x}$ 使得 $A \\boldsymbol{x} = \\boldsymbol{b}$. 使用 VQLS 算法可以得到一个与 $\\boldsymbol{x}$ 成比例的量子态,即一个归一化的向量 $|x\\rangle = \\frac{\\boldsymbol{x}}{\\lVert \\boldsymbol{x} \\rVert_2}$。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 模型原理\n",
"\n",
"量子场景的线性方程求解问题和通常的设定略有不同,因为量子计算需要将酉算子应用到量子态上。对于输入的矩阵 $A$,我们需要将其分解成酉算子的线性组合 $A = \\sum_n c_n A_n$,其中每个 $A_n$ 都是酉算子,可以在量子线路上运行。对于输入的向量 $\\boldsymbol{b}$,我们需要假设它是一个能够被某个酉算子 $U$ 制备的量子态 $|b\\rangle$,即 $U|0\\rangle = |b\\rangle$。我们可以用下面这张图来概括 VQLS 算法的整体架构:\n",
"\n",
"![VQLS](vqls.png)\n",
"\n",
"可以看到,VQLS 算法是一种混合优化算法,可以分为经典和量子两部分,需要在量子计算机上准备参数化量子电路 $V(\\alpha)$ 并计算损失函数 $C(\\alpha)$,然后在经典计算机上对参数 $\\alpha$ 进行优化从而最小化损失函数,直到损失低于某个阈值,最后输出目标量子态 $|x\\rangle$。其中参数化电路 $V(\\alpha)$ 可以生成一个量子态 $|\\psi(\\alpha)\\rangle$,电路 $F(A)$ 可以计算 $A|\\psi(\\alpha)\\rangle$ 与 $|b\\rangle$ 的近似程度,即损失函数 $C(\\alpha)$。当量子态 $A|\\psi(\\alpha)\\rangle$ 与 $|b\\rangle$ 足够接近时,这就意味着量子态 $|\\psi(\\alpha)\\rangle$ 与目标态 $|x\\rangle$ 足够接近,我们可以输出量子态 $|\\psi(\\alpha)\\rangle$ 作为目标态 $|x\\rangle$ 的近似。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 量桨实现\n",
"\n",
"我们使用量桨中的 `Circuit` 类结合飞桨优化器来实现 VQLS 算法,其中量子部分中参数化量子电路 $V(\\alpha)$ 为 `Circuit` 中内置的 `complex_entangled_layer` 模板,损失函数计算电路 $F(A)$ 由 Hadamard Test 或 Hadamard-Overlap Test 组成,主要使用了量桨中的 `oracle` 量子门来实现控制 $A_n$ 门,在经典优化部分中我们使用 Adam 优化器来最小化损失函数。\n",
"\n",
"用户可以使用 toml 文件指定算法的输入,矩阵 $A$ 和向量 $\\boldsymbol{b}$,分别以 `.npy` 文件形式存储。用户可以使用以下代码,通过改变$n$的值随机生成一个 $n\\times n$ 的矩阵 $A$ 以及向量 $\\boldsymbol{b}$。"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"这是一个随机生成的A:\n",
"[[4.1702199e+00+7.203245j 1.1437482e-03+3.0233257j\n",
" 1.4675589e+00+0.9233859j 1.8626021e+00+3.4556072j\n",
" 3.9676747e+00+5.3881674j ]\n",
" [2.0445225e+00+8.781175j 2.7387592e-01+6.704675j\n",
" 4.1730480e+00+5.5868983j 1.4038694e+00+1.9810148j\n",
" 8.0074453e+00+9.682616j ]\n",
" [8.7638912e+00+8.946067j 8.5044211e-01+0.39054784j\n",
" 1.6983042e+00+8.781425j 9.8346835e-01+4.2110763j\n",
" 9.5788956e+00+5.3316526j ]\n",
" [6.8650093e+00+8.346256j 1.8288277e-01+7.5014434j\n",
" 9.8886108e+00+7.4816566j 2.8044400e+00+7.892793j\n",
" 1.0322601e+00+4.4789352j ]\n",
" [2.8777535e+00+1.3002857j 1.9366957e-01+6.7883554j\n",
" 2.1162813e+00+2.6554666j 4.9157314e+00+0.5336254j\n",
" 5.7411761e+00+1.4672858j ]]\n",
"这是一个随机生成的b:\n",
"[4.191945 +6.852195j 3.1342418+6.9232264j 6.9187713+3.1551564j\n",
" 9.085955 +2.9361415j 5.8930554+6.9975834j]\n"
]
}
],
"source": [
"n = 5\n",
"\n",
"import numpy as np\n",
"\n",
"\n",
"np.random.seed(1)\n",
"A = np.zeros([n, n], dtype=\"complex64\")\n",
"b = np.zeros(n, dtype=\"complex64\")\n",
"for i in range(n):\n",
" for j in range(n):\n",
" x = np.random.rand() * 10\n",
" y = np.random.rand() * 10\n",
" A[i][j] = complex(x, y)\n",
" x = np.random.rand() * 10\n",
" y = np.random.rand() * 10\n",
" b[i] = complex(x, y)\n",
"np.save(\"./A.npy\", A)\n",
"np.save(\"./b.npy\", b)\n",
"print(\"这是一个随机生成的A:\")\n",
"print(A)\n",
"print(\"这是一个随机生成的b:\")\n",
"print(b)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"用户可以在 toml 文件中指定 VQLS 算法的参数 `depth`,`iterations`,`LR` 以及 `gamma`,分别对应参数化量子电路 $V(\\alpha)$ 的层数,优化器的迭代次数,优化器的学习率,和损失函数的阈值。在命令行输入 `python vqls.py --config config.toml` 即可完成线性方程组求解。这里我们给出一个在线演示的例子,首先定义配置文件的内容如下,用户可以自行更改 `test_toml` 中的参数:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"test_toml = r\"\"\"\n",
"# 存储矩阵A的.npy文件的路径。\n",
"A_dir = './A.npy'\n",
"# 存储向量b的.npy文件的路径。\n",
"b_dir = './b.npy'\n",
"# 参数化量子电路的层数。\n",
"depth = 4\n",
"# 优化器迭代次数。\n",
"iterations = 200\n",
"# 优化器的学习率。\n",
"LR = 0.1\n",
"# 损失函数的阈值。默认为0。\n",
"gamma = 0\n",
"\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"运行 VQLS 算法如下:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"c:\\Users\\yuzhan01\\Miniconda3\\envs\\pq_model\\lib\\site-packages\\paddle\\tensor\\creation.py:125: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. \n",
"Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\n",
" if data.dtype == np.object:\n",
" 88%|████████▊ | 176/200 [02:04<00:16, 1.42it/s]"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Threshold value gamma reached, ending optimization\n",
"这是求解Ax=b的x: [ 1.3475237 -0.7860472j 0.22970617-0.88826376j -0.35111237-0.31225887j\n",
" 0.07606918+1.2138402j -0.729564 +0.48393282j]\n",
"实际b的值: [4.191945 +6.852195j 3.1342418+6.9232264j 6.9187713+3.1551564j\n",
" 9.085955 +2.9361415j 5.8930554+6.9975834j]\n",
"算法得到的Ax的值: [4.185339 +6.8523855j 3.1297188+6.923625j 6.924285 +3.1467872j\n",
" 9.092921 +2.932943j 5.8879805+6.999589j ]\n",
"相对误差: 0.0008446976\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"import argparse\n",
"import os\n",
"import warnings\n",
"\n",
"warnings.filterwarnings(\"ignore\")\n",
"os.environ[\"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION\"] = \"python\"\n",
"\n",
"import toml\n",
"import numpy as np\n",
"import paddle\n",
"from paddle_quantum.data_analysis.vqls import compute\n",
"\n",
"paddle.seed(0)\n",
"\n",
"if __name__ == \"__main__\":\n",
" config = toml.loads(test_toml)\n",
" A_dir = config.pop(\"A_dir\")\n",
" A = np.load(A_dir)\n",
" b_dir = config.pop(\"b_dir\")\n",
" b = np.load(b_dir)\n",
" result = compute(A, b, **config)\n",
"\n",
" print(\"求解 Ax=b 的x:\", result)\n",
" print(\"实际 b 的值:\", b)\n",
" print(\"算法得到的 Ax 的值:\", np.matmul(A, result))\n",
" relative_error = np.linalg.norm(b - np.matmul(A, result)) / np.linalg.norm(b)\n",
" print(\"相对误差: \", relative_error)\n",
" np.save(\"./answer.npy\", result)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 引用信息\n",
"\n",
"```\n",
"@misc{bravo-prieto2020variational,\n",
" title = {Variational {{Quantum Linear Solver}}},\n",
" author = {{Bravo-Prieto}, Carlos and LaRose, Ryan and Cerezo, M. and Subasi, Yigit and Cincio, Lukasz and Coles, Patrick J.},\n",
" year = {2020},\n",
" month = jun,\n",
" number = {arXiv:1909.05820},\n",
" eprint = {1909.05820},\n",
" eprinttype = {arxiv},\n",
" doi = {10.48550/arXiv.1909.05820}\n",
"}\n",
"```\n",
"\n",
"## 参考文献\n",
"\n",
"[1] “Variational Quantum Linear Solver: A Hybrid Algorithm for Linear Systems.” Carlos Bravo-Prieto, Ryan LaRose, Marco Cerezo, Yigit Subasi, Lukasz Cincio, Patrick J. Coles. arXiv:1909.05820, 2019."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "pq-dev",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.15 (default, Nov 10 2022, 13:17:42) \n[Clang 14.0.6 ]"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "5fea01cac43c34394d065c23bb8c1e536fdb97a765a18633fd0c4eb359001810"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Variational Quantum Linear Solver\n",
"*Copyright (c) 2022 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
"## Background\n",
"\n",
"System of linear equations is a basic yet extremely useful tool in mathematics. An example is that in economics, you can model the economy using linear equations. Also, it provides simple estimation for large system of non-linear systems. Hence solving system of linear equations is an important task.\n",
"\n",
"Variational Quantum Linear Solver (VQLS) is a variational quantum algorithm for solving system of linear equations. It's a classical-quantum hybrid algorithm that can run on recent Noisy Intermediate-Scale Quantum (NISQ) devices. To be more specific, given a matrix $A$ and a vector $\\boldsymbol{b}$, our goal is to find a vector $\\boldsymbol{x}$ so that $A \\boldsymbol{x} = \\boldsymbol{b}$. Using VQLS, we can obtain a quantum state that is proportional to $\\boldsymbol{x}$, i.e. a normalised vector $|x\\rangle = \\frac{\\boldsymbol{x}}{\\lVert \\boldsymbol{x} \\rVert_2}$."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model Principle\n",
"\n",
"Solving linear equations in the quantum setting is different to the general setting due to the requirement in quantum computing that we can only apply unitary operators to a quantum state. For the input matrix $A$, we need to decompose it to a linear combination of unitary operators $A = \\sum_n c_n A_n$ where each $A_n$ is a unitary operator. For the input vector $\\boldsymbol{b}$, we need to assume that it's a quantum state that can be prepared by unitary operator $U$, i.e. $U|0\\rangle = |b\\rangle$.\n",
"\n",
"![VQLS](vqls.png)\n",
"\n",
"We can see that the algorithm consists of two parts. On a quantum computer, we prepare a parameterized quantum circuit (PQC) $V(\\alpha)$ and compute the loss function $C(\\alpha)$, then on a classical computer, we minimize parameters $\\alpha$ until the loss function is below a certain threshold, denoted as $\\gamma$. At the end, we output the target quantum state $|x\\rangle$. The main idea behind the algorithm is that PQC $V(\\alpha)$ gives us a quantum state $|\\psi(\\alpha)\\rangle$, circuit $F(A)$ then computes how similar $A|\\psi(\\alpha)\\rangle$ and $|b\\rangle$ are, which is what the loss function $C(\\alpha)$ is measuring. When the loss is small, $A|\\psi(\\alpha)\\rangle$ and $|b\\rangle$ are very close, it means $|\\psi(\\alpha)\\rangle$ and the target $|x\\rangle$ are very close, so we output $|\\psi(\\alpha)\\rangle$ as an approximation to $|x\\rangle$."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Paddle Quantum Implementation\n",
"\n",
"We use the `Circuit` class in Paddle Quantum and optimizer in Paddle Paddle to implement VQLS. For the quantum part, we use built-in `complex_entangled_layer` ansatz to build our PQC $V(\\alpha)$. To compute the loss function, we use Hadamard Test and Hadamard-Overlap Test which utilizes `oracle` gate to implement the controlled-$A_n$ gates. For the classical optimization part, we used Adam optimizer to minimize the loss function.\n",
"\n",
"User can use toml file to specify the input to the algorithm, matrix $A$ and vector $\\boldsymbol{b}$, stored as '.npy' files. You can run the following code to randomly generate a $n\\times n$ matrix $A$ and vector $\\boldsymbol{b}$:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Here is a randomly generated A:\n",
"[[4.1702199e+00+7.203245j 1.1437482e-03+3.0233257j\n",
" 1.4675589e+00+0.9233859j 1.8626021e+00+3.4556072j\n",
" 3.9676747e+00+5.3881674j ]\n",
" [2.0445225e+00+8.781175j 2.7387592e-01+6.704675j\n",
" 4.1730480e+00+5.5868983j 1.4038694e+00+1.9810148j\n",
" 8.0074453e+00+9.682616j ]\n",
" [8.7638912e+00+8.946067j 8.5044211e-01+0.39054784j\n",
" 1.6983042e+00+8.781425j 9.8346835e-01+4.2110763j\n",
" 9.5788956e+00+5.3316526j ]\n",
" [6.8650093e+00+8.346256j 1.8288277e-01+7.5014434j\n",
" 9.8886108e+00+7.4816566j 2.8044400e+00+7.892793j\n",
" 1.0322601e+00+4.4789352j ]\n",
" [2.8777535e+00+1.3002857j 1.9366957e-01+6.7883554j\n",
" 2.1162813e+00+2.6554666j 4.9157314e+00+0.5336254j\n",
" 5.7411761e+00+1.4672858j ]]\n",
"Here is a randomly generated b:\n",
"[4.191945 +6.852195j 3.1342418+6.9232264j 6.9187713+3.1551564j\n",
" 9.085955 +2.9361415j 5.8930554+6.9975834j]\n"
]
}
],
"source": [
"n = 5\n",
"\n",
"import numpy as np\n",
"\n",
"\n",
"np.random.seed(1)\n",
"A = np.zeros([n, n], dtype=\"complex64\")\n",
"b = np.zeros(n, dtype=\"complex64\")\n",
"for i in range(n):\n",
" for j in range(n):\n",
" x = np.random.rand() * 10\n",
" y = np.random.rand() * 10\n",
" A[i][j] = complex(x, y)\n",
" x = np.random.rand() * 10\n",
" y = np.random.rand() * 10\n",
" b[i] = complex(x, y)\n",
"np.save(\"./A.npy\", A)\n",
"np.save(\"./b.npy\", b)\n",
"print(\"Here is a randomly generated A:\")\n",
"print(A)\n",
"print(\"Here is a randomly generated b:\")\n",
"print(b)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"User can specify the parameters of the VQLS in the toml file. They are `depth`, `iterations`, `LR` and `gamma`, which correspond to the number of layer in the PQC $V(\\alpha)$, number of iterations of the optimizer, learning rate of the optimizer and threshold of the loss function to end optimization early. By entering `python vqls.py --config config.toml` one could solve the linear system. Here we present an example of an online demo. First, define the content of the configuration file as follows, user can try out different settings by changing the parameters of `test_toml`:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"test_toml = r\"\"\"\n",
"# The path of the input matrix A. It should be a .npy file.\n",
"A_dir = './A.npy'\n",
"# The path of the input vector b. It should be a .npy file.\n",
"b_dir = './b.npy'\n",
"# The depth of the quantum ansatz circuit.\n",
"depth = 4\n",
"# Number optimization cycles.\n",
"iterations = 200\n",
"# The learning rate of the optimizer.\n",
"LR = 0.1\n",
"# Threshold for loss value to end optimization early, default is 0.\n",
"gamma = 0\n",
"\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we run the VQLS:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
" 88%|████████▊ | 176/200 [02:03<00:16, 1.43it/s]"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Threshold value gamma reached, ending optimization\n",
"Here is x that solves Ax=b: [ 1.3475237 -0.7860472j 0.22970617-0.88826376j -0.35111237-0.31225887j\n",
" 0.07606918+1.2138402j -0.729564 +0.48393282j]\n",
"This is actual b: [4.191945 +6.852195j 3.1342418+6.9232264j 6.9187713+3.1551564j\n",
" 9.085955 +2.9361415j 5.8930554+6.9975834j]\n",
"This is Ax using estimated x: [4.185339 +6.8523855j 3.1297188+6.923625j 6.924285 +3.1467872j\n",
" 9.092921 +2.932943j 5.8879805+6.999589j ]\n",
"Relative error: 0.0008446976\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"import argparse\n",
"import os\n",
"import warnings\n",
"\n",
"warnings.filterwarnings(\"ignore\")\n",
"os.environ[\"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION\"] = \"python\"\n",
"\n",
"import toml\n",
"import numpy as np\n",
"import paddle\n",
"from paddle_quantum.data_analysis.vqls import compute\n",
"\n",
"paddle.seed(0)\n",
"\n",
"if __name__ == \"__main__\":\n",
" config = toml.loads(test_toml)\n",
" A_dir = config.pop(\"A_dir\")\n",
" A = np.load(A_dir)\n",
" b_dir = config.pop(\"b_dir\")\n",
" b = np.load(b_dir)\n",
" result = compute(A, b, **config)\n",
"\n",
" print(\"Here is x that solves Ax=b:\", result)\n",
" print(\"This is actual b:\", b)\n",
" print(\"This is Ax using estimated x:\", np.matmul(A, result))\n",
" relative_error = np.linalg.norm(b - np.matmul(A, result)) / np.linalg.norm(b)\n",
" print(\"Relative error: \", relative_error)\n",
" np.save(\"./answer.npy\", result)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Citation\n",
"\n",
"```\n",
"@misc{bravo-prieto2020variational,\n",
" title = {Variational {{Quantum Linear Solver}}},\n",
" author = {{Bravo-Prieto}, Carlos and LaRose, Ryan and Cerezo, M. and Subasi, Yigit and Cincio, Lukasz and Coles, Patrick J.},\n",
" year = {2020},\n",
" month = jun,\n",
" number = {arXiv:1909.05820},\n",
" eprint = {1909.05820},\n",
" eprinttype = {arxiv},\n",
" doi = {10.48550/arXiv.1909.05820}\n",
"}\n",
"```\n",
"\n",
"## References\n",
"\n",
"[1] “Variational Quantum Linear Solver: A Hybrid Algorithm for Linear Systems.” Carlos Bravo-Prieto, Ryan LaRose, Marco Cerezo, Yigit Subasi, Lukasz Cincio, Patrick J. Coles. arXiv:1909.05820, 2019."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "pq-dev",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.15 (default, Nov 10 2022, 13:17:42) \n[Clang 14.0.6 ]"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "5fea01cac43c34394d065c23bb8c1e536fdb97a765a18633fd0c4eb359001810"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
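Since VQLS returns a vector that should (approximately) solve $Ax = b$, a simple classical cross-check is to compare it with `numpy.linalg.solve`. The sketch below is our own addition and assumes the `A.npy`, `b.npy` and `answer.npy` files produced by the notebook cells above are present in the working directory.

```python
# Sketch: cross-check the VQLS result against the classical solution.
# Assumes A.npy, b.npy and answer.npy were produced by the notebook cells above.
import numpy as np

A = np.load("./A.npy")
b = np.load("./b.npy")
x_vqls = np.load("./answer.npy")
x_exact = np.linalg.solve(A, b)

# Compare the directions of the two solutions (VQLS guarantees proportionality).
u = x_vqls / np.linalg.norm(x_vqls)
v = x_exact / np.linalg.norm(x_exact)
print("overlap |<u, v>|:", np.abs(np.vdot(u, v)))   # close to 1 if the directions agree
print("relative error:", np.linalg.norm(A @ x_vqls - b) / np.linalg.norm(b))
```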
#!/usr/bin/env python3
# Copyright (c) 2020 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
Variational Quantum Linear Solver
"""
import argparse
import os
import warnings
warnings.filterwarnings('ignore')
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'
import toml
import logging
import numpy as np
from paddle_quantum.data_analysis.vqls import compute
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Solve system of linear equations.")
    parser.add_argument("--config", type=str, help="Input the config file with toml format.")
    args = parser.parse_args()
    config = toml.load(args.config)
    A_dir = config.pop('A_dir')
    A = np.load(A_dir)
    b_dir = config.pop('b_dir')
    b = np.load(b_dir)
    result = compute(A, b, **config)
    print('Here is x that solves Ax=b:', result)
    relative_error = np.linalg.norm(b - np.matmul(A, result)) / np.linalg.norm(b)
    print('Relative error: ', relative_error)
    logging.basicConfig(
        filename='./linear_solver.log',
        filemode='w',
        format='%(asctime)s %(levelname)s %(message)s',
        level=logging.INFO
    )
    msg = f"Relative error: {relative_error}"
    logging.info(msg)
    np.save('./answer.npy', result)
\ No newline at end of file
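The notebooks above note that VQLS requires $A$ to be decomposed into a linear combination of unitaries, $A = \sum_n c_n A_n$. For a $2^k \times 2^k$ matrix, one convenient choice is the Pauli basis with coefficients $c_P = \mathrm{Tr}(P^\dagger A)/2^k$. The sketch below illustrates this for a $4\times 4$ matrix; it is our own illustration and not necessarily the decomposition used internally by `paddle_quantum.data_analysis.vqls`.

```python
# Sketch: decompose a 4x4 matrix into the Pauli basis, A = sum_P c_P * P,
# with c_P = Tr(P^dagger A) / 4.  Every term P is unitary, so this is one
# possible linear-combination-of-unitaries decomposition for VQLS.
import itertools
import numpy as np

paulis = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

rng = np.random.default_rng(0)
A = rng.random((4, 4)) + 1j * rng.random((4, 4))   # generic 2-qubit matrix

coeffs = {}
for n1, n2 in itertools.product(paulis, repeat=2):
    P = np.kron(paulis[n1], paulis[n2])
    coeffs[n1 + n2] = np.trace(P.conj().T @ A) / 4

# Rebuild A from the 16 coefficients to confirm the decomposition is exact.
A_rebuilt = sum(c * np.kron(paulis[k[0]], paulis[k[1]]) for k, c in coeffs.items())
print(np.allclose(A, A_rebuilt))  # True
```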
# A description of the task of this configuration file; this is optional. "GroundState" means calculating the ground state energy of the molecule.
task = 'GroundState'
# This field stores information related to the molecule.
[molecule]
# Symbols of atoms inside the molecule.
symbols = ['H', 'H']
# The cartesian coordinates of each atom inside the molecule.
coords = [ [ 0.0, 0.0, 0.0 ], [ 0.0, 0.0, 0.7 ] ]
# The quantum chemistry basis set used in the computation. See https://baike.baidu.com/item/%E5%9F%BA%E7%BB%84/6445527?fr=aladdin for more information about basis sets. Default is "sto-3g".
basis = 'sto-3g'
# Which unit system is used in the `coords` provided above.
# If set to `true` will use Angstrom.
# If set to `false` will use Bohr.
use_angstrom = true
# This field specifies the configuration of the classical quantum chemistry driver used to calculate the molecular integrals. NOTE: the classical quantum chemistry package needs to be preinstalled.
[driver]
# If set to `pyscf`, PySCF is used (currently only the `pyscf` driver is supported; more classical drivers will be added in the future).
name = 'pyscf'
# This field specifies the configuration of the quantum circuit (ansatz) used in VQE.
# NOTE: currently only the HardwareEfficient ansatz is supported; more ansatzes will come later!
[ansatz.HardwareEfficient]
# The depth of the HardwareEfficient ansatz. NOTE: on a personal laptop, we suggest keeping the circuit depth no larger than 10.
depth = 2
# This field stores configurations of the variational quantum eigensolver (VQE) method.
[VQE]
# Number of optimization cycles, default is 100.
num_iterations = 100
# The convergence criteria for the VQE optimization, default is 1e-5.
tol = 1e-5
# The number of optimization steps after which we record the loss value.
save_every = 10
# This field specifies the optimizer used in the VQE method; default is `Adam`. See https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/optimizer/Overview_cn.html for the available optimizers.
[optimizer.Adam]
# The learning rate of the optimizer; see https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/optimizer/Adam_cn.html for more details. Default is 0.4.
learning_rate = 0.4
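For reference, the workflow that `main.py` below drives from this configuration file can also be written directly in Python. The following condensed sketch mirrors the calls made in that script for the H2 settings above; argument order and names follow that script, so treat it as an illustration rather than an additional supported entry point.

```python
# Condensed sketch of the ground-state workflow configured above, mirroring
# the calls made in main.py below (illustrative, not a separate entry point).
import paddle.optimizer as optim
from paddle_quantum.qchem import (
    Molecule, PySCFDriver, GroundStateSolver, HardwareEfficient, energy, dipole_moment,
)

geometry = [("H", [0.0, 0.0, 0.0]), ("H", [0.0, 0.0, 0.7])]            # [molecule]
mol = Molecule(geometry, "sto-3g", None, None, use_angstrom=True, driver=PySCFDriver())
mol.build()

ansatz = HardwareEfficient(mol.num_qubits, depth=2)                    # [ansatz.HardwareEfficient]
solver = GroundStateSolver(optim.Adam, num_iterations=100, tol=1e-5, save_every=10)  # [VQE]
_, psi = solver.solve(mol, ansatz, learning_rate=0.4)                  # [optimizer.Adam]

print("ground state energy:", energy(psi, mol))
print("dipole moment:", dipole_moment(psi, mol))
```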
#!/usr/bin/env python3
# Copyright (c) 2020 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Dict
import time
import logging
import argparse
import toml
import paddle.optimizer as optim
from paddle_quantum import qchem
from paddle_quantum.qchem import Molecule
from paddle_quantum.qchem import PySCFDriver
from paddle_quantum.qchem import GroundStateSolver
from paddle_quantum.qchem import energy, dipole_moment
#BUG: basicConfig changed in python3.7
logging.basicConfig(filename="log", filemode="w", format="%(message)s", level=logging.INFO)
def main(args):
    time_start = time.strftime("%Y%m%d-%H:%M:%S", time.localtime())
    logging.info(f"Job start at {time_start:s}")
    parsed_configs: Dict = toml.load(args.config)

    # create molecule
    atom_symbols = parsed_configs["molecule"]["symbols"]
    basis = parsed_configs["molecule"].get("basis", "sto-3g")
    multiplicity = parsed_configs["molecule"].get("multiplicity")
    charge = parsed_configs["molecule"].get("charge")
    use_angstrom = parsed_configs["molecule"].get("use_angstrom", True)
    if parsed_configs.get("driver") is None or parsed_configs["driver"]["name"] == "pyscf":
        driver = PySCFDriver()
    else:
        raise NotImplementedError("Drivers other than PySCFDriver are not implemented yet.")
    if isinstance(atom_symbols, str):
        raise NotImplementedError("`load_geometry` function is not implemented yet.")
    elif isinstance(atom_symbols, list):
        atom_coords = parsed_configs["molecule"]["coords"]
        geometry = list(zip(atom_symbols, atom_coords))
        mol = Molecule(geometry, basis, multiplicity, charge, use_angstrom=use_angstrom, driver=driver)
    else:
        raise ValueError("Symbols can only be string or list, e.g. 'LiH' or ['H', 'Li']")
    mol.build()

    # create ansatz
    num_qubits = mol.num_qubits
    ansatz_settings = parsed_configs["ansatz"]
    ansatz_name = list(ansatz_settings.keys())[0]
    ansatz_class = getattr(qchem, ansatz_name)
    ansatz = ansatz_class(num_qubits, **ansatz_settings[ansatz_name])

    # load optimizer
    if parsed_configs.get("optimizer") is None:
        optimizer_name = "Adam"
        optimizer_settings = {
            "Adam": {
                "learning_rate": 0.4
            }
        }
        optimizer = optim.Adam
    else:
        optimizer_settings = parsed_configs["optimizer"]
        optimizer_name = list(optimizer_settings.keys())[0]
        optimizer = getattr(optim, optimizer_name)

    # calculate properties
    if parsed_configs.get("VQE") is None:
        vqe_settings = {
            "num_iterations": 100,
            "tol": 1e-5,
            "save_every": 10
        }
    else:
        vqe_settings = parsed_configs["VQE"]
    solver = GroundStateSolver(optimizer, **vqe_settings)
    _, psi = solver.solve(mol, ansatz, **optimizer_settings[optimizer_name])
    e = energy(psi, mol)
    d = dipole_moment(psi, mol)
    logging.info("\n#######################################\nSummary\n#######################################")
    logging.info(f"Ground state energy={e:.5f}")
    logging.info(f"dipole moment=({d[0]:.5f}, {d[1]:.5f}, {d[2]:.5f}).")
    time_stop = time.strftime("%Y%m%d-%H:%M:%S", time.localtime())
    logging.info(f"\nJob end at {time_stop:s}\n")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Quantum chemistry task with paddle quantum.")
    parser.add_argument("--config", type=str, help="Input the config file with toml format.")
    main(parser.parse_args())
[molecule]
symbols = ['H', 'H']
coords = [ [ 0.0, 0.0, 0.0 ], [ 0.0, 0.0, 0.74 ] ]
# NOTE: currently only the HardwareEfficient ansatz is supported; more ansatzes will come later!
# NOTE: on a personal laptop, we suggest keeping the circuit depth no larger than 10.
[ansatz.HardwareEfficient]
depth = 2
task = 'test'
image_path = 'pneumoniamnist'
num_samples = 10
model_path = 'qnnmic.pdparams'
num_qubits = [8, 8]
num_depths = [2, 2]
observables = [['Z0', 'Z1', 'Z2', 'Z3'], ['X0', 'X1', 'X2', 'X3']]
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 医学图像分类简介\n",
"\n",
"*Copyright (c) 2022 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
"\n",
"医学图像分类(Medical image classification)是计算机辅助诊断系统的关键技术。医学图像分类问题主要是如何从图像中提取特征并进行分类,从而识别和了解人体的哪些部位受到特定疾病的影响。在这里我们主要使用量子神经网络对公开数据集 MedMNIST 中的胸腔数据进行分类。其中数据形式如下图所示:\n",
"\n",
"<img src=\"./med_image_example.png\" width=\"20%\" height=\"20%\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 使用 QNNMIC 模型进行医学图像分类\n",
"\n",
"### QNNMIC 模型简介\n",
"QNNMIC 模型是一个可以用于医学图像分类的量子机器学习模型(Quantum Machine Learning,QML)。我们具体称其为一种量子神经网络 (Quantum Neural Network, QNN),它结合了参数化量子电路(Parameterized Quantum Circuit, PQC)和经典神经网络。对于医学图像数据,QNNMIC 可以达到 85% 以上的分类准确率。模型主要分为量子和经典两部分,结构图如下:\n",
"\n",
"<img src=\"./qnnmic_model_cn.png\" width=\"60%\" height=\"60%\"/>\n",
"\n",
"\n",
"注:\n",
"- 通常我们使用主成分分析将图片数据进行降维处理,使其更容易通过编码电路将经典数据编码为量子态。\n",
"- 参数化电路的作用是特征提取,其电路参数可以在训练中调整。\n",
"- 量子测量由一组测量算子表示,是将量子态转化为经典数据的过程,我们可以对得到的经典数据做进一步处理。\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 如何使用\n",
"\n",
"### 使用模型进行预测\n",
"\n",
"这里,我们已经给出了一个训练好的模型,可以直接用于医学图片的预测。只需要在 `test.toml` 这个配置文件中进行对应的配置,然后输入命令 `python qnn_medical_image.py --config test.toml` 即可使用训练好的医学图片分类模型对输入的图片进行测试。\n",
"\n",
"### 在线演示\n",
"\n",
"这里,我们给出一个在线演示的版本,可以在线进行测试。首先定义配置文件的内容对测试集中图片进行预测:\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import toml\n",
"\n",
"test_toml = r\"\"\"\n",
"# 模型的整体配置文件。\n",
"# 图片的文件路径。\n",
"image_path = 'pneumoniamnist'\n",
"\n",
"# 训练集中的数据个数,默认值为 -1 即使用全部数据。\n",
"num_samples = 20\n",
"\n",
"# 训练好的模型参数文件的文件路径。\n",
"model_path = 'qnnmic.pdparams'\n",
"\n",
"# 量子电路所包含的量子比特的数量。\n",
"num_qubits = [8, 8]\n",
"\n",
"# 每一层量子电路中的电路深度。\n",
"num_depths = [2, 2]\n",
"\n",
"# 量子电路中可观测量的设置。\n",
"observables = [['Z0', 'Z1', 'Z2', 'Z3'], ['X0', 'X1', 'X2', 'X3']]\n",
"\"\"\"\n",
"\n",
"config = toml.loads(test_toml)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"图片的预测结果分别为 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0\n",
"图片的实际标签分别为 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0\n"
]
}
],
"source": [
"from paddle_quantum.qml.qnnmic import inference\n",
"\n",
"prediction, prob, label = inference(**config)\n",
"print(f\"图片的预测结果分别为 {str(prediction)[1:-1]}\")\n",
"print(f\"图片的实际标签分别为 {str(label)[1:-1]}\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"其中标签 0 代表肺部异常,标签 1 代表正常。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"在 `test_toml` 配置文件中:\n",
"- `model_path`: 为训练好的模型,这里固定为 `qnnmic.pdparams`;\n",
"- `num_qubits`、`num_depths`、`observables` 三个参数应与训练好的模型 ``qnnmic.pdparams`` 相匹配。`num_qubits = [8,8]` 表示量子部分一共两层电路;每层电路为 8 的量子比特;`num_depths = [2,2]` 表示每层参数化电路深度为 2;`observables` 表示每层测量算子的具体形式。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"对于数据集中的某张肺部异常的图片:\n",
"\n",
"<img src=\"./medical_image/image_label_0.png\" width=\"20%\" height=\"20%\"/>"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"对于上述输入的图片,模型有 98.30% 的置信度检测出肺部异常。\n"
]
}
],
"source": [
"# 使用模型进行预测并得到对应概率值\n",
"msg = f'对于上述输入的图片,模型有 {prob[10][1]:3.2%} 的置信度检测出肺部异常。'\n",
"print(msg)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 注意事项\n",
"\n",
"我们通常考虑调整 `num_qubits`,`num_depths`,`observables` 三个主要超参数,对模型的影响较大。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 引用信息\n",
"\n",
"```\n",
"@article{medmnistv2,\n",
" title={MedMNIST v2: A Large-Scale Lightweight Benchmark for 2D and 3D Biomedical Image Classification},\n",
" author={Yang, Jiancheng and Shi, Rui and Wei, Donglai and Liu, Zequan and Zhao, Lin and Ke, Bilian and Pfister, Hanspeter and Ni, Bingbing},\n",
" journal={arXiv preprint arXiv:2110.14795},\n",
" year={2021}\n",
"}\n",
"```\n",
"\n",
"## 参考文献\n",
"\\[1\\] Yang, Jiancheng, et al. \"Medmnist v2: A large-scale lightweight benchmark for 2d and 3d biomedical image classification.\" arXiv preprint arXiv:2110.14795 (2021)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "modellib",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.15 (default, Nov 24 2022, 18:44:54) [MSC v.1916 64 bit (AMD64)]"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "dfa0523b1e359b8fd3ea126fa0459d0c86d49956d91b464930b80cba21582eac"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Medical image classification\n",
"\n",
"*Copyright (c) 2022 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
"\n",
"Medical image classification is the key technology of computer-aided diagnosis systems. The problem of medical image classification is how to extract features from images and classify them, so as to identify and understand which parts of the human body are affected by specific diseases. Here, we mainly use a quantum neural network to classify the chest data in the open data set MedMNIST, which has the following form\n",
"\n",
"<img src=\"./med_image_example.png\" width=\"20%\" height=\"20%\"/>\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## QNNMIC model for medical image classification\n",
"\n",
"### Introduction of QNNMIC\n",
"QNNMIC model is a quantum machine learning (QML) model that can be used for medical image classification. We specifically call it a quantum neural network (QNN), which combines parameterized quantum circuit (PQC) and a classical neural network. For medical image data, QNNMIC can achieve more than 85% classification accuracy. The model is mainly divided into quantum and classical parts. The structure diagram is as follows:\n",
"\n",
"<img src=\"./qnnmic_model_en.png\" width=\"60%\" height=\"60%\"/>\n",
"\n",
"\n",
"Remarks:\n",
"- In general, we use principal component analysis (PCA) to reduce the dimension of the image data, making it easier to encode classical data into quantum states through coding circuits.\n",
"- The parameterized circuit is used for feature extraction, and its circuit parameters can be adjusted during training.\n",
"- Quantum measurement, represented by a set of measurement operators, is the process of converting quantum states into classical data, which can be further processed.\n"
]
},
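{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough illustration of the pipeline above, here is a minimal NumPy sketch (for intuition only; it is not the implementation inside `paddle_quantum.qml.qnnmic`, and the feature dimension and parameter values below are made-up assumptions): PCA features are angle-encoded onto individual qubits, a layer of trainable rotations is applied, and the Pauli-$Z$ expectation value of each qubit is measured to produce classical features for a downstream classifier.\n",
"\n",
"```python\n",
"# Toy sketch: PCA features -> angle encoding -> rotation layer -> Z expectation values\n",
"import numpy as np\n",
"\n",
"Z = np.diag([1.0, -1.0])\n",
"\n",
"def ry(t):  # single-qubit RY rotation matrix\n",
"    return np.array([[np.cos(t / 2), -np.sin(t / 2)],\n",
"                     [np.sin(t / 2),  np.cos(t / 2)]])\n",
"\n",
"def encode_and_measure(features, thetas):\n",
"    # one feature per qubit; the state stays a product state, so qubits factorize\n",
"    outputs = []\n",
"    for x, theta in zip(features, thetas):\n",
"        state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])  # |0> after encoding and rotation\n",
"        outputs.append(state @ Z @ state)                 # expectation value of Z\n",
"    return np.array(outputs)\n",
"\n",
"rng = np.random.default_rng(0)\n",
"pca_features = rng.uniform(0, np.pi, 8)   # stand-in for 8 PCA components\n",
"trainable = rng.uniform(0, np.pi, 8)      # stand-in for circuit parameters\n",
"print(encode_and_measure(pca_features, trainable))\n",
"```"
]
},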
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Quick start\n",
"\n",
"### Use the model to make predictions\n",
"\n",
"Here, we have given a trained model saved in the format `qnnmic.pdparams` which can be directly used to distinguish medical images. One only needs to do the corresponding configuration in this file `test.toml`, and enter the command `python qnn_medical_image.py --config test.toml` to predict the input images.\n",
"\n",
"### Online Test\n",
"\n",
"The following shows how to configure the test file `test_toml` to make medical image prediction."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"import toml\n",
"\n",
"test_toml = r\"\"\"\n",
"# The config for testing the QNNMIC model.\n",
"\n",
"# The path of the input image.\n",
"image_path = 'pneumoniamnist'\n",
"\n",
"# The number of data in the test dataset.\n",
"# The value defaults to -1 which means using all data.\n",
"num_samples = 20\n",
"\n",
"# The path of the trained model, which will be loaded.\n",
"model_path = 'qnnmic.pdparams'\n",
"\n",
"# The number of qubits of the quantum circuit in each layer.\n",
"num_qubits = [8, 8]\n",
"\n",
"# The depth of the quantum circuit in each layer.\n",
"num_depths = [2, 2]\n",
"\n",
"# The observables of the quantum circuit in each layer.\n",
"observables = [['Z0','Z1','Z2','Z3'], ['X0','X1','X2','X3']]\n",
"\"\"\"\n",
"\n",
"config = toml.loads(test_toml)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The prediction results of the input images are 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0 respectively.\n",
"The labels of the input images are 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0 respectively.\n"
]
}
],
"source": [
"from paddle_quantum.qml.qnnmic import inference\n",
"\n",
"prediction, prob, label = inference(**config)\n",
"print(f\"The prediction results of the input images are {str(prediction)[1:-1]} respectively.\")\n",
"print(f\"The labels of the input images are {str(label)[1:-1]} respectively.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here label 0 means abnormal lungs, and label 1 means normal lungs."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In `test_toml` :\n",
"- `model_path`: this is the trained model, here we set it as `qnnmic.pdparams`.\n",
"- `num_qubits`, `num_depths`, `observables` these parameters correspond to the model ``qnnmic.pdparams``, `num_qubits = [8,8]` represents the quantum part of a total of two layers of circuit, each layer of the circuit has 8 qubits; `num_depths = [2,2]` represents the depth of parameterized circuit of each layer is 2;`observables` is the specific form of the measurement operator at each layer.\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"For an abnormal image from the dataset\n",
"\n",
"<img src=\"./medical_image/image_label_0.png\" width=\"20%\" height=\"20%\"/>\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"For this input image, the model can detect the abnormality of the lung with 98.30% confidence.\n"
]
}
],
"source": [
"# Use the model to make predictions and get the corresponding probability\n",
"msg = f'For this input image, the model can detect the abnormality of the lung with {prob[10][1]:3.2%} confidence.'\n",
"print(msg)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Remarks\n",
"\n",
"- We usually consider adjusting three hyperparameters,`num_qubits`, `num_depths` and `observables`, which have a greater impact on the model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reference information\n",
"\n",
"```\n",
"@article{medmnistv2,\n",
" title={MedMNIST v2: A Large-Scale Lightweight Benchmark for 2D and 3D Biomedical Image Classification},\n",
" author={Yang, Jiancheng and Shi, Rui and Wei, Donglai and Liu, Zequan and Zhao, Lin and Ke, Bilian and Pfister, Hanspeter and Ni, Bingbing},\n",
" journal={arXiv preprint arXiv:2110.14795},\n",
" year={2021}\n",
"}\n",
"```\n",
"\n",
"## Reference\n",
"\n",
"\\[1\\] Yang, Jiancheng, et al. \"Medmnist v2: A large-scale lightweight benchmark for 2d and 3d biomedical image classification.\" arXiv preprint arXiv:2110.14795 (2021)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "modellib",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.15 (default, Nov 24 2022, 18:44:54) [MSC v.1916 64 bit (AMD64)]"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "dfa0523b1e359b8fd3ea126fa0459d0c86d49956d91b464930b80cba21582eac"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from PIL import Image\n",
"image = np.load('pneumoniamnist.npz')\n",
"x_train = image['train_images']\n",
"x_label = image['train_labels']\n",
"\n",
"im1 = Image.fromarray(x_train[28])\n",
"im0 = Image.fromarray(x_train[-20])\n",
"\n",
"im1.save('image_label_1.png')\n",
"im0.save('image_label_0.png')"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "modellib",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.15"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "dfa0523b1e359b8fd3ea126fa0459d0c86d49956d91b464930b80cba21582eac"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
# !/usr/bin/env python3
# Copyright (c) 2020 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import warnings
warnings.filterwarnings('ignore')
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'
import toml
from paddle_quantum.qml.qnnmic import train, inference
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Classify medical images with the QNNMIC model.")
parser.add_argument("--config", type=str, help="Input the config file with toml format.")
args = parser.parse_args()
config = toml.load(args.config)
task = config.pop('task')
if task == 'train':
train(**config)
elif task == 'test':
prediction, prob, label = inference(**config)
print(f"The prediction results of the input pictures are {str(prediction)[1:-1]} respectively.")
else:
raise ValueError("Unknown task, it can be train or test.")
# The config for testing the QNNMIC model.
# The task of this config. Available values: 'train' | 'test'.
task = 'test'
# The path of the input image.
image_path = 'pneumoniamnist'
# The number of the data in the test dataset.
# The value defaults to -1 which means using all data.
num_samples = 10
# The path of the trained model, which will be loaded.
model_path = 'qnnmic.pdparams'
# The number of qubits of quantum circuit in each layer.
num_qubits = [8, 8]
# The depth of quantum circuit in each layer.
num_depths = [2, 2]
# The observables of quantum circuit in each layer.
observables = [['Z0', 'Z1', 'Z2', 'Z3'], ['X0', 'X1', 'X2', 'X3']]
\ No newline at end of file
# The config for training the QNNMIC model.
# The task of this config. Available values: 'train' | 'test'.
task = 'train'
# The name of the model, which is used to save the model.
model_name = 'qnnmic'
# The path to save the model. Both relative and absolute paths are allowed.
# It saves the model to the current path by default.
# saved_path = './'
# The number of qubits of the quantum circuit in each layer.
num_qubits = [8, 8]
# The depth of the quantum circuit in each layer.
num_depths = [2, 2]
# The observables of the quantum circuit in each layer.
observables = [['Z0', 'Z1', 'Z2', 'Z3'], ['X0', 'X1', 'X2', 'X3']]
# The size of the batch samplers.
batch_size = 40
# The number of epochs to train the model.
num_epochs = 20
# The learning rate used to update the parameters, defaults to 0.01.
learning_rate = 0.1
# The path of the dataset. It defaults to breastmnist.
dataset = 'pneumoniamnist'
# The path used to save logs. Defaults to ``./``.
saved_dir = './'
# Whether to use a validation dataset.
# If true, the dataset contains training, validation, and test datasets.
# If false, the dataset only contains training and test datasets.
using_validation = true
# The number of the data in the training dataset.
# The value defaults to -1 which means using all data.
num_train = 500
# The number of the data in the validation dataset.
# The value defaults to -1 which means using all data.
num_val = -1
# The number of the data in the test dataset.
# The value defaults to -1 which means using all data.
num_test = -1
# Number of epochs with no improvement after which training will be stopped.
early_stopping = 1000
# The number of subprocess to load data, 0 for no subprocess used and loading data in main process, default to 0.
num_workers = 0
# The overall configuration file for the European option pricing model.
# initial price
initial_price = 100
# strike price
strike_price = 110
# risk-free rate
interest_rate = 0.05
# Market volatility
volatility = 0.1
# Option expiration date (in years)
maturity_date = 1
# Estimation accuracy index
degree_of_estimation = 5
# !/usr/bin/env python3
# Copyright (c) 2020 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import warnings
from typing import Dict
warnings.filterwarnings('ignore')
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'
import toml
from paddle_quantum.finance import EuroOptionEstimator
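# Classical reference for the settings above (an assumption-labelled sketch, standard
# library only): it assumes the option being priced is a European call under the
# Black-Scholes model, so the closed-form value can be used to sanity-check the
# quantum estimate, e.g. black_scholes_call_price(100, 110, 0.05, 0.1, 1).
from math import erf, exp, log, sqrt
def black_scholes_call_price(initial_price: float, strike_price: float, interest_rate: float,
                             volatility: float, maturity_date: float) -> float:
    """Closed-form Black-Scholes price of a European call option."""
    def norm_cdf(x: float) -> float:
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))
    sigma_sqrt_t = volatility * sqrt(maturity_date)
    d1 = (log(initial_price / strike_price) + (interest_rate + 0.5 * volatility ** 2) * maturity_date) / sigma_sqrt_t
    d2 = d1 - sigma_sqrt_t
    return initial_price * norm_cdf(d1) - strike_price * exp(-interest_rate * maturity_date) * norm_cdf(d2)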
def main(args):
parsed_configs: Dict = toml.load(args.config)
# input option settings
initial_price = parsed_configs["initial_price"]
strike_price = parsed_configs["strike_price"]
interest_rate = parsed_configs["interest_rate"]
volatility = parsed_configs["volatility"]
maturity_date = parsed_configs["maturity_date"]
degree_of_estimation = parsed_configs["degree_of_estimation"]
estimator = EuroOptionEstimator(initial_price, strike_price,
interest_rate, volatility,
maturity_date, degree_of_estimation)
print("The risk-neutral price of this option is", estimator.estimate())
print("Below is the circuit realization of this quantum solution.")
estimator.plot()
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="European option pricing with paddle quantum.")
parser.add_argument("--config", type=str, help="Input the config file with toml format.")
main(parser.parse_args())
# The configuration file of quantum portfolio optimization problem.
# This field specifies whether to use random data for stocks, or stock data provided by the customer.
# stock = 'random'
# stock = 'custom'
stock = 'demo'
# Get stock data from demo file or custom file
demo_data_path = 'demo_stock.csv'
custom_data_path = 'file_name.csv'
# Specifies the start_time and end_time of random stock
[random_data]
start_time = [2016, 1, 1]
end_time = [2016, 1, 30]
# This field stores information about the asset, risk, etc.
[stock_para]
# Number of investable projects
num_asset = 7
# The risk factor of investment decision making
risk_weight = 0.5
# The budget
budget = 0
# The penalty
penalty = 0
# This field stores parameters for the parametric quantum circuits
[train_para]
# The depth of the quantum circuits
circuit_depth = 2
# Number of optimization cycles, default is 100.
iterations = 600
# The learning rate of gradient descent, default is 0.4.
learning_rate = 0.2
closePrice0,closePrice1,closePrice2,closePrice3,closePrice4,closePrice5,closePrice6,closePrice7,closePrice8,closePrice9,closePrice10,closePrice11
16.87,32.56,5.4,3.71,5.72,7.62,3.7,6.94,5.43,3.46,8.22,4.56
17.18,32.05,5.48,3.75,5.75,7.56,3.7,6.89,5.37,3.45,8.14,4.54
17.07,31.51,5.46,3.73,5.74,7.68,3.68,6.91,5.37,3.41,8.1,4.55
17.15,31.76,5.49,3.79,5.81,7.75,3.7,6.97,5.42,3.5,8.16,4.58
16.66,31.68,5.39,3.72,5.69,7.79,3.63,6.85,5.29,3.42,7.93,4.48
16.79,32.2,5.47,3.77,5.79,7.84,3.66,6.94,5.41,3.46,8.02,4.56
16.69,31.46,5.46,3.76,5.77,7.82,3.63,7.01,5.42,3.48,8,4.53
16.99,31.68,5.53,3.74,5.8,7.8,3.63,7.03,5.39,3.47,8.03,4.53
16.76,31.39,5.5,3.78,5.89,7.92,3.66,7.04,5.45,3.48,8.05,4.56
16.52,30.49,5.47,3.71,5.78,7.96,3.63,7.01,5.43,3.45,7.95,4.47
16.33,30.53,5.39,3.61,5.7,7.93,3.6,6.99,5.35,3.4,7.92,4.42
16.39,30.46,5.35,3.58,5.69,7.87,3.59,6.95,5.26,3.41,7.93,4.38
16.45,29.87,5.37,3.61,5.75,7.86,3.63,6.96,5.54,3.42,7.99,4.35
16,29.21,5.24,3.53,5.7,7.82,3.6,6.87,6.09,3.32,7.9,4.32
16.09,30.11,5.26,3.5,5.71,7.9,3.61,6.87,6.7,3.33,8.01,4.34
15.54,28.98,5.08,3.42,5.54,7.7,3.54,6.58,7.37,3.23,7.73,4.13
13.99,26.63,4.57,3.08,4.99,6.93,3.19,5.92,8.11,2.91,6.96,3.72
14.6,27.62,4.44,2.95,4.89,6.91,3.27,5.78,8.1,2.96,7.01,3.51
14.63,27.64,4.5,3.04,4.94,7.18,3.27,5.89,8.91,3.02,7.06,3.61
14.77,27.9,4.56,3.05,5.08,7.31,3.31,5.94,9.8,3.06,7.08,3.88
14.62,27.5,4.52,3.05,5.39,7.35,3.3,5.93,10.78,3.05,7.07,3.87
14.5,28.67,4.59,3.13,5.35,7.53,3.32,6.06,11.86,3.13,7.15,3.9
14.79,29.08,4.66,3.12,5.23,7.47,3.33,6.16,10.67,3.15,7.17,3.91
14.77,29.08,4.67,3.14,5.26,7.48,3.38,6.18,11.36,3.17,7.21,3.95
14.65,29.95,4.66,3.11,5.19,7.35,3.36,6.15,10.56,3.14,7.19,3.94
15.03,30.8,4.72,3.07,5.18,7.33,3.34,6.11,9.56,3.15,7.29,3.96
15.37,30.42,4.84,3.23,5.31,7.46,3.39,6.35,9.15,3.18,7.41,4.04
15.2,29.7,4.81,3.3,5.33,7.47,3.39,6.34,9.11,3.17,7.4,4.06
15.24,29.65,4.84,3.31,5.31,7.39,3.37,6.26,8.89,3.12,7.34,3.99
15.59,29.85,4.88,3.3,5.38,7.47,3.42,6.44,8.36,3.15,7.42,4.04
15.58,29.25,4.89,3.33,5.39,7.48,3.43,6.46,8.68,3.16,7.52,4.03
15.23,28.9,4.82,3.31,5.41,8.06,3.37,6.41,8.77,3.12,7.41,3.97
15.04,29.33,4.74,3.22,5.28,8.02,3.32,6.32,9.65,3.06,7.31,3.9
14.99,30.11,4.84,3.31,5.3,8.01,3.36,6.32,9.11,3.13,7.51,3.9
15.11,29.67,4.79,3.25,5.38,8.11,3.37,6.42,8.41,3.15,7.5,3.88
14.5,29.59,4.63,3.12,5.12,7.87,3.3,6.15,8.4,3.08,7.18,3.76
\ No newline at end of file
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 量子金融投资组合优化简介\n",
"*Copyright (c) 2022 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
"\n",
"假如你是一位资产管理人,想要将数额为$K$的基金一次性投入到$N$个可投资的项目中,各项目都有自己的投资回报率和风险,你的目标就是在考虑到市场影响和交易费用的的基础上找到一个最佳的投资组合,使得该笔资产以最优的投资方案实施。\n",
"\n",
"为了方便建模,我们做如下两点假设:\n",
" 1.每个项目都是等额投资的;\n",
" 2.给定的预算是投资一个项目金额的整数倍,且必须全部花完。\n",
"\n",
"在投资组合的基本理论中,投资组合的总体风险与项目间的协方差有关,而协方差与任意两项目的相关系数成正比。相关系数越小,其协方差就越小,投资组合的总体风险也就越小。在这里我们给出了采用均值方差组合优化的方法下的该问题的建模方程:\n",
"$$\n",
"\\omega=\\max _{x \\in\\{0,1\\}^n} \\mu^T x-q x^T S x \\quad \\text { subject to: } 1^T x=B,\n",
"$$\n",
"该式子中各符号代表的含义如下:\n",
"- $x \\in \\{0, 1\\}^{n}$ 表示一个向量,其中每一个元素均为二进制变量,即如果资产$i$被投资了,则 $x_i$=1,如果没有被选择,则 $x_i=0$;\n",
"- $\\mu \\in \\mathbb{R}^n$ 表示投资每个项目的预期回报率;\n",
"- $S \\in \\mathbb{R}^{n \\times n}$ 表示各投资项目回报率之间的协方差矩阵;\n",
"- $q > 0$ 表示做出该投资决定的风险系数;\n",
"- $B$ 代表投资预算,即我们可以投资的项目数。\n",
"\n",
"让我们对这个方程的含义进行说明。$\\mu^T x$ 刻画 $x$ 代表的投资方案的预期收益。$x^T S x$ 刻画投资项目之间的关联性,乘上风险系数 $q$ 之后,代表该投资方案包含的风险。$1^T x=B$ 要求我们投资的项目数等于我们的预算总数。因此,当我们对所有的投资方案寻找等式右边的最大值,得到的 $\\omega$ 就是我们理论上可以得到的最大收益。\n",
"\n",
"为了方便寻找使收益最大化的投资组合,我们定义如下的损失函数:\n",
"$$\n",
"C_x=q \\sum_i \\sum_j S_{j i} x_i x_j-\\sum_i x_i \\mu_i+A\\left(B-\\sum_i x_i\\right)^2,\n",
"$$\n",
"其中,约束条件以拉格朗日乘子的形式进入方程。于是,我们的任务转化成寻找使损失函数最小的$x$。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 量子编码及求解\n",
"我们通过变换 $x_i \\mapsto \\frac{I-Z_i}{2}$ 将损失函数转为一个哈密顿量,从而完成投资组合优化问题的编码。这里$Z_i=I \\otimes I \\otimes \\ldots \\otimes Z \\otimes \\ldots \\otimes I$,即 $Z_{i}$ 是作用在第$i$ 个比特上的Pauli算符。我们用这个映射将 $C_x$ 转化成量子比特数为 $n$ 的系统的哈密顿矩阵 $H_C$,其基态即为投资组合优化问题的最优解。为了寻找这一哈密顿量的基态,我们使用变分量子算法的思想,通过一个参数化量子线路,生成一个试验态 $|\\theta^* \\rangle$。我们通过量子线路获得哈密顿量在该试验态上的期望值,然后,通过经典的梯度下降算法调节参数化量子线路的参数,使期望值向基态能量靠拢。重复若干次之后,我们找到最优解:\n",
"$$\n",
"|\\theta^* \\rangle = \\argmin_\\theta L(\\vec{\\theta})=\\argmin_\\theta \\left\\langle\\vec{\\theta}\\left|H_C\\right| \\vec{\\theta}\\right\\rangle.\n",
"$$\n",
"最后,我们读出测量结果的概率分布:$p(z)=\\left|\\left\\langle z \\mid \\vec{\\theta}^*\\right\\rangle\\right|^2$,即由量子编码还原出原先比特串的信息。某个比特串出现的概率越大,意味着其是投资组合优化问题最优解的可能性越大。"
]
},
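{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"下面给出一个极简的 NumPy 示意代码,用来直观展示映射 $x_i \\mapsto \\frac{I-Z_i}{2}$(仅作原理演示,并非量桨的内部实现;其中的收益率、协方差与惩罚系数均为随意构造的玩具数据):对一个只有 3 个项目的小规模问题构造对角哈密顿量 $H_C$,并验证其基态对应的比特串与直接枚举 $C_x$ 得到的最优解一致。\n",
"\n",
"```python\n",
"# 玩具示例:通过 x_i -> (I - Z_i)/2 构造 H_C,并与暴力枚举结果对比\n",
"import itertools\n",
"import numpy as np\n",
"\n",
"n, q, A, B = 3, 0.5, 2.0, 2                  # 项目数、风险系数、惩罚系数、预算\n",
"rng = np.random.default_rng(42)\n",
"mu = rng.uniform(0.0, 0.1, n)                # 玩具预期收益率\n",
"S = rng.uniform(0.0, 0.01, (n, n))\n",
"S = (S + S.T) / 2                            # 玩具对称协方差矩阵\n",
"\n",
"def cost(x):\n",
"    x = np.asarray(x)\n",
"    return q * x @ S @ x - mu @ x + A * (B - x.sum()) ** 2\n",
"\n",
"I2, Z = np.eye(2), np.diag([1.0, -1.0])\n",
"def z_on(i):                                  # 作用在第 i 个比特上的 Z_i\n",
"    out = np.array([[1.0]])\n",
"    for k in range(n):\n",
"        out = np.kron(out, Z if k == i else I2)\n",
"    return out\n",
"\n",
"x_ops = [(np.eye(2 ** n) - z_on(i)) / 2 for i in range(n)]   # x_i -> (I - Z_i)/2\n",
"H = q * sum(S[j, i] * x_ops[i] @ x_ops[j] for i in range(n) for j in range(n))\n",
"H = H - sum(mu[i] * x_ops[i] for i in range(n))\n",
"shift = B * np.eye(2 ** n) - sum(x_ops)\n",
"H = H + A * shift @ shift\n",
"\n",
"best = min(itertools.product([0, 1], repeat=n), key=cost)    # 经典暴力枚举\n",
"ground = int(np.argmin(np.diag(H)))                          # H 在计算基下为对角矩阵\n",
"print(best, tuple(int(b) for b in np.binary_repr(ground, width=n)))\n",
"```"
]
},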
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 使用教程\n",
"### 配置文件\n",
"我们给出了一个设置好参数,可以直接进行组合优化计算的配置文件。用户只需在`config.toml`里修改相应的参数,并在终端运行\n",
"`python qpo.py --config config.toml --logger qpo_log.log`,即可计算最优投资组合。\n",
"### 输出结果\n",
"运行结果将输出到文件 `qpo_log.log` 中。我们的优化过程将被记录在日志中。用户可以看到随着循环数的增加,损失大小的变化。最后我们会输出优化得到的方案选择。\n",
"\n",
"### 参数说明\n",
"- `stock`,默认为 `'demo'`,即使用我们提供的样本数据。也可选 `'random'` 或 `'custom'` 来随机生成或使用自定义数据。\n",
"若用户选择随机生成数据,用户可以通过修改 `start_time` 和 `end_time` 参数来选择股票数据的起止日期。对于自定义数据,用户可以使用格式和表头命名规则(即 `csv` 文件的第一行)与 `demo_stock.csv` 文件相同的自定义文件,并在配置文件修改该文件路径:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"custom_data_path = 'file_name.csv'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 在线演示\n",
"这里,我们给出一个在线演示的版本,可以在线进行测试。首先定义配置文件的内容:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"config_toml = r\"\"\"\n",
"# 用于计算金融组合优化问题模型的整体配置文件。\n",
"# 使用样例股票数据\n",
"stock = 'demo' \n",
"demo_data_path = 'demo_stock.csv'\n",
"# 可投资项目的数目\n",
"num_asset = 7\n",
"# 决策风险系数\n",
"risk_weight = 0.5\n",
"# 投资预算\n",
"budget = 0\n",
"# 投资惩罚\n",
"penalty = 0\n",
"# 量子电路深度\n",
"circuit_depth = 2\n",
"# 优化循环次数\n",
"iterations = 600\n",
"# 梯度下降优化的学习速率\n",
"learning_rate = 0.2 \n",
"\"\"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"量桨 PaddleQuantum 的金融模块实现了量子金融优化的数值模拟。我们可以从 ``paddle_quantum.finance.qpo`` 模块里导入 ``portfolio_combination_optimization`` 来解决配置好的金融组合优化问题。"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████| 600/600 [01:15<00:00, 7.93it/s]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"******************* 最优组合为: [2, 5, 6] *******************\n"
]
}
],
"source": [
"import os\n",
"import warnings\n",
"warnings.filterwarnings(\"ignore\")\n",
"os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'\n",
"import pandas as pd\n",
"\n",
"import toml\n",
"from paddle_quantum.finance.qpo import portfolio_combination_optimization\n",
"from paddle_quantum.finance import DataSimulator\n",
"\n",
"config = toml.loads(config_toml)\n",
"demo_data_path = config[\"demo_data_path\"]\n",
"num_asset = config[\"num_asset\"]\n",
"risk_weight = config[\"risk_weight\"]\n",
"budget = config[\"budget\"]\n",
"penalty = config[\"penalty\"]\n",
"circuit_depth = config[\"circuit_depth\"]\n",
"iterations = config[\"iterations\"]\n",
"learning_rate = config[\"learning_rate\"]\n",
"\n",
"stocks_name = [(\"STOCK%s\" % i) for i in range(num_asset)]\n",
"source_data = pd.read_csv(demo_data_path)\n",
"processed_data = [source_data['closePrice'+str(i)].tolist() for i in range(num_asset)]\n",
"data = DataSimulator(stocks_name)\n",
"data.set_data(processed_data)\n",
"\n",
"invest = portfolio_combination_optimization(num_asset, data, iterations, learning_rate, risk_weight, budget,\n",
" penalty, circuit=circuit_depth)\n",
"print(f\"******************* 最优组合为: {invest} *******************\")\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"## 注意事项\n",
"若投资方案数较小(`num_asset`$< 12$),我们可以通过严格对角化哈密顿量来计算真实的损失最小值,并与优化的结果比较。若二者的差别较大,该优化结果不可靠,需要重新选择训练参数。\n",
"## 相关论文以及引用信息\n",
"```\n",
"@article{ORUS2019100028,\n",
"title = {Quantum computing for finance: Overview and prospects},\n",
"journal = {Reviews in Physics},\n",
"volume = {4},\n",
"pages = {100028},\n",
"year = {2019},\n",
"issn = {2405-4283},\n",
"doi = {https://doi.org/10.1016/j.revip.2019.100028},\n",
"url = {https://www.sciencedirect.com/science/article/pii/S2405428318300571},\n",
"author = {Román Orús and Samuel Mugel and Enrique Lizaso}\n",
"}\n",
"\n",
"@ARTICLE{2020arXiv200614510E,\n",
" author = {{Egger}, Daniel J. and {Gambella}, Claudio and {Marecek}, Jakub and {McFaddin}, Scott and {Mevissen}, Martin and {Raymond}, Rudy and {Simonetto}, Andrea and {Woerner}, Stefan and {Yndurain}, Elena},\n",
" title = \"{Quantum Computing for Finance: State of the Art and Future Prospects}\",\n",
" journal = {arXiv e-prints},\n",
" keywords = {Quantum Physics, Quantitative Finance - Statistical Finance},\n",
" year = 2020,\n",
" month = jun,\n",
" eid = {arXiv:2006.14510},\n",
" pages = {arXiv:2006.14510},\n",
"archivePrefix = {arXiv},\n",
" eprint = {2006.14510},\n",
" primaryClass = {quant-ph},\n",
" adsurl = {https://ui.adsabs.harvard.edu/abs/2020arXiv200614510E},\n",
" adsnote = {Provided by the SAO/NASA Astrophysics Data System}\n",
"}\n",
"\n",
"@article{10.2307/2975974,\n",
" ISSN = {00221082, 15406261},\n",
" URL = {http://www.jstor.org/stable/2975974},\n",
" author = {Harry Markowitz},\n",
" journal = {The Journal of Finance},\n",
" number = {1},\n",
" pages = {77--91},\n",
" publisher = {[American Finance Association, Wiley]},\n",
" title = {Portfolio Selection},\n",
" urldate = {2022-12-07},\n",
" volume = {7},\n",
" year = {1952}\n",
"}\n",
"```"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.13 ('pq')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "d3caffbb123012c2d0622db402df9f37d80adc57c1cef1fdb856f61446d88d0a"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction of quantum portfolio optimization\n",
"*Copyright (c) 2022 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
"\n",
"If you are an active investment manager who wants to invest $K$ dollars to $N$ projects, each with its return and risk, your goal is to find an optimal way to invest in the projects, taking into account the market impact and transaction costs.\n",
"\n",
"To make the modeling easy to formulate, two assumptions are made to constrain the problem:\n",
" 1.Each asset is invested with an equal amount of money;\n",
" 2.Budget is a multiple of each investment amount and must be fully spent.\n",
"\n",
"In the theory of portfolio optimization, the overall risk of a portfolio is related to the covariance between assets, which is proportional to the correlation coefficients of any two assets. The smaller the correlation coefficients, the smaller the covariance, and then the smaller the overall risk of the portfolio. Here we use the mean-variance approach to model this problem:\n",
"$$\n",
"\\omega=\\max _{x \\in\\{0,1\\}^n} \\mu^T x-q x^T S x \\quad \\text { subject to: } 1^T x=B,\n",
"$$\n",
"where each symbol has the following meaning:\n",
"- $x \\in \\{0, 1\\}^{n}$ denotes the vector of binary decision variables, which indicate which each assets is picked ($x_i$=1) or not ($x_i=0$)\n",
"- $\\mu \\in \\mathbb{R}^n$ defines the expected returns for the assets\n",
"- $S \\in \\mathbb{R}^{n \\times n}$ represents the covariances between the assets\n",
"- $q > 0$ represents the risk factor of investment decision making\n",
"- $B$ denotes the budget, i.e. the number of assets to be selected out of $N$\n",
"\n",
"Let us illustrate on the meaning of this equation. $\\mu^T x$ describes the expected benefit of the investment plan represented by $x$. $x^T S x$ describes the correlation between the projects, which, after producting with the risk coefficient $q$, represents the risk incorporated in the investment plan. The restriction $1^T x=B$ requires the number of invested projects equals to our total budget. Therefore, $\\omega$ represents the largest benefit we could get theoretically.\n",
"\n",
"In order to find the optimal investment plan more easily, we can define the loss function\n",
"$$\n",
"C_x=q \\sum_i \\sum_j S_{j i} x_i x_j-\\sum_i x_i \\mu_i+A\\left(B-\\sum_i x_i\\right)^2,\n",
"$$\n",
"where the restriction condition enters the function with the form of Lagrange multiplier. Therefore, our task becomes finding the investment plan $x$ that minimizes the loss $C_x$."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Quantum encoding and solution\n",
"\n",
"We now need to transform the cost function $C_x$ into a Hamiltonian to realize the encoding of the portfolio optimization problem. One just needs to do the following transformation:\n",
"$\n",
"x_i \\mapsto \\frac{I-Z_i}{2},\n",
"$\n",
"where $Z_i=I \\otimes I \\otimes \\ldots \\otimes Z \\otimes \\ldots \\otimes I$, i.e., $Z_{i}$ is the Pauli operator acting solely on the $i$-th qubit. Thus using the above mapping, we can transform the cost function $C_x$ into a Hamiltonian $H_C$ for the system of $n$ qubits, the ground state of which represents the solution of the portfolio optimization problem. In order to find the ground state, we use the idea of variational quantum algorithms. We implement a parametric quantum circuit, and use it to generate a trial state $|\\theta^* \\rangle$. We use the quantum circuit to measure the expectation value of the Hamiltonian on this state. Then, classical gradient descent algorithm is implemented to adjust the parameters of the parametric circuit, where the expectation value evolves towards the ground state energy. After some iterations, we arrive at the optimal value\n",
"$$\n",
"|\\theta^* \\rangle = \\argmin_\\theta L(\\vec{\\theta})=\\argmin_\\theta \\left\\langle\\vec{\\theta}\\left|H_C\\right| \\vec{\\theta}\\right\\rangle.\n",
"$$\n",
"\n",
"Finally, we read out the probability distribution from the measurement result (i.e. decoding the quantum problem to give information about the original bit string)\n",
"$\n",
"p(z)=\\left|\\left\\langle z \\mid \\vec{\\theta}^*\\right\\rangle\\right|^2.\n",
"$\n",
"In the case of quantum parameterized circuits with sufficient expressiveness, the greater the probability of a certain bit string, the greater the probability that it corresponds to an optimal solution to the portfolio optimization problem."
]
},
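{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough illustration of the mapping $x_i \\mapsto \\frac{I-Z_i}{2}$, here is a minimal NumPy sketch (for intuition only, not the Paddle Quantum implementation; the toy returns, covariance, and penalty weight below are made up): it builds the diagonal Hamiltonian $H_C$ for a 3-asset problem and checks that its ground state matches the bitstring found by brute-force minimization of $C_x$.\n",
"\n",
"```python\n",
"# Toy check: build H_C via x_i -> (I - Z_i)/2 and compare with brute force\n",
"import itertools\n",
"import numpy as np\n",
"\n",
"n, q, A, B = 3, 0.5, 2.0, 2                  # assets, risk factor, penalty weight, budget\n",
"rng = np.random.default_rng(42)\n",
"mu = rng.uniform(0.0, 0.1, n)                # toy expected returns\n",
"S = rng.uniform(0.0, 0.01, (n, n))\n",
"S = (S + S.T) / 2                            # toy symmetric covariance matrix\n",
"\n",
"def cost(x):\n",
"    x = np.asarray(x)\n",
"    return q * x @ S @ x - mu @ x + A * (B - x.sum()) ** 2\n",
"\n",
"I2, Z = np.eye(2), np.diag([1.0, -1.0])\n",
"def z_on(i):                                  # Z acting on qubit i of an n-qubit register\n",
"    out = np.array([[1.0]])\n",
"    for k in range(n):\n",
"        out = np.kron(out, Z if k == i else I2)\n",
"    return out\n",
"\n",
"x_ops = [(np.eye(2 ** n) - z_on(i)) / 2 for i in range(n)]   # x_i -> (I - Z_i)/2\n",
"H = q * sum(S[j, i] * x_ops[i] @ x_ops[j] for i in range(n) for j in range(n))\n",
"H = H - sum(mu[i] * x_ops[i] for i in range(n))\n",
"shift = B * np.eye(2 ** n) - sum(x_ops)\n",
"H = H + A * shift @ shift\n",
"\n",
"best = min(itertools.product([0, 1], repeat=n), key=cost)    # classical brute force\n",
"ground = int(np.argmin(np.diag(H)))                          # H is diagonal in the computational basis\n",
"print(best, tuple(int(b) for b in np.binary_repr(ground, width=n)))\n",
"```"
]
},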
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## User's guide\n",
"### Configuration file and input parameters\n",
"We provide a configuration file with previously chosen parameter. The user just needs to change the parameters in the `config.toml` file, and run `python qpo.py --config config.toml --logger qpo_log.log` in the terminal, to solve the portfolio optimization problem.\n",
"### Output\n",
"The results will be output to the `qpo_log.log` file. First of all, the process of optimization will be documented in the log. Users can see the evolution of loss function as the looping times increases. \n",
"### Parameters\n",
"- `stock`, default is `'demo'`, i.e., using the stock data we provide in the demo file. Users can switch to `'random'` or `'custom'` to generate random stock data or use custom stock data. If user chooses to generate data randomly, the parameters `start_time` and `endtime` can be altered to specify the start and end date of the stock data. If user chooses to use custom data, he or she can store the information of the stock in a csv file, and write in the configuration file:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"custom_data_path = 'file_name.csv'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Online demonstration\n",
"Here, we provide an online demonstration version. Firstly, we define the parameters in the configuration file:"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"config_toml = r\"\"\"\n",
"# # The configuration file of quantum portfolio optimization problem.\n",
"# Use demo stock data\n",
"stock = 'demo' \n",
"demo_data_path = 'demo_stock.csv'\n",
"# Number of investable projects\n",
"num_asset = 7\n",
"# Risk of decision making\n",
"risk_weight = 0.5\n",
"# Budget\n",
"budget = 0\n",
"# Penalty\n",
"penalty = 0\n",
"# The depth of the quantum circuit\n",
"circuit_depth = 2\n",
"# Number of loop cycles used in the optimization\n",
"iterations = 600\n",
"# Learning rate of gradient descent\n",
"learning_rate = 0.2\n",
"\"\"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The finance module in PaddleQuantum realizes the numerical simulation of the quantum portfolio optimization problem."
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████| 600/600 [01:04<00:00, 9.24it/s]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"******************* The optimal investment plan is: [2, 5, 6] *******************\n"
]
}
],
"source": [
"import os\n",
"import warnings\n",
"warnings.filterwarnings(\"ignore\")\n",
"os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'\n",
"import pandas as pd\n",
"\n",
"import toml\n",
"from paddle_quantum.finance.qpo import portfolio_combination_optimization\n",
"from paddle_quantum.finance import DataSimulator\n",
"\n",
"config = toml.loads(config_toml)\n",
"demo_data_path = config[\"demo_data_path\"]\n",
"num_asset = config[\"num_asset\"]\n",
"risk_weight = config[\"risk_weight\"]\n",
"budget = config[\"budget\"]\n",
"penalty = config[\"penalty\"]\n",
"circuit_depth = config[\"circuit_depth\"]\n",
"iterations = config[\"iterations\"]\n",
"learning_rate = config[\"learning_rate\"]\n",
"\n",
"stocks_name = [(\"STOCK%s\" % i) for i in range(num_asset)]\n",
"source_data = pd.read_csv(demo_data_path)\n",
"processed_data = [source_data['closePrice'+str(i)].tolist() for i in range(num_asset)]\n",
"data = DataSimulator(stocks_name)\n",
"data.set_data(processed_data)\n",
"\n",
"invest = portfolio_combination_optimization(num_asset, data, iterations, learning_rate, risk_weight, budget,\n",
" penalty, circuit=circuit_depth)\n",
"print(f\"******************* The optimal investment plan is: {invest} *******************\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Note\n",
"If the number of investable projects is small (`num_asset`$< 12$), we can diagonalize the Hamiltonian exactly, and compare the real minimum loss value with that found by the optimization process. If the difference is large, the optimization result may be unreliable, and re-choosing of the training parameters might be necessary. Finally, we output the optimal investment plan."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n",
"```\n",
"@article{ORUS2019100028,\n",
"title = {Quantum computing for finance: Overview and prospects},\n",
"journal = {Reviews in Physics},\n",
"volume = {4},\n",
"pages = {100028},\n",
"year = {2019},\n",
"issn = {2405-4283},\n",
"doi = {https://doi.org/10.1016/j.revip.2019.100028},\n",
"url = {https://www.sciencedirect.com/science/article/pii/S2405428318300571},\n",
"author = {Román Orús and Samuel Mugel and Enrique Lizaso}\n",
"}\n",
"\n",
"@ARTICLE{2020arXiv200614510E,\n",
" author = {{Egger}, Daniel J. and {Gambella}, Claudio and {Marecek}, Jakub and {McFaddin}, Scott and {Mevissen}, Martin and {Raymond}, Rudy and {Simonetto}, Andrea and {Woerner}, Stefan and {Yndurain}, Elena},\n",
" title = \"{Quantum Computing for Finance: State of the Art and Future Prospects}\",\n",
" journal = {arXiv e-prints},\n",
" keywords = {Quantum Physics, Quantitative Finance - Statistical Finance},\n",
" year = 2020,\n",
" month = jun,\n",
" eid = {arXiv:2006.14510},\n",
" pages = {arXiv:2006.14510},\n",
"archivePrefix = {arXiv},\n",
" eprint = {2006.14510},\n",
" primaryClass = {quant-ph},\n",
" adsurl = {https://ui.adsabs.harvard.edu/abs/2020arXiv200614510E},\n",
" adsnote = {Provided by the SAO/NASA Astrophysics Data System}\n",
"}\n",
"\n",
"@article{10.2307/2975974,\n",
" ISSN = {00221082, 15406261},\n",
" URL = {http://www.jstor.org/stable/2975974},\n",
" author = {Harry Markowitz},\n",
" journal = {The Journal of Finance},\n",
" number = {1},\n",
" pages = {77--91},\n",
" publisher = {[American Finance Association, Wiley]},\n",
" title = {Portfolio Selection},\n",
" urldate = {2022-12-07},\n",
" volume = {7},\n",
" year = {1952}\n",
"}\n",
"```"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.13 ('pq')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "d3caffbb123012c2d0622db402df9f37d80adc57c1cef1fdb856f61446d88d0a"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
# !/usr/bin/env python3
# Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
Quantum portfolio optimization.
"""
import os
import sys
from typing import Dict
import logging
import argparse
import toml
import datetime
import pandas as pd
from paddle_quantum.finance.qpo import portfolio_combination_optimization
from paddle_quantum.finance import DataSimulator
def main(args):
# logger configure
log_path = args.logger
logger = logging.Logger(name='logger_qpo')
logger_file_handler = logging.FileHandler(log_path)
logger_file_handler.setFormatter(logging.Formatter(r'%(levelname)s %(asctime)s %(message)s'))
logger_file_handler.setLevel(logging.INFO)
logger.addHandler(logger_file_handler)
logger.warning("------------------- Process starts -------------------")
# data preparation
parsed_configs: Dict = toml.load(args.config)
num_asset = parsed_configs["stock_para"]["num_asset"]
if parsed_configs['stock'] == 'demo':
stock_file_path = os.path.join(this_file_path, './demo_stock.csv')
stocks_name = [("STOCK%s" % i) for i in range(num_asset)]
source_data = pd.read_csv(stock_file_path)
processed_data = [source_data['closePrice'+str(i)].tolist() for i in range(num_asset)]
data = DataSimulator(stocks_name)
data.set_data(processed_data)
logger.warning(f"******************* {num_asset} stocks processed *******************")
elif parsed_configs['stock'] == 'random':
stocks_name = [("STOCK%s" % i) for i in range(num_asset)]
data = DataSimulator(stocks=stocks_name, start=datetime.datetime(
*parsed_configs['random_data']['start_time']), end=datetime.datetime(*parsed_configs['random_data']['end_time']))
data.randomly_generate()
logger.warning(f"******************* {num_asset} stocks randomly generated *******************")
elif parsed_configs['stock'] == 'custom':
stock_file_path = parsed_configs["custom_data_path"]
stocks_name = [("STOCK%s" % i) for i in range(num_asset)]
source_data = pd.read_csv(stock_file_path)
processed_data = [source_data['closePrice'+str(i)].tolist() for i in range(num_asset)]
data = DataSimulator(stocks_name)
data.set_data(processed_data)
logger.warning(f"******************* {num_asset} stocks processed *******************")
# load model parameters
risk_weight = parsed_configs["stock_para"]["risk_weight"]
budget = parsed_configs["stock_para"]["budget"]
penalty = parsed_configs["stock_para"]["penalty"]
circuit_depth = parsed_configs["train_para"]["circuit_depth"]
iters = parsed_configs["train_para"]["iterations"]
lr = parsed_configs["train_para"]["learning_rate"]
# optimization
logger.warning("******************* Train starts *******************")
invest = portfolio_combination_optimization(num_asset, data, iters, lr, risk_weight, budget,
penalty, circuit=circuit_depth, logger=logger, compare=True)
logger.warning("******************* Train ends *******************")
logger.warning(f"******************* Output is {invest} *******************")
logger.warning("------------------- Process ends -------------------")
if __name__ == "__main__":
this_file_path = sys.path[0]
    parser = argparse.ArgumentParser(description="Quantum portfolio optimization task with paddle quantum.")
parser.add_argument(
"--config", default=os.path.join(this_file_path, './config.toml'), type=str, help="The path of toml format config file.")
parser.add_argument(
"--logger", default=os.path.join(this_file_path, './qpo_log.log'), type=str, help="The path of log file saved.")
main(parser.parse_args())
# The configuration file for protein folding problem
# The amino acids sequence that form a protein.
# Valid amino acid labels are (https://en.wikipedia.org/wiki/Amino_acid):
# C: Cysteine
# M: Methionine
# F: Phenylalanine
# I: Isoleucine
# L: Leucine
# V: Valine
# W: Tryptophan
# Y: Tyrosine
# A: Alanine
# G: Glycine
# T: Threonine
# S: Serine
# N: Asparagine
# Q: Glutamine
# D: Aspartate
# E: Glutamate
# H: Histidine
# R: Arginine
# K: Lysine
# P: Proline
# NOTE: the more amino acids in the sequence, the longer the program will run
# NOTE: the example below takes approximately 0.5h!
amino_acids = ["A", "P", "R", "L", "R", "F", "Y"]
# Each pair of indices specifies two amino acids that may interact. The setting below indicates that
# the 0th and 5th amino acids may interact, and the 1st and 6th amino acids may interact.
possible_contactions = [[0, 5], [1, 6]]
# Depth of the quantum circuit used in VQE
depth = 1
# Number of VQE iterations
num_iterations = 200
# The condition for VQE convergence
tol = 1e-3
# The number of steps between two consecutive loss records
save_every = 10
# learning rate for the optimizer
learning_rate = 0.5
\ No newline at end of file
# !/usr/bin/env python3
# Copyright (c) 2021 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import toml
import time
import os
import warnings
import logging
from paddle import optimizer as paddle_optimizer
from paddle_quantum.ansatz import Circuit
from paddle_quantum.biocomputing import Protein
from paddle_quantum.biocomputing import ProteinFoldingSolver
from paddle_quantum.biocomputing import visualize_protein_structure
warnings.filterwarnings('ignore')
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'
logging.basicConfig(filename="log", filemode="w", level=logging.INFO, format="%(message)s")
def circuit(num_qubits: int, depth: int) -> Circuit:
r"""Ansatz used in protein folding VQE.
"""
cir = Circuit(num_qubits)
cir.superposition_layer()
for _ in range(depth):
cir.ry()
cir.cx()
cir.ry()
return cir
def main(args):
time_start = time.strftime("%Y%m%d-%H:%M:%S", time.localtime())
logging.info(f"Job start at {time_start:s}")
# construct the protein
parsed_configs = toml.load(args.config)
aa_seq = parsed_configs["amino_acids"]
contact_pairs = parsed_configs["possible_contactions"]
num_aa = len(aa_seq)
protein = Protein("".join(aa_seq), {(0, 1): 1, (1, 2): 0, (num_aa-2, num_aa-1): 3}, contact_pairs)
# build the solver
cir_depth = parsed_configs["depth"]
cir = circuit(protein.num_qubits, cir_depth)
penalty_factors = [10.0, 10.0]
alpha = 0.5
optimizer = paddle_optimizer.Adam
num_iterations = parsed_configs["num_iterations"]
tol = parsed_configs["tol"]
save_every = parsed_configs["save_every"]
learning_rate = parsed_configs["learning_rate"]
problem = ProteinFoldingSolver(penalty_factors, alpha, optimizer, num_iterations, tol, save_every)
_, protein_str = problem.solve(protein, cir, learning_rate=learning_rate)
# parse results & plot the 3d structure of protein
num_config_qubits = protein.num_config_qubits
bond_directions = [1, 0]
bond_directions.extend(int(protein_str[slice(i, i + 2)], 2) for i in range(0, num_config_qubits, 2))
bond_directions.append(3)
visualize_protein_structure(aa_seq, bond_directions)
logging.info("\n#######################################\nSummary\n#######################################")
logging.info(f"Protein bonds direction: {bond_directions}.")
time_stop = time.strftime("%Y%m%d-%H:%M:%S", time.localtime())
logging.info(f"\nJob end at {time_stop:s}\n")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Protein folding task with paddle quantum.")
parser.add_argument("--config", type=str, help="Input the config file with toml format.")
main(parser.parse_args())