Commit 09959fef authored by Q Quleaf

update to v2.4.0

Parent 28845151
......@@ -33,7 +33,7 @@ English | [简体中文](README_CN.md)
</a>
<!-- PyPI -->
<a href="https://pypi.org/project/paddle-quantum/">
<img src="https://img.shields.io/badge/pypi-v2.4.0-orange.svg?style=flat-square&logo=pypi"/>
</a>
<!-- Python -->
<a href="https://www.python.org/">
......@@ -83,7 +83,7 @@ pip install paddle-quantum
or download all the files and finish the installation locally,
```bash
git clone https://github.com/PaddlePaddle/quantum
cd quantum
pip install -e .
```
......
......@@ -34,7 +34,7 @@
</a>
<!-- PyPI -->
<a href="https://pypi.org/project/paddle-quantum/">
<img src="https://img.shields.io/badge/pypi-v2.4.0-orange.svg?style=flat-square&logo=pypi"/>
</a>
<!-- Python -->
<a href="https://www.python.org/">
......@@ -84,7 +84,7 @@ pip install paddle-quantum
用户也可以选择下载全部文件后进行本地安装,
```bash
git clone https://github.com/PaddlePaddle/quantum
cd quantum
pip install -e .
```
......
# The config for credit risk analysis model
# number of assets.
num_assets = 4
# base default probabilities.
base_default_prob = [0.15, 0.25, 0.39, 0.58]
# sensitivity.
sensitivity = [0.37, 0.21, 0.32, 0.02]
# loss given default.
lgd = [5, 1, 3, 4]
# confidence level.
confidence_level = 0.99
# Degree of simulation. The higher the degree, the more precise the simulation.
degree_of_simulation = 4
#!/usr/bin/env python3
# Copyright (c) 2020 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import warnings
import logging
import time
from typing import Dict
warnings.filterwarnings('ignore')
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'
import toml
from paddle_quantum.finance import CreditRiskAnalyzer
def main(args):
time_start = time.strftime("%Y%m%d-%H:%M:%S", time.localtime())
logging.info(f"Job start at {time_start:s}")
parsed_configs: Dict = toml.load(args.config)
# input credit portfolio settings
num_assets = parsed_configs["num_assets"]
base_default_prob = parsed_configs["base_default_prob"]
sensitivity = parsed_configs["sensitivity"]
lgd = parsed_configs["lgd"]
confidence_level = parsed_configs["confidence_level"]
degree_of_simulation = parsed_configs["degree_of_simulation"]
estimator = CreditRiskAnalyzer(num_assets, base_default_prob, sensitivity, lgd,
confidence_level, degree_of_simulation)
print("The Value at Risk of these assets is", estimator.estimate_var())
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="Credit risk analysis with paddle quantum.")
parser.add_argument("--config", type=str, help="Input the config file with toml format.")
main(parser.parse_args())
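For intuition, the classical baseline that the quantum amplitude estimation routine is meant to outperform can be sketched directly. The snippet below is a minimal Monte Carlo estimator (an empirical quantile stands in for the explicit bisection search), assuming the common Gaussian conditional-independence form p_k(z) = Φ((Φ⁻¹(p_k⁰) − √ρ_k·z)/√(1 − ρ_k)); the exact parameterization inside `CreditRiskAnalyzer` may differ, and `degree_of_simulation` has no direct classical analogue here (the sample count plays that role).

```python
import math
import random

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    # Bisection inverse of the normal CDF (adequate for a sketch).
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def simulate_losses(n_samples, base_p, rho, lgd, rng):
    losses = []
    for _ in range(n_samples):
        z = rng.gauss(0.0, 1.0)  # systemic factor (negative z = bad economy here)
        loss = 0
        for p0, r, lam in zip(base_p, rho, lgd):
            pk = phi((phi_inv(p0) - math.sqrt(r) * z) / math.sqrt(1.0 - r))
            if rng.random() < pk:
                loss += lam
        losses.append(loss)
    return losses

def var_from_samples(losses, alpha):
    # Smallest x with empirical Pr(L <= x) >= alpha.
    losses = sorted(losses)
    idx = math.ceil(alpha * len(losses)) - 1
    return losses[idx]

rng = random.Random(42)
losses = simulate_losses(20000, [0.15, 0.25, 0.39, 0.58],
                         [0.37, 0.21, 0.32, 0.02], [5, 1, 3, 4], rng)
var99 = var_from_samples(losses, 0.99)
print(var99)
```

With these portfolio parameters the 99% level sits right at the top of the loss distribution, so the classical estimate lands at 12 or 13, consistent with the bisection trace in the notebook below.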
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# 信贷风险分析\n",
"\n",
"*Copyright (c) 2023 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"风险分析的任务是评估给定资产在面临金融系统波动时的风险,即在险价值 Value at Risk(VaR)。该价值可以衡量在金融系统下给定资产的潜在损失。决定资产在险价值的因素有很多,包括金融机构对于风险的不同偏好、金融系统的波动特征和资产本身的含风险性质。2014 年 Rutkowski 与 Tarca [1] 给出了一个计算风险资本的数学模型。该模型认为整体金融系统的波动应该遵循高斯布朗运动,从而能够计算出较高可信度下处于该系统中资产损失上界的最小值。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"当应用于信贷风险分析问题时,该模型可以用于评估银行的信贷风险监管资本,即银行所持有信贷资产组合在当前金融系统下的在险价值。在该模型中,我们记给定的信贷资产组合为 $\\{ 0, ..., K-1 \\}$, 且决定资产组合总损失的随机变量为\n",
"$$\n",
"\\mathcal{L} = \\sum_{k=0}^{K - 1} \\lambda_k X_k(Z)\n",
"。$$\n",
"这里,$\\lambda_k$ 为第 $k$ 个信贷资产的违约损失(loss given default);$Z$ 为描述金融系统波动过程的潜在随机变量(latent random variable),且其概率分布默认为标准正态分布;$X_k(Z)$ 为描述金融系统下资产违约过程的伯努利随机变量。具体地,当 $Z = z$ 时,随机变量 $X_k(Z)$ 的参数 $p_k(z)$,即第 $k$ 个资产的违约概率,取决于该资产对于系统风险的敏感度(sensitivity)$\\rho_k \\in [0, 1]$ 以及无系统风险下该资产的基础违约概率(base default probability)$p_k^{(0)} \\in [0, 1]$。在标定置信度(confidence level)之后,信贷资产组合的在险价值应定义为『于置信度 $\\alpha$ 下,信贷资产损失上界的最小值』,即\n",
"$$\n",
"\\textrm{VaR}_\\alpha(\\mathcal{L}) := \\inf \\{ x \\,|\\, \\textrm{Pr}(\\mathcal{L} \\leq x) \\geq \\alpha \\}\n",
"。$$\n",
"例如,若银行持有的一组信贷资产在置信度 $99\\%$ 下的在险价值正好为 $100$ 万,那么当该资产组合出现违约情况时,由于违约而造成的资产损失超过 $100$ 万的概率不高于 $1\\%$。在经典计算中,信贷风险分析问题中的在险价值可以通过经典蒙特卡罗(classical Monte Carlo)方法和二分搜索(bisection search)方法估算:\n",
"1. 根据资产组合性质,选定初始 VaR 猜测值 $\\check{x}$。\n",
"2. 使用经典蒙特卡罗算法估算概率值 $p_{\\check{x}} = \\textrm{Pr}(\\mathcal{L} \\leq \\check{x})$。\n",
"3. 将概率值 $p_{\\check{x}}$ 与 $\\alpha$ 比较,并根据比较结果和二分搜索方法更新猜测值 $\\check{x}$。\n",
"4. 若更新值达到收敛标准,输出 $\\check{x} = \\textrm{VaR}_\\alpha(\\mathcal{L})$;否则返回第 $2$ 步。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 量子解决方案\n",
"\n",
"不同于经典算法,量子计算可以使用量子振幅估计(quantum amplitude estimation)算法替换上述第 2 步中使用的经典蒙特卡罗算法,来提高概率值 $p_{\\check{x}}$ 的估算效率。通过量子叠加和纠缠的特性,量子方案与经典方案相比有望在采样次数上获取平方加速的优势 [2]。接下来我们展示如何使用量桨来模拟该量子方案,从而完成信贷风险分析的在险价值计算问题。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### 在线演示"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"我们已经给出了一个设置好的参数,可以直接用于风险资产组合的在险价值计算。只需要在 `config.toml` 这个配置文件中进行对应的配置,然后输入命令 \n",
"`python credit_risk.py --config config.toml`\n",
"即可对配置好的资产组合进行在险价值计算。\n",
"\n",
"这里,我们给出一个在线演示的版本,可以在线进行测试。首先定义配置文件的内容:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"cra_toml = r\"\"\"\n",
"# 用于信贷风险分析模型的整体配置文件。\n",
"# 资产数量。\n",
"num_assets = 4\n",
"# 基础违约概率。\n",
"base_default_prob = [0.15, 0.25, 0.39, 0.58]\n",
"# 敏感度。\n",
"sensitivity = [0.37, 0.21, 0.32, 0.02]\n",
"# 违约损失。\n",
"lgd = [5, 1, 3, 4]\n",
"# 置信度。\n",
"confidence_level = 0.99\n",
"# 估计精度系数。系数越高,则估计的结果越精确。\n",
"degree_of_simulation = 4\n",
"\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"量桨的金融模块实现了量子解决方案的数值模拟。我们可以从 ``paddle_quantum.finance`` 模块里导入 ``CreditRiskAnalyzer`` 来解决配置好的在险价值计算问题。"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"-----------------------------------------------------------------------------------\n",
"Begin bisection search for VaR with confidence level >= 99.0%.\n",
"-----------------------------------------------------------------------------------\n",
"Lower guess: level Middle guess: level Upper guess: level \n",
" -1 : 0.000 6 : 0.691 13 : 1.000 \n",
" 6 : 0.691 9 : 0.941 13 : 1.000 \n",
" 9 : 0.941 11 : 0.962 13 : 1.000 \n",
" 11 : 0.962 12 : 0.990 13 : 1.000 \n",
"-----------------------------------------------------------------------------------\n",
"Estimated VaR is 12 with confidence level 99.0%.\n",
"-----------------------------------------------------------------------------------\n",
"该资产组合的在险价值大约为 12\n"
]
}
],
"source": [
"import os\n",
"import warnings\n",
"warnings.filterwarnings(\"ignore\")\n",
"os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'\n",
"\n",
"import toml\n",
"from paddle_quantum.finance import CreditRiskAnalyzer\n",
"\n",
"config = toml.loads(cra_toml)\n",
"num_assets = config[\"num_assets\"]\n",
"base_default_prob = config[\"base_default_prob\"]\n",
"sensitivity = config[\"sensitivity\"]\n",
"lgd = config[\"lgd\"]\n",
"confidence_level = config[\"confidence_level\"]\n",
"degree_of_simulation = config[\"degree_of_simulation\"]\n",
"\n",
"estimator = CreditRiskAnalyzer(num_assets, base_default_prob, sensitivity, lgd, confidence_level, degree_of_simulation)\n",
"print(\"该资产组合的在险价值大约为\", estimator.estimate_var())"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"___\n",
"\n",
"# 注意事项\n",
"\n",
"这里提供的模型只用于特定模型的信贷风险分析问题。\n",
"\n",
"# 引用信息\n",
"\n",
"```\n",
"@article{rutkowski2015regulatory,\n",
" title={Regulatory capital modeling for credit risk},\n",
" author={Rutkowski, Marek and Tarca, Silvio},\n",
" journal={International Journal of Theoretical and Applied Finance},\n",
" volume={18},\n",
" number={05},\n",
" pages={1550034},\n",
" year={2015},\n",
" publisher={World Scientific}\n",
"}\n",
"\n",
"@article{egger2020credit,\n",
" title={Credit risk analysis using quantum computers},\n",
" author={Egger, Daniel J and Guti{\\'e}rrez, Ricardo Garc{\\'\\i}a and Mestre, Jordi Cahu{\\'e} and Woerner, Stefan},\n",
" journal={IEEE Transactions on Computers},\n",
" volume={70},\n",
" number={12},\n",
" pages={2136--2145},\n",
" year={2020},\n",
" publisher={IEEE}\n",
"}\n",
"```\n",
"\n",
"# 参考文献\n",
"\n",
"[1] Rutkowski, Marek, and Silvio Tarca. \"Regulatory capital modeling for credit risk.\" International Journal of Theoretical and Applied Finance 18.05 (2015): 1550034.\n",
"\n",
"[2] Egger, Daniel J., et al. \"Credit risk analysis using quantum computers.\" IEEE Transactions on Computers 70.12 (2020): 2136-2145."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "pq",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Credit Risk Analysis\n",
"\n",
"*Copyright (c) 2023 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The task of risk analysis is to assess the risk of given assets under fluctuations of the financial system, i.e. to estimate the Value at Risk (VaR). This quantity measures the potential loss of the given assets within a financial system. Many factors determine the Value at Risk of assets, including the risk preferences of different financial institutions, the characteristics of the system's fluctuations, and the risk-related properties of the assets themselves. In 2014, Rutkowski and Tarca [1] provided a mathematical model that estimates this risk measure. The model assumes that the fluctuation of the whole financial system follows a Gaussian Brownian motion, so that the minimum upper bound of the assets' loss can be computed under a given confidence level."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Credit risk analysis is a direct application of this model: it can be used to evaluate the regulatory capital for credit risk in the banking industry, i.e. the VaR of the credit portfolio held by a bank under the current financial system. In this model, we denote the given credit portfolio as $\\{ 0, ..., K-1 \\}$, and the random variable determining the portfolio's total loss as\n",
"$$\n",
"\\mathcal{L} = \\sum_{k=0}^{K - 1} \\lambda_k X_k(Z)\n",
".$$\n",
"Here, $\\lambda_k$ is the loss given default of the $k$-th asset; $Z$ is a latent standard Gaussian random variable that models the fluctuation of the financial system; $X_k(Z)$ is the Bernoulli random variable that models the default process of the $k$-th asset. In particular, when $Z = z$, the parameter $p_k(z)$ of the random variable $X_k(Z)$, i.e. the default probability of the $k$-th asset, depends on the base default probability $p_k^{(0)} \\in [0, 1]$ of that asset in the absence of systemic risk, and on the asset's sensitivity to systemic risk $\\rho_k \\in [0, 1]$. After fixing the confidence level, the Value at Risk in the credit risk analysis problem is defined as \"the minimum upper bound of the credit portfolio's loss under confidence level $\\alpha$\". That is,\n",
"$$\n",
"\\textrm{VaR}_\\alpha(\\mathcal{L}) := \\inf \\{ x \\,|\\, \\textrm{Pr}(\\mathcal{L} \\leq x) \\geq \\alpha \\}\n",
".$$\n",
"For example, if the VaR of a credit portfolio held by a bank is exactly one million at confidence level $99\\%$, then the probability that the total loss from defaults in the portfolio exceeds one million is at most $1\\%$. Classically, the VaR in the credit risk analysis problem can be estimated with classical Monte Carlo sampling and bisection search:\n",
"1. Choose an appropriate initial VaR guess $\\check{x}$ according to the properties of the assets.\n",
"2. Use the classical Monte Carlo method to estimate the probability $p_{\\check{x}} = \\textrm{Pr}(\\mathcal{L} \\leq \\check{x})$.\n",
"3. Compare the probability with $\\alpha$, and update the guess $\\check{x}$ according to the comparison result and the bisection search method.\n",
"4. If the convergence criterion is met, output $\\check{x} = \\textrm{VaR}_\\alpha(\\mathcal{L})$; otherwise return to step $2$."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Quantum solution\n",
"\n",
"Unlike classical algorithms, quantum computation can replace the classical Monte Carlo method in step 2 above with the quantum amplitude estimation (QAE) algorithm, improving the efficiency of estimating the probability $p_{\\check{x}}$. By exploiting quantum superposition and entanglement, the quantum scheme is expected to achieve a quadratic speedup in the number of samples over classical schemes [2]. Next, we show how to use Paddle Quantum to simulate this quantum scheme and solve the VaR computation problem in credit risk analysis."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Online demonstration"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We provide a ready-made configuration that can be used directly for the VaR computation of a risky asset portfolio. Simply adjust the corresponding settings in the configuration file `config.toml` and run the command \n",
"`python credit_risk.py --config config.toml`.\n",
"The VaR of the configured portfolio is then computed.\n",
"\n",
"Here, we give a version of the online demo that can be tested online. First define the contents of the configuration file:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"cra_toml = r\"\"\"\n",
"# The config for credit risk analysis model\n",
"# number of assets.\n",
"num_assets = 4\n",
"# base default probabilities.\n",
"base_default_prob = [0.15, 0.25, 0.39, 0.58]\n",
"# sensitivity.\n",
"sensitivity = [0.37, 0.21, 0.32, 0.02]\n",
"# loss given default.\n",
"lgd = [5, 1, 3, 4]\n",
"# confidence level.\n",
"confidence_level = 0.99\n",
"# Degree of simulation. The higher the degree, the more precise the simulation.\n",
"degree_of_simulation = 4\n",
"\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The finance module of Paddle Quantum implements a numerical simulation of the quantum amplitude estimation scheme. We can import ``CreditRiskAnalyzer`` from the ``paddle_quantum.finance`` module to solve the configured VaR computation problem."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"-----------------------------------------------------------------------------------\n",
"Begin bisection search for VaR with confidence level >= 99.0%.\n",
"-----------------------------------------------------------------------------------\n",
"Lower guess: level Middle guess: level Upper guess: level \n",
" -1 : 0.000 6 : 0.691 13 : 1.000 \n",
" 6 : 0.691 9 : 0.941 13 : 1.000 \n",
" 9 : 0.941 11 : 0.962 13 : 1.000 \n",
" 11 : 0.962 12 : 0.990 13 : 1.000 \n",
"-----------------------------------------------------------------------------------\n",
"Estimated VaR is 12 with confidence level 99.0%.\n",
"-----------------------------------------------------------------------------------\n",
"The Value at Risk of these assets is 12\n"
]
}
],
"source": [
"import os\n",
"import warnings\n",
"warnings.filterwarnings(\"ignore\")\n",
"os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'\n",
"\n",
"import toml\n",
"from paddle_quantum.finance import CreditRiskAnalyzer\n",
"\n",
"config = toml.loads(cra_toml)\n",
"num_assets = config[\"num_assets\"]\n",
"base_default_prob = config[\"base_default_prob\"]\n",
"sensitivity = config[\"sensitivity\"]\n",
"lgd = config[\"lgd\"]\n",
"confidence_level = config[\"confidence_level\"]\n",
"degree_of_simulation = config[\"degree_of_simulation\"]\n",
"\n",
"estimator = CreditRiskAnalyzer(num_assets, base_default_prob, sensitivity, lgd, confidence_level, degree_of_simulation)\n",
"print(\"The Value at Risk of these assets is\", estimator.estimate_var())"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"___\n",
"\n",
"# Note\n",
"\n",
"The method presented here is only intended for the credit risk analysis problem under the particular model described above.\n",
"\n",
"# Citation\n",
"\n",
"```\n",
"@article{rutkowski2015regulatory,\n",
" title={Regulatory capital modeling for credit risk},\n",
" author={Rutkowski, Marek and Tarca, Silvio},\n",
" journal={International Journal of Theoretical and Applied Finance},\n",
" volume={18},\n",
" number={05},\n",
" pages={1550034},\n",
" year={2015},\n",
" publisher={World Scientific}\n",
"}\n",
"\n",
"@article{egger2020credit,\n",
" title={Credit risk analysis using quantum computers},\n",
" author={Egger, Daniel J and Guti{\\'e}rrez, Ricardo Garc{\\'\\i}a and Mestre, Jordi Cahu{\\'e} and Woerner, Stefan},\n",
" journal={IEEE Transactions on Computers},\n",
" volume={70},\n",
" number={12},\n",
" pages={2136--2145},\n",
" year={2020},\n",
" publisher={IEEE}\n",
"}\n",
"```\n",
"\n",
"# References\n",
"\n",
"[1] Rutkowski, Marek, and Silvio Tarca. \"Regulatory capital modeling for credit risk.\" International Journal of Theoretical and Applied Finance 18.05 (2015): 1550034.\n",
"\n",
"[2] Egger, Daniel J., et al. \"Credit risk analysis using quantum computers.\" IEEE Transactions on Computers 70.12 (2020): 2136-2145."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "pq",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
# An optional description of this configuration file's task. "Binding energy" stands for calculating the deuteron binding energy.
task = "Binding energy"
# Set the dimension of the fermionic Hamiltonian
N = 3
# Set the parameters used in VQE
hbar_omega = 7
V0 = -5.6865811
# Whether to return the exact ground state energy of the Hamiltonian. **NOTE: use `true` or `false` instead of `True` or `False`**
calc_exact = true
# This field stores configurations of the variational quantum eigensolver (VQE) method.
[VQE]
# Number of optimization cycles, default is 100.
num_iterations = 100
# The convergence criterion for the VQE optimization, default is 1e-5.
tol = 1e-5
# The number of optimization steps after which we record the loss value.
save_every = 10
# This field specifies the optimizer used in the VQE method, default is `Adam`, see here for available optimizers https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/optimizer/Overview_cn.html
[optimizer.Adam]
# The learning rate of the optimizer, see here for more details https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/optimizer/Adam_cn.html, default is 0.4.
learning_rate = 0.5
#!/usr/bin/env python3
# Copyright (c) 2023 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Dict
import warnings
import time
import logging
import argparse
import toml
import numpy as np
import paddle.optimizer as optim
from paddle_quantum.gate import RY, X
from paddle_quantum.qchem import GroundStateSolver, HartreeFock
from utils import DeuteronHamiltonian
logging.basicConfig(filename="log", filemode="w", format="%(message)s", level=logging.INFO)
def main(args):
time_start = time.strftime("%Y%m%d-%H:%M:%S", time.localtime())
logging.info(f"Job start at {time_start:s}")
parsed_configs: Dict = toml.load(args.config)
# build deuteron Hamiltonian
num_qubits = parsed_configs["N"]
omega = parsed_configs["hbar_omega"]
V0 = parsed_configs["V0"]
deuham = DeuteronHamiltonian(omega, V0)
deuhN = deuham.get_hamiltonian(num_qubits)
# build HartreeFock circuit
cir = HartreeFock(num_qubits)
cir.insert(0, X(0))
cir.insert(1, RY(range(1, num_qubits)))
# load optimizer
if parsed_configs.get("optimizer") is None:
optimizer_name = "Adam"
optimizer_settings = {
"Adam": {
"learning_rate": 0.4
}
}
optimizer = optim.Adam
else:
optimizer_settings = parsed_configs["optimizer"]
optimizer_name = list(optimizer_settings.keys())[0]
optimizer = getattr(optim, optimizer_name)
# calculate properties
if parsed_configs.get("VQE") is None:
vqe_settings = {
"num_iterations": 100,
"tol": 1e-5,
"save_every": 10
}
else:
vqe_settings = parsed_configs["VQE"]
solver = GroundStateSolver(optimizer, **vqe_settings)
e, _ = solver.solve(cir, ham=deuhN, **optimizer_settings[optimizer_name])
logging.info("\n#######################################\nSummary\n#######################################")
logging.info(f"Binding energy={e:.5f}")
if parsed_configs.get("calc_exact") is True:
warnings.warn(f"Calculating the exact binding energy diagonalizes a {2**num_qubits:d}x{2**num_qubits:d} matrix; avoid it when `N` is large.")
exact_e = np.linalg.eigvalsh(deuhN.construct_h_matrix())[0]
logging.info(f"Exact binding energy={exact_e:.5f}")
logging.info(f"Relative error={abs(e-exact_e)/abs(exact_e):.5f}")
time_stop = time.strftime("%Y%m%d-%H:%M:%S", time.localtime())
logging.info(f"\nJob end at {time_stop:s}\n")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Calculate deuteron binding energy.")
parser.add_argument("--config", type=str, help="Input the config file with toml format.")
main(parser.parse_args())
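The fallback logic in `main` above (use Adam with `learning_rate = 0.4` when `[optimizer]` is absent, and built-in VQE settings when `[VQE]` is absent) can be factored into a single defaults-merge helper. `DEFAULTS` and `with_defaults` below are illustrative names, not part of the repository:

```python
DEFAULTS = {
    "optimizer": {"Adam": {"learning_rate": 0.4}},
    "VQE": {"num_iterations": 100, "tol": 1e-5, "save_every": 10},
}

def with_defaults(parsed: dict, defaults: dict = DEFAULTS) -> dict:
    # Shallow merge: a missing top-level section falls back to the default,
    # while a present section replaces the default wholesale -- the same
    # whole-section behavior as the if/else blocks in main().
    merged = dict(defaults)
    merged.update(parsed)
    return merged

cfg = with_defaults({"N": 3, "hbar_omega": 7, "V0": -5.6865811})
print(sorted(cfg))
```

A section supplied in the parsed config overrides the default section entirely, matching how the script reads `parsed_configs["VQE"]` verbatim when present.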
#!/usr/bin/env python3
# Copyright (c) 2023 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Optional
import logging
from paddle_quantum import Hamiltonian
from paddle_quantum.ansatz import Circuit
from openfermion import FermionOperator, jordan_wigner
import numpy as np
__all__ = ["DeuteronHamiltonian", "DeuteronUCC2", "DeuteronUCC3"]
class DeuteronHamiltonian(object):
def __init__(self, omega: float, V0: float) -> None:
self.omega = omega
self.V0 = V0
logging.info("\n#######################################\nDeuteron Hamiltonian\n#######################################")
logging.info(f"hbar_omega: {omega:.5f}")
logging.info(f"V0: {V0:.5f}")
def get_hamiltonian(self, N: int) -> Hamiltonian:
h = FermionOperator("0^ 0", self.V0)
for i in range(N):
h += 0.5*self.omega*(2*i+1.5)*FermionOperator(f"{i:d}^ {i:d}")
if i < N-1:
h += -0.5*self.omega*np.sqrt((i+1)*(i+1.5))*FermionOperator(f"{i+1:d}^ {i:d}")
if i > 0:
h += -0.5*self.omega*np.sqrt(i*(i+0.5))*FermionOperator(f"{i-1:d}^ {i:d}")
h_qubit = jordan_wigner(h)
return Hamiltonian.from_qubit_operator(h_qubit)
class DeuteronUCC2(Circuit):
def __init__(self, theta: Optional[float] = None):
num_qubits = 2
super().__init__(num_qubits)
self.x(0)
self.ry(1)
self.cx([1, 0])
if isinstance(theta, float):
self.update_param(theta)
class DeuteronUCC3(Circuit):
def __init__(self, theta: Optional[np.ndarray] = None):
num_qubits = 3
super().__init__(num_qubits)
self.x(0)
self.ry(1)
self.ry(2)
self.cx([2, 0])
self.cx([0, 1])
self.ry(1)
self.cx([0, 1])
self.cx([1, 0])
if isinstance(theta, np.ndarray):
self.update_param(theta)
if __name__ == "__main__":
deuteron_h = DeuteronHamiltonian(7, -5.6865811)
h1 = deuteron_h.get_hamiltonian(1)
h2 = deuteron_h.get_hamiltonian(2)
h3 = deuteron_h.get_hamiltonian(3)
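As an independent sanity check on `DeuteronHamiltonian`, note that the Hamiltonian conserves particle number, so for N = 2 its one-particle block is just a 2x2 matrix that numpy can diagonalize directly. The reconstruction below follows the matrix elements in `get_hamiltonian` with the `hbar_omega = 7`, `V0 = -5.6865811` values from the config, and reproduces the well-known two-shell deuteron ground-state energy of about -1.749 MeV:

```python
import numpy as np

omega, V0 = 7.0, -5.6865811

# One-particle sector of the N = 2 deuteron Hamiltonian:
# diagonal entries V0*delta_{n,0} + 0.5*omega*(2n + 1.5),
# hopping term      -0.5*omega*sqrt((n + 1)*(n + 1.5)).
hop = -0.5 * omega * np.sqrt(1.0 * 1.5)
h2 = np.array([
    [V0 + 0.5 * omega * 1.5, hop],
    [hop, 0.5 * omega * 3.5],
])
ground_energy = np.linalg.eigvalsh(h2)[0]
print(round(ground_energy, 5))
```

The same check against `Hamiltonian.construct_h_matrix()` is what the `calc_exact = true` branch of the runner script performs, just in the full 2^N-dimensional qubit space.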
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -17,7 +16,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -46,7 +44,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -89,7 +86,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -142,15 +138,13 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"在这里,我们只需要修改配置文件中的图片路径,再运行整个代码,就可以快速对其它图片进行测试。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -168,7 +162,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -190,7 +183,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
......@@ -204,12 +197,11 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.16"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "d3caffbb123012c2d0622db402df9f37d80adc57c1cef1fdb856f61446d88d0a"
}
}
},
......
# The full config for inferring the QC-AAN model.
# The mode of this config. Available values: 'train' | 'inference'.
mode = 'inference'
# The name of the model, which is used to save the model.
# model_name = 'qcaan-model'
# The path to load the parameters of the trained model. Both relative and absolute paths are allowed.
params_path = "params"
# The number of qubits which the quantum circuit contains.
num_qubits = 8
# The depth (number of complex entangled layers) of the quantum circuit.
num_depths = 4
# The number of latent features, i.e. the input dimension of the generator.
latent_dim = 16
# The manual seed for reproducibility.
manual_seed = 20230313
#!/usr/bin/env python3
# Copyright (c) 2023 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
Handwritten digit generation via quantum-circuit associative adversarial networks (QCAAN)
"""
import os
import warnings
import argparse
import toml
from paddle_quantum.qml.qcaan import train, model_test
warnings.filterwarnings('ignore')
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="Generating the handwritten digits by the QC-AAN model.")
parser.add_argument("--config", type=str, help="Input the config file with toml format.")
args = parser.parse_args()
config = toml.load(args.config)
mode = config.pop('mode')
if mode == 'train':
train(**config)
elif mode == 'inference':
model_test(**config)
else:
raise ValueError("Unknown mode, it can be 'train' or 'inference'.")
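The `mode` dispatch above can equivalently be written as a lookup table, which keeps the error handling in one place if more modes are ever added. In this sketch `train` and `model_test` are local stand-ins, not the real `paddle_quantum.qml.qcaan` functions:

```python
def train(**config):
    # Stand-in for paddle_quantum.qml.qcaan.train.
    return ("train", sorted(config))

def model_test(**config):
    # Stand-in for paddle_quantum.qml.qcaan.model_test.
    return ("inference", sorted(config))

DISPATCH = {"train": train, "inference": model_test}

def run(config: dict):
    mode = config.pop("mode")
    try:
        handler = DISPATCH[mode]
    except KeyError:
        raise ValueError("Unknown mode, it can be 'train' or 'inference'.") from None
    return handler(**config)

print(run({"mode": "inference", "num_qubits": 8}))
```

The behavior matches the if/elif chain: known modes forward the remaining config as keyword arguments, unknown modes raise the same `ValueError`.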
# The full config for training the QC-AAN model.
# The mode of this config. Available values: 'train' | 'inference'.
mode = 'train'
# The name of the model, which is used to save the model.
#model_name = 'qcaan-model'
# The path to save the model. Both relative and absolute paths are allowed.
# It saves the model to the current path by default.
# saved_path = './'
# The number of qubits which the quantum circuit contains.
num_qubits = 8
# The depth (number of complex entangled layers) of the quantum circuit.
num_depths = 4
# The learning rate used to update the QNN parameters, default to 0.005.
lr_qnn = 0.005
# The batch size in each iteration
batch_size = 128
# The number of latent features, i.e. the input dimension of the generator.
latent_dim = 16
# The number of epochs to train the model.
epochs = 21
# The learning rate used to update the generator parameters, default to 0.0002.
lr_g = 0.0002
# The learning rate used to update the discriminator parameters, default to 0.0002.
lr_d = 0.0002
# The beta1 used in Adam optimizer of generator and discriminator, default to 0.5.
beta1 = 0.5
# The beta2 used in Adam optimizer of generator and discriminator, default to 0.9.
beta2 = 0.9
# The manual seed for reproducibility.
manual_seed = 888
task = 'test'
text = '查宁波到北京的火车票'
num_filter = 1
kernel_size = 5
circuit_depth = 2
padding = 2
model_path = 'decoder.pdparams'
bert_model = 'bert-base-chinese'
hidden_size = 768
classes = ['火车', '音乐', '天气', '短信', '电话', '航班', '新闻']
#!/usr/bin/env python3
# Copyright (c) 2023 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import warnings
warnings.filterwarnings('ignore')
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'
import argparse
import toml
from paddle_quantum.qml.bert_qtc import train, inference
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="Classify the intent by the BERT-QTC model.")
parser.add_argument("--config", type=str, help="Input the config file with toml format.")
args = parser.parse_args()
config = toml.load(args.config)
task = config.pop('task')
if task == 'train':
train(**config)
elif task == 'test':
prediction = inference(**config)
text = config['text']
print(f'The input text is {text}.')
print(f'The prediction of the model is {prediction}.')
else:
raise ValueError("Unknown task, it can be 'train' or 'test'.")
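The post-BERT pipeline configured above (a temporal convolution with `num_filter = 1`, `kernel_size = 5`, `padding = 2`, followed by global max pooling and a linear head over 7 intent classes) can be sketched with numpy. The quantum filter is replaced here by a plain 1-D convolution and the per-token encoding is a simple mean, so this only illustrates shapes and data flow, not the actual quantum circuit:

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, hidden_size, num_classes = 16, 768, 7
kernel_size, padding = 5, 2

# Stand-in for BERT token features: one hidden_size vector per token.
features = rng.normal(size=(seq_len, hidden_size))

# Reduce each token vector to a scalar sequence (placeholder for the per-token
# encoding fed into the quantum temporal convolution).
sequence = features.mean(axis=1)                      # shape (seq_len,)

# 1-D convolution with a single filter; padding=2 keeps the length at seq_len.
kernel = rng.normal(size=kernel_size)
padded = np.pad(sequence, padding)                    # shape (seq_len + 2*padding,)
conv_out = np.array([padded[i:i + kernel_size] @ kernel for i in range(seq_len)])

# Global max pooling collapses the sequence to one scalar per filter.
gmp = conv_out.max()

# Linear classification head over the 7 intent classes.
w = rng.normal(size=num_classes)
b = rng.normal(size=num_classes)
logits = w * gmp + b
prediction = int(np.argmax(logits))
print(conv_out.shape, prediction)
```

With one filter the pooled representation is a single scalar, so the final layer is a 1-to-7 linear map; more filters would simply widen that input.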
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 意图识别简介\n",
"\n",
"*Copyright (c) 2023 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
"\n",
"自然语言处理(Natural Language Processing, NLP)是一种机器学习技术,使计算机具有解释、理解和使用人类语言的能力。现在的各类政企拥有大量的语音和文本数据,这些数据来自各种通信渠道,如电子邮件、文本信息、社交媒体新闻报道、视频、音频等等。他们使用NLP软件来自动处理这些数据,分析信息中的意图或情绪,并实时回应人们的沟通。\n",
"\n",
"意图识别是自然语言处理中的基础任务之一,在搜索引擎、智能客服、机器人等产品中都有着重要的应用。\n",
"\n",
"这里,我们使用 BERT-QTC [1] 这一量子经典混合模型来实现意图识别任务:针对输入的文本,确定这句话所对应的意图,如聊天、询问菜谱、询问电视节目等。\n",
"\n",
"我们使用 [SMP2017 数据集](https://github.com/HITlilingzhi/SMP2017ECDT-DATA) [2] 来进行实验展示,选取了其中七个类别来进行训练,分别是火车、音乐、天气、短信、电话、航班、新闻。数据的样本如下:\n",
"\n",
"- 查宁波到北京的火车票\n",
"- 我想知道浙江义乌的天气\n",
"- 帮我查一下明天广州到长沙的航班\n",
"- 我想听最新军事新闻。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 使用 BERT-QTC 实现意图识别\n",
"\n",
"BERT-QTC模型是一个量子经典混合模型。它的模型结构如下:\n",
"\n",
"![the arch of the bert-qtc model](bert_qtc_arch.png)\n",
"\n",
"模型的具体流程如下:\n",
"\n",
"1. 使用 BERT [3] 对输入文本进行特征提取,得到句子级别的特征表示。\n",
"2. 对于 BERT 提取到的特征,使用量子时序卷积(Quantum Temporal Convolution, QTC)和全局最大池化(Global Max Pooling, GMP)进行进一步的特征提取和降维。\n",
"3. 使用全连接层,进行分类预测,得到分类结果。\n",
"\n",
"### 工作流\n",
"\n",
"BERT-QTC 模型是学习类的模型,我们需要先使用数据集对模型进行训练。在训练收敛后,我们便得到了一个可以对这类数据进行分类的模型。由于 BERT 是一个大型语言模型,我们直接使用预训练好的 BERT 模型来进行特征提取;在之后的模型训练过程中,BERT 部分的参数不再更新。\n",
"\n",
"总结来说,其工作流如下:\n",
"\n",
"1. 制备意图识别的数据集。\n",
"2. 使用数据集对 BERT-QTC 模型训练,得到训练好的模型。\n",
"3. 使用该模型对输入的文本进行预测,得到预测结果。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 如何使用\n",
"\n",
"这里,我们给出一个训练好的模型供测试使用。只需要在 `example.toml` 这个配置文件中进行对应的配置。然后输入 `python intent_classification.py --config example.toml` 即可对输入的文本进行测试。\n",
"\n",
"### 在线演示\n",
"\n",
"这里,我们给出一个在线演示的版本。首先定义配置文件的内容:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"test_toml = r\"\"\"\n",
"task = 'test'\n",
"text = '查宁波到北京的火车票'\n",
"num_filter = 1\n",
"kernel_size = 5\n",
"circuit_depth = 2\n",
"padding = 2\n",
"model_path = 'decoder.pdparams'\n",
"bert_model = 'bert-base-chinese'\n",
"hidden_size = 768\n",
"classes = ['火车', '音乐', '天气', '短信', '电话', '航班', '新闻']\n",
"\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"接下来是测试部分的代码:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"输入的文本是:查宁波到北京的火车票。\n",
"模型的预测结果是:火车。\n"
]
}
],
"source": [
"import os\n",
"import warnings\n",
"\n",
"warnings.filterwarnings('ignore')\n",
"os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'\n",
"\n",
"import toml\n",
"from paddle_quantum.qml.bert_qtc import train, inference\n",
"\n",
"config = toml.loads(test_toml)\n",
"task = config.pop('task')\n",
"if task == 'train':\n",
" train(**config)\n",
"elif task == 'test':\n",
" prediction = inference(**config)\n",
" text = config['text']\n",
" print(f'输入的文本是:{text}。')\n",
" print(f'模型的预测结果是:{prediction}。')\n",
"else:\n",
" raise ValueError(\"未知的任务,它可以是'train'或'test'。\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"在这里,我们只需要修改配置文件中 text 的内容,再运行整个代码,就可以快速对其它文本进行测试。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 注意事项\n",
"\n",
"在这里,我们提供的仅仅是一个demo模型,其效果只用来展示。对于实际应用场景来说,需要进行针对性的设计和训练,才能达到更好的效果。\n",
"\n",
"### 数据集结构\n",
"\n",
"如果想要使用自定义数据集进行训练,只需要按照规则来准备数据集即可。在数据集文件夹中准备 `train.txt` 和 `test.txt`,如果需要验证集的话还有 `dev.txt`。每个文件里使用一行代表一条数据。每行内容包含文本和对应的标签,使用制表符隔开。文本是由空格隔开的文字组成。\n",
"\n",
"### 配置文件介绍\n",
"\n",
"在 `test.toml` 里有测试所需的完整配置文件内容参考,在 `train.toml` 里有训练所需的完整配置文件内容参考。使用 `python intent_classification.py --config train.toml` 可以对模型进行训练;使用 `python intent_classification.py --config test.toml` 可以加载训练好的模型进行测试。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 参考文献\n",
"\n",
"[1] Yang C H H, Qi J, Chen S Y C, et al. When BERT meets quantum temporal convolution learning for text classification in heterogeneous computing[C]//ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022: 8602-8606.\n",
"\n",
"[2] Zhang W N, Chen Z, Che W, et al. The first evaluation of Chinese human-computer dialogue technology[J]. arXiv preprint arXiv:1709.10217, 2017.\n",
"\n",
"[3] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 2019: 4171-4186."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "temp",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.15"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction to Intent Classification\n",
"\n",
"*Copyright (c) 2023 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
"\n",
"Natural language processing (NLP) is a machine learning technology that gives computers the ability to interpret, manipulate, and comprehend human language. Organizations today have large volumes of voice and text data from various communication channels like emails, text messages, social media newsfeeds, video, audio, and more. They use NLP software to automatically process this data, analyze the intent or sentiment in the message, and respond in real time to human communication.\n",
"\n",
"Intent classification is one of the fundamental tasks in NLP and has important applications in products such as search engines, intelligent customer service, and robotics.\n",
"\n",
"Here, we use BERT-QTC [1], a quantum-classical hybrid model, to implement the intent classification task: given an input text, determine the intent of the sentence, such as chatting, asking for a recipe, or asking about a TV program.\n",
"\n",
"We use the [SMP2017 dataset](https://github.com/HITlilingzhi/SMP2017ECDT-DATA) [2] for our experiments. We select seven of its classes for training: train, music, weather, message, telephone, flight, and news. Samples of the data are as follows:\n",
"\n",
"- 查宁波到北京的火车票\n",
"- 我想知道浙江义乌的天气\n",
"- 帮我查一下明天广州到长沙的航班\n",
"- 我想听最新军事新闻。\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Intent Classification Using BERT-QTC Model\n",
"\n",
"The BERT-QTC model is a quantum-classical hybrid model. It has the following model structure:\n",
"\n",
"![the arch of the bert-qtc model](bert_qtc_arch.png)\n",
"\n",
"The workflow of the model is as follows:\n",
"\n",
"1. Use BERT [3] to extract the feature of the input text to obtain a sentence-level feature representation.\n",
"2. For the features extracted by BERT, use Quantum Temporal Convolution (QTC) and Global Max Pooling (GMP) for further feature extraction and dimensionality reduction.\n",
"3. Use the fully connected layer to perform classification prediction and obtain classification results.\n",
"\n",
"### Workflow\n",
"\n",
"BERT-QTC is a learning model, so we first need to train it on a dataset. After training converges, we obtain a trained model that can classify this type of data. Since BERT is a large language model, we use a pre-trained BERT for feature extraction; during the subsequent training of the model, the parameters of the BERT part are frozen.\n",
"\n",
"In summary, its workflow is as follows:\n",
"\n",
"1. Prepare a dataset for intent classification.\n",
"2. Use the dataset to train the BERT-QTC model to get the trained model.\n",
"3. Use the model to predict the input text and get the prediction result."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## How To Use\n",
"\n",
"Here, we give a trained model for testing. You only need to make corresponding configurations in the `example.toml` configuration file. Then enter `python intent_classification.py --config example.toml` to test the entered text.\n",
"\n",
"### Online Demo\n",
"\n",
"Here, we give a version of the online demo. First define the contents of the configuration file. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"test_toml = r\"\"\"\n",
"task = 'test'\n",
"text = '查宁波到北京的火车票'\n",
"num_filter = 1\n",
"kernel_size = 5\n",
"circuit_depth = 2\n",
"padding = 2\n",
"model_path = 'decoder.pdparams'\n",
"bert_model = 'bert-base-chinese'\n",
"hidden_size = 768\n",
"classes = ['火车', '音乐', '天气', '短信', '电话', '航班', '新闻']\n",
"\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Next is the code for the prediction section."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The input text is 查宁波到北京的火车票.\n",
"The prediction of the model is 火车.\n"
]
}
],
"source": [
"import os\n",
"import warnings\n",
"\n",
"warnings.filterwarnings('ignore')\n",
"os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'\n",
"\n",
"import toml\n",
"from paddle_quantum.qml.bert_qtc import train, inference\n",
"\n",
"config = toml.loads(test_toml)\n",
"task = config.pop('task')\n",
"if task == 'train':\n",
" train(**config)\n",
"elif task == 'test':\n",
" prediction = inference(**config)\n",
" text = config['text']\n",
" print(f'The input text is {text}.')\n",
" print(f'The prediction of the model is {prediction}.')\n",
"else:\n",
" raise ValueError(\"Unknown task, it can be train or test.\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, we only need to modify the content of the text in the configuration file, and then run the entire code to quickly test other texts."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Note\n",
"\n",
"Here, we provide only a demo model, and its effect is only for demonstration. For practical application scenarios, targeted design and training are needed to achieve better results.\n",
"\n",
"### The structure of the dataset\n",
"\n",
"If you want to use a custom dataset for training, you just need to prepare the dataset according to the following rules. Put `train.txt` and `test.txt` in the dataset folder, plus `dev.txt` if a validation set is needed. In each file, one line represents one sample. Each line contains the text and the corresponding label, separated by a tab; the text consists of space-separated words.\n",
"\n",
"### Introduction to the Configuration File\n",
"\n",
"`test.toml` is a complete reference configuration for testing, and `train.toml` is a complete reference configuration for training. Use `python intent_classification.py --config train.toml` to train the model. Use `python intent_classification.py --config test.toml` to load the trained model for testing.\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n",
"\n",
"[1] Yang C H H, Qi J, Chen S Y C, et al. When BERT meets quantum temporal convolution learning for text classification in heterogeneous computing[C]//ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022: 8602-8606.\n",
"\n",
"[2] Zhang W N, Chen Z, Che W, et al. The first evaluation of Chinese human-computer dialogue technology[J]. arXiv preprint arXiv:1709.10217, 2017.\n",
"\n",
"[3] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 2019: 4171-4186."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "temp",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.15"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
你知道小黄鸡吗 0
你在说啥啊 0
喝啥子哩 0
你娘亲是谁 0
你想吃什么呀 0
你知道怎么拍照么 0
笨笨家在哪 0
做的还是不够好 0
有汽车票吗 0
你是宇宙飞船吗 0
这有什么厉害的 0
怎么我问你什么你都不知道 0
你喜欢玩肥皂么? 0
会lol吗 0
喝哪种酒 0
哪个学校的 0
哪里牛逼了 0
你怎么就知道我不会呢? 0
谁是小萌萌 0
你的梦想是什么呢 0
就不告诉你又能咋样我你再跟我那么叫我起床!跟我说句话!估计你给我发个图或者跟你给我发个表情我才能告诉你!要不然你就得跟我换个秘密! 0
笨笨你爸比和帮主是什么关系 0
真的吗 0
你妈逼你考大学了吗 0
干啥呢 0
你什么意思 0
凭啥 0
问题是有些东西你听不懂怎么聊天 0
没听清吗 0
什么语无伦次的 0
你是说我有意思吗 0
对呀+你咋这么笨呢 0
这有啥意思 0
能告诉我徐乐乐的信息吗 0
你在玩什么呀 0
徐梓翔是谁 0
能发点其他的吗 0
车车美么 0
你咋不上天呢 0
这么神奇吗 0
你是哪里的人啊 0
谁是猪 0
是谁呀 0
提示什么 0
你咋知道我是老师? 0
你现在在做什么啊 0
你唉啥 0
你是穆斯林吗 0
我就不是好孩子了怎么了 0
我喜欢好朋友吗 0
你什么梦想 0
怎么安慰啊 0
这么傲娇 0
你的性格是什么 0
是我说话太直接了么 0
你们你过生日吃蛋糕吗 0
你是妹妹还是哥哥 0
两点半夜鸡叫什么了不少钱包 0
你这都跟哪啊快雪的 0
你好能看懂表情吗呲牙 0
这么笨啊怪不得叫笨笨 0
刘挺是谁? 0
哪天的? 0
几八痒 0
除了会说这些还会说哪行啊 0
是啊你说的那几句话到底啥意思 0
你是用什么训练的 0
你咋没上幼儿园 0
你是白痴吗 0
你有兄弟么 0
谁最聪明 0
是什么呢 0
爱她该怎么做 0
满意吗 0
谁是你的爱 0
其实我不跟你说话么办 0
娃咋么杨你因为呵呵演完呀不到你 0
你生日是什么时候 0
你下蛋吗 0
在干吗啊 0
我是不是好没用 0
哼谁是你亲爱的 0
有房有车吗微笑 0
一共有几个小动物 0
谁子 0
摸啥 0
你怎么变笨了 0
订机票今天吃什么 0
我都不害羞你害羞什么 0
今天什么天气 0
推荐几部呗 0
嗯你叫啥呀 0
下面有什么 0
聊这么 0
我想去上海坐火车得几天 0
你怎么笨笨的? 0
什么鬼玩意 0
它在哪里 0
什么曰子啊 0
啥呀 0 0
这么逗我好玩是不 0
怎么上分 0
什么一年多了 0
你爱玩什么英雄 0
谁告诉你的 0
你能给我唱歌吗 0
你长什么样子 0
你怎么认识我的 0
哪有小猫 0
你怎么会不知道能 0
要啥? 0
欧洲杯谁能夺冠 0
我能把你教坏吗 0
你不是本人吗 0
你觉得我嫁的出去吗 0
天才中有什么重大新闻 0
谈的怎么样 0
怎么这么热受不了了 0
你们这么爽 0
那聊什么呢 0
晚餐吃点什么呢 0
你谁什么 0
你叫什么名字? 0
什么有点意思啊 0
你能做点什么 0
可以教教吗? 0
你还会什么古诗 0
那你哈尔滨那边有什么好吃的呀你今天吃什么呀 0
你能曝照吗 0
朱泽圻女朋友你认识吗 0
好有爱? 0
你肥吗 0
可以告诉我你的性别么 0
怎么走爽了呢 0
弄啥类 0
你怎么知道~ 0
怎么教你记住东西 0
那你为什么要养家糊口 0
那草为啥会绿 0
啊我要订票你怎么吟诗了 0
你们学校机械学院都有哪些好老师他们都叫什么名字 0
怎么写 0
么么么哒 0
笨笨你是没有年龄么 0
最近有啥新鲜事阿 0
你能听懂我说的是啥吗 0
我们去爬山怎么样啊 0
这是哪年的事 0
不行么 0
有答案了吗 0
顾超是谁 0
为啥穷 0
可以帮买火车票吗 0
怎么 0
银耳莲子粥怎么做 1
蛋黄焗薯条怎么做 1
扬州炒饭 1
糖醋鲤鱼怎么做啊?你只负责吃,c则c。 1
素鸡 1
萝卜炖排骨汤怎么做? 1
我说吃飞蟹,怎么做? 1
黑芝麻怎么炒? 1
板栗怎么煮? 1
鱼香肉丝,怎么炒? 1
搜索烤饼的做法。 1
小白菜拌豆腐怎么做呀? 1
猪肉丸子汤怎么做啊? 1
做苹果麦片粥需要哪写材料? 1
帮我搜索一下红烧排骨。 1
怎么样煮方便面。 1
黑米红豆粥的做法。 1
我想我想做老鸭汤用什么材料? 1
豆沙包。 1
做可乐鸡翅需要哪些材料? 1
赵柏田鸡怎么做啊? 1
做黄焖小土豆怎么做啊? 1
酱油鸡。 1
好吧,我想知道稻香排骨是怎么做的? 1
红烧牛肉面我想来一碗。 1
帮我找出红烧肉的做法。 1
做糖醋排骨需要哪些材料呀? 1
煲鸡汤。 1
芒果西米露怎么做啊 1
胡萝卜怎么炒好吃? 1
红烧肉的做法啊! 1
做花生拌豆腐需要什么材料啊? 1
就拉下的饺子怎么做? 1
做火腿炒茄瓜需要什么材料? 1
酸辣藕丁的做法。 1
用电压力锅怎么蒸米粉肉。 1
请问红烧肉的做法?你,你。 1
孜然排骨怎么做? 1
干锅香辣虾 1
肥肠的做法 1
海虾的做法。 1
用菜籽油做面包 1
打豆浆剩下的豆渣怎样做,好吃? 1
脆瓜。 1
灌汤小笼包怎样做。 1
做红烧肉怎么做? 1
做蛋炒饭需要什么材料? 1
酸菜鱼怎样做 1
打了怎么做红烧肉的做法? 1
蒜茄子怎么做? 1
木瓜汤怎么做 1
炝拌土豆丝。 1
宫爆鸡丁的菜谱。 1
江苏路红薯粉怎么做好吃? 1
烧鸭怎么做 1
姐你红烧肉怎么做呀 1
臭豆腐怎么做啊? 1
做红烧肉。 1
不懂饭的做法。 1
鸡蛋的做法 1
红烧带鱼怎么做?你? 1
鱼香肉丝的做法啊! 1
周黑鸭怎么做? 1
那好,那你告诉我西红柿炒鸡蛋,怎么去做? 1
干贝怎么做? 1
怎么做?酸菜鱼? 1
鸡爪怎么炒? 1
干锅鱼怎么做 1
怎样做凉拌? 1
糖醋排骨怎么做?好了额,你。 1
清蒸鱼怎么做?吴? 1
土豆粉怎么做好吃? 1
五梆瓜鸡蛋汤怎么做? 1
刘林面怎么做啊? 1
酸菜粉条肉的做法。 1
童子鸡的做法。 1
讯飞语点请你搜索一下番茄炒蛋怎么做? 1
红烧肉怎么煮啊 1
面包怎么做 1
请问鱼块的做法。 1
猪大肠怎么炒? 1
小猪哨糖醋排骨的做法。 1
嗯咯鸡爪怎么做的。 1
鲤鱼怎么做? 1
酸菜鱼,怎么烧? 1
炸里鸡的做法。 1
不,做个红烧肉怎么做? 1
福寿鱼怎么煮比较好吃? 1
拔丝芋头怎么做 1
播放2005年励志动漫 2
最近有什么电影可以看一下。 2
请搜电影画皮 2
家有儿女第二部 2
张艺谋的甄嬛传 2
小c我想看北京青年 2
美国电影敢死队二 2
发现之美 2
要看特种兵 2
我要看电视了 2
青春四十 2
电影十二生肖 2
整人专家的电影 2
青年奥林匹克运动会 2
汽车总动员 2
恶熊出没 2
第二梦 2
日韩好看的电视剧 2
帮我看下最近有什么好看的电影? 2
播放非诚勿扰 2
蝎子王蝎子王 2
锁梦楼 2
武林风视频 2
找一下非诚勿扰娱乐节目 2
我要看功夫熊猫帮我播放出来 2
请帮我播放樱桃 2
查雷神电影 2
青瓷电视剧 2
请帮我打开电影 2
最近新上映的电视剧。 2
黑猫截长 2
小c帮我搜一下断剑视频 2
我要看西游记啊 2
夜访吸血鬼 2
帮我看一下最近哪些电影是新上映的。 2
尼古拉斯凯奇 2
憨豆特工 2
文艺片 2
体育赛事直播 2
爱情片 2
美国电影破碎之城 2
大耳朵图图获 2
搜幕府风云 2
三女休夫 2
放年代短片看 2
惊情四百年 2
魔幻电影 2
80年代成龙演的电影 2
我想看爱在春天 2
抽身游戏 2
大耳朵图图二 2
最近好评的晚会 2
预约还珠格格第八集 2
小破孩动画片 2
金枝欲孽 2
爱情工寓 2
智慧树我 2
最近最热的电视剧 2
我想看电影台北飘雪 2
音乐视频 2
哪个频道有刘德华出演的电视剧 3
哪个台播放节目 3
今天晚上有没有中国好声音呢? 3
你知道现在广东卫视在演什么吗? 3
今晚看什么电影? 3
金鹰卡通今天晚上放什么节目? 3
电影频道今天放什么 3
15号的我愿意 3
北京青年频道谁在说 3
看一下今天晚上广西台有没有警戒线? 3
世界军事中央台世界军事 3
广西电视台现在播放什么节目? 3
帮我搜下什么时间有演星跳水立方 3
CCTV9明晚有转播仁心解码2呢 3
一站到底是哪个电视台播放的。 3
查下最近1周有转播X女特工 3
第二十二条婚规什么时候结束 3
动漫频道三天之后有开始播吸血鬼日记第四季呢 3
搜一下今天中午开始放郭的秀 3
读书频道上个星期五有播放新闻联播吗 3
湖南卫视有什么节目 3
现在有什么好的电影呢? 3
哪个台在放非诚勿扰 3
明晚真人秀频道有播放过百变大咖秀呢 3
现在有没有什么大片? 3
湖北卫视今天下午六点以后开始播男生女生向前冲 3
帮我查查北京电视台-6上周六开始美味的想念呢 3
中央六台的节目单 3
回放昨晚的焦点访谈 3
下周有什么好看的恐怖电影 3
明天安徽卫视演什么 3
江西卫视三天之后放一站到底么 3
贵州卫视今晚看什么电视剧? 3
湖南卫视现在演什么节目 3
查下安徽电视台今天节目单 3
年代秀什么时候播 3
春饼哪家好 0
那你到底是男生还是女生 0
你是帅哥还是美女 0
你只管帅哥吗 0
那你男神是谁 0
你是谁造的 0
那用什么来形容 0
你可以干什么 0
这么久在干嘛呢 0
你有多少最爱啊 0
你会说情话吗 0
你来吗 0
嘿干什么呢 0
认识俞霖霖吗 0
你是男生还是女生呀 0
是和我没话了吗 0
你认识么 0
没有其他的吗 0
是不是笨笨你觉得你笨吗 0
我说你喜欢我么 0
你是不是 0
你是哪儿人啊 0
小时候住过哪里 0
哥哥是谁 0
我去过陈中宝吗 0
我想想你刚刚说什么来着 0
你确定你能懂我说的是什么么 0
天啊是谁 0
你在犹豫什么 0
怎么考好呢 0
告诉我怎么去 0
长大了要干嘛 0
你QQ号吗诗多少 0
见谁 0
不是说你是难道是说我吗 0
会唱小星星吗 0
为什么要矜持 0
你到开心果是什么问题 0
发照片,看看你长什么样 0
太阳是不是有下雨吗 0
你知道你的名字是什么吗 0
我找男朋友了家里人不同意怎么办 0
你怎么这么猥琐 0
你特长是谁 0
搞笑吗 0
大家都说我瘦怎么办呀 0
有你漂亮么 0
你是不是有病呀 0
你需要睡觉吗 0
朱泽圻是谁 0
你和Siri是好朋友吗 0
素烩汤。 1
麻辣豆腐怎么做啊? 1
大薯条怎么做? 1
今晚红烧肉的做法。 1
皮冻的做法。 1
腊鱼头怎么做? 1
帮我看一下糖醋排骨怎么烧? 1
小军儿炖土豆该怎么做? 1
牛肉怎么烧 1
桂林米粉,哦。 1
红烧肉要哪些材料了 1
麻婆豆腐怎么做 1
艺锅肉怎么做? 1
怎样做肉类? 1
来做烧卖。 1
教我煮红烧排骨 1
酸菜鱼怎么烧的? 1
酱牛肉怎么做 1
你如何做螃蟹? 1
放鱼怎么做? 1
怎样给宝宝做胡萝卜泥。 1
桂花糕。 1
拌面该怎么做? 1
酒鬼花生怎么做? 1
怎么做番茄汤 1
腰花可以怎么做? 1
了,鸡蛋怎么做? 1
热狗了。 1
鸡蛋怎么炒最好吃? 1
鲜肉玉米羹怎么做? 1
人心无穷烧肉的做法。 1
酱油汁的做法。 1
请问红烧茄子怎么做? 1
做酸菜鱼。 1
糖醋藕怎么做? 1
搜索西红柿的做法。 1
刀削面怎么样做。 1
土豆怎么做才好吃? 1
地三鲜怎么做?你? 1
鸡蛋怎么做? 1
保险,酸辣土豆丝怎么炒? 1
怎样煮酸甜排骨。 1
鸡怎么烧 1
烤鸡翅怎么做呀? 1
北京烤鸭做法。 1
糖醋排骨怎么做呢 1
糖醋里脊啊,啊 1
糯米肉怎么做? 1
制作酸梅汤怎么做? 1
肉炒饭怎么做? 1
咸鸡,的制作方法。 1
我想听回锅肉做法。 1
啤酒鸭怎么做? 1
即鱼汤怎么做? 1
板栗如何烧? 1
怎么做馒头 1
国画怎样做土豆烧鸡块 1
有油炸酱料怎么做? 1
面条怎么样煮啊? 1
做卤菜需要什么材料? 1
冰激凌怎么做哦 1
苹果麦片粥需要哪些材料? 1
布丁需要哪些材料? 1
炖鸡 1
有什么下酒菜吗 1
怎么做番茄炒蛋? 1
清蒸武昌鱼。 1
怎样做面筋 1
白萝卜汤怎么做? 1
腐乳肉怎么做? 1
鸡煲啊! 1
鱼香茄子怎么做 1
青鱼做法视频 1
请问酸辣土豆丝是怎么炒的 1
帮我搜索一下羊肉面片。 1
汉堡。 1
回锅肉怎么做啊 1
煮水饺。 1
烧鸡块怎么做? 1
怎样做糖醋排骨。 1
羊肉怎么烧? 1
梅菜扣肉怎么做啊? 1
小鸡炖蘑菇的做法爸 1
红烧鱼怎么烧啊? 1
死肉怎么烧好吃? 1
做凉拌青瓜炒肉用哪些材料? 1
请问牛排怎么做 1
豆豉鲮鱼油麦菜 1
日本豆腐的做法。 1
哈哈,吃了水煮鱼怎么做? 1
午夜凶铃 2
看最近有什么电影? 2
播放落日批示 2
福尔摩斯探案集 2
最新电影 2
看毛泽东在武汉的故事 2
一路向西 2
台剧 2
叶落长安电视剧 2
穷孩子富孩子 2
神兵小将 2
需要你帮我推荐一下电影 2
古墓丽影 2
播放卡洛斯沙尔阿 2
天才碰麻瓜 2
死亡笔记 2
给我播放最近好评的电影 2
封神榜 2
光阴 2
最近有什么好看的电影?啊? 2
美国电影少年派的奇幻漂流 2
共同关注 2
蓝猫龙骑团 2
电视剧粘豆包 2
最新有什么热门电影 2
最美好的时光便时计 2
最近有什么热播的电视剧。 2
熊出没啊 2
播放搞笑动漫 2
封神太子 2
有什么好看的电影没有 2
最近有哪些电影的资金字。 2
粉红女郎 2
你给我推荐个电影 2
泰国的节目 2
美人心计电视连续剧 2
孙俪的电视剧 2
动作电影 2
我要看综艺 2
张绍刚的综艺节目 2
给我搜个电影 2
中国霸王花 2
最近流行电影。 2
帮我搜索一下最近有什么电影可看? 2
新水浒传 2
抬头见喜 2
我要看巴拉小魔仙 2
天龙八部第二集 2
大西南剿匪记 2
荒野求 2
有没有九十年代热门的动漫 2
好评的喜剧电影 2
超级战舰电影 2
我想看电影搜索 2
2012年美国公告牌音乐大奖颁奖礼 2
喂我想看电影喂 2
最近有什么大片儿? 2
虎胆龙威 2
播放吸血鬼日记第二季 2
那金花和她的女婿 2
科幻篇 2
打狗棍播出时间表 3
电视上播放什么节目 3
chc高清的节目单 3
北京生活频道节目单 3
哪个台播出爱情公寓 3
爸爸去哪几点结束呀 3
越策越开心这个节目是哪个台播的呀? 3
什么时候有泰囧 3
湖南卫视电视节目表 3
中央三今天有什么节目 3
今天晚上天津卫视放什么年岁月 3
湖南卫视节目录 3
湖南卫视的快乐大本营什么时候播 3
淮南电视台综合频道节目。 3
明天什么节目啊? 3
现在浙江卫视在放什么节目 3
CCTV-9的节目单 3
湖南卫视现在在播什么 3
浙江卫视今晚有中国好声音吗? 3
最近下午有什么电影? 3
中央一一台有什么节目吗 3
青海卫视频道的节目单 3
中央一套昨天的新闻联播 3
前天中央一套的节目单 3
帮我查下下周演新闻联播 3
电视回看安徽卫视 3
十二频道今天有什么节目 3
12月9日北京卫视节目单 3
搜索东方卫视的电视节目。 3
财经频道今天有什么节目 3
浙江高清台的节目单 3
等一下七点半湖南卫视会播出什么节目 3
四川卫视频道的节目单 3
2012年的恐怖电影 3
现在有什么好看的电视剧吗? 3
安徽卫视今晚放什么节目啊? 3
# The full config for testing the BERT-QTC model.
# The task of this config. Available values: 'train' | 'test'.
task = 'test'
# The text to be tested.
text = 'The text to be predicted.'
# The path of the trained model, which will be loaded.
model_path = 'decoder.pdparams'
# The number of the filters
num_filter = 1
# The size of the kernel
kernel_size = 5
# The depth of the quantum circuit
circuit_depth = 2
# The length to pad
padding = 2
# The pretrained bert model
bert_model = 'bert-base-chinese'
# The size of the hidden state obtained through the BERT model
hidden_size = 768
# The classes of input text to be predicted.
classes = ['火车', '音乐', '天气', '短信', '电话', '航班', '新闻']
# The full config for training the BERT-QTC model.
# The task of this config. Available values: 'train' | 'test'.
task = 'train'
# The name of the model, which is used to save the model.
model_name = 'decoder'
# The path to save the model. Both relative and absolute paths are allowed.
# It saves the model to the current path by default.
# saved_path = './'
# The number of the filters
num_filter = 1
# The size of the kernel
kernel_size = 5
# The depth of the quantum circuit
circuit_depth = 2
# The length to pad
padding = 2
# The pretrained bert model
bert_model = 'bert-base-chinese'
# The size of the hidden state obtained through the BERT model
hidden_size = 768
# The size of the batch samplers.
batch_size = 8
# The number of epochs to train the model.
num_epochs = 10
# The learning rate used to update the parameters, defaults to 0.01.
# learning_rate = 0.01
# The path of the dataset.
dataset = 'smp_data'
# Whether use the validation.
# It is true means the dataset contains training, validation and test datasets.
# It is false means the dataset only contains training datasets and test datasets.
using_validation = true
# Number of epochs with no improvement after which training will be stopped.
# early_stopping = 1000
\ No newline at end of file
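The commented-out `early_stopping` option above stops training after a given number of epochs without improvement. A generic sketch of that rule (an illustration of the idea, not the Paddle Quantum implementation):

```python
# Sketch of early stopping: halt when the validation loss has not improved
# for `patience` consecutive epochs. Generic illustration only.
def train_with_early_stopping(val_losses, patience):
    """Return the number of epochs actually run before stopping."""
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch + 1  # stop after this epoch
    return len(val_losses)

# The loss plateaus after epoch 2; with patience=3, training stops after epoch 6.
print(train_with_early_stopping([0.9, 0.5, 0.4, 0.4, 0.41, 0.4, 0.42], 3))
```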
......@@ -88,7 +88,7 @@ def main(args):
else:
vqe_settings = parsed_configs["VQE"]
solver = GroundStateSolver(optimizer, **vqe_settings)
_, psi = solver.solve(mol, ansatz, **optimizer_settings[optimizer_name])
_, psi = solver.solve(ansatz, mol=mol, **optimizer_settings[optimizer_name])
e = energy(psi, mol)
d = dipole_moment(psi, mol)
......
......@@ -200,7 +200,7 @@
" cir = HardwareEfficient(mol.num_qubits, cir_depth)\n",
"\n",
" solver = GroundStateSolver(Adam, num_iterations=100, tol=1e-5)\n",
" e, psi = solver.solve(mol, cir, learning_rate=0.5)\n",
" e, psi = solver.solve(cir, mol=mol, learning_rate=0.5)\n",
" energies.append(e)\n",
"\n",
" # calculate dipole moments\n",
......
......@@ -29,7 +29,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Quantum solutions\n",
"## Quantum solution\n",
"Unlike classical algorithms, the core of the quantum Monte Carlo method is to simulate probability distributions with quantum circuits and store asset prices in quantum states, and then calculate the average of returns in parallel through quantum amplitude estimation algorithms. Through the characteristics of quantum superposition and entanglement, quantum schemes have the advantage of quadratic acceleration compared with classical schemes. Next, we take a European call option as an example to show how to use Paddle Quantum to simulate this quantum scheme to complete the risk-neutral pricing problem of European call options."
]
},
......@@ -46,8 +46,8 @@
"metadata": {},
"source": [
"We have set a parameter that can be used directly for the pricing of European call options. Just configure it in the configuration file `config.toml` and enter the command \n",
"`python euro_pricing.py --config config.toml`\n",
"The configured European options can be priced.\n",
"`python euro_pricing.py --config config.toml`.\n",
"The configured European options can then be priced.\n",
"\n",
"Here, we give a version of the online demo that can be tested online. First define the contents of the configuration file:"
]
......@@ -148,7 +148,7 @@
"\n",
"# Note\n",
"\n",
"The models presented here are only intended to solve the option pricing problem of the Black-Scholes model.\n",
"The model presented here is only intended to solve the option pricing problem of the Black-Scholes model.\n",
"\n",
"# Citation\n",
"\n",
......
......@@ -87,7 +87,7 @@ def main(args):
if __name__ == "__main__":
this_file_path = sys.path[0]
parser = argparse.ArgumentParser(description="Quantum chemistry task with paddle quantum.")
parser = argparse.ArgumentParser(description="Quantum portfolio optimization with paddle quantum.")
parser.add_argument(
"--config", default=os.path.join(this_file_path, './config.toml'), type=str, help="The path of toml format config file.")
parser.add_argument(
......
# The path of the input data. It should be a .txt file and in the IEEE Common Data Format.
data_dir = './ieee5cdf.txt'
# Threshold on the loss value at which the power flow optimization ends, defaults to 1e-3.
threshold = 1e-3
# Minimum number of iterations of power flow optimization.
minIter = 3
# Maximum number of iterations of power flow optimization.
maxIter = 100
# The depth of the quantum ansatz circuit.
depth = 4
# Number of optimization cycles of the quantum circuit.
iterations = 100
# The learning rate of the optimizer.
LR = 0.1
# Threshold on the loss value for ending the quantum circuit optimization early, defaults to 0.
gamma = 0
data_dir = './ieee5cdf.txt'
threshold = 1e-1
minIter = 1
maxIter = 100
depth = 4
iterations = 100
LR = 0.1
gamma = 0
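Both configs above drive a Newton-Raphson power flow (`minIter`/`maxIter` bound its iterations). A minimal classical Newton-Raphson loop for a nonlinear system f(x) = 0 looks as follows; each step solves the linear system J·dx = -f(x), which is the step a quantum approach can replace with a variational linear-system solver. The toy system is illustrative and unrelated to the library code:

```python
import numpy as np

# Minimal Newton-Raphson sketch for a nonlinear system f(x) = 0, as used
# classically in power flow; each step solves the linear system J dx = -f(x).
def newton_raphson(f, jacobian, x0, tol=1e-3, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.max(np.abs(fx)) < tol:  # stop when every residual is small
            break
        dx = np.linalg.solve(jacobian(x), -fx)
        x = x + dx
    return x

# Toy system: x^2 + y^2 = 4 and x = y, whose positive root is (sqrt(2), sqrt(2)).
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1]])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
print(newton_raphson(f, jac, [1.0, 0.5]))
```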
08/19/93 UW ARCHIVE 100.0 1962 W IEEE 14 Bus Test Case
BUS DATA FOLLOWS 14 ITEMS
1 Bus 1 HV 1 1 3 1.060 0.0 0.0 0.0 000.0 -00.0 0.0 1.060 0.0 0.0 0.0 0.0 0
2 Bus 2 HV 1 1 0 1.000 -0.00 20.0 10.0 40.0 30.0 0.0 1.045 50.0 -40.0 0.0 0.0 0
3 Bus 3 HV 1 1 0 1.000 -12.72 45.0 15.0 0.0 00.0 0.0 1.010 40.0 0.0 0.0 0.0 0
4 Bus 4 HV 1 1 0 1.000 -10.33 40.0 5.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
5 Bus 5 HV 1 1 0 1.000 -8.78 60.0 10.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
-999
BRANCH DATA FOLLOWS 20 ITEMS
1 2 1 1 1 0 0.02000 0.06000 0.0600 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
1 3 1 1 1 0 0.08000 0.24000 0.0500 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2 3 1 1 1 0 0.06000 0.18000 0.0400 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2 4 1 1 1 0 0.06000 0.18000 0.0400 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
2 5 1 1 1 0 0.04000 0.12000 0.0300 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
3 4 1 1 1 0 0.01000 0.03000 0.0200 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
4 5 1 1 1 0 0.08000 0.24000 0.0500 0 0 0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
-999
LOSS ZONES FOLLOWS 1 ITEMS
1 IEEE 14 BUS
-99
INTERCHANGE DATA FOLLOWS 1 ITEMS
1 2 Bus 2 HV 0.0 999.99 IEEE14 IEEE 14 Bus Test Case
-9
TIE LINES FOLLOWS 0 ITEMS
-999
END OF DATA
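The file above is in the IEEE Common Data Format: each section runs from its `... DATA FOLLOWS` header to a `-999` sentinel. A rough parser sketch for the bus section; whitespace splitting is a simplification that happens to work for this file (the actual CDF specification uses fixed column positions):

```python
# Sketch: extract the bus section of an IEEE Common Data Format file.
# Sections run from "... DATA FOLLOWS" to the -999 sentinel.
def read_bus_section(lines):
    buses, in_section = [], False
    for line in lines:
        token = line.strip()
        if token.startswith("BUS DATA FOLLOWS"):
            in_section = True
            continue
        if in_section:
            if token.startswith("-999"):  # end-of-section sentinel
                break
            fields = token.split()
            # fields: bus number, name (three tokens here), area/zone/type
            # flags, then the per-unit voltage in fields[7].
            buses.append({"number": int(fields[0]), "voltage": float(fields[7])})
    return buses

sample = """\
BUS DATA FOLLOWS 14 ITEMS
1 Bus 1 HV 1 1 3 1.060 0.0 0.0 0.0 000.0 -00.0 0.0 1.060 0.0 0.0 0.0 0.0 0
5 Bus 5 HV 1 1 0 1.000 -8.78 60.0 10.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
-999
""".splitlines()
print(read_bus_section(sample))
```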
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# 电力潮流计算\n",
"*Copyright (c) 2023 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
"## 背景介绍\n",
"\n",
"电力潮流计算是电力系统中一类重要的分析计算,其目的是在给定的运行条件下求取电力系统各个节点的电压及功率分布等信息。潮流计算几乎是所有电力系统分析的基础,同时也为电力系统的规划,扩建,和运行方式提供了支撑。一个简单的例子是,对于一个仍在规划中的电力系统,我们通过潮流计算可以检验所提出的电力系统规划方案能否满足各种运行方式的要求。因此电力潮流计算有着非常重要的实际意义。\n",
"\n",
"在潮流模型中,电力系统的各个部分由节点和连接节点的线表示。对于交流电而言,每个节点所需要考虑的参数一般有四个,分别是电压幅度(voltage magnitude),相位(phase angle),有功功率(active power)和无功功率(reactive power)。其中有功功率指的是电路实际将电能转化为其他形式能量的功率,而无功功率指的是能量在电源和负载之间不断来回跳动产生的功率。根据给定的初始条件,通过解节点电压方程,我们即可得到每个节点上未知的参数。从数学模型角度来看,潮流计算可以规约于解决一组非线性方程。具体地,对于一个拥有 $n$ 个节点的电力系统里的第 $i$ 个节点,我们有:\n",
"\n",
"$$\\begin{cases}\n",
"P_i-U_i\\sum^n_{j=1}U_j(G_{ij}\\cos\\delta_{ij}+B_{ij}\\sin\\delta_{ij}) &= 0 \\\\\n",
"Q_i-U_i\\sum^n_{j=1}U_j(G_{ij}\\sin\\delta_{ij}-B_{ij}\\cos\\delta_{ij}) &= 0 \n",
"\\end{cases}$$\n",
"其中 $P_i$ 表示第 $i$ 个节点的净注入有功功率,$Q_i$ 表示第 $i$ 个节点的净注入无功功率, $U_i$ 表示第 $i$ 个节点的电压。同时,这里 $G$ 表示导纳矩阵的实部, $B$ 表示导纳矩阵的虚部, $\\delta_{ij}$ 表示第 $i$ 个和第 $j$ 个节点的相位差。\n",
"\n",
"非线性方程的求解是一个困难的问题,因此我们常常使用一些数值方法去求取近似解。其中比较常用的一种方法是牛顿迭代法(Newton-Raphson method),它可以通过迭代次数的增加,而越来越接近方程的解。简单来说,牛顿迭代法就是从一个初始点出发,通过求解导数来找到合适的优化方向,不断更新我们的近似解,从而得到更加接近真实解的结果。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 量子解决方案\n",
"\n",
"我们已经知道牛顿迭代法是解决潮流计算的一种常用的经典算法,其解决潮流计算问题的复杂度主要来源于对于雅可比矩阵求逆的过程,这个过程的复杂度为$\\mathcal{O}(poly(n))$。在牛顿迭代法中,雅可比矩阵求逆的过程也可以被看作是求解线性方程组的过程。而在量子计算领域,已经有多个量子算法被提出用于解决线性方程组,如Harrow-Hassidim-Lloyd(HHL)算法,变分量子线性求解器(Variational Quantum Linear Solver, VQLS)。并且相比于经典算法,它们已经被证明在一定条件下在解决线性方程组问题上存在着指数加速的优势。在这里,我们可以使用变分量子线性求解器(VQLS)来替换经典牛顿迭代法中复杂度较高的雅可比矩阵求逆过程,从而实现潮流计算问题的量子加速 [1]。我们在量子应用模型库中同样提供了变分量子线性求解器的相关模型和教程,您可以阅读其教程来了解该算法的更多细节。接下来,我们将以一个5节点系统为例,展示基于牛顿迭代法和变分量子线性求解器的潮流计算量子解决方案。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 使用教程\n",
"\n",
"我们给出了一个设置好参数,可以直接进行电力潮流计算的配置文件。用户只需在`config.toml`里修改相应的参数,并在终端运行\n",
"`python power_flow.py --config config.toml`,即可对给定数据进行潮流分析。\n",
"### 输出结果\n",
"电力潮流的计算结果将被记录到文件 `pf_result.txt` 中。同时我们的优化过程将被记录在日志文件`power_flow.log`中,用户可以看到随着循环数的增加,损失大小的变化。\n",
"\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### 在线演示\n",
"这里,我们给出一个在线演示的版本,可以在线进行测试。首先定义配置文件的内容:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"test_toml = r\"\"\"\n",
"# 模型的整体配置文件\n",
"# 存储潮流计算初始数据文件的路径。\n",
"data_dir = './ieee5cdf.txt'\n",
"\n",
"# 结束潮流计算的误差阈值, 默认为1e-3。\n",
"threshold = 1e-3\n",
"\n",
"# 潮流计算中最小迭代次数。\n",
"minIter = 3\n",
"\n",
"# 潮流计算中最大迭代次数。\n",
"maxIter = 100\n",
"\n",
"# 参数化量子电路的层数。\n",
"depth = 4\n",
"\n",
"# 量子电路优化迭代次数。\n",
"iterations = 100\n",
"\n",
"# 优化器的学习率。\n",
"LR = 0.1\n",
"\n",
"# 参数化量子电路优化中损失函数的阈值。默认为0。\n",
"gamma = 0\n",
"\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"接下来是模型运行部分的代码:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████| 100/100 [01:19<00:00, 1.25it/s]\n",
"100%|██████████| 100/100 [01:08<00:00, 1.46it/s]\n",
"100%|██████████| 100/100 [00:57<00:00, 1.73it/s]\n",
"100%|██████████| 100/100 [01:01<00:00, 1.63it/s]\n",
"100%|██████████| 100/100 [00:57<00:00, 1.73it/s]\n",
"100%|██████████| 100/100 [01:00<00:00, 1.65it/s]\n",
"100%|██████████| 100/100 [01:10<00:00, 1.41it/s]\n",
"100%|██████████| 100/100 [01:17<00:00, 1.29it/s]\n",
"100%|██████████| 100/100 [01:20<00:00, 1.25it/s]"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"输出电力潮流计算结果:\n",
"Power Flow:\n",
"\n",
"| Bus | Bus | V | Angle | Injection | Generation | Load |\n",
"| No | Name | pu | Degree | MW | MVar | MW | Mvar | MW | MVar |\n",
"| 1 |Bus 1 HV| 1.060 | 0.000 | 129.816 | 24.447 | 129.816 | 24.447 | 0.000 | 0.000 |\n",
"| 2 |Bus 2 HV| 1.036 | -0.046 | 20.000 | 20.000 | 40.000 | 30.000 | 20.000 | 10.000 |\n",
"| 3 |Bus 3 HV| 1.009 | -0.084 | -45.000 | -15.000 | 0.000 | -0.000 | 45.000 | 15.000 |\n",
"| 4 |Bus 4 HV| 1.007 | -0.090 | -40.000 | -5.000 | -0.000 | 0.000 | 40.000 | 5.000 |\n",
"| 5 |Bus 5 HV| 1.002 | -0.104 | -60.000 | -10.000 | 0.000 | -0.000 | 60.000 | 10.000 |\n",
"----------------------------------------------------------------------------------------------------------\n",
"\n",
"Network and losses:\n",
"\n",
"| From | To | P | Q | From | To | P | Q | Branch Loss |\n",
"| Bus | Bus | MW | MVar | Bus | Bus | MW | MVar | MW | MVar |\n",
"| 1 | 2 | 95.69 | 13.87 | 2 | 1 | -81.06 | -9.54 | 14.63 | 4.33 |\n",
"| 1 | 3 | 46.48 | 10.58 | 3 | 1 | -34.51 | -6.77 | 11.97 | 3.81 |\n",
"| 2 | 3 | 28.99 | 8.15 | 3 | 2 | -20.24 | -7.01 | 8.74 | 1.13 |\n",
"| 2 | 4 | 32.23 | 8.06 | 4 | 2 | -23.40 | -6.65 | 8.83 | 1.42 |\n",
"| 2 | 5 | 58.11 | 13.33 | 5 | 2 | -50.69 | -9.77 | 7.42 | 3.56 |\n",
"| 3 | 4 | 20.94 | -1.21 | 4 | 3 | -16.84 | 1.32 | 4.10 | 0.11 |\n",
"| 4 | 5 | 11.41 | 0.33 | 5 | 4 | -1.28 | -0.23 | 10.12 | 0.10 |\n",
"----------------------------------------------------------------------------------------------------------\n",
"\n",
"Total active power losses: 65.82, Total reactive power losses: 14.45\n",
"误差为: 1.9593560071140548e-05\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"import os\n",
"import warnings\n",
"\n",
"warnings.filterwarnings('ignore')\n",
"os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'\n",
"\n",
"import toml\n",
"from paddle_quantum.data_analysis.power_flow import data_to_Grid\n",
"\n",
"config = toml.loads(test_toml)\n",
"file_name = config.pop('data_dir')\n",
"grid = data_to_Grid(file_name)\n",
"grid.powerflow(**config)\n",
"print(\"输出电力潮流计算结果:\")\n",
"grid.printResults()\n",
"Error = grid.tolerances[-1] \n",
"print(f\"误差为: {Error}\")\n",
"grid.saveResults()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"这里误差指的是求解方程后方程组的绝对误差的最大值。"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### 注意事项\n",
"\n",
"这里模型的输入数据需要使用IEEE的通用数据格式 [2]。\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 引用信息\n",
"\n",
"```tex\n",
"@article{liu2022quantum,\n",
" title={Quantum Power Flows: From Theory to Practice},\n",
" author={Liu, Junyu and Zheng, Han and Hanada, Masanori and Setia, Kanav and Wu, Dan},\n",
" journal={arXiv preprint arXiv:2211.05728},\n",
" year={2022}\n",
"}\n",
"\n",
"@article{pierce1973common,\n",
" title={Common format for exchange of solved load flow data},\n",
" author={Pierce, HE and others},\n",
" journal={IEEE Transactions on Power Apparatus and Systems},\n",
" volume={92},\n",
" number={6},\n",
" pages={1916--1925},\n",
" year={1973}\n",
"}\n",
"```\n",
"\n",
"## 参考文献\n",
"[1] Liu, Junyu, et al. \"Quantum Power Flows: From Theory to Practice.\" arXiv preprint arXiv:2211.05728 (2022).\n",
"\n",
"[2] Pierce, H. E. \"Common format for exchange of solved load flow data.\" IEEE Transactions on Power Apparatus and Systems 92.6 (1973): 1916-1925."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "power_flow",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.16"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "09b405f4ce7ff94b18a2e1d1d7346b8a2e6101e2bd963fee0349fb6bd6dc2572"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
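The Newton-Raphson update at the heart of the tutorial above can be stated compactly. Stacking the active and reactive power mismatches into a residual vector $F(x)$, with $x$ collecting the unknown phase angles and voltage magnitudes, one iteration reads:

```latex
% One Newton-Raphson step for the power flow equations F(x) = 0,
% where J is the Jacobian of F evaluated at the current iterate.
x^{(k+1)} = x^{(k)} - J\left(x^{(k)}\right)^{-1} F\left(x^{(k)}\right)
```

Solving the linear system with $J$ at each step is exactly the part that the VQLS-based approach targets.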
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Power Flow Study\n",
"*Copyright (c) 2023 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
"## Background\n",
"\n",
"Power flow is an important numerical analysis in a power system: it calculates the parameters of each node in the system from given conditions and constraints. The power flow study is the foundation of almost all power system analyses and evaluations, and it also underpins the further planning, expansion, and operation of a power system. As a simple example, a power flow calculation can check whether a proposed power system planning scheme meets the requirements. Power flow calculation therefore has great practical significance.\n",
"\n",
"In a power flow problem, each part of a power system is represented by nodes and the lines connecting them. For AC power, four parameters are generally considered at each node: voltage magnitude, phase angle, active power, and reactive power. Active power is the power that is actually converted from electric energy into other forms of energy, while reactive power is the power that continuously flows back and forth between the source and the load. Given the initial conditions, we can obtain the unknown parameters at each node by solving the power balance equations. Mathematically, the power flow problem reduces to solving a system of nonlinear equations. Specifically, for the $i$-th node in a power system with $n$ nodes, we have:\n",
"\n",
"$$\\begin{cases}\n",
"P_i-U_i\\sum^n_{j=1}U_j(G_{ij}\\cos\\delta_{ij}+B_{ij}\\sin\\delta_{ij}) &= 0 \\\\\n",
"Q_i-U_i\\sum^n_{j=1}U_j(G_{ij}\\sin\\delta_{ij}-B_{ij}\\cos\\delta_{ij}) &= 0 \n",
"\\end{cases}$$\n",
"where $P_i$ is the injected active power of the $i$-th node, $Q_i$ is its injected reactive power, and $U_i$ is its voltage magnitude. Meanwhile, $G$ and $B$ are the real and imaginary parts of the admittance matrix, respectively, and $\\delta_{ij}$ is the phase difference between the $i$-th and $j$-th nodes.\n",
"\n",
"Solving nonlinear equations is a difficult task, so numerical methods are usually used to obtain approximate solutions. One of the most commonly used is the Newton-Raphson method, which approaches the exact solution of the equations more closely as the number of iterations increases. In short, the Newton-Raphson method starts from an initial point, finds a suitable update direction by computing derivatives, and repeatedly updates the approximate solution until it is close to the exact one."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Quantum solutions\n",
"As discussed above, the Newton-Raphson method is commonly used for power flow problems. Its complexity mainly comes from inverting the Jacobian matrix, which costs $\\mathcal{O}(\\mathrm{poly}(n))$. In the Newton-Raphson method, inverting the Jacobian can also be viewed as solving a system of linear equations. In the field of quantum computing, many algorithms have been proposed for solving linear systems, e.g., the Harrow-Hassidim-Lloyd (HHL) algorithm and the Variational Quantum Linear Solver (VQLS). Under certain conditions, they have been proved to achieve exponential speedups over classical algorithms for solving linear systems. Here, we use VQLS to replace the Jacobian inversion step of the Newton-Raphson method, so as to achieve a quantum speedup for power flow problems [1]. We also provide a model and a tutorial of VQLS in our Quantum Application Model Library, where you can find more details on this algorithm. Next, we take a 5-node power system as an example to demonstrate the quantum solution of the power flow problem based on the Newton-Raphson method and VQLS."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## User's guide\n",
"\n",
"We provide a configuration file with pre-chosen parameters. To solve a power flow problem, the user only needs to change the parameters in the `config.toml` file and run `python power_flow.py --config config.toml` in the terminal.\n",
"### Output\n",
"The results of the power flow problem are written to the `pf_result.txt` file, and the optimization process is logged in the `power_flow.log` file. Users can check how the loss and error values evolve as the number of iterations increases."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Online demonstration\n",
"Here we provide a demo that can be run online. First, we define the contents of the configuration file:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"test_toml = r\"\"\"\n",
"# The path of the input data. It should be a .txt file and in the IEEE Common Data Format.\n",
"data_dir = './ieee5cdf.txt'\n",
"\n",
"# Threshold on the loss value for ending the power flow optimization; default is 1e-3.\n",
"threshold = 1e-3\n",
"\n",
"# Minimum number of iterations of power flow optimization.\n",
"minIter = 3\n",
"\n",
"# Maximum number of iterations of power flow optimization.\n",
"maxIter = 100\n",
"\n",
"# The depth of the quantum ansatz circuit.\n",
"depth = 4\n",
"\n",
"# Number of optimization cycles of the quantum circuit.\n",
"iterations = 100\n",
"\n",
"# The learning rate of the optimizer.\n",
"LR = 0.1\n",
"\n",
"# Threshold on the loss value for ending the quantum circuit optimization early; default is 0.\n",
"gamma = 0\n",
"\n",
"\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we present the code for the model:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████| 100/100 [00:59<00:00, 1.68it/s]\n",
"100%|██████████| 100/100 [00:59<00:00, 1.69it/s]\n",
"100%|██████████| 100/100 [00:59<00:00, 1.69it/s]\n",
"100%|██████████| 100/100 [00:59<00:00, 1.69it/s]\n",
"100%|██████████| 100/100 [00:58<00:00, 1.70it/s]\n",
"100%|██████████| 100/100 [00:58<00:00, 1.70it/s]\n",
"100%|██████████| 100/100 [00:58<00:00, 1.70it/s]\n",
"100%|██████████| 100/100 [00:58<00:00, 1.70it/s]\n",
"100%|██████████| 100/100 [01:00<00:00, 1.65it/s]"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Here are the power flow results:\n",
"Power Flow:\n",
"\n",
"| Bus | Bus | V | Angle | Injection | Generation | Load |\n",
"| No | Name | pu | Degree | MW | MVar | MW | Mvar | MW | MVar |\n",
"| 1 |Bus 1 HV| 1.060 | 0.000 | 129.816 | 24.447 | 129.816 | 24.447 | 0.000 | 0.000 |\n",
"| 2 |Bus 2 HV| 1.036 | -0.046 | 20.000 | 20.000 | 40.000 | 30.000 | 20.000 | 10.000 |\n",
"| 3 |Bus 3 HV| 1.009 | -0.084 | -45.000 | -15.000 | 0.000 | 0.000 | 45.000 | 15.000 |\n",
"| 4 |Bus 4 HV| 1.007 | -0.090 | -40.000 | -5.000 | -0.000 | -0.000 | 40.000 | 5.000 |\n",
"| 5 |Bus 5 HV| 1.002 | -0.104 | -60.000 | -10.000 | 0.000 | 0.000 | 60.000 | 10.000 |\n",
"----------------------------------------------------------------------------------------------------------\n",
"\n",
"Network and losses:\n",
"\n",
"| From | To | P | Q | From | To | P | Q | Branch Loss |\n",
"| Bus | Bus | MW | MVar | Bus | Bus | MW | MVar | MW | MVar |\n",
"| 1 | 2 | 95.69 | 13.87 | 2 | 1 | -81.06 | -9.54 | 14.63 | 4.33 |\n",
"| 1 | 3 | 46.48 | 10.58 | 3 | 1 | -34.51 | -6.77 | 11.97 | 3.81 |\n",
"| 2 | 3 | 28.99 | 8.15 | 3 | 2 | -20.24 | -7.01 | 8.74 | 1.13 |\n",
"| 2 | 4 | 32.23 | 8.06 | 4 | 2 | -23.40 | -6.65 | 8.83 | 1.42 |\n",
"| 2 | 5 | 58.11 | 13.33 | 5 | 2 | -50.69 | -9.77 | 7.42 | 3.56 |\n",
"| 3 | 4 | 20.94 | -1.21 | 4 | 3 | -16.84 | 1.32 | 4.10 | 0.11 |\n",
"| 4 | 5 | 11.40 | 0.33 | 5 | 4 | -1.28 | -0.23 | 10.12 | 0.10 |\n",
"----------------------------------------------------------------------------------------------------------\n",
"\n",
"Total active power losses: 65.82, Total reactive power losses: 14.45\n",
"Error: 1.203040833536173e-06\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"import os\n",
"import warnings\n",
"\n",
"warnings.filterwarnings('ignore')\n",
"os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'\n",
"\n",
"import toml\n",
"from paddle_quantum.data_analysis.power_flow import data_to_Grid\n",
"\n",
"config = toml.loads(test_toml)\n",
"file_name = config.pop('data_dir')\n",
"grid = data_to_Grid(file_name)\n",
"grid.powerflow(**config)\n",
"print(\"Here are the power flow results:\")\n",
"grid.printResults()\n",
"Error = grid.tolerances[-1] \n",
"print(f\"Error: {Error}\")\n",
"grid.saveResults()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The error here refers to the maximum absolute residual of the nonlinear equation system after solving."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Note\n",
"\n",
"The input data must be in the IEEE Common Data Format [2]."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Citation\n",
"\n",
"```\n",
"@article{liu2022quantum,\n",
" title={Quantum Power Flows: From Theory to Practice},\n",
" author={Liu, Junyu and Zheng, Han and Hanada, Masanori and Setia, Kanav and Wu, Dan},\n",
" journal={arXiv preprint arXiv:2211.05728},\n",
" year={2022}\n",
"}\n",
"\n",
"@article{pierce1973common,\n",
" title={Common format for exchange of solved load flow data},\n",
" author={Pierce, HE and others},\n",
" journal={IEEE Transactions on Power Apparatus and Systems},\n",
" volume={92},\n",
" number={6},\n",
" pages={1916--1925},\n",
" year={1973}\n",
"}\n",
"```\n",
"\n",
"## References\n",
"[1] Liu, Junyu, et al. \"Quantum Power Flows: From Theory to Practice.\" arXiv preprint arXiv:2211.05728 (2022).\n",
"\n",
"[2] Pierce, H. E. \"Common format for exchange of solved load flow data.\" IEEE Transactions on Power Apparatus and Systems 92.6 (1973): 1916-1925."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "power_flow",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.16"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "09b405f4ce7ff94b18a2e1d1d7346b8a2e6101e2bd963fee0349fb6bd6dc2572"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
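The Newton-Raphson loop described in the tutorial above (compute the power mismatches, solve a linear system with the Jacobian, update, repeat) can be sketched classically as follows. This is a minimal illustration on a made-up two-unknown balance system, not the `paddle_quantum` implementation; in the quantum approach of [1], the linear solve marked below is the step that VQLS replaces.

```python
import numpy as np

def newton_raphson(f, x0, tol=1e-8, max_iter=50):
    """Solve f(x) = 0 by Newton-Raphson with a finite-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.max(np.abs(fx)) < tol:  # same error metric as the tutorial
            break
        eps = 1e-7
        jac = np.empty((fx.size, x.size))
        for j in range(x.size):
            step = np.zeros_like(x)
            step[j] = eps
            jac[:, j] = (f(x + step) - fx) / eps
        # Classically this linear solve dominates the cost;
        # the quantum variant replaces it with VQLS.
        x = x - np.linalg.solve(jac, fx)
    return x

def mismatch(x):
    """Toy active/reactive power balance for one load bus (made-up numbers)."""
    delta, u = x  # phase angle (rad) and voltage magnitude (p.u.)
    p = -0.5 - 10.0 * u * np.sin(delta)                  # active power balance
    q = -0.2 + 10.0 * u * np.cos(delta) - 10.0 * u ** 2  # reactive power balance
    return np.array([p, q])

sol = newton_raphson(mismatch, [0.0, 1.0])
print(sol, np.max(np.abs(mismatch(sol))))
```

The loop stops once the maximum absolute mismatch drops below `tol`, which is the same error quantity the tutorial reports after solving.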
#!/usr/bin/env python3
# Copyright (c) 2020 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
Power Flow Model
"""
import argparse
import os
import warnings
warnings.filterwarnings('ignore')
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'
import toml
import logging
import numpy as np
from paddle_quantum.data_analysis.power_flow import data_to_Grid
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="Solve power flow problem.")
parser.add_argument("--config", type=str, help="Input the config file with toml format.")
args = parser.parse_args()
config = toml.load(args.config)
file_name = config.pop('data_dir')
grid = data_to_Grid(file_name)
grid.powerflow(**config)
print("Here are the power flow results:")
grid.printResults()
Error = grid.tolerances[-1]
print(f"Error is: {Error}")
logging.basicConfig(
filename='./power_flow.log',
filemode='w',
format='%(asctime)s %(levelname)s %(message)s',
level=logging.INFO
)
msg = f"Error is: {Error}"
logging.info(msg)
grid.saveResults()
\ No newline at end of file
......@@ -12,7 +12,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -25,7 +24,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -56,7 +54,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -76,7 +73,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -109,7 +105,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -143,7 +138,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
......@@ -161,7 +155,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "modellib",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
......@@ -175,9 +169,8 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.15"
"version": "3.7.16"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "8f24120f890011f53feb4ed62c47961d8565ec1de8b7cb23548c15bd6da8f2d2"
......
# The configuration file of quantum random number generation.
# the number of random bits needed
bit_len = 10
# the physical processor
backend = 'local_baidu_sim2'
# backend = 'cloud_baidu_sim2_water'
# backend = 'cloud_baidu_sim2_earth'
# backend = 'cloud_baidu_sim2_thunder'
# backend = 'cloud_baidu_sim2_heaven'
# backend = 'cloud_baidu_sim2_wind'
# backend = 'cloud_baidu_sim2_lake'
# backend = 'cloud_aer_at_bd'
# backend = 'cloud_baidu_qpu_qian'
# backend = 'cloud_iopcas'
# backend = 'cloud_ionapm'
# backend = 'service_ubqc'
# user's token for cloud service
token = ''
# whether to use the extractor for post-processing; use lowercase true/false here
extract = false
# security parameter
security = 1e-8
# the min-entropy of hardware 1 and hardware 2, in the range (0,1)
min_entr_1 = 0.9
min_entr_2 = 0.9
# the save path of log file
log_path = './qrng.log'
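The `extract`, `security`, and min-entropy fields above configure a privacy-amplification post-processing step. A common way to realize such an extractor is Toeplitz hashing; the sketch below is a self-contained illustration of that idea under assumed toy parameters (16 raw bits compressed to 8), not the `paddle_quantum` implementation, which derives the output length from `security` and the min-entropies.

```python
import numpy as np

def toeplitz_extract(raw_bits, out_len, seed_bits):
    """Compress raw bits into out_len nearly uniform bits with a Toeplitz matrix."""
    n = len(raw_bits)
    # A Toeplitz matrix is fixed by its first column and first row,
    # so it needs n + out_len - 1 uniform seed bits.
    assert len(seed_bits) == n + out_len - 1
    toep = np.empty((out_len, n), dtype=int)
    for i in range(out_len):
        for j in range(n):
            toep[i, j] = seed_bits[i - j + n - 1]
    # Matrix-vector product over GF(2).
    return ((toep @ np.array(raw_bits)) % 2).tolist()

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, size=16).tolist()           # noisy raw bits from hardware
seed = rng.integers(0, 2, size=16 + 8 - 1).tolist()  # uniform seed bits
print(toeplitz_extract(raw, 8, seed))
```

The output length a real extractor may safely produce depends on the min-entropy of the source and the security parameter, per the dual universal hashing analysis of [1].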
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Quantum Random Number Generator\n",
"*Copyright (c) 2023 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.*\n",
"\n",
"A random number generator is a method or device that produces random sequences from algorithms, physical signals, environmental noise, and so on. It plays an important role in many fields such as cryptography and machine learning, directly affecting how hard a cipher is to break and how well a machine-learning model generalizes. Classical computers generate pseudo-random numbers with algorithms such as the linear congruential method, which have inherent drawbacks, e.g., periodicity. A quantum computer, in contrast, can exploit the uncertainty of quantum measurement to generate true, unpredictable random numbers. This model library wraps methods for generating random numbers on Baidu's self-developed quantum computers, simulators, and third-party hardware available on the Baidu Quantum Cloud platform.\n",
"\n",
"## Generation Principle\n",
"On a quantum computer, applying an $H$ gate to the initial state $|0\\rangle$ yields the quantum state $|\\psi\\rangle$, \n",
"$$\n",
"|\\psi\\rangle = \\frac{\\sqrt2}{2} |0\\rangle + \\frac{\\sqrt2}{2} |1\\rangle .\n",
"$$\n",
"Under a Pauli-$Z$ measurement (i.e., a computational-basis measurement with measurement operators $|0\\rangle\\langle 0|$ and $|1\\rangle \\langle 1 |$), the state $|\\psi\\rangle$ gives $0$ or $1$ each with probability $\\frac12$. The inherent uncertainty of state collapse guarantees that the resulting $0$/$1$ outcomes are truly random. Repeating this procedure yields a random bit string; the workflow is illustrated below.\n",
"![random_number](./randnum_CN.png)\n",
"\n",
"If the quantum computer were an ideal noiseless device, the measurement results would be truly random. Current quantum computers, however, all carry some noise, so we also wrap a classical privacy-amplification algorithm [1]: two independent quantum computers (or two independent qubits of one machine) each measure a longer bit string, and post-processing the two strings extracts a shorter bit string, filtering out the influence of noise. The extraction process is illustrated below.\n",
"![extractor](./extractor_CN.png)\n",
"\n",
"## How to Use\n",
"### Quick Start\n",
"Users can run the model with `python randnum.py --config config.toml` in the terminal, where `config.toml` is the model configuration file; modify its parameters or supply another file to customize them.\n",
"### Online Demonstration\n",
"We also provide a version that can be demonstrated and debugged online. First, define the contents of the configuration file:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"example_toml = r'''\n",
"# required length of the bit string\n",
"bit_len = 10\n",
"\n",
"# processing hardware (backend)\n",
"backend = 'local_baidu_sim2' # local simulator\n",
"# backend = 'cloud_baidu_qpu_qian' # Baidu's self-developed quantum computer Qian\n",
"\n",
"# user token, required when using cloud services\n",
"token = ''\n",
"\n",
"# whether to apply privacy-amplification post-processing; in a toml file, true and false must be lowercase\n",
"extract = false\n",
"\n",
"# security parameter; the smaller it is, the closer to true randomness (used only by post-processing; no need to change otherwise)\n",
"security = 1e-8\n",
"\n",
"# min-entropy of hardware 1 and hardware 2, in the range (0,1) (used only by post-processing; no need to change otherwise)\n",
"min_entr_1 = 0.9\n",
"min_entr_2 = 0.9\n",
"\n",
"# path of the log file\n",
"log_path = './qrng.log'\n",
"'''"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, run the model."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Applications/anaconda3/envs/pqtest/lib/python3.7/site-packages/paddle/tensor/creation.py:125: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. \n",
"Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\n",
" if data.dtype == np.object:\n"
]
},
{
"data": {
"text/plain": [
"[0, 0, 0, 1, 1, 1, 1, 0, 1, 0]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import warnings\n",
"import toml\n",
"from paddle_quantum.data_analysis.rand_num import random_number_generation\n",
"warnings.filterwarnings('ignore')\n",
"random_number_generation(**toml.loads(example_toml))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, we only need to modify the parameters in `example_toml` and rerun all the code above to customize the demo quickly. To use Baidu's self-developed quantum computer Qian, change the `backend` parameter to `'cloud_baidu_qpu_qian'` (keeping the quotes: it is a Python string) and enter the `token` of your own account. (If you do not have a `token`, you can register on the [Baidu Quantum Hub](https://quantum-hub.baidu.com/).)\n",
"## Notes\n",
"- Keep the `Qcompute` package in your environment up to date; versions that are too old cannot recognize newly launched backends. The backend `'cloud_baidu_qpu_qian'` requires `>= 3.0.0`.\n",
"- A `token` is required when using Baidu Quantum Cloud backends; mind the remaining available time of the corresponding cloud service in your account.\n",
"- When using cloud services, check the running status and service hours of each machine on the [Baidu Quantum Cloud platform](https://quantum-hub.baidu.com/).\n",
"\n",
"## Citation\n",
"```\n",
"@article{hayashi2016more,\n",
" title={More efficient privacy amplification with less random seeds via dual universal hash function},\n",
" author={Hayashi, Masahito and Tsurumaru, Toyohiro},\n",
" journal={IEEE Transactions on Information Theory},\n",
" volume={62},\n",
" number={4},\n",
" pages={2213--2232},\n",
" year={2016},\n",
" publisher={IEEE}\n",
"}\n",
"```\n",
"## References\n",
"[1] Hayashi, Masahito, and Toyohiro Tsurumaru. \"More efficient privacy amplification with less random seeds via dual universal hash function.\" IEEE Transactions on Information Theory 62.4 (2016): 2213-2232."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.15"
},
"vscode": {
"interpreter": {
"hash": "d3caffbb123012c2d0622db402df9f37d80adc57c1cef1fdb856f61446d88d0a"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
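The generation principle in the notebook above — apply $H$ to $|0\rangle$, then measure in the computational basis — can be mimicked with a tiny statevector simulation. This is a self-contained NumPy sketch of the principle, not the QCompute backends the model actually calls:

```python
import numpy as np

rng = np.random.default_rng(42)

def qrng_bits(n):
    """Sample n bits by simulating H|0> followed by a Z-basis measurement."""
    hadamard = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    psi = hadamard @ np.array([1.0, 0.0])  # |psi> = (|0> + |1>)/sqrt(2)
    probs = np.abs(psi) ** 2               # Born rule: [0.5, 0.5]
    return rng.choice([0, 1], size=n, p=probs).tolist()

bits = qrng_bits(10)
print(bits)
```

On real hardware the randomness comes from state collapse rather than a pseudo-random generator, which is why noisy outputs are then passed through the extractor described above.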
#!/usr/bin/env python3
# Copyright (c) 2023 Institute for Quantum Computing, Baidu Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
Quantum random number generator.
"""
import os
import warnings
import toml
import argparse
from paddle_quantum.data_analysis.rand_num import random_number_generation
warnings.filterwarnings('ignore')
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Quantum random number generation.")
parser.add_argument(
"--config", type=str, default='./config.toml', help="The path of toml format config file.")
random_number_generation(**toml.load(parser.parse_args().config))
\ No newline at end of file
......@@ -18,11 +18,11 @@
# -- Project information -----------------------------------------------------
project = 'paddle-quantum'
copyright = '2022, Baidu Inc'
copyright = '2023, Baidu Inc'
author = 'Baidu Inc'
# The full version, including alpha/beta/rc tags
release = '2.2.1'
release = '2.4.0'
# -- General configuration ---------------------------------------------------
......
......@@ -62,7 +62,7 @@ or download all the files and finish the installation locally,
.. code:: shell
git clone http://github.com/PaddlePaddle/quantum
git clone https://github.com/PaddlePaddle/quantum
.. code:: shell
......
......@@ -2,7 +2,7 @@
:maxdepth: 1
paddle_quantum.ansatz
paddle_quantum.backend
paddle_quantum.backend.quleaf
paddle_quantum.biocomputing
paddle_quantum.channel
paddle_quantum.data_analysis
......
paddle\_quantum.ansatz.layer
=======================================
.. automodule:: paddle_quantum.ansatz.layer
:members:
:show-inheritance:
......@@ -12,4 +12,5 @@ paddle\_quantum.ansatz
paddle_quantum.ansatz.circuit
paddle_quantum.ansatz.container
paddle_quantum.ansatz.layer
paddle_quantum.ansatz.vans
......@@ -4,3 +4,4 @@ paddle\_quantum.mbqc.qobject
.. automodule:: paddle_quantum.mbqc.qobject
:members:
:show-inheritance:
:noindex: