From e026f5902917ff3faa9b7480fb6e21f8753b3055 Mon Sep 17 00:00:00 2001 From: jingqinghe Date: Fri, 22 May 2020 15:28:50 +0800 Subject: [PATCH] update README and document --- docs/source/compile_and_intall.rst | 1 + docs/source/data_parallel_instruction.rst | 1 + docs/source/examples/dpsgd_examples.rst | 1 + docs/source/examples/md/dpsgd-example.md | 40 +++-- docs/source/examples/md/gru4rec_examples.md | 5 + docs/source/examples/md/uci_demo.md | 160 ++++++++++++++++++ docs/source/examples/mpc_uci_demo.rst | 1 + docs/source/index.rst | 24 +-- docs/source/instruction.rst | 24 --- docs/source/md/compile_and_install.md | 23 +++ ..._start.md => data_parallel_instruction.md} | 2 + docs/source/md/introduction.md | 65 +++++-- docs/source/md/mpc_instruction.md | 81 +++++++++ docs/source/md/reference.md | 16 +- docs/source/mpc_instruction.rst | 1 + docs/source/team.rst | 2 +- 16 files changed, 374 insertions(+), 73 deletions(-) create mode 100644 docs/source/compile_and_intall.rst create mode 100644 docs/source/data_parallel_instruction.rst create mode 100644 docs/source/examples/dpsgd_examples.rst create mode 100644 docs/source/examples/md/uci_demo.md create mode 100644 docs/source/examples/mpc_uci_demo.rst delete mode 100644 docs/source/instruction.rst rename docs/source/md/{quick_start.md => data_parallel_instruction.md} (99%) create mode 100644 docs/source/md/mpc_instruction.md create mode 100644 docs/source/mpc_instruction.rst diff --git a/docs/source/compile_and_intall.rst b/docs/source/compile_and_intall.rst new file mode 100644 index 0000000..2e17413 --- /dev/null +++ b/docs/source/compile_and_intall.rst @@ -0,0 +1 @@ +.. mdinclude:: md/compile_and_install.md diff --git a/docs/source/data_parallel_instruction.rst b/docs/source/data_parallel_instruction.rst new file mode 100644 index 0000000..09a6484 --- /dev/null +++ b/docs/source/data_parallel_instruction.rst @@ -0,0 +1 @@ +.. mdinclude:: md/data_parallel_instruction.md diff --git a/docs/source/examples/dpsgd_examples.rst b/docs/source/examples/dpsgd_examples.rst new file mode 100644 index 0000000..821def2 --- /dev/null +++ b/docs/source/examples/dpsgd_examples.rst @@ -0,0 +1 @@ +.. mdinclude:: md/dpsgd-example.md diff --git a/docs/source/examples/md/dpsgd-example.md b/docs/source/examples/md/dpsgd-example.md index f3dd18f..5fc95639 100644 --- a/docs/source/examples/md/dpsgd-example.md +++ b/docs/source/examples/md/dpsgd-example.md @@ -10,7 +10,7 @@ This document introduces how to use PaddleFL to train a model with Fl Strategy: Please use pip which has paddlepaddle installed -``` +```sh pip install paddle_fl ``` @@ -35,7 +35,7 @@ The dataset will downloaded automatically in the API and will be located under ` PaddleFL has two phases , CompileTime and RunTime. In CompileTime, a federated learning task is defined by fl_master. In RunTime, a federated learning job is executed on fl_server and fl_trainer in distributed clusters. -``` +```sh sh run.sh ``` @@ -43,13 +43,19 @@ sh run.sh In this example, we implement compile time programs in fl_master.py -``` +```sh python fl_master.py ``` In fl_master.py, we first define FL-Strategy, User-Defined-Program and Distributed-Config. Then FL-Job-Generator generate FL-Job for federated server and worker. 
```python +import paddle.fluid as fluid +import paddle_fl.paddle_fl as fl +from paddle_fl.paddle_fl.core.master.job_generator import JobGenerator +from paddle_fl.paddle_fl.core.strategy.fl_strategy_base import FLStrategyFactory +import math + class Model(object): def __init__(self): pass @@ -100,7 +106,7 @@ job_generator.generate_fl_job( #### How to work in RunTime -``` +```sh python -u fl_scheduler.py >scheduler.log & python -u fl_server.py >server0.log & python -u fl_trainer.py 0 >trainer0.log & @@ -110,7 +116,9 @@ python -u fl_trainer.py 3 >trainer3.log & ``` In fl_scheduler.py, we let server and trainers to do registeration. -``` +```python +from paddle_fl.paddle_fl.core.scheduler.agent_master import FLScheduler + worker_num = 4 server_num = 1 #Define number of worker/server and the port for scheduler @@ -122,7 +130,12 @@ scheduler.start_fl_training() ``` In fl_server.py, we load and run the FL server job. -``` +```python +import paddle_fl.paddle_fl as fl +import paddle.fluid as fluid +from paddle_fl.paddle_fl.core.server.fl_server import FLServer +from paddle_fl.paddle_fl.core.master.fl_job import FLRunTimeJob + server = FLServer() server_id = 0 job_path = "fl_job_config" @@ -136,18 +149,23 @@ server.start() In fl_trainer.py, we load and run the FL trainer job, then evaluate the accuracy with test data and compute the privacy budget. -``` +```python +import numpy +import sys +import paddle +import paddle.fluid as fluid +import logging +import math +from paddle_fl.paddle_fl.core.master.fl_job import FLRunTimeJob +from paddle_fl.paddle_fl.core.trainer.fl_trainer import FLTrainerFactory + trainer_id = int(sys.argv[1]) # trainer id for each guest job_path = "fl_job_config" job = FLRunTimeJob() job.load_trainer_job(job_path, trainer_id) trainer = FLTrainerFactory().create_fl_trainer(job) trainer.start() -``` - - -``` def train_test(train_test_program, train_test_feed, train_test_reader): acc_set = [] for test_data in train_test_reader(): diff --git a/docs/source/examples/md/gru4rec_examples.md b/docs/source/examples/md/gru4rec_examples.md index 33d4493..97178ae 100644 --- a/docs/source/examples/md/gru4rec_examples.md +++ b/docs/source/examples/md/gru4rec_examples.md @@ -39,6 +39,11 @@ python fl_master.py ``` In fl_master.py, we first define FL-Strategy, User-Defined-Program and Distributed-Config. Then FL-Job-Generator generate FL-Job for federated server and worker. ```python +import paddle.fluid as fluid +import paddle_fl.paddle_fl as fl +from paddle_fl.paddle_fl.core.master.job_generator import JobGenerator +from paddle_fl.paddle_fl.core.strategy.fl_strategy_base import FLStrategyFactory + # define model model = Model() model.gru4rec_network() diff --git a/docs/source/examples/md/uci_demo.md b/docs/source/examples/md/uci_demo.md new file mode 100644 index 0000000..4fd5d3a --- /dev/null +++ b/docs/source/examples/md/uci_demo.md @@ -0,0 +1,160 @@ +## Instructions for PaddleFL-MPC UCI Housing Demo + +([简体中文](./README_CN.md)|English) + +This document introduces how to run UCI Housing demo based on Paddle-MPC, which has two ways of running, i.e., single machine and multi machines. + +### 1. Running on Single Machine + +#### (1). Prepare Data + +Generate encrypted data utilizing `generate_encrypted_data()` in `process_data.py` script. For example, users can write the following code into a python script named `prepare.py`, and then run the script with command `python prepare.py`. 
+
+```python
+import process_data
+process_data.generate_encrypted_data()
+```
+
+Encrypted data files for features and labels will be generated and saved in the `/tmp` directory. Different suffix names are used for these files to indicate the ownership of different computation parties. For instance, a file named `house_feature.part0` is a feature file belonging to party 0.
+
+#### (2). Launch Demo with a Shell Script
+
+Launch the demo with the `run_standalone.sh` script. The concrete command is:
+
+```bash
+bash run_standalone.sh uci_housing_demo.py
+```
+
+The loss in ciphertext format will be displayed on screen while training. At the same time, the loss data will also be saved in the `/tmp` directory, with file names following the format described in Step 1.
+
+Besides, predictions will be made in this demo once training is finished. The predictions in ciphertext format are also saved in the `/tmp` directory.
+
+Finally, using `load_decrypt_data()` in the `process_data.py` script, this demo decrypts and prints the loss and predictions, which can be compared with the corresponding results of the plaintext Paddle model.
+
+**Note**: remember to delete the loss and prediction files left in the `/tmp` directory by the previous run, so that they do not affect the decrypted results of the current run. To simplify this, `run_standalone.sh` contains the following commands, which delete the files mentioned above whenever the script runs.
+
+```bash
+# remove temp data generated in last time
+LOSS_FILE="/tmp/uci_loss.*"
+PRED_FILE="/tmp/uci_prediction.*"
+if [ "$LOSS_FILE" ]; then
+        rm -rf $LOSS_FILE
+fi
+
+if [ "$PRED_FILE" ]; then
+        rm -rf $PRED_FILE
+fi
+```
+
+### 2. Running on Multi Machines
+
+#### (1). Prepare Data
+
+The data owner encrypts the data. The concrete operations are the same as “Prepare Data” in “Running on Single Machine”.
+
+#### (2). Distribute Encrypted Data
+
+According to the suffix of each file name, distribute the encrypted data files to the `/tmp` directories of all 3 computation parties. For example, send `house_feature.part0` and `house_label.part0` to `/tmp` of party 0 with the `scp` command.
+
+#### (3). Modify uci_housing_demo.py
+
+Each computation party makes the following modifications to `uci_housing_demo.py` according to its machine environment.
+
+* Modify IP Information
+
+  Replace `localhost` in the following code with the IP address of the machine.
+
+    ```python
+    pfl_mpc.init("aby3", int(role), "localhost", server, int(port))
+    ```
+
+* Comment Out Codes for Single Machine Running
+
+  Comment out the following code, which is only used when running on a single machine.
+
+    ```python
+    import process_data
+    print("uci_loss:")
+    process_data.load_decrypt_data("/tmp/uci_loss", (1,))
+    print("prediction:")
+    process_data.load_decrypt_data("/tmp/uci_prediction", (BATCH_SIZE,))
+    ```
+
+#### (4). Launch Demo on Each Party
+
+**Note** that a Redis service is necessary for running the demo. Remember to clear the cache of the Redis server before launching the demo on each computation party, in order to avoid any interference from records cached in Redis by previous runs. The following command can be used to clear Redis, where REDIS_BIN is the redis-cli executable binary, and SERVER and PORT are the IP and port of the Redis server respectively.
+
+```
+$REDIS_BIN -h $SERVER -p $PORT flushall
+```
+
+Launch the demo on each computation party with the following command,
+
+```
+$PYTHON_EXECUTABLE uci_housing_demo.py $PARTY_ID $SERVER $PORT
+```
+
+where PYTHON_EXECUTABLE is the Python interpreter with PaddleFL installed, PARTY_ID is the ID of the computation party (0, 1, or 2), and SERVER and PORT are the IP and port of the Redis server respectively.
+
+Similarly, the training loss in ciphertext format will be printed on the screen of each computation party, and the loss and predictions will be saved in the `/tmp` directory.
+
+**Note**: remember to delete the loss and prediction files left in the `/tmp` directory by the previous run, so that they do not affect the decrypted results of the current run.
+
+#### (5). Decrypt Loss and Prediction Data
+
+Each computation party sends its `uci_loss.part` and `uci_prediction.part` files in the `/tmp` directory to the `/tmp` directory of the data owner. The data owner decrypts them and obtains the plaintext loss and predictions with `load_decrypt_data()` in `process_data.py`.
+
+For example, the following code can be written into a Python script to decrypt and print the training loss.
+
+```python
+import process_data
+print("uci_loss:")
+process_data.load_decrypt_data("/tmp/uci_loss", (1,))
+```
+
+And the following code can be written into a Python script to decrypt and print the predictions.
+
+```python
+import process_data
+BATCH_SIZE = 10  # the batch size used during training
+print("prediction:")
+process_data.load_decrypt_data("/tmp/uci_prediction", (BATCH_SIZE,))
+```
+
+#### Convergence of paddle_fl.mpc vs paddle
+
+Below is the result of our experiment to test the convergence of paddle_fl.mpc.
+
+##### 1. Training Parameters
+- Dataset: Boston house price dataset
+- Number of Epoch: 20
+- Batch Size: 10
+
+##### 2. Experiment Results
+
+| Epoch/Step | paddle_fl.mpc | Paddle |
+| ---------- | ------------- | ------ |
+| Epoch=0, Step=0 | 738.39491 | 738.46204 |
+| Epoch=1, Step=0 | 630.68834 | 629.9071 |
+| Epoch=2, Step=0 | 539.54683 | 538.1757 |
+| Epoch=3, Step=0 | 462.41159 | 460.64722 |
+| Epoch=4, Step=0 | 397.11516 | 395.11017 |
+| Epoch=5, Step=0 | 341.83102 | 339.69815 |
+| Epoch=6, Step=0 | 295.01114 | 292.83597 |
+| Epoch=7, Step=0 | 255.35141 | 253.19429 |
+| Epoch=8, Step=0 | 221.74739 | 219.65132 |
+| Epoch=9, Step=0 | 193.26459 | 191.25981 |
+| Epoch=10, Step=0 | 169.11423 | 167.2204 |
+| Epoch=11, Step=0 | 148.63138 | 146.85835 |
+| Epoch=12, Step=0 | 131.25081 | 129.60391 |
+| Epoch=13, Step=0 | 116.49708 | 114.97599 |
+| Epoch=14, Step=0 | 103.96669 | 102.56854 |
+| Epoch=15, Step=0 | 93.31706 | 92.03858 |
+| Epoch=16, Step=0 | 84.26219 | 83.09653 |
+| Epoch=17, Step=0 | 76.55664 | 75.49785 |
+| Epoch=18, Step=0 | 69.99673 | 69.03561 |
+| Epoch=19, Step=0 | 64.40562 | 63.53539 |
+
diff --git a/docs/source/examples/mpc_uci_demo.rst b/docs/source/examples/mpc_uci_demo.rst
new file mode 100644
index 0000000..7be0b66
--- /dev/null
+++ b/docs/source/examples/mpc_uci_demo.rst
@@ -0,0 +1 @@
+.. mdinclude:: md/uci_demo.md
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 3b82854..d2e185b 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -2,19 +2,13 @@
 .. mdinclude:: md/logo.md
 
-Quick Start
-===========
 .. toctree::
    :maxdepth: 1
    :caption: Quick Start
-   :hidden:
-
-   instruction.rst
-
-See instruction_ for quick start.
-
-.. _instruction: instruction.html
+   compile_and_intall.rst
+   data_parallel_instruction.rst
+   mpc_instruction.rst
 
 ..
toctree:: :maxdepth: 1 @@ -31,14 +25,8 @@ See instruction_ for quick start. :caption: Examples examples/gru4rec_examples.rst - - -.. toctree:: - :maxdepth: 2 - :caption: API Reference - - api/paddle_fl - + examples/dpsgd_examples.rst + examples/mpc_uci_demo.rst The Team ======== @@ -49,7 +37,7 @@ The Team team.rst -PaddleFL is developed and maintained by Nimitz Team at Baidu +PaddleFL is developed by PaddlePaddle and Security team. .. toctree:: diff --git a/docs/source/instruction.rst b/docs/source/instruction.rst deleted file mode 100644 index e1de03c..0000000 --- a/docs/source/instruction.rst +++ /dev/null @@ -1,24 +0,0 @@ -Quick Start Instructions -======================== - -.. mdinclude:: md/logo.md - -Install PaddleFL ----------------- -To install PaddleFL, we need the following packages. - - -.. code-block:: sh - - paddlepaddle >= 1.6 - networkx - -We can run - -.. code-block:: sh - - python setup.py install - or - pip install paddle-fl - -.. mdinclude:: md/quick_start.md diff --git a/docs/source/md/compile_and_install.md b/docs/source/md/compile_and_install.md index 3793501..4d1db65 100644 --- a/docs/source/md/compile_and_install.md +++ b/docs/source/md/compile_and_install.md @@ -1,3 +1,26 @@ +# Compile and Install + +## Installation + +We **highly recommend** to run PaddleFL in Docker + +```sh +#Pull and run the docker +docker pull hub.baidubce.com/paddlefl/paddle_fl:latest +docker run --name --net=host -it -v $PWD:/root /bin/bash + +#Install paddle_fl +pip install paddle_fl +``` + +We also prepare a stable redis package for you to download and install, which will be used in tasks with MPC. + +```sh +wget --no-check-certificate https://paddlefl.bj.bcebos.com/redis-stable.tar +tar -xf redis-stable.tar +cd redis-stable && make +``` + ## Compile From Source Code #### A. Environment preparation diff --git a/docs/source/md/quick_start.md b/docs/source/md/data_parallel_instruction.md similarity index 99% rename from docs/source/md/quick_start.md rename to docs/source/md/data_parallel_instruction.md index 93731d6..085e33b 100644 --- a/docs/source/md/quick_start.md +++ b/docs/source/md/data_parallel_instruction.md @@ -1,3 +1,5 @@ +# Instructions for Data Parallel + ## Step 1: Define Federated Learning Compile-Time We define very simple multiple layer perceptron for demonstration. When multiple organizations diff --git a/docs/source/md/introduction.md b/docs/source/md/introduction.md index cb38bfe..e283ccd 100644 --- a/docs/source/md/introduction.md +++ b/docs/source/md/introduction.md @@ -8,15 +8,17 @@ Data is becoming more and more expensive nowadays, and sharing of raw data is ve ## Overview of PaddleFL - + + In PaddleFL, horizontal and vertical federated learning strategies will be implemented according to the categorization given in [4]. Application demonstrations in natural language processing, computer vision and recommendation will be provided in PaddleFL. -#### Federated Learning Strategy -- **Vertical Federated Learning**: Logistic Regression with PrivC, Neural Network with third-party PrivC [5] +#### A. Federated Learning Strategy + +- **Vertical Federated Learning**: Logistic Regression with PrivC[5], Neural Network with MPC [11] -- **Horizontal Federated Learning**: Federated Averaging [2], Differential Privacy [6] +- **Horizontal Federated Learning**: Federated Averaging [2], Differential Privacy [6], Secure Aggregation -#### Training Strategy +#### B. 
Training Strategy
 
 - **Multi Task Learning** [7]
 
@@ -24,15 +26,23 @@ In PaddleFL, horizontal and vertical federated learning strategies will be imple
 
 - **Active Learning**
 
+There are mainly two components in PaddleFL: **Data Parallel** and **Federated Learning with MPC (PFM)**.
+
+With Data Parallel, distributed data holders can finish their Federated Learning tasks based on common horizontal federated strategies, such as FedAvg, DPSGD, etc.
+
+In addition, PFM is implemented based on secure multi-party computation (MPC) to enable secure training and prediction. As a key product of PaddleFL, PFM intrinsically supports federated learning scenarios, including horizontal, vertical and transfer learning. Users with little cryptography expertise can also train models or conduct prediction on encrypted data.
+
 ## Framework design of PaddleFL
 
-
+### Data Parallel
+
+
 
-In PaddleFL, components for defining a federated learning task and training a federated learning job are as follows:
+In Data Parallel, the components for defining a federated learning task and training a federated learning job are as follows:
 
-#### Compile Time
+#### A. Compile Time
 
-- **FL-Strategy**: a user can define federated learning strategies with FL-Strategy such as Fed-Avg[1]
+- **FL-Strategy**: a user can define federated learning strategies with FL-Strategy, such as Fed-Avg [2]
 
 - **User-Defined-Program**: PaddlePaddle's program that defines the machine learning model structure and training strategies such as multi-task learning.
 
@@ -40,7 +50,7 @@ In PaddleFL, components for defining a federated learning task and training a fe
 
 - **FL-Job-Generator**: Given FL-Strategy, User-Defined Program and Distributed Training Config, FL-Job for federated server and worker will be generated through FL Job Generator. FL-Jobs will be sent to organizations and federated parameter server for run-time execution.
 
-#### Run Time
+#### B. Run Time
 
 - **FL-Server**: federated parameter server that usually runs in cloud or third-party clusters.
 
@@ -48,10 +58,37 @@ In PaddleFL, components for defining a federated learning task and training a fe
 
 - **FL-scheduler**: Decide which set of trainers can join the training before each updating cycle.
 
-## On Going and Future Work
+### Federated Learning with MPC
+
+
+
+Paddle FL MPC implements secure training and inference tasks based on underlying MPC protocols such as ABY3 [11], a highly efficient three-party computing model.
+
+In ABY3, participants can be classified into the roles of Input Party (IP), Computing Party (CP) and Result Party (RP). Input Parties (e.g., the training data/model owners) encrypt and distribute data or models to Computing Parties. Computing Parties (e.g., VMs in the cloud) conduct training or inference tasks based on specific MPC protocols; they are restricted to seeing only the encrypted data or models, which guarantees data privacy. When the computation is completed, one or more Result Parties (e.g., data owners or a specified third party) receive the encrypted results from the Computing Parties and reconstruct the plaintext results. Roles can overlap, e.g., a data owner can also act as a computing party.
+
+A full training or inference process in PFM consists of three main phases: data preparation, training/inference, and result reconstruction.
+
+#### A. Data preparation
+
+- **Private data alignment**: PFM enables data owners (IPs) to find out records with identical keys (like UUID) without revealing private data to each other. This is especially useful in vertical learning cases, where segmented features with the same keys need to be identified and aligned across all owners in a private manner before training.
+
+- **Encryption and distribution**: In PFM, data and models from IPs will be encrypted using Secret-Sharing [10], and then sent to CPs via direct transmission or distributed storage like HDFS. Each CP can only obtain one share of each piece of data, and is thus unable to recover the original value under the semi-honest model.
+
+#### B. Training/inference
+
+A PFM program is exactly a PaddlePaddle program, and will be executed as a normal PaddlePaddle program. Before training/inference, users need to choose an MPC protocol and define a machine learning model and its training strategies. Typical machine learning operators over encrypted data are provided in `paddle_fl.mpc`; their instances are created and run in order by the Executor during run-time.
+
+#### C. Result reconstruction
+
+Upon completion of the secure training (or inference) job, the models (or prediction results) will be output by CPs in encrypted form. Result Parties can collect the encrypted results, decrypt them using the tools in PFM, and deliver the plaintext results to users.
+
+## Ongoing and Future Work
+
+- Vertical Federated Learning will support more algorithms.
+
+- Add a K8S deployment scheme for Paddle Encrypted.
 
-- Experimental benchmark with public datasets in federated learning settings.
+- The FL mobile simulator will be open sourced in following versions.
 
-- Federated Learning Systems deployment methods in Kubernetes.
-- Vertical Federated Learning Strategies and more horizontal federated learning strategies will be open sourced.
diff --git a/docs/source/md/mpc_instruction.md b/docs/source/md/mpc_instruction.md
new file mode 100644
index 0000000..da4401e
--- /dev/null
+++ b/docs/source/md/mpc_instruction.md
@@ -0,0 +1,81 @@
+# Instructions for Federated Learning with MPC
+
+A PFM program is exactly a PaddlePaddle program, and will be executed as a normal PaddlePaddle program. Before training/inference, users need to choose an MPC protocol and define a machine learning model and its training strategies. Typical machine learning operators over encrypted data are provided in paddle_fl.mpc; their instances are created and run in order by the Executor during run-time.
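+
+The training example below assumes that each computation party already holds its encrypted shares of the UCI housing data (e.g. `/tmp/house_feature` and `/tmp/house_label`). As a minimal sketch (assuming the `process_data.py` helper from the PaddleFL-MPC UCI housing demo is available in the working directory), the data owner can generate these shares in advance:
+
+```python
+# Preparation step described in the UCI housing demo: encrypts the UCI housing
+# features/labels and writes share files such as /tmp/house_feature.part0 and
+# /tmp/house_label.part0, one share per computation party.
+import process_data
+
+process_data.generate_encrypted_data()
+```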
+
+Below is an example of a vertical Federated Learning task with MPC.
+
+```python
+import sys
+import numpy as np
+import time
+
+import paddle
+import paddle.fluid as fluid
+import paddle_fl.mpc as pfl_mpc
+import paddle_fl.mpc.data_utils.aby3 as aby3
+
+# define your role number (0, 1, 2) and the address of the redis server
+role, server, port = sys.argv[1], sys.argv[2], sys.argv[3]
+
+# specify the protocol and initialize the MPC environment
+pfl_mpc.init("aby3", int(role), "localhost", server, int(port))
+role = int(role)
+
+# data preprocessing
+BATCH_SIZE = 10
+
+feature_reader = aby3.load_aby3_shares("/tmp/house_feature", id=role, shape=(13, ))
+label_reader = aby3.load_aby3_shares("/tmp/house_label", id=role, shape=(1, ))
+batch_feature = aby3.batch(feature_reader, BATCH_SIZE, drop_last=True)
+batch_label = aby3.batch(label_reader, BATCH_SIZE, drop_last=True)
+
+x = pfl_mpc.data(name='x', shape=[BATCH_SIZE, 13], dtype='int64')
+y = pfl_mpc.data(name='y', shape=[BATCH_SIZE, 1], dtype='int64')
+
+# async data loader
+loader = fluid.io.DataLoader.from_generator(feed_list=[x, y], capacity=BATCH_SIZE)
+batch_sample = paddle.reader.compose(batch_feature, batch_label)
+place = fluid.CPUPlace()
+loader.set_batch_generator(batch_sample, places=place)
+
+# define your model
+y_pre = pfl_mpc.layers.fc(input=x, size=1)
+
+infer_program = fluid.default_main_program().clone(for_test=False)
+
+cost = pfl_mpc.layers.square_error_cost(input=y_pre, label=y)
+avg_loss = pfl_mpc.layers.mean(cost)
+optimizer = pfl_mpc.optimizer.SGD(learning_rate=0.001)
+optimizer.minimize(avg_loss)
+
+# give the path to store training loss
+loss_file = "/tmp/uci_loss.part{}".format(role)
+
+# train
+exe = fluid.Executor(place)
+exe.run(fluid.default_startup_program())
+epoch_num = 20
+
+start_time = time.time()
+for epoch_id in range(epoch_num):
+    step = 0
+
+    # Method 1: feed data directly
+    # for feature, label in zip(batch_feature(), batch_label()):
+    #     mpc_loss = exe.run(feed={"x": feature, "y": label}, fetch_list=[avg_loss])
+
+    # Method 2: feed data via loader
+    for sample in loader():
+        mpc_loss = exe.run(feed=sample, fetch_list=[avg_loss])
+
+        if step % 50 == 0:
+            print('Epoch={}, Step={}, Loss={}'.format(epoch_id, step, mpc_loss))
+
+        with open(loss_file, 'ab') as f:
+            f.write(np.array(mpc_loss).tostring())
+        step += 1
+
+end_time = time.time()
+print('Mpc Training of Epoch={} Batch_size={}, cost time in seconds:{}'
+      .format(epoch_num, BATCH_SIZE, (end_time - start_time)))
+```
diff --git a/docs/source/md/reference.md b/docs/source/md/reference.md
index 214b68a..7e74928 100644
--- a/docs/source/md/reference.md
+++ b/docs/source/md/reference.md
@@ -1,17 +1,23 @@
 ## Reference
 
-[1]. Jakub Konen, H. Brendan McMahan, Daniel Ramage, Peter Richtik. **Federated Optimization: Distributed Machine Learning for On-Device Intelligence.** 2016
+[1]. Jakub Konečný, H. Brendan McMahan, Daniel Ramage, Peter Richtárik. **Federated Optimization: Distributed Machine Learning for On-Device Intelligence.** 2016
 
-[2]. H. Brendan McMahan, Eider Moore, Daniel Ramage, Blaise Agera y Arcas. **Federated Learning of Deep Networks using Model Averaging.** 2017
+[2]. H. Brendan McMahan, Eider Moore, Daniel Ramage, Blaise Agüera y Arcas. **Federated Learning of Deep Networks using Model Averaging.** 2017
 
-[3]. Jakub Konen, H. Brendan McMahan, Felix X. Yu, Peter Richtik, Ananda Theertha Suresh, Davepen Bacon. **Federated Learning: Strategies for Improving Communication Efficiency.** 2016
+[3]. Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon. **Federated Learning: Strategies for Improving Communication Efficiency.** 2016
 
 [4]. Qiang Yang, Yang Liu, Tianjian Chen, Yongxin Tong. **Federated Machine Learning: Concept and Applications.** 2019
 
-[5]. Kai He, Liu Yang, Jue Hong, Jinghua Jiang, Jieming Wu, Xu Dong et al. **PrivC - A framework for efficient Secure Two-Party Computation. In Proceedings of 15th EAI International Conference on Security and Privacy in Communication Networks.** SecureComm 2019
+[5]. Kai He, Liu Yang, Jue Hong, Jinghua Jiang, Jieming Wu, Xu Dong et al. **PrivC - A framework for efficient Secure Two-Party Computation.** In Proc. of SecureComm 2019
 
-[6]. Mart Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang. **Deep Learning with Differential Privacy.** 2016
+[6]. Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang. **Deep Learning with Differential Privacy.** 2016
 
 [7]. Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet Talwalkar. **Federated Multi-Task Learning** 2016
 
 [8]. Yang Liu, Tianjian Chen, Qiang Yang. **Secure Federated Transfer Learning.** 2018
+
+[9]. Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk. **Session-based Recommendations with Recurrent Neural Networks.** 2016
+
+[10]. https://en.wikipedia.org/wiki/Secret_sharing
+
+[11]. Payman Mohassel and Peter Rindal. **ABY3: A Mixed Protocol Framework for Machine Learning.** In Proc. of CCS 2018
diff --git a/docs/source/mpc_instruction.rst b/docs/source/mpc_instruction.rst
new file mode 100644
index 0000000..fb17835
--- /dev/null
+++ b/docs/source/mpc_instruction.rst
@@ -0,0 +1 @@
+.. mdinclude:: md/mpc_instruction.md
diff --git a/docs/source/team.rst b/docs/source/team.rst
index a33200f..63a2994 100644
--- a/docs/source/team.rst
+++ b/docs/source/team.rst
@@ -1,3 +1,3 @@
 The Team
 ========
-PGL is developed and maintained by NLP and Paddle Teams at Baidu
+PaddleFL is developed by the PaddlePaddle and Security teams.
--
GitLab