Commit 01b7f03e authored by leiyuning

initial version

<!-- Thanks for sending a pull request! Here are some tips for you:
If this is your first time, please read our contributor guidelines: https://gitee.com/mindspore/mindspore/blob/master/CONTRIBUTING.md
-->
**What type of PR is this?**
> Uncomment only one `/kind <>` line, press Enter to put it on a new line, and remove the leading whitespace from that line:
>
> /kind bug
> /kind task
> /kind feature
**What this PR does / why we need it**:
**Which issue(s) this PR fixes**:
<!--
*Automatically closes linked issue when PR is merged.
Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.
-->
Fixes #
**Special notes for your reviewer**:
# Built html files
api/build_en
api/build_zh_cn
docs/build_en
docs/build_zh_cn
tutorials/build_en
tutorials/build_zh_cn
# Workspace
.idea/
.vscode/
# Contributing Documents
You are welcome to contribute MindSpore documents. Documents that meet the requirements will be displayed on the [MindSpore official website](https://www.mindspore.cn).
<!-- TOC -->
- [Contributing Documents](#contributing-documents)
- [Creating or Updating Documents](#creating-or-updating-documents)
- [Submitting Modification](#submitting-modification)
- [Document Writing Specifications](#document-writing-specifications)
<!-- /TOC -->
## Creating or Updating Documents
This project accepts documents contributed in Markdown and reStructuredText formats. You can create new ```.md``` or ```.rst``` files or modify existing documents.
## Submitting Modification
The procedure for submitting the modification is the same as that for submitting the code. For details, see [Code Contribution Guide](https://gitee.com/mindspore/mindspore/blob/master/CONTRIBUTING.md).
## Document Writing Specifications
- Headings support only the ATX style. A heading and its surrounding content must be separated by blank lines.
```
# Heading 1
## Heading 2
### Heading 3
```
- If a list item's title and content need to be displayed on different lines, add a blank line between the title and the content. Otherwise, the line break may not take effect.
```
- Title
Content
```
- Anchors (hyperlinks) in the table of contents can contain only Chinese characters, lowercase letters, and hyphens (-). Spaces and other special characters are not allowed; otherwise, the link is invalid.
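  For example, the anchor for the heading ```## Creating or Updating Documents``` in this document is written as follows:
  ```
  - [Creating or Updating Documents](#creating-or-updating-documents)
  ```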
- Precautions are marked with a right angle bracket (>).
```
> Precautions
```
- References should be listed at the end of the document and cited at the corresponding positions in the body.
```
Add a [number] after the referenced text or image description.
## References
[1] Author. [Document Name](http://xxx).
[2] Author. Document Name.
```
- Comments in the sample code must comply with the following requirements:
- Comments are written in English.
- Use ```"""``` to comment out Python functions, methods, and classes.
- Use ```#``` to comment out other Python code.
- Use ```//``` to comment out C++ code.
```
"""
Comments on Python functions, methods, and classes
"""
# Python code comments
// C++ code comments
```
- A blank line must be added before and after an image and its title. Otherwise, the layout will be broken.
```
Example:
![](./xxx.png)
Figure 1: xxx
The following content.
```
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Attribution 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More_considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
d. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
g. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part; and
b. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified
form), You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's
License You apply must not prevent recipients of the Adapted
Material from complying with this Public License.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public
licenses. Notwithstanding, Creative Commons may elect to apply one of
its public licenses to material it publishes and in those instances
will be considered the "Licensor." The text of the Creative Commons
public licenses is dedicated to the public domain under the CC0 Public
Domain Dedication. Except for the limited purpose of indicating that
material is shared under a Creative Commons public license or as
otherwise permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the
public licenses.
Creative Commons may be contacted at creativecommons.org.
MindSpore Document
Copyright 2019-2020 Huawei Technologies Co., Ltd
# MindSpore Documents
![MindSpore Logo](resource/MindSpore-logo.png)
## Overview
This project provides the source files of the installation guide, tutorials, and other documents, as well as API configurations on the MindSpore official website <https://www.mindspore.cn>.
## Contributions
You are welcome to contribute documents. If you want to contribute documents, read the [CONTRIBUTING_DOC.md](./CONTRIBUTING_DOC.md). Please comply with the document writing specifications, and submit documents according to the process rules. After the documents are approved, the changes will be displayed in the document project and on the official website.
If you have any comments or suggestions on the documents, submit them in Issues.
## Directory Structure Description
```
docs
├───api // Configuration files for API generation.
├───docs // Introduction to documents.
├───install // Installation guide.
├───resource // Resource-related documents.
├───tutorials // Tutorial-related documents.
└───README.md // Docs repository description.
```
## Document Construction
MindSpore tutorials and API documents can be generated with [Sphinx](https://www.sphinx-doc.org/en/master/). The following uses the API documents as an example to describe the procedure.
1. Download the code of the MindSpore Docs repository.
```shell
git clone https://gitee.com/mindspore/docs.git
```
2. Go to the api directory and install the dependencies listed in the ```requirements.txt``` file.
```shell
cd docs/api
pip install -r requirements.txt
```
3. Run the following command in the api directory to create the build_zh_cn/html directory, which stores the generated web pages. You can open ```build_zh_cn/html/index.html``` to view the API documents.
```shell
make html
```
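To preview the result locally, you can serve the build output with Python's built-in HTTP server, which is what the helper script in this repository does:
```shell
cd build_zh_cn/html
python -m http.server
```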
## License
- [Apache License 2.0](LICENSE)
- [Creative Commons License version 4.0](LICENSE-CC-BY-4.0)
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source_zh_cn
BUILDDIR = build_zh_cn
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
sphinx
recommonmark
sphinx-markdown-tables
sphinx_rtd_theme
numpy
jieba
#!/bin/bash
make html
if [ $? -ne 0 ]; then
    echo "make html failed"
    exit 1
fi
cd build_zh_cn/html
python -m http.server
mindarmour.attacks
==================
.. automodule:: mindarmour.attacks
:members:
mindarmour.defenses
===================
.. automodule:: mindarmour.defenses
:members:
mindarmour.detectors
====================
.. automodule:: mindarmour.detectors
:members:
mindarmour.evaluations
======================
.. automodule:: mindarmour.evaluations
:members:
mindarmour
==========
.. automodule:: mindarmour
:members:
mindarmour.utils
================
.. automodule:: mindarmour.utils
:members:
mindinsight.lineagemgr
======================
.. automodule:: mindinsight.lineagemgr
:members:
mindspore.common.initializer
============================
.. automodule:: mindspore.common.initializer
:members:
mindspore.communication
=======================
.. automodule:: mindspore.communication
:members:
mindspore.context
=================
.. automodule:: mindspore.context
:members:
mindspore.dataset
=================
.. automodule:: mindspore.dataset
:members:
:inherited-members:
:exclude-members: get_args, read_dir
.. autodata:: config
.. autoclass:: mindspore.dataset.core.configuration.ConfigurationManager
:members:
mindspore.dataset.transforms.c_transforms
=========================================
.. automodule:: mindspore.dataset.transforms.c_transforms
:members:
mindspore.dataset.transforms.py_transforms
==========================================
.. automodule:: mindspore.dataset.transforms.py_transforms
:members:
mindspore.dataset.transforms.vision.c_transforms
================================================
.. automodule:: mindspore.dataset.transforms.vision.c_transforms
:members:
mindspore.dataset.transforms.vision.py_transforms
=================================================
.. automodule:: mindspore.dataset.transforms.vision.py_transforms
:members:
mindspore.dtype
===============
Data Type
----------
.. class:: mindspore.dtype
The actual path of ``dtype`` is ``/mindspore/common/dtype.py``.
Run the following command to import the package:
.. code-block::
import mindspore.common.dtype as mstype
or
.. code-block::
from mindspore import dtype as mstype
Numeric Type
~~~~~~~~~~~~
Currently, MindSpore supports the ``Int``, ``Uint``, and ``Float`` types.
The following table lists the details.
============================================== =============================
Definition Description
============================================== =============================
``mindspore.int8`` , ``mindspore.byte`` 8-bit integer
``mindspore.int16`` , ``mindspore.short`` 16-bit integer
``mindspore.int32`` , ``mindspore.intc`` 32-bit integer
``mindspore.int64`` , ``mindspore.intp`` 64-bit integer
``mindspore.uint8`` , ``mindspore.ubyte`` unsigned 8-bit integer
``mindspore.uint16`` , ``mindspore.ushort`` unsigned 16-bit integer
``mindspore.uint32`` , ``mindspore.uintc`` unsigned 32-bit integer
``mindspore.uint64`` , ``mindspore.uintp`` unsigned 64-bit integer
``mindspore.float16`` , ``mindspore.half`` 16-bit floating-point number
``mindspore.float32`` , ``mindspore.single`` 32-bit floating-point number
``mindspore.float64`` , ``mindspore.double`` 64-bit floating-point number
============================================== =============================
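For example, these type objects can be passed when constructing a tensor (a minimal sketch, assuming MindSpore and NumPy are installed):

.. code-block::

    import numpy as np
    from mindspore import Tensor
    from mindspore import dtype as mstype

    # Construct a 2x3 tensor whose elements are 32-bit floats.
    x = Tensor(np.ones([2, 3]), mstype.float32)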
Other Type
~~~~~~~~~~
For other defined types, see the following table.
============================ =================
Type Description
============================ =================
``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW.
``MetaTensor`` A tensor that has only a data type and a shape.
``bool_`` Boolean number.
``int_`` Integer scalar.
``uint`` Unsigned integer scalar.
``float_`` Floating-point scalar.
``number`` Number, including ``int_`` , ``uint`` , ``float_`` and ``bool_`` .
``list_`` List constructed by ``tensor`` , such as ``List[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
``tuple_`` Tuple constructed by ``tensor`` , such as ``Tuple[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
``function`` Function. Represented in two ways: one returns ``Func`` directly; the other returns ``Func(args: List[T0,T1,...,Tn], retval: T)`` .
``type_type`` Type of type.
``type_none`` No matching return type, corresponding to the ``type(None)`` in Python.
``symbolic_key`` The value of a variable managed by embd, which is used as a key of the variable in ``env_type`` .
``env_type`` Used to store the gradient of the free variable of a function, where the key is the ``symbolic_key`` of the free variable's node and the value is the gradient.
============================ =================
Tree Topology
~~~~~~~~~~~~~~
The relationships of the above types are as follows:
.. code-block::
└─── mindspore.dtype
├─── number
│ ├─── bool_
│ ├─── int_
│ │ ├─── int8, byte
│ │ ├─── int16, short
│ │ ├─── int32, intc
│ │ └─── int64, intp
│ ├─── uint
│ │ ├─── uint8, ubyte
│ │ ├─── uint16, ushort
│ │ ├─── uint32, uintc
│ │ └─── uint64, uintp
│ └─── float_
│ ├─── float16
│ ├─── float32
│ └─── float64
├─── tensor
│ ├─── Array[Float32]
│ └─── ...
├─── list_
│ ├─── List[Int32,Float32]
│ └─── ...
├─── tuple_
│ ├─── Tuple[Int32,Float32]
│ └─── ...
├─── function
│ ├─── Func
│ ├─── Func[(Int32, Float32), Int32]
│ └─── ...
├─── MetaTensor
├─── type_type
├─── type_none
├─── symbolic_key
└─── env_type
mindspore.mindrecord
====================
.. automodule:: mindspore.mindrecord
:members:
mindspore.nn
============
.. automodule:: mindspore.nn
:members:
mindspore.ops.composite
=======================
.. automodule:: mindspore.ops.composite
:members:
mindspore.ops.operations
========================
.. automodule:: mindspore.ops.operations
:members:
mindspore.ops
=============
.. automodule:: mindspore.ops
:members:
:exclude-members: signature_rw, signature_kind
mindspore.parallel
==================
.. automodule:: mindspore.parallel
:members:
mindspore
=========
.. automodule:: mindspore
:members:
mindspore.train
===============
.. automodule:: mindspore.train.summary
:members:
.. automodule:: mindspore.train.callback
:members:
.. automodule:: mindspore.train.serialization
:members:
.. automodule:: mindspore.train.amp
:members:
.. automodule:: mindspore.train.loss_scale_manager
:members:
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
# import sys
# sys.path.append('..')
# sys.path.insert(0, os.path.abspath('.'))
import mindspore
# If you don't want to generate MindInsight APIs, comment out this line.
import mindinsight
# If you don't want to generate MindArmour APIs, comment out this line.
import mindarmour
# -- Project information -----------------------------------------------------
project = 'MindSpore'
copyright = '2020, MindSpore'
author = 'MindSpore'
# The full version, including alpha/beta/rc tags
release = '0.1.0-alpha'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.autosummary',
'sphinx.ext.doctest',
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.napoleon',
'sphinx.ext.viewcode',
'sphinx_markdown_tables',
'recommonmark',
]
source_suffix = {
'.rst': 'restructuredtext',
'.md': 'markdown',
}
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
pygments_style = 'sphinx'
autodoc_inherit_docstrings = False
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
# -- Options for Texinfo output -------------------------------------------
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
'python': ('https://docs.python.org/', '../python_objects.inv'),
'numpy': ('https://docs.scipy.org/doc/numpy/', '../numpy_objects.inv'),
}
.. MindSpore documentation master file, created by
sphinx-quickstart on Thu Mar 24 11:00:00 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
MindSpore API
=============
.. toctree::
:maxdepth: 1
:caption: Python API
api/python/mindspore/mindspore
api/python/mindspore/mindspore.dtype
api/python/mindspore/mindspore.common.initializer
api/python/mindspore/mindspore.communication
api/python/mindspore/mindspore.context
api/python/mindspore/mindspore.nn
api/python/mindspore/mindspore.ops
api/python/mindspore/mindspore.ops.composite
api/python/mindspore/mindspore.ops.operations
api/python/mindspore/mindspore.parallel
api/python/mindspore/mindspore.train
api/python/mindspore/mindspore.dataset
api/python/mindspore/mindspore.dataset.transforms.c_transforms
api/python/mindspore/mindspore.dataset.transforms.vision.c_transforms
api/python/mindspore/mindspore.dataset.transforms.py_transforms
api/python/mindspore/mindspore.dataset.transforms.vision.py_transforms
api/python/mindspore/mindspore.mindrecord
api/python/mindinsight/mindinsight.lineagemgr
api/python/mindarmour/mindarmour
api/python/mindarmour/mindarmour.utils
api/python/mindarmour/mindarmour.evaluations
api/python/mindarmour/mindarmour.detectors
api/python/mindarmour/mindarmour.attacks
api/python/mindarmour/mindarmour.defenses
.. toctree::
:maxdepth: 1
:caption: C++ API
predict <https://www.mindspore.cn/apicc/en/0.1.0-alpha/predict/namespacemembers.html>
mindarmour.attacks
==================
.. automodule:: mindarmour.attacks
:members:
mindarmour.defenses
===================
.. automodule:: mindarmour.defenses
:members:
mindarmour.detectors
====================
.. automodule:: mindarmour.detectors
:members:
mindarmour.evaluations
======================
.. automodule:: mindarmour.evaluations
:members:
mindarmour
==========
.. automodule:: mindarmour
:members:
mindarmour.utils
================
.. automodule:: mindarmour.utils
:members:
mindinsight.lineagemgr
======================
.. automodule:: mindinsight.lineagemgr
:members:
mindspore.common.initializer
============================
.. automodule:: mindspore.common.initializer
:members:
mindspore.communication
=======================
.. automodule:: mindspore.communication
:members:
mindspore.context
=================
.. automodule:: mindspore.context
:members:
mindspore.dataset
=================
.. automodule:: mindspore.dataset
:members:
:inherited-members:
:exclude-members: get_args, read_dir
.. autodata:: config
.. autoclass:: mindspore.dataset.core.configuration.ConfigurationManager
:members:
mindspore.dataset.transforms.c_transforms
=========================================
.. automodule:: mindspore.dataset.transforms.c_transforms
:members:
mindspore.dataset.transforms.py_transforms
==========================================
.. automodule:: mindspore.dataset.transforms.py_transforms
:members:
mindspore.dataset.transforms.vision.c_transforms
================================================
.. automodule:: mindspore.dataset.transforms.vision.c_transforms
:members:
mindspore.dataset.transforms.vision.py_transforms
=================================================
.. automodule:: mindspore.dataset.transforms.vision.py_transforms
:members:
mindspore.dtype
===============
Data Type
----------
.. class:: mindspore.dtype
The actual path of ``dtype`` is ``/mindspore/common/dtype.py``.
Run the following command to import the package:
.. code-block::
import mindspore.common.dtype as mstype
or
.. code-block::
from mindspore import dtype as mstype
Numeric Type
~~~~~~~~~~~~
Currently, MindSpore supports the ``Int``, ``Uint``, and ``Float`` types.
The following table lists the details.
============================================== =============================
Definition Description
============================================== =============================
``mindspore.int8`` , ``mindspore.byte`` 8-bit integer
``mindspore.int16`` , ``mindspore.short`` 16-bit integer
``mindspore.int32`` , ``mindspore.intc`` 32-bit integer
``mindspore.int64`` , ``mindspore.intp`` 64-bit integer
``mindspore.uint8`` , ``mindspore.ubyte`` unsigned 8-bit integer
``mindspore.uint16`` , ``mindspore.ushort`` unsigned 16-bit integer
``mindspore.uint32`` , ``mindspore.uintc`` unsigned 32-bit integer
``mindspore.uint64`` , ``mindspore.uintp`` unsigned 64-bit integer
``mindspore.float16`` , ``mindspore.half`` 16-bit floating-point number
``mindspore.float32`` , ``mindspore.single`` 32-bit floating-point number
``mindspore.float64`` , ``mindspore.double`` 64-bit floating-point number
============================================== =============================
Other Type
~~~~~~~~~~
For other defined types, see the following table.
============================ =================
Type Description
============================ =================
``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW.
``MetaTensor`` A tensor that has only a data type and a shape.
``bool_`` Boolean number.
``int_`` Integer scalar.
``uint`` Unsigned integer scalar.
``float_`` Floating-point scalar.
``number`` Number, including ``int_`` , ``uint`` , ``float_`` and ``bool_`` .
``list_`` List constructed by ``tensor`` , such as ``List[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
``tuple_`` Tuple constructed by ``tensor`` , such as ``Tuple[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
``function`` Function. Represented in two ways: one returns ``Func`` directly; the other returns ``Func(args: List[T0,T1,...,Tn], retval: T)`` .
``type_type`` Type of type.
``type_none`` No matching return type, corresponding to the ``type(None)`` in Python.
``symbolic_key`` The value of a variable managed by embd, which is used as a key of the variable in ``env_type`` .
``env_type`` Used to store the gradient of the free variable of a function, where the key is the ``symbolic_key`` of the free variable's node and the value is the gradient.
============================ =================
Tree Topology
~~~~~~~~~~~~~~
The relationships of the above types are as follows:
.. code-block::
└─── mindspore.dtype
├─── number
│ ├─── bool_
│ ├─── int_
│ │ ├─── int8, byte
│ │ ├─── int16, short
│ │ ├─── int32, intc
│ │ └─── int64, intp
│ ├─── uint
│ │ ├─── uint8, ubyte
│ │ ├─── uint16, ushort
│ │ ├─── uint32, uintc
│ │ └─── uint64, uintp
│ └─── float_
│ ├─── float16
│ ├─── float32
│ └─── float64
├─── tensor
│ ├─── Array[float32]
│ └─── ...
├─── list_
│ ├─── List[int32,float32]
│ └─── ...
├─── tuple_
│ ├─── Tuple[int32,float32]
│ └─── ...
├─── function
│ ├─── Func
│ ├─── Func[(int32, float32), int32]
│ └─── ...
├─── MetaTensor
├─── type_type
├─── type_none
├─── symbolic_key
└─── env_type
mindspore.mindrecord
====================
.. automodule:: mindspore.mindrecord
:members:
mindspore.nn
============
.. automodule:: mindspore.nn
:members:
mindspore.ops.composite
=======================
.. automodule:: mindspore.ops.composite
:members:
mindspore.ops.operations
========================
.. automodule:: mindspore.ops.operations
:members:
mindspore.ops
=============
.. automodule:: mindspore.ops
:members:
:exclude-members: signature_rw, signature_kind
mindspore.parallel
==================
.. automodule:: mindspore.parallel
:members:
mindspore
=========
.. automodule:: mindspore
:members:
mindspore.train
===============
.. automodule:: mindspore.train.summary
:members:
.. automodule:: mindspore.train.callback
:members:
.. automodule:: mindspore.train.serialization
:members:
.. automodule:: mindspore.train.amp
:members:
.. automodule:: mindspore.train.loss_scale_manager
:members:
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
# import sys
# sys.path.append('..')
# sys.path.insert(0, os.path.abspath('.'))
import mindspore
# If you don't want to generate MindInsight APIs, comment out this line.
import mindinsight
# If you don't want to generate MindArmour APIs, comment out this line.
import mindarmour
# -- Project information -----------------------------------------------------
project = 'MindSpore'
copyright = '2020, MindSpore'
author = 'MindSpore'
# The full version, including alpha/beta/rc tags
release = '0.1.0-alpha'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.autosummary',
'sphinx.ext.doctest',
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.napoleon',
'sphinx.ext.viewcode',
'sphinx_markdown_tables',
'recommonmark',
]
source_suffix = {
'.rst': 'restructuredtext',
'.md': 'markdown',
}
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
pygments_style = 'sphinx'
autodoc_inherit_docstrings = False
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
html_search_language = 'zh'
html_search_options = {'dict': '../resource/jieba.txt'}
# -- Options for Texinfo output -------------------------------------------
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
'python': ('https://docs.python.org/', '../python_objects.inv'),
'numpy': ('https://docs.scipy.org/doc/numpy/', '../numpy_objects.inv'),
}
.. MindSpore documentation master file, created by
sphinx-quickstart on Thu Mar 24 11:00:00 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
MindSpore API
=============
.. toctree::
:maxdepth: 1
:caption: Python API
api/python/mindspore/mindspore
api/python/mindspore/mindspore.dtype
api/python/mindspore/mindspore.common.initializer
api/python/mindspore/mindspore.communication
api/python/mindspore/mindspore.context
api/python/mindspore/mindspore.nn
api/python/mindspore/mindspore.ops
api/python/mindspore/mindspore.ops.composite
api/python/mindspore/mindspore.ops.operations
api/python/mindspore/mindspore.parallel
api/python/mindspore/mindspore.train
api/python/mindspore/mindspore.dataset
api/python/mindspore/mindspore.dataset.transforms.c_transforms
api/python/mindspore/mindspore.dataset.transforms.vision.c_transforms
api/python/mindspore/mindspore.dataset.transforms.py_transforms
api/python/mindspore/mindspore.dataset.transforms.vision.py_transforms
api/python/mindspore/mindspore.mindrecord
api/python/mindinsight/mindinsight.lineagemgr
api/python/mindarmour/mindarmour
api/python/mindarmour/mindarmour.utils
api/python/mindarmour/mindarmour.evaluations
api/python/mindarmour/mindarmour.detectors
api/python/mindarmour/mindarmour.attacks
api/python/mindarmour/mindarmour.defenses
.. toctree::
:maxdepth: 1
:caption: C++ API
predict <https://www.mindspore.cn/apicc/zh-CN/0.1.0-alpha/predict/namespacemembers.html>
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source_zh_cn
BUILDDIR = build_zh_cn
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
sphinx
recommonmark
sphinx-markdown-tables
sphinx_rtd_theme
numpy
jieba
# Overall Architecture
This document describes the overall architecture of MindSpore.
<!-- TOC -->
- [Overall Architecture](#overall-architecture)
<!-- /TOC -->
The MindSpore framework consists of the Frontend Expression layer, Graph Engine layer, and Backend Runtime layer.
![architecture](./images/architecture.png)
- MindSpore Frontend Expression layer
This layer contains Python APIs, MindSpore intermediate representation (IR), and graph high level optimization (GHLO).
- Python APIs provide users with a unified API for model training, inference, and export, and a unified API for data processing and format transformation.
- GHLO includes hardware-independent optimizations (such as dead code elimination), auto parallel, and auto differentiation.
- MindSpore IR provides unified intermediate representations, based on which MindSpore performs pass optimization.
- MindSpore Graph Engine layer
This layer contains graph low level optimization (GLLO) and graph execution.
- GLLO includes hardware-related optimizations and deep optimizations that combine hardware and software, such as operator fusion and buffer fusion.
- Graph execution provides communication APIs required for offline graph execution and distributed training.
- MindSpore Backend Runtime layer
This layer provides efficient runtime environments on the cloud, edge, and device sides.
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
# -- Project information -----------------------------------------------------
project = 'MindSpore'
copyright = '2020, MindSpore'
author = 'MindSpore'
# The full version, including alpha/beta/rc tags
release = '0.1.0-alpha'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx_markdown_tables',
'recommonmark',
]
source_suffix = {
'.rst': 'restructuredtext',
'.md': 'markdown',
}
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
pygments_style = 'sphinx'
autodoc_inherit_docstrings = False
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
# Constraints on Network Construction Using Python
<!-- TOC -->
- [Constraints on Network Construction Using Python](#constraints-on-network-construction-using-python)
- [Overview](#overview)
- [Syntax Constraints](#syntax-constraints)
- [Supported Python Data Types](#supported-python-data-types)
- [MindSpore Extended Data Type](#mindspore-extended-data-type)
- [Expression Types](#expression-types)
- [Statement Types](#statement-types)
- [System Functions](#system-functions)
- [Function Parameters](#function-parameters)
- [Operators](#operators)
- [Slicing Operations](#slicing-operations)
- [Unsupported Syntax](#unsupported-syntax)
- [Network Definition Constraints](#network-definition-constraints)
- [Instance Types on the Entire Network](#instance-types-on-the-entire-network)
- [Network Input Type](#network-input-type)
- [Network Graph Optimization](#network-graph-optimization)
- [Network Construction Components](#network-construction-components)
- [Other Constraints](#other-constraints)
<!-- /TOC -->
## Overview
MindSpore compiles user source code written in Python syntax into computational graphs, and can convert common functions or instances inherited from nn.Cell into computational graphs. Currently, MindSpore does not support converting arbitrary Python source code into computational graphs. Therefore, source code compilation is subject to constraints, including syntax constraints and network definition constraints. These constraints may change as MindSpore evolves.
## Syntax Constraints
### Supported Python Data Types
* Number: supports `int`, `float`, and `bool`. Complex numbers are not supported.
* String
* List: supports the append method only. Updating a list will generate a new list.
* Tuple
* Dictionary: The key must be of the String type.
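As an illustration, the following minimal sketch (the function name `scale_and_sum` and its inputs are hypothetical, not from the MindSpore API) combines the supported scalar, `bool`, and `tuple` types inside compiled code:

```python
import numpy as np
from mindspore import Tensor, ms_function

@ms_function
def scale_and_sum(x, y):
    factor = 2        # int scalar
    use_sum = True    # bool; `if` conditions must be constants
    pair = (x, y)     # tuple of Tensors
    if use_sum:
        return (pair[0] + pair[1]) * factor
    return pair[0] * factor

x = Tensor(np.ones([2, 2]).astype(np.float32))
y = Tensor(np.ones([2, 2]).astype(np.float32))
print(scale_and_sum(x, y))  # expected: a 2x2 Tensor filled with 4.0
```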
### MindSpore Extended Data Type
* Tensor: Tensor variables must be defined instances.
### Expression Types
| Operation | Description
| :----------- |:--------
| Unary operator |`+`,`-`, and`not`. The operator `+` supports only scalars.
| Binary operator |`+`, `-`, `*`, `/`, and `%`.
| `if` expression | For example, `a = x if x < y else y`.
| Comparison expression | `>`, `>=`, `<`, `<=`, `==`, and `!=`.
| Logical expression | `and` and `or`.
| `lambda` expression | For example, `lambda x, y: x + y`.
| Reserved keyword type | `True`, `False`, and `None`.
### Statement Types
| Statement | Compared with Python
| :----------- |:--------
| `for` | Nested for loops are partially supported. Iteration sequences must be tuples or lists.
| `while` | Nested while loops are partially supported.
| `if` | Same as that in Python. The input of the `if` condition must be a constant.
| `def` | Same as that in Python.
| Assignment statement | Multiple subscript access on lists and dictionaries cannot be used as an l-value.
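As a quick illustration of the `for` constraint, the following minimal sketch (the function name `sum_over_tuple` is hypothetical) iterates over a tuple inside compiled code:

```python
import numpy as np
from mindspore import Tensor, ms_function

@ms_function
def sum_over_tuple(x, y, z):
    total = x
    for item in (y, z):  # the iteration sequence must be a tuple or list
        total = total + item
    return total

ones = Tensor(np.ones([2, 2]).astype(np.float32))
print(sum_over_tuple(ones, ones, ones))  # expected: a 2x2 Tensor filled with 3.0
```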
### System Functions
* len
* partial
* map
* zip
* range
### Function Parameters
* Default parameter value: The data types `int`, `float`, `bool`, `None`, `str`, `tuple`, `list`, and `dict` are supported, whereas `Tensor` is not supported.
* Variable parameter: Functions with variable parameters cannot be used for backward propagation on computational graphs.
* Key-value pair parameter: Functions with key-value pair parameters cannot be used for backward propagation on computational graphs.
* Variable key-value pair parameter: Functions with variable key-value pairs cannot be used for backward propagation on computational graphs.
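For example, the following minimal sketch (the function name `scaled_add` is hypothetical, and it assumes the compiler accepts a call that relies on the default value) uses a supported `int` default value; a `Tensor` default value would be rejected:

```python
import numpy as np
from mindspore import Tensor, ms_function

@ms_function
def scaled_add(x, y, factor=2):  # int default value: supported; a Tensor default would not be
    return (x + y) * factor

x = Tensor(np.ones([2, 2]).astype(np.float32))
y = Tensor(np.ones([2, 2]).astype(np.float32))
print(scaled_add(x, y))  # expected: a 2x2 Tensor filled with 4.0
```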
### Operators
| Operator | Supported Type
| :----------- |:--------
| `+` |Scalar, `Tensor`, and `tuple`
| `-` |Scalar and `Tensor`
| `*` |Scalar and `Tensor`
| `/` |Scalar and `Tensor`
| `[]` |The operation object type can be `list`, `tuple`, or `Tensor`. Multiple subscript access on lists and dictionaries is supported as an r-value but not as an l-value, and the index type cannot be Tensor. For access constraints on the tuple and Tensor types, see the description of slicing operations.
### Slicing Operations
* `tuple` slicing operation: `tuple_x[start:stop:step]`
- `tuple_x` indicates a tuple on which the slicing operation is performed.
- `start`: index where the slice starts. The value is of the `int` type, and the value range is `[-length(tuple_x), length(tuple_x) - 1]`. Default values can be used. The default settings are as follows:
- When `step > 0`, the default value is `0`.
- When `step < 0`, the default value is `length(tuple_x) - 1`.
- `stop`: index where the slice ends. The value is of the `int` type, and the value range is `[-length(tuple_x) - 1, length(tuple_x)]`. Default values can be used. The default settings are as follows:
- When `step > 0`, the default value is `length(tuple_x)`.
- When `step < 0`, the default value is `-1`.
- `step`: slicing step. The value is of the `int` type, and its range is `step != 0`. The default value `1` can be used.
* `Tensor` slicing operation: `tensor_x[start0:stop0:step0, start1:stop1:step1, start2:stop2:step2]`
- `tensor_x` indicates a `Tensor` with at least three dimensions. The slicing operation is performed on it.
- `start0`: index where the slice starts in dimension 0. The value is of the `int` type. Default values can be used. The default settings are as follows:
- When `step > 0`, the default value is `0`.
- When `step < 0`, the default value is `-1`.
- `stop0`: index where the slice ends in dimension 0. The value is of the `int` type. Default values can be used. The default settings are as follows:
- When `step > 0`, the default value is the length of dimension 0 of `tensor_x`.
- When `step < 0`, the default value is `-(1 + the length of dimension 0)`.
- `step0`: slicing step in dimension 0. The value is of the `int` type, and its range is `step != 0`. The default value `1` can be used.
- If the number of sliced dimensions is less than the number of dimensions of the `Tensor`, all elements are taken by default in the unspecified dimensions.
- Slice dimension reduction: if an integer index is passed for a dimension, the elements at that index in the dimension are obtained and the dimension is eliminated. For example, after `tensor_x[2:4:1, 1, 0:5:2]` with shape (4, 3, 6) is sliced, a `Tensor` with shape (2, 3) is generated; dimension 1 of the original `Tensor` is eliminated. See the sketch after this list.
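The following minimal sketch (the function name `slice_demo` is hypothetical) exercises the slicing and dimension-reduction behavior described above:

```python
import numpy as np
from mindspore import Tensor, ms_function

@ms_function
def slice_demo(tensor_x):
    # the integer index 1 on dimension 1 eliminates that dimension
    return tensor_x[2:4:1, 1, 0:5:2]

tensor_x = Tensor(np.ones([4, 3, 6]).astype(np.float32))
print(slice_demo(tensor_x))  # expected shape: (2, 3)
```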
### Unsupported Syntax
Currently, the following syntax is not supported in network constructors:
`break`, `continue`, `pass`, `raise`, `yield`, `async for`, `with`, `async with`, `assert`, `import`, and `await`.
## Network Definition Constraints
### Instance Types on the Entire Network
* Common Python function with the [@ms_function](https://www.mindspore.cn/api/en/0.1.0-alpha/api/python/mindspore/mindspore.html#mindspore.ms_function) decorator.
* Cell subclass inherited from [nn.Cell](https://www.mindspore.cn/api/en/0.1.0-alpha/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell).
### Network Input Type
* The training data input parameters of the entire network must be of the Tensor type.
* The generated ANF graph cannot contain the following constant nodes: string constants, constants with nested tuples, and constants with nested lists.
### Network Graph Optimization
During graph optimization at the ME frontend, the dataclass, dictionary, list, and key-value pair types are converted to tuple types, and the corresponding operations are converted to tuple operations.
### Network Construction Components
| Category | Content
| :----------- |:--------
| `Cell` instance |[mindspore/nn/*](https://www.mindspore.cn/api/en/0.1.0-alpha/api/python/mindspore/mindspore.nn.html), and custom [Cell](https://www.mindspore.cn/api/en/0.1.0-alpha/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell).
| Member function of a `Cell` instance | Member functions of other classes can be called in the construct function of Cell.
| Function | Custom Python functions and system functions listed in the preceding content.
| Dataclass instance | Class decorated with @dataclass.
| Primitive operator |[mindspore/ops/operations/*](https://www.mindspore.cn/api/en/0.1.0-alpha/api/python/mindspore/mindspore.ops.operations.html).
| Composite operator |[mindspore/ops/composite/*](https://www.mindspore.cn/api/en/0.1.0-alpha/api/python/mindspore/mindspore.ops.composite.html).
| Operator generated by constexpr |Operators that compute using values generated by [@constexpr](https://www.mindspore.cn/api/en/0.1.0-alpha/api/python/mindspore/mindspore.ops.html#mindspore.ops.constexpr).
### Other Constraints
Input parameters of the construct function on the entire network, and parameters of functions decorated with ms_function, are generalized during graph compilation. Therefore, they cannot be passed to operators as constant inputs, as shown in the following example:
* The following is an example of incorrect input:
```python
import numpy as np
from mindspore import Tensor
from mindspore.nn import Cell
from mindspore.ops import operations as P

class ExpandDimsTest(Cell):
    def __init__(self):
        super(ExpandDimsTest, self).__init__()
        self.expandDims = P.ExpandDims()

    def construct(self, input_x, input_axis):
        return self.expandDims(input_x, input_axis)

expand_dim = ExpandDimsTest()
input_x = Tensor(np.random.randn(2, 2, 2, 2).astype(np.float32))
expand_dim(input_x, 0)
```
In the example, ExpandDimsTest is a single-operator network with two inputs: input_x and input_axis. The second input of the ExpandDims operator must be a constant because input_axis is needed to deduce the operator's output dimension during graph compilation. As a network parameter, input_axis is generalized into a variable whose value cannot be determined, so the output dimension cannot be deduced and graph compilation fails. Therefore, any input required for deduction during the graph compilation phase must be a constant. In the API documentation, such operator parameters are marked with "constant input is needed".
* The correct approach is to pass the required value, or a class member variable, directly as the operator's constant input in the construct function. The following is an example of correct input:
```python
import numpy as np
from mindspore import Tensor
from mindspore.nn import Cell
from mindspore.ops import operations as P

class ExpandDimsTest(Cell):
    def __init__(self, axis):
        super(ExpandDimsTest, self).__init__()
        self.expandDims = P.ExpandDims()
        self.axis = axis

    def construct(self, input_x):
        return self.expandDims(input_x, self.axis)

axis = 0
expand_dim = ExpandDimsTest(axis)
input_x = Tensor(np.random.randn(2, 2, 2, 2).astype(np.float32))
expand_dim(input_x)
```
# Glossary
<!-- TOC -->
- [Glossary](#glossary)
<!-- /TOC -->
| Acronym and Abbreviation | Description |
| ----- | ----- |
| Ascend | Name of Huawei Ascend series chips. |
| CCE | Cube-based Computing Engine, which is an operator development tool oriented to hardware architecture programming. |
| CCE-C | Cube-based Computing Engine C, which is C code developed by the CCE. |
| CheckPoint | MindSpore model training checkpoint, which saves model parameters for inference or retraining. |
| CIFAR-10 | An open-source image data set that contains 60000 32 x 32 color images of 10 categories, with 6000 images of each category. There are 50000 training images and 10000 test images. |
| CIFAR-100 | An open-source image data set that contains 100 categories. Each category contains 600 images: 500 training images and 100 test images. |
| DaVinci | Da Vinci architecture, a new chip architecture developed by Huawei. |
| EulerOS | Euler operating system, which is developed by Huawei based on the standard Linux kernel. |
| FC Layer | Fully connected layer, which acts as a classifier in the entire convolutional neural network. |
| FE | Fusion Engine, which connects to GE and TBE operators and has the capabilities of loading and managing the operator information library and managing fusion rules. |
| FP16 | 16-bit floating point, which is a half-precision floating point arithmetic format, consuming less memory. |
| FP32 | 32-bit floating point, which is a single-precision floating point arithmetic format. |
| GE | Graph Engine, MindSpore computational graph execution engine, which is responsible for optimizing hardware (such as operator fusion and memory overcommitment) based on the front-end computational graph and starting tasks on the device side. |
| GHLO | Graph High Level Optimization. GHLO includes optimization irrelevant to hardware (such as dead code elimination), auto parallel, and auto differentiation. |
| GLLO | Graph Low Level Optimization. GLLO includes hardware-related optimization and in-depth optimization related to the combination of hardware and software, such as operator fusion and buffer fusion. |
| Graph Mode | MindSpore static graph mode. In this mode, the neural network model is compiled into an entire graph and then delivered for execution, featuring high performance. |
| HCCL | Huawei Collective Communication Library, which implements multi-device and multi-card communication based on the Da Vinci architecture chip. |
| ImageNet | Image database organized based on the WordNet hierarchy (currently nouns only). |
| LeNet | A classical convolutional neural network architecture proposed by Yann LeCun and others. |
| Loss | Difference between the predicted value and the actual value, which is a standard for determining the model quality of deep learning. |
| LSTM | Long short-term memory, an artificial recurrent neural network (RNN) architecture used for processing and predicting important events with long intervals and delays in a time series. |
| Manifest | A data format file. Huawei ModelArts adopts this format. For details, see <https://support.huaweicloud.com/engineers-modelarts/modelarts_23_0009.html>. |
| ME | Mind Expression, MindSpore frontend, which is used to compile tasks from user source code to computational graphs, control execution during training, maintain contexts (in non-sink mode), and dynamically generate graphs (in PyNative mode). |
| MindArmour | MindSpore security component, which is used for AI adversarial example management, AI model attack defense and enhancement, and AI model robustness evaluation. |
| MindData | MindSpore data framework, which provides data loading, augmentation, dataset management, and visualization. |
| MindInsight | MindSpore visualization component, which visualizes information such as scalars, images, computational graphs, and model hyperparameters. |
| MindSpore | Huawei-led open-source deep learning framework. |
| MindSpore Predict | A lightweight deep neural network inference engine that provides the inference function for models trained by MindSpore on the device side. |
| MNIST database | Modified National Institute of Standards and Technology database, a large handwritten digit database that is commonly used to train various image processing systems. |
| PyNative Mode | MindSpore dynamic graph mode. In this mode, operators in the neural network are delivered and executed one by one, facilitating the compilation and debugging of the neural network model. |
| ResNet-50 | Residual Neural Network 50, a residual neural network proposed by Kaiming He and three other researchers from Microsoft Research. |
| Schema | Data set structure definition file, which defines the fields contained in a data set and the field types. |
| Summary | An operator that monitors the values of tensors in the network. It is a peripheral operation in the graph and does not affect the data flow. |
| TBE | Tensor Boost Engine, an operator development tool that is extended based on the Tensor Virtual Machine (TVM) framework. |
| TFRecord | Data format defined by TensorFlow. |
.. MindSpore documentation master file, created by
sphinx-quickstart on Thu Mar 24 10:00:00 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
MindSpore Documentation
=======================
.. toctree::
:glob:
:maxdepth: 1
architecture
roadmap
constraints_on_network_construction
operator_list
glossary
# RoadMap
The following shows MindSpore's top-priority plans for the coming year. We will continuously adjust the priorities based on user feedback.
<!-- TOC -->
- [RoadMap](#roadmap)
- [Preset Models](#preset-models)
- [Usability](#usability)
- [Performance Optimization](#performance-optimization)
- [Architecture Evolution](#architecture-evolution)
- [MindInsight Debugging and Optimization](#mindinsight-debugging-and-optimization)
- [MindArmour Security Hardening Package](#mindarmour-security-hardening-package)
- [Inference Framework](#inference-framework)
<!-- /TOC -->
In general, we will make continuous improvements in the following aspects:
1. Support more preset models.
2. Continuously supplement APIs and operator libraries to improve usability and programming experience.
3. Provide comprehensive support for Huawei Ascend AI processors and continuously optimize performance and the software architecture.
4. Improve visualization, debugging and optimization, and security-related tools.
We sincerely hope that you can join the discussion in the user community and contribute your suggestions.
## Preset Models
* CV: Classic models for object detection, GAN, image segmentation, and posture recognition.
* NLP: RNN and Transformer neural networks, expanding applications based on the BERT pre-training model.
* Other: GNN, reinforcement learning, probabilistic programming, and AutoML.
## Usability
* Supplement APIs such as operators, optimizers, and loss functions.
* Complete the native expression support of the Python language.
* Support common Tensor/Math operations.
* Add more application scenarios of automatic parallelization to improve the accuracy of policy search.
## Performance Optimization
* Optimize the compilation time.
* Low-bit mixed precision training and inference.
* Improve memory utilization.
* Provide more fusion optimization methods.
* Improve the execution performance in PyNative.
## Architecture Evolution
* Graph-kernel fusion optimization: use fine-grained graph IR to express operators, forming an intermediate representation (IR) with operator boundaries and uncovering more graph-level optimization opportunities.
* Support more programming languages.
* Optimize the automatic scheduling and distributed training data cache mechanism of data augmentation.
* Continuously improve MindSpore IR.
* Support distributed training in parameter server mode.
## MindInsight Debugging and Optimization
* Training process observation
* Histogram
* Optimize the display of computational and data graphs.
* Integrate the performance profiling and debugger tools.
* Support comparison between multiple trainings.
* Training result lineage
* Data augmentation lineage comparison.
* Training process diagnosis
* Performance profiling.
* Graph model-based debugger.
## MindArmour Security Hardening Package
* Test the model security.
* Provide model security hardening tools.
* Protect data privacy during training and inference.
## Inference Framework
* Support TensorFlow, Caffe, and ONNX model formats.
* Support iOS.
* Add and improve more CPU operators.
* Support more CV/NLP models.
* Online learning.
* Support deployment on IoT devices.
* Low-bit quantization.
* CPU and NPU heterogeneous scheduling.
# Overall Architecture
This document introduces the overall architecture of MindSpore.
<!-- TOC -->
- [Overall Architecture](#overall-architecture)
<!-- /TOC -->
The MindSpore framework consists of three layers: the MindSpore Frontend Expression layer, the MindSpore Graph Engine layer, and the MindSpore Backend Runtime layer.
![architecture](./images/architecture.png)
- MindSpore Frontend Expression layer (Mind Expression, ME)
This layer contains the Python APIs, MindSpore IR (intermediate representation), and graph high level optimization (GHLO).
- The Python APIs provide users with unified interfaces for model training, inference, and export, as well as unified interfaces for data processing, augmentation, and format conversion.
- GHLO includes hardware-independent optimization (such as dead code elimination), auto parallel, and auto differentiation.
- MindSpore IR provides a unified intermediate representation, based on which MindSpore performs pass optimization.
- MindSpore Graph Engine layer (Graph Engine, GE)
This layer contains graph low level optimization (GLLO) and graph execution.
- GLLO includes hardware-related optimization and in-depth software-hardware co-optimization such as operator fusion and buffer fusion.
- Graph execution provides offline graph execution, the communication APIs required for distributed training, and other functions.
- MindSpore Backend Runtime layer
This layer contains the efficient running environments on the cloud, edge, and device.
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
# -- Project information -----------------------------------------------------
project = 'MindSpore'
copyright = '2020, MindSpore'
author = 'MindSpore'
# The full version, including alpha/beta/rc tags
release = '0.1.0-alpha'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx_markdown_tables',
'recommonmark',
]
source_suffix = {
'.rst': 'restructuredtext',
'.md': 'markdown',
}
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
pygments_style = 'sphinx'
autodoc_inherit_docstrings = False
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
html_search_language = 'zh'
html_search_options = {'dict': '../resource/jieba.txt'}
# Constraints on Network Construction Using Python
<!-- TOC -->
- [Constraints on Network Construction Using Python](#constraints-on-network-construction-using-python)
- [Overview](#overview)
- [Syntax Constraints](#syntax-constraints)
- [Supported Python Data Types](#supported-python-data-types)
- [MindSpore Extended Data Type](#mindspore-extended-data-type)
- [Expression Types](#expression-types)
- [Statement Types](#statement-types)
- [System Functions](#system-functions)
- [Function Parameters](#function-parameters)
- [Operators](#operators)
- [Slicing Operations](#slicing-operations)
- [Unsupported Syntax](#unsupported-syntax)
- [Network Definition Constraints](#network-definition-constraints)
- [Instance Types on the Entire Network](#instance-types-on-the-entire-network)
- [Network Input Type](#network-input-type)
- [Network Graph Optimization](#network-graph-optimization)
- [Network Construction Components](#network-construction-components)
- [Other Constraints](#other-constraints)
<!-- /TOC -->
## Overview
MindSpore compiles user source code written in Python syntax into computational graphs. Common functions and instances inherited from nn.Cell can currently be converted into computational graphs; converting arbitrary Python source code is not yet supported, so user source code is subject to constraints, mainly syntax constraints and network definition constraints. These constraints may change as MindSpore evolves.
## Syntax Constraints
### Supported Python Data Types
* Number: includes `int`, `float`, and `bool`. Complex numbers are not supported.
* String
* List: only the append method is supported currently; updating a list copies and generates a new list.
* Tuple
* Dictionary: only String-type keys are supported currently.
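As an illustration, the following minimal sketch (the function name `scale_and_sum` and its inputs are hypothetical, not from the MindSpore API) combines the supported scalar, `bool`, and `tuple` types inside compiled code:

```python
import numpy as np
from mindspore import Tensor, ms_function

@ms_function
def scale_and_sum(x, y):
    factor = 2        # int scalar
    use_sum = True    # bool; `if` conditions must be constants
    pair = (x, y)     # tuple of Tensors
    if use_sum:
        return (pair[0] + pair[1]) * factor
    return pair[0] * factor

x = Tensor(np.ones([2, 2]).astype(np.float32))
y = Tensor(np.ones([2, 2]).astype(np.float32))
print(scale_and_sum(x, y))  # expected: a 2x2 Tensor filled with 4.0
```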
### MindSpore Extended Data Type
* Tensor: Tensor variables must be defined instances.
### Expression Types
| Operation | Description
| :----------- |:--------
| Unary operator |`+`, `-`, and `not`. The `+` operator supports only scalars.
| Binary operator |`+`, `-`, `*`, `/`, and `%`.
| `if` expression | For example, `a = x if x < y else y`.
| Comparison expression | `>`, `>=`, `<`, `<=`, `==`, and `!=`.
| Logical expression | `and` and `or`.
| `lambda` expression | For example, `lambda x, y: x + y`.
| Reserved keyword type | `True`, `False`, and `None`.
### Statement Types
| Statement | Compared with Python
| :----------- |:--------
| `for` | The iteration sequence must be a tuple or list; some nested scenarios are supported.
| `while` | Some nested scenarios are supported.
| `if` | Consistent with Python usage, but the input of the `if` condition must be a constant.
| `def` | Same as that in Python.
| Assignment statement | Multiple subscript access on lists and dictionaries cannot be used as an l-value.
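As a quick illustration of the `for` constraint, the following minimal sketch (the function name `sum_over_tuple` is hypothetical) iterates over a tuple inside compiled code:

```python
import numpy as np
from mindspore import Tensor, ms_function

@ms_function
def sum_over_tuple(x, y, z):
    total = x
    for item in (y, z):  # the iteration sequence must be a tuple or list
        total = total + item
    return total

ones = Tensor(np.ones([2, 2]).astype(np.float32))
print(sum_over_tuple(ones, ones, ones))  # expected: a 2x2 Tensor filled with 3.0
```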
### System Functions
* len
* partial
* map
* zip
* range
### Function Parameters
* Default parameter value: `Tensor`-type default values are not supported currently; the `int`, `float`, `bool`, `None`, `str`, `tuple`, `list`, and `dict` types are supported.
* Variable parameters: backward propagation of functions with variable parameters is not supported currently.
* Key-value pair parameters: backward propagation of functions with key-value pair parameters is not supported currently.
* Variable key-value pair parameters: backward propagation of functions with variable key-value pairs is not supported currently.
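For example, the following minimal sketch (the function name `scaled_add` is hypothetical, and it assumes the compiler accepts a call that relies on the default value) uses a supported `int` default value; a `Tensor` default value would be rejected:

```python
import numpy as np
from mindspore import Tensor, ms_function

@ms_function
def scaled_add(x, y, factor=2):  # int default value: supported; a Tensor default would not be
    return (x + y) * factor

x = Tensor(np.ones([2, 2]).astype(np.float32))
y = Tensor(np.ones([2, 2]).astype(np.float32))
print(scaled_add(x, y))  # expected: a 2x2 Tensor filled with 4.0
```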
### Operators
| Operator | Supported Type
| :----------- |:--------
| `+` |Scalar, `Tensor`, and `tuple`
| `-` |Scalar and `Tensor`
| `*` |Scalar and `Tensor`
| `/` |Scalar and `Tensor`
| `[]` |The operation object type can be `list`, `tuple`, or `Tensor`. Multiple subscript access is supported as an r-value but not as an l-value, and the index type cannot be Tensor. For access constraints on the tuple and Tensor types, see the description of slicing operations.
### Slicing Operations
* `tuple` slicing operation: `tuple_x[start:stop:step]`
- `tuple_x` is the tuple on which the slicing operation is performed.
- `start`: index where the slice starts. The value is of the `int` type, and the value range is `[-length(tuple_x), length(tuple_x) - 1]`. Default values can be used. The default settings are as follows:
- When `step > 0`, the default value is `0`.
- When `step < 0`, the default value is `length(tuple_x) - 1`.
- `stop`: index where the slice ends. The value is of the `int` type, and the value range is `[-length(tuple_x) - 1, length(tuple_x)]`. Default values can be used. The default settings are as follows:
- When `step > 0`, the default value is `length(tuple_x)`.
- When `step < 0`, the default value is `-1`.
- `step`: slicing step. The value is of the `int` type, and its range is `step != 0`. The default value `1` can be used.
* `Tensor` slicing operation: `tensor_x[start0:stop0:step0, start1:stop1:step1, start2:stop2:step2]`
- `tensor_x` is a `Tensor` with at least three dimensions, on which the slicing operation is performed.
- `start0`: index where the slice starts in dimension 0. The value is of the `int` type. Default values can be used. The default settings are as follows:
- When `step > 0`, the default value is `0`.
- When `step < 0`, the default value is `-1`.
- `stop0`: index where the slice ends in dimension 0. The value is of the `int` type. Default values can be used. The default settings are as follows:
- When `step > 0`, the default value is the length of dimension 0 of `tensor_x`.
- When `step < 0`, the default value is `-(1 + the length of dimension 0)`.
- `step0`: slicing step in dimension 0. The value is of the `int` type, and its range is `step != 0`. The default value `1` can be used.
- If the number of sliced dimensions is less than the number of dimensions of the `Tensor`, all elements are taken by default in the unspecified dimensions.
- Slice dimension reduction: if an integer index is passed for a dimension, the elements at that index in the dimension are obtained and the dimension is eliminated. For example, after `tensor_x[2:4:1, 1, 0:5:2]` with shape (4, 3, 6) is sliced, a `Tensor` with shape (2, 3) is generated; dimension 1 of the original `Tensor` is eliminated. See the sketch after this list.
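The following minimal sketch (the function name `slice_demo` is hypothetical) exercises the slicing and dimension-reduction behavior described above:

```python
import numpy as np
from mindspore import Tensor, ms_function

@ms_function
def slice_demo(tensor_x):
    # the integer index 1 on dimension 1 eliminates that dimension
    return tensor_x[2:4:1, 1, 0:5:2]

tensor_x = Tensor(np.ones([4, 3, 6]).astype(np.float32))
print(slice_demo(tensor_x))  # expected shape: (2, 3)
```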
### Unsupported Syntax
Currently, the following syntax is not supported in network constructors:
`break`, `continue`, `pass`, `raise`, `yield`, `async for`, `with`, `async with`, `assert`, `import`, and `await`.
## Network Definition Constraints
### Instance Types on the Entire Network
* Common Python function with the [@ms_function](https://www.mindspore.cn/api/zh-CN/0.1.0-alpha/api/python/mindspore/mindspore.html#mindspore.ms_function) decorator.
* Cell subclass inherited from [nn.Cell](https://www.mindspore.cn/api/zh-CN/0.1.0-alpha/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell).
### Network Input Type
* The training data input parameters of the entire network must be of the Tensor type.
* The generated ANF graph cannot contain the following constant nodes: string constants, constants with nested tuples, and constants with nested lists.
### Network Graph Optimization
During graph optimization at the ME frontend, dataclass, dictionary, list, and key-value pair operations are converted to tuple operations.
### Network Construction Components
| Category | Content
| :----------- |:--------
| `Cell` instance |[mindspore/nn/*](https://www.mindspore.cn/api/zh-CN/0.1.0-alpha/api/python/mindspore/mindspore.nn.html), and custom [Cell](https://www.mindspore.cn/api/zh-CN/0.1.0-alpha/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell).
| Member function of a `Cell` instance | Member functions of other classes can be called in the construct function of Cell.
| Function | Custom Python functions and the system functions listed above.
| dataclass instance | Class decorated with @dataclass.
| Primitive operator |[mindspore/ops/operations/*](https://www.mindspore.cn/api/zh-CN/0.1.0-alpha/api/python/mindspore/mindspore.ops.operations.html).
| Composite operator |[mindspore/ops/composite/*](https://www.mindspore.cn/api/zh-CN/0.1.0-alpha/api/python/mindspore/mindspore.ops.composite.html).
| Operator generated by constexpr |Operators that compute using values generated by [@constexpr](https://www.mindspore.cn/api/zh-CN/0.1.0-alpha/api/python/mindspore/mindspore.ops.html#mindspore.ops.constexpr).
### Other Constraints
Input parameters of the construct function on the entire network, and parameters of functions decorated with ms_function, are generalized during graph compilation. Therefore, they cannot be passed to operators as constant inputs, as shown in the following example:
* The following is an example of incorrect input:
```python
import numpy as np
from mindspore import Tensor
from mindspore.nn import Cell
from mindspore.ops import operations as P

class ExpandDimsTest(Cell):
    def __init__(self):
        super(ExpandDimsTest, self).__init__()
        self.expandDims = P.ExpandDims()

    def construct(self, input_x, input_axis):
        return self.expandDims(input_x, input_axis)

expand_dim = ExpandDimsTest()
input_x = Tensor(np.random.randn(2, 2, 2, 2).astype(np.float32))
expand_dim(input_x, 0)
```
In this example, ExpandDimsTest is a single-operator network with two inputs: input_x and input_axis. The second input of the ExpandDims operator must be a constant because input_axis is needed to deduce the operator's output dimension during graph compilation. As a network parameter, input_axis is generalized into a variable whose value cannot be determined, so the output dimension cannot be deduced and graph compilation fails. Therefore, any input required for value deduction during the graph compilation phase must be a constant. In the API documentation, such operator parameters are marked with "constant input is needed".
* The correct approach is to pass the required value, or a class member variable, directly as the operator's constant input in the construct function, as follows:
```python
import numpy as np
from mindspore import Tensor
from mindspore.nn import Cell
from mindspore.ops import operations as P

class ExpandDimsTest(Cell):
    def __init__(self, axis):
        super(ExpandDimsTest, self).__init__()
        self.expandDims = P.ExpandDims()
        self.axis = axis

    def construct(self, input_x):
        return self.expandDims(input_x, self.axis)

axis = 0
expand_dim = ExpandDimsTest(axis)
input_x = Tensor(np.random.randn(2, 2, 2, 2).astype(np.float32))
expand_dim(input_x)
```
# Glossary
<!-- TOC -->
- [Glossary](#glossary)
<!-- /TOC -->
| Acronym and Abbreviation | Description |
| ----- | ----- |
| Ascend | Name of Huawei Ascend series chips. |
| CCE | Cube-based Computing Engine, an operator development tool oriented to hardware architecture programming. |
| CCE-C | Cube-based Computing Engine C, C code developed by using CCE. |
| CheckPoint | MindSpore model training checkpoint, which saves model parameters for inference or retraining. |
| CIFAR-10 | An open-source image data set that contains 60000 32 x 32 color images of 10 categories, with 6000 images per category. There are 50000 training images and 10000 test images. |
| CIFAR-100 | An open-source image data set that contains 100 categories, each with 600 images: 500 training images and 100 test images. |
| Davinci | Da Vinci architecture, a new chip architecture developed by Huawei. |
| EulerOS | Euler operating system, developed by Huawei based on the standard Linux kernel. |
| FC Layer | Fully connected layer, which acts as a classifier in a convolutional neural network. |
| FE | Fusion Engine, which connects to GE and TBE operators and has the capabilities of loading and managing the operator information library and managing fusion rules. |
| FP16 | 16-bit floating point, a half-precision floating point arithmetic format that consumes less memory. |
| FP32 | 32-bit floating point, a single-precision floating point arithmetic format. |
| GE | Graph Engine, the MindSpore computational graph execution engine, which performs hardware-related optimization (such as operator fusion and memory reuse) based on the frontend computational graph and starts tasks on the device side. |
| GHLO | Graph High Level Optimization, which includes hardware-independent optimization (such as dead code elimination), auto parallel, and auto differentiation. |
| GLLO | Graph Low Level Optimization, which includes hardware-related optimization and in-depth software-hardware co-optimization such as operator fusion and buffer fusion. |
| Graph Mode | MindSpore static graph mode. In this mode, the neural network model is compiled into a whole graph and then delivered for execution, featuring high performance. |
| HCCL | Huawei Collective Communication Library, which implements multi-machine multi-card communication based on Da Vinci architecture chips. |
| ImageNet | Image database organized based on the WordNet hierarchy (currently nouns only). |
| LeNet | A classical convolutional neural network architecture proposed by Yann LeCun and others. |
| Loss | Difference between the predicted value and the actual value, a standard for judging the quality of a deep learning model. |
| LSTM | Long short-term memory, a recurrent neural network architecture suitable for processing and predicting important events with long intervals and delays in a time series. |
| Manifest | A data format file adopted by Huawei ModelArts. For details, see <https://support.huaweicloud.com/engineers-modelarts/modelarts_23_0009.html>. |
| ME | Mind Expression, the MindSpore frontend, which compiles user source code into computational graphs, controls execution and maintains contexts during training (in non-sink mode), and supports the dynamic graph (PyNative mode). |
| MindArmour | MindSpore security component, used for AI adversarial example management, AI model attack defense and enhancement, and AI model robustness evaluation. |
| MindData | MindSpore data framework, which provides data loading, augmentation, dataset management, and visualization. |
| MindInsight | MindSpore visualization component, which visualizes information such as scalars, images, computational graphs, and model hyperparameters. |
| MindSpore | Huawei-led open-source deep learning framework. |
| MindSpore Predict | A lightweight deep neural network inference engine that provides device-side inference for models trained by MindSpore. |
| MNIST database | Modified National Institute of Standards and Technology database, a large handwritten digit database that is commonly used to train various image processing systems. |
| PyNative Mode | MindSpore dynamic graph mode. In this mode, operators in the neural network are delivered and executed one by one, facilitating the writing and debugging of the neural network model. |
| ResNet-50 | Residual Neural Network 50, a residual neural network proposed by Kaiming He and three other researchers from Microsoft Research. |
| Schema | Data set structure definition file, which defines the fields contained in a data set and their types. |
| Summary | An operator that monitors the values of tensors in the network. It is a peripheral operation in the graph and does not affect the data flow. |
| TBE | Tensor Boost Engine, an operator development tool extended on the basis of the TVM (Tensor Virtual Machine) framework. |
| TFRecord | Data format defined by TensorFlow. |
.. MindSpore documentation master file, created by
sphinx-quickstart on Thu Mar 24 10:00:00 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
MindSpore Documentation
=======================
.. toctree::
:glob:
:maxdepth: 1
architecture
roadmap
constraints_on_network_construction
operator_list
glossary
# RoadMap
The following shows MindSpore's top-priority plans for the coming year. We will continuously adjust the priorities based on user feedback.
In general, we will make continuous improvements in the following aspects:
1. Support more preset models.
2. Continuously supplement APIs and operator libraries to improve usability and the programming experience.
3. Provide comprehensive support for Huawei Ascend AI processors and continuously optimize performance and the software architecture.
4. Improve visualization, debugging and optimization, and security-related tools.
We sincerely hope that you will join the discussion in the user community and contribute your suggestions.
<!-- TOC -->
- [RoadMap](#roadmap)
- [Preset Models](#preset-models)
- [Usability](#usability)
- [Performance Optimization](#performance-optimization)
- [Architecture Evolution](#architecture-evolution)
- [MindInsight Debugging and Optimization](#mindinsight-debugging-and-optimization)
- [MindArmour Security Hardening Package](#mindarmour-security-hardening-package)
- [Inference Framework](#inference-framework)
<!-- /TOC -->
## Preset Models
* CV: classic models for object detection, GAN, image segmentation, and posture recognition.
* NLP: RNN and Transformer neural networks, expanding applications based on the BERT pre-training model.
* Other: GNN, reinforcement learning, probabilistic programming, AutoML, and more.
## Usability
* Supplement APIs such as operators, optimizers, and loss functions.
* Complete native expression support of the Python language.
* Support common Tensor/Math operations.
* Add more applicable scenarios of automatic parallelization to improve the accuracy of strategy search.
## Performance Optimization
* Optimize the compilation time.
* Low-bit mixed precision training and inference.
* Improve memory utilization.
* Provide more fusion optimization methods.
* Accelerate PyNative execution performance.
## Architecture Evolution
* Graph-kernel fusion optimization: use fine-grained graph IR to express operators, forming an intermediate representation with operator boundaries and uncovering more graph-level optimization opportunities.
* Support more programming languages.
* Optimize the automatic scheduling of data augmentation and the data cache mechanism for distributed training.
* Continuously improve MindSpore IR.
* Support distributed training in parameter server mode.
## MindInsight Debugging and Optimization
* Training process observation
* Histograms.
* Optimize the display of computational and data graphs.
* Integrate performance profiling and debugger tools.
* Support comparison between multiple trainings.
* Training result lineage
* Data augmentation lineage comparison.
* Training process diagnosis
* Performance profiling.
* Graph-model-based debugger.
## MindArmour Security Hardening Package
* Test model security.
* Provide model security hardening tools.
* Protect data privacy during training and inference.
## Inference Framework
* Support TensorFlow, Caffe, and ONNX model formats.
* Support iOS.
* Add and improve more CPU operators.
* Support more CV/NLP models.
* Online learning.
* Support deployment on IoT devices.
* Low-bit quantization.
* CPU and NPU heterogeneous scheduling.
# MindSpore Installation Guide
This document describes how to quickly install MindSpore on a CPU environment.
<!-- TOC -->
- [MindSpore Installation Guide](#mindspore-installation-guide)
- [Environment Requirements](#environment-requirements)
- [System Requirements and Software Dependencies](#system-requirements-and-software-dependencies)
- [(Optional) Installing Conda](#optional-installing-conda)
- [Installation Guide](#installation-guide)
- [Installing Using Executable Files](#installing-using-executable-files)
- [Installing Using the Source Code](#installing-using-the-source-code)
- [Installing MindArmour](#installing-mindarmour)
<!-- /TOC -->
## Environment Requirements
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindSpore 0.1.0-alpha | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.1/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.64 <br> - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6 <br> - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 <br> **Installation dependencies:**<br> same as the executable file installation dependencies. |
- On Ubuntu 18.04, GCC 7.3.0 can be installed directly by using the apt command.
- When the network is connected, dependency items in the requirements.txt file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
### (Optional) Installing Conda
1. Download the Conda installation package from the following path:
- [X86 Anaconda](https://www.anaconda.com/distribution/) or [X86 Miniconda](https://docs.conda.io/en/latest/miniconda.html)
2. Create and activate the Python environment.
```bash
conda create -n {your_env_name} python=3.7.5
conda activate {your_env_name}
```
> Conda is a powerful Python environment management tool. It is recommended that a beginner read related information on the Internet first.
## Installation Guide
### Installing Using Executable Files
1. Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions). It is recommended to perform SHA-256 integrity verification first and run the following command to install MindSpore:
```bash
pip install mindspore-{version}-cp37-cp37m-linux_{arch}.whl
```
2. Run the following command. If no loading error message such as `No module named 'mindspore'` is displayed, the installation is successful.
```bash
python -c 'import mindspore'
```
### Installing Using the Source Code
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindspore.git -b r0.1
```
2. Run the following command in the root directory of the source code to compile MindSpore:
```bash
bash build.sh -e cpu -z -j4
```
> - Before running the preceding command, ensure that the paths where the executable files cmake and patch are stored have been added to the environment variable PATH.
> - In the build.sh script, the git clone command is executed to obtain the code of third-party dependencies. Ensure that the network settings of Git are correct and available in advance.
> - If the build machine has good performance, you can add -j{number of threads} to the command to increase the number of threads, for example, `bash build.sh -e cpu -z -j12`.
3. Run the following command to install MindSpore:
```bash
chmod +x build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl
pip install build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl
```
4. Run the following command. If no loading error message such as `No module named 'mindspore'` is displayed, the installation is successful.
```bash
python -c 'import mindspore'
```
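Beyond the module import check, a quick functional test can confirm that operators actually execute. The following is a minimal sketch (not part of the official verification steps), assuming this build supports the `"CPU"` device target; it mirrors the Ascend verification script used elsewhere in these guides:

```python
import numpy as np
import mindspore.context as context
from mindspore import Tensor
from mindspore.ops import functional as F

# assumption: device_target="CPU" is available in this CPU build
context.set_context(device_target="CPU")
x = Tensor(np.ones([2, 2]).astype(np.float32))
y = Tensor(np.ones([2, 2]).astype(np.float32))
print(F.tensor_add(x, y))  # expected: a 2x2 Tensor filled with 2.0
```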
# Installing MindArmour
If you need to conduct AI model security research or enhance the security of the model in your AI applications, you can install MindArmour.
## Environment Requirements
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindArmour 0.1.0-alpha | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.1.0-alpha <br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.1/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the setup.py file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
## Installation Guide
### Installing Using Executable Files
1. Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions). It is recommended to perform SHA-256 integrity verification first and run the following command to install MindArmour:
```bash
pip install mindarmour-{version}-cp37-cp37m-linux_{arch}.whl
```
2. Run the following command. If no loading error message such as `No module named 'mindarmour'` is displayed, the installation is successful.
```bash
python -c 'import mindarmour'
```
### Installing Using the Source Code
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindarmour.git -b r0.1
```
2. Run the following commands in the root directory of the source code to compile and install MindArmour:
```bash
cd mindarmour
python setup.py install
```
3. Run the following command. If no loading error message such as `No module named 'mindarmour'` is displayed, the installation is successful.
```bash
python -c 'import mindarmour'
```
# MindSpore Installation Guide
This document describes how to quickly install MindSpore on a CPU environment.
<!-- TOC -->
- [MindSpore Installation Guide](#mindspore-installation-guide)
- [Environment Requirements](#environment-requirements)
- [System Requirements and Software Dependencies](#system-requirements-and-software-dependencies)
- [(Optional) Installing Conda](#optional-installing-conda)
- [Installation Guide](#installation-guide)
- [Installing Using Executable Files](#installing-using-executable-files)
- [Installing Using the Source Code](#installing-using-the-source-code)
- [Installing MindArmour](#installing-mindarmour)
<!-- /TOC -->
## Environment Requirements
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindSpore 0.1.0-alpha | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.1/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.64 <br> - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6 <br> - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 <br> **Installation dependencies:**<br> same as the executable file installation dependencies. |
- On Ubuntu 18.04, GCC 7.3.0 can be installed directly by using the apt command.
- When the network is connected, dependency items in the requirements.txt file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
### (Optional) Installing Conda
1. Download the Conda installation package from the following path:
- [X86 Anaconda](https://www.anaconda.com/distribution/) or [X86 Miniconda](https://docs.conda.io/en/latest/miniconda.html)
2. Create and activate the Python environment.
```bash
conda create -n {your_env_name} python=3.7.5
conda activate {your_env_name}
```
> Conda is a powerful Python environment management tool. It is recommended that a beginner read related information on the Internet first.
## Installation Guide
### Installing Using Executable Files
1. Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions/en). It is recommended to perform SHA-256 integrity verification first and run the following command to install MindSpore:
```bash
pip install mindspore-{version}-cp37-cp37m-linux_{arch}.whl
```
2. Run the following command. If no loading error message such as `No module named 'mindspore'` is displayed, the installation is successful.
```bash
python -c 'import mindspore'
```
### Installing Using the Source Code
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindspore.git -b r0.1
```
2. Run the following command in the root directory of the source code to compile MindSpore:
```bash
bash build.sh -e cpu -z -j4
```
> - Before running the preceding command, ensure that the paths where the executable files cmake and patch are stored have been added to the environment variable PATH.
> - In the build.sh script, the git clone command is executed to obtain the code of third-party dependencies. Ensure that the network settings of Git are correct and available in advance.
> - If the build machine has good performance, you can add -j{number of threads} to the command to increase the number of threads, for example, `bash build.sh -e cpu -z -j12`.
3. Run the following command to install MindSpore:
```bash
chmod +x build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl
pip install build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl
```
4. Run the following command. If no loading error message such as `No module named 'mindspore'` is displayed, the installation is successful.
```bash
python -c 'import mindspore'
```
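Beyond the module import check, a quick functional test can confirm that operators actually execute. The following is a minimal sketch (not part of the official verification steps), assuming this build supports the `"CPU"` device target; it mirrors the Ascend verification script used elsewhere in these guides:

```python
import numpy as np
import mindspore.context as context
from mindspore import Tensor
from mindspore.ops import functional as F

# assumption: device_target="CPU" is available in this CPU build
context.set_context(device_target="CPU")
x = Tensor(np.ones([2, 2]).astype(np.float32))
y = Tensor(np.ones([2, 2]).astype(np.float32))
print(F.tensor_add(x, y))  # expected: a 2x2 Tensor filled with 2.0
```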
# Installing MindArmour
If you need to conduct AI model security research or enhance the security of the model in your applications, you can install MindArmour.
## Environment Requirements
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindArmour 0.1.0-alpha | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.1.0-alpha <br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.1/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the setup.py file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
## Installation Guide
### Installing Using Executable Files
1. Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions/en). It is recommended to perform SHA-256 integrity verification first and run the following command to install MindArmour:
```bash
pip install mindarmour-{version}-cp37-cp37m-linux_{arch}.whl
```
2. Run the following command. If no loading error message such as `No module named 'mindarmour'` is displayed, the installation is successful.
```bash
python -c 'import mindarmour'
```
### Installing Using the Source Code
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindarmour.git -b r0.1
```
2. Run the following command in the root directory of the source code to compile and install MindArmour:
```bash
cd mindarmour
python setup.py install
```
3. Run the following command. If no loading error message such as `No module named 'mindarmour'` is displayed, the installation is successful.
```bash
python -c 'import mindarmour'
```
# MindSpore Installation Guide
This document describes how to quickly install MindSpore on an Ascend AI processor environment.
<!-- TOC -->
- [MindSpore Installation Guide](#mindspore-installation-guide)
- [Environment Requirements](#environment-requirements)
- [Hardware Requirements](#hardware-requirements)
- [System Requirements and Software Dependencies](#system-requirements-and-software-dependencies)
- [(Optional) Installing Conda](#optional-installing-conda)
- [Configuring Software Package Dependencies](#configuring-software-package-dependencies)
- [Installation Guide](#installation-guide)
- [Installing Using Executable Files](#installing-using-executable-files)
- [Installing Using the Source Code](#installing-using-the-source-code)
- [Configuring Environment Variables](#configuring-environment-variables)
- [Installation Verification](#installation-verification)
- [Installing MindInsight](#installing-mindinsight)
- [Installing MindArmour](#installing-mindarmour)
<!-- /TOC -->
## Environment Requirements
### Hardware Requirements
- Ascend 910 AI processor
> - How to apply: fill in the [application form](https://www.mindspore.cn/table) and send it to contact@mindspore.cn. After the application is approved, you can obtain cloud resources.
> - Reserve at least 32 GB memory for each card.
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindSpore 0.1.0-alpha | - Ubuntu 16.04 or later x86_64 <br> - EulerOS 2.8 aarch64 <br> - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - Ascend 910 AI processor software package (version: Atlas T 1.1.T106) <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.1/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - Ascend 910 AI processor software package (version: Atlas T 1.1.T106) <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.64 <br> - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6 <br> - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 <br> **Installation dependencies:**<br> same as the executable file installation dependencies. |
- Confirm that the current user has permission to access the installation path `/usr/local/HiAI` of the Ascend 910 AI processor software package (version: Atlas T 1.1.T106). If not, the root user needs to add the current user to the user group of `/usr/local/HiAI`. For the specific configuration, see the software package documentation.
- On Ubuntu 18.04, GCC 7.3.0 can be installed directly by using the apt command.
- When the network is connected, dependency items in the requirements.txt file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
### (Optional) Installing Conda
1. Download the Conda installation package for your CPU architecture from the following paths:
- [X86 Anaconda](https://www.anaconda.com/distribution/) or [X86 Miniconda](https://docs.conda.io/en/latest/miniconda.html)
- [ARM Anaconda](https://github.com/Archiconda/build-tools/releases/download/0.2.3/Archiconda3-0.2.3-Linux-aarch64.sh)
2. Create and activate the Python environment.
```bash
conda create -n {your_env_name} python=3.7.5
conda activate {your_env_name}
```
> Conda is a powerful Python environment management tool. It is recommended that a beginner read related information on the Internet first.
### Configuring Software Package Dependencies
- Install the .whl packages provided in the Ascend 910 AI processor software package (version: Atlas T 1.1.T106). The .whl packages are released with the software package; reinstall them after the software package is upgraded.
```bash
pip install /usr/local/HiAI/runtime/lib64/topi-{version}-py3-none-any.whl
pip install /usr/local/HiAI/runtime/lib64/te-{version}-py3-none-any.whl
```
## Installation Guide
### Installing Using Executable Files
- Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions). It is recommended to perform SHA-256 integrity verification first and run the following command to install MindSpore:
```bash
pip install mindspore-{version}-cp37-cp37m-linux_{arch}.whl
```
### Installing Using the Source Code
The compilation and installation must be performed on the Ascend 910 AI processor environment.
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindspore.git -b r0.1
```
2. Run the following command in the root directory of the source code to compile MindSpore:
```bash
bash build.sh -e d -z
```
> - Before running the preceding command, ensure that the paths where the executable files cmake and patch are stored have been added to the environment variable PATH.
> - In the build.sh script, the git clone command is executed to obtain the code of third-party dependencies. Ensure that the network settings of Git are correct and available in advance.
> - The default number of compilation threads in build.sh is 8. If the build machine has poor performance, compilation errors may occur; you can add -j{number of threads} to the command to reduce the number of threads, for example, `bash build.sh -e d -z -j4`.
3. Run the following command to install MindSpore:
```bash
chmod +x build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl
pip install build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl
```
## Configuring Environment Variables
- After MindSpore is installed, export the runtime-related environment variables.
```bash
# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING.
export GLOG_v=2
# Conda environmental options
LOCAL_HIAI=/usr/local/HiAI # the root directory of run package
# lib libraries that the run package depends on
export LD_LIBRARY_PATH=${LOCAL_HIAI}/runtime/lib64/:/usr/local/HiAI/driver/lib64:${LD_LIBRARY_PATH}
# Environment variables that must be configured
export PATH=${LOCAL_HIAI}/runtime/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
export TBE_IMPL_PATH=${LOCAL_HIAI}/runtime/ops/op_impl/built-in/ai_core/tbe/impl/ # TBE operator implementation path
export PYTHONPATH=${LOCAL_HIAI}/runtime/ops/op_impl/built-in/ai_core/tbe/:${PYTHONPATH} # Python library that TBE implementation depends on
```
## Installation Verification
- After installing MindSpore and configuring the environment variables, execute the following Python script:
```python
import numpy as np
from mindspore import Tensor
from mindspore.ops import functional as F
import mindspore.context as context

context.set_context(device_target="Ascend")
x = Tensor(np.ones([1, 3, 3, 4]).astype(np.float32))
y = Tensor(np.ones([1, 3, 3, 4]).astype(np.float32))
print(F.tensor_add(x, y))
```
- If the following output is displayed, the installation verification passes.
```
[[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]],
[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]],
[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]]]
```
# Installing MindInsight
If you need to view information such as scalars, images, computational graphs, and model hyperparameters during training, you can install MindInsight.
## Environment Requirements
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindInsight 0.1.0-alpha | - Ubuntu 16.04 or later x86_64 <br> - EulerOS 2.8 aarch64 <br> - EulerOS 2.5 x86_64 <br> | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.1.0-alpha <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/r0.1/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [node.js](https://nodejs.org/en/download/) >= 10.19.0 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 <br> **Installation dependencies:**<br> same as the executable file installation dependencies. |
- When the network is connected, dependency items in the requirements.txt file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
## Installation Guide
### Installing Using Executable Files
1. Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions). It is recommended to perform SHA-256 integrity verification first and run the following command to install MindInsight:
```bash
pip install mindinsight-{version}-cp37-cp37m-linux_{arch}.whl
```
2. Run the following command. If `web address: http://127.0.0.1:8080` is displayed, the installation is successful.
```bash
mindinsight start
```
### Installing Using the Source Code
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindinsight.git -b r0.1
```
2. Install MindInsight by using either of the following installation methods:
(1) Access the root directory of the source code and run the installation commands.
```bash
cd mindinsight
pip install -r requirements.txt
python setup.py install
```
(2) Build a .whl package for installation.
Access the build directory of the source code and run the MindInsight compilation script.
```bash
cd mindinsight/build
bash build.sh
```
Access the output directory of the source code, where the generated MindInsight installation package is stored, and run the installation command.
```bash
cd mindinsight/output
pip install mindinsight-{version}-cp37-cp37m-linux_{arch}.whl
```
3. Run the following command. If `web address: http://127.0.0.1:8080` is displayed, the installation is successful.
```bash
mindinsight start
```
# Installing MindArmour
If you need to conduct AI model security research or enhance the security of the model in your AI applications, you can install MindArmour.
## Environment Requirements
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindArmour 0.1.0-alpha | - Ubuntu 16.04 or later x86_64 <br> - EulerOS 2.8 aarch64 <br> - EulerOS 2.5 x86_64 <br> | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.1.0-alpha <br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.1/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the setup.py file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
## Installation Guide
### Installing Using Executable Files
1. Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions). It is recommended to perform SHA-256 integrity verification first and run the following command to install MindArmour:
```bash
pip install mindarmour-{version}-cp37-cp37m-linux_{arch}.whl
```
2. Run the following command. If no loading error message such as `No module named 'mindarmour'` is displayed, the installation is successful.
```bash
python -c 'import mindarmour'
```
### Installing Using the Source Code
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindarmour.git -b r0.1
```
2. Run the following commands in the root directory of the source code to compile and install MindArmour:
```bash
cd mindarmour
python setup.py install
```
3. Run the following command. If no loading error message such as `No module named 'mindarmour'` is displayed, the installation is successful.
```bash
python -c 'import mindarmour'
```
# MindSpore Installation Guide
This document describes how to quickly install MindSpore on an Ascend AI processor environment.
<!-- TOC -->
- [MindSpore Installation Guide](#mindspore-installation-guide)
- [Environment Requirements](#environment-requirements)
- [Hardware Requirements](#hardware-requirements)
- [System Requirements and Software Dependencies](#system-requirements-and-software-dependencies)
- [(Optional) Installing Conda](#optional-installing-conda)
- [Configuring Software Package Dependencies](#configuring-software-package-dependencies)
- [Installation Guide](#installation-guide)
- [Installing Using Executable Files](#installing-using-executable-files)
- [Installing Using the Source Code](#installing-using-the-source-code)
- [Configuring Environment Variables](#configuring-environment-variables)
- [Installation Verification](#installation-verification)
- [Installing MindInsight](#installing-mindinsight)
- [Installing MindArmour](#installing-mindarmour)
<!-- /TOC -->
## Environment Requirements
### Hardware Requirements
- Ascend 910 AI processor
> - Reserve at least 32 GB memory for each card.
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindSpore 0.1.0-alpha | - Ubuntu 16.04 or later x86_64 <br> - EulerOS 2.8 aarch64 <br> - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - Ascend 910 AI processor software package (version: Atlas T 1.1.T106) <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.1/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - Ascend 910 AI processor software package (version: Atlas T 1.1.T106) <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.64 <br> - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6 <br> - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 <br> **Installation dependencies:**<br> same as the executable file installation dependencies. |
- Confirm that the current user has permission to access the installation path `/usr/local/HiAI` of the Ascend 910 AI processor software package (version: Atlas T 1.1.T106). If not, the root user needs to add the current user to the user group of `/usr/local/HiAI`. For the specific configuration, see the software package documentation.
- On Ubuntu 18.04, GCC 7.3.0 can be installed directly by using the apt command.
- When the network is connected, dependency items in the requirements.txt file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
### (Optional) Installing Conda
1. Download the Conda installation package from the following path:
- [X86 Anaconda](https://www.anaconda.com/distribution/) or [X86 Miniconda](https://docs.conda.io/en/latest/miniconda.html)
- [ARM Anaconda](https://github.com/Archiconda/build-tools/releases/download/0.2.3/Archiconda3-0.2.3-Linux-aarch64.sh)
2. Create and activate the Python environment.
```bash
conda create -n {your_env_name} python=3.7.5
conda activate {your_env_name}
```
> Conda is a powerful Python environment management tool. It is recommended that a beginner read related information on the Internet first.
### Configuring Software Package Dependencies
- Install the .whl packages provided in the Ascend 910 AI processor software package (version: Atlas T 1.1.T106). The .whl packages are released with the software package; reinstall them after the software package is upgraded.
```bash
pip install /usr/local/HiAI/runtime/lib64/topi-{version}-py3-none-any.whl
pip install /usr/local/HiAI/runtime/lib64/te-{version}-py3-none-any.whl
```
## Installation Guide
### Installing Using Executable Files
- Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions/en). It is recommended to perform SHA-256 integrity verification first and run the following command to install MindSpore:
```bash
pip install mindspore-{version}-cp37-cp37m-linux_{arch}.whl
```
### Installing Using the Source Code
The compilation and installation must be performed on the Ascend 910 AI processor environment.
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindspore.git -b r0.1
```
2. Run the following command in the root directory of the source code to compile MindSpore:
```bash
bash build.sh -e d -z
```
> - Before running the preceding command, ensure that the paths where the executable files cmake and patch are stored have been added to the environment variable PATH.
> - In the build.sh script, the git clone command is executed to obtain the code of third-party dependencies. Ensure that the network settings of Git are correct and available in advance.
> - The default number of compilation threads in build.sh is 8. If the build machine has poor performance, compilation errors may occur; you can add -j{number of threads} to the command to reduce the number of threads, for example, `bash build.sh -e d -z -j4`.
3. Run the following command to install MindSpore:
```bash
chmod +x build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl
pip install build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl
```
## Configuring Environment Variables
- After MindSpore is installed, export runtime-related environment variables.
```bash
# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING.
export GLOG_v=2
# Conda environmental options
LOCAL_HIAI=/usr/local/HiAI # the root directory of run package
# lib libraries that the run package depends on
export LD_LIBRARY_PATH=${LOCAL_HIAI}/runtime/lib64/:/usr/local/HiAI/driver/lib64:${LD_LIBRARY_PATH}
# Environment variables that must be configured
export PATH=${LOCAL_HIAI}/runtime/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
export TBE_IMPL_PATH=${LOCAL_HIAI}/runtime/ops/op_impl/built-in/ai_core/tbe/impl/ # TBE operator implementation path
export PYTHONPATH=${LOCAL_HIAI}/runtime/ops/op_impl/built-in/ai_core/tbe/:${PYTHONPATH} # Python library that TBE implementation depends on
```
## Installation Verification
- After configuring the environment variables, execute the following Python script:
```python
import numpy as np
from mindspore import Tensor
from mindspore.ops import functional as F
import mindspore.context as context
context.set_context(device_target="Ascend")
x = Tensor(np.ones([1,3,3,4]).astype(np.float32))
y = Tensor(np.ones([1,3,3,4]).astype(np.float32))
print(F.tensor_add(x, y))
```
- If the following output is displayed, the installation verification passes.
```
[[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]],
[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]],
[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]]]
```
# Installing MindInsight
If you need to view information such as scalars, images, computational graphs, and model hyperparameters during training, you can install MindInsight.
## Environment Requirements
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindInsight 0.1.0-alpha | - Ubuntu 16.04 or later x86_64 <br> - EulerOS 2.8 aarch64 <br> - EulerOS 2.5 x86_64 <br> | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.1.0-alpha <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/r0.1/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [node.js](https://nodejs.org/en/download/) >= 10.19.0 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 <br> **Installation dependencies:**<br> same as the executable file installation dependencies. |
- When the network is connected, dependency items in the requirements.txt file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
## Installation Guide
### Installing Using Executable Files
1. Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions/en). It is recommended to perform SHA-256 integrity verification first, and then run the following command to install MindInsight:
```bash
pip install mindinsight-{version}-cp37-cp37m-linux_{arch}.whl
```
2. Run the following command. If `web address: http://127.0.0.1:8080` is displayed, the installation is successful.
```bash
mindinsight start
```
### Installing Using the Source Code
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindinsight.git -b r0.1
```
2. Install MindInsight by using either of the following installation methods:
(1) Access the root directory of the source code and run the following installation command:
```bash
cd mindinsight
pip install -r requirements.txt
python setup.py install
```
(2) Create a .whl package to install MindInsight.
Access the build directory of the source code and run the MindInsight compilation script.
```bash
cd mindinsight/build
bash build.sh
```
Access the output directory of the source code, where the generated MindInsight installation package is stored, and run the installation command.
```bash
cd mindinsight/output
pip install mindinsight-{version}-cp37-cp37m-linux_{arch}.whl
```
3. Run the following command. If `web address: http://127.0.0.1:8080` is displayed, the installation is successful.
```bash
mindinsight start
```
# Installing MindArmour
If you need to conduct AI model security research or enhance the security of the model in your applications, you can install MindArmour.
## Environment Requirements
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindArmour 0.1.0-alpha | - Ubuntu 16.04 or later x86_64 <br> - EulerOS 2.8 aarch64 <br> - EulerOS 2.5 x86_64 <br> | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.1.0-alpha <br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.1/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the setup.py file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
## Installation Guide
### Installing Using Executable Files
1. Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions/en). It is recommended to perform SHA-256 integrity verification first, and then run the following command to install MindArmour:
```bash
pip install mindarmour-{version}-cp37-cp37m-linux_{arch}.whl
```
2. Run the following command. If no loading error message such as `No module named 'mindarmour'` is displayed, the installation is successful.
```bash
python -c 'import mindarmour'
```
### Installing Using the Source Code
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindarmour.git -b r0.1
```
2. Run the following command in the root directory of the source code to compile and install MindArmour:
```bash
cd mindarmour
python setup.py install
```
3. Run the following command. If no loading error message such as `No module named 'mindarmour'` is displayed, the installation is successful.
```bash
python -c 'import mindarmour'
```
# MindSpore Installation Guide
This document describes how to quickly install MindSpore on an NVIDIA GPU environment.
<!-- TOC -->
- [MindSpore Installation Guide](#mindspore-installation-guide)
- [Environment Requirements](#environment-requirements)
- [Hardware Requirements](#hardware-requirements)
- [System Requirements and Software Dependencies](#system-requirements-and-software-dependencies)
- [(Optional) Installing Conda](#optional-installing-conda)
- [Installation Guide](#installation-guide)
- [Installing Using Executable Files](#installing-using-executable-files)
- [Installing Using the Source Code](#installing-using-the-source-code)
- [Installation Verification](#installation-verification)
- [Installing MindArmour](#installing-mindarmour)
<!-- /TOC -->
## Environment Requirements
### Hardware Requirements
- Nvidia GPU
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindSpore 0.1.0-alpha | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CUDA 9.2](https://developer.nvidia.com/cuda-92-download-archive) / [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) <br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br> - [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) <br> - [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.4.8-1 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.1/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.64 <br> - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6 <br> - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 <br> - [CUDA 9.2](https://developer.nvidia.com/cuda-92-download-archive) / [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) <br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br> **Installation dependencies:**<br> same as the executable file installation dependencies. |
- On Ubuntu 18.04, GCC 7.3.0 can be installed directly by using the apt command.
- When the network is connected, dependency items in the requirements.txt file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
### (Optional) Installing Conda
1. Download the Conda installation package from the following path:
   - [X86 Anaconda](https://www.anaconda.com/distribution/) or [X86 Miniconda](https://docs.conda.io/en/latest/miniconda.html)
2. Create and activate the Python environment.
```bash
conda create -n {your_env_name} python=3.7.5
conda activate {your_env_name}
```
> Conda is a powerful Python environment management tool. It is recommended that a beginner read related information on the Internet first.
## Installation Guide
### Installing Using Executable Files
- Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions). It is recommended to perform SHA-256 integrity verification first, and then run the following command to install MindSpore:
```bash
pip install mindspore-{version}-cp37-cp37m-linux_{arch}.whl
```
### Installing Using the Source Code
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindspore.git -b r0.1
```
2. Run the following command in the root directory of the source code to compile MindSpore:
```bash
bash build.sh -e gpu -M on -z
```
> - Before running the preceding command, ensure that the paths of the cmake and patch executables have been added to the PATH environment variable.
> - The build.sh script executes the git clone command to obtain the code of third-party dependencies. Ensure that the network settings of Git are correct and available in advance.
> - The default number of compilation threads in build.sh is 8. If the build machine performance is poor, compilation errors may occur. You can add `-j{thread count}` to the command to reduce the number of threads. For example, `bash build.sh -e gpu -M on -z -j4`.
3. Run the following command to install MindSpore:
```bash
chmod +x build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl
pip install build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl
```
## Installation Verification
- After MindSpore is installed, execute the following Python script:
```python
import numpy as np
from mindspore import Tensor
from mindspore.ops import functional as F
import mindspore.context as context
context.set_context(device_target="GPU")
x = Tensor(np.ones([1,3,3,4]).astype(np.float32))
y = Tensor(np.ones([1,3,3,4]).astype(np.float32))
print(F.tensor_add(x, y))
```
- If the following output is displayed, the installation verification is passed.
```
[[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]],
[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]],
[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]]]
```
# Installing MindArmour
If you need to conduct AI model security research or enhance the security of the model in your AI applications, you can install MindArmour.
## Environment Requirements
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindArmour 0.1.0-alpha | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.1.0-alpha <br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.1/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the setup.py file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
## Installation Guide
### Installing Using Executable Files
1. Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions). It is recommended to perform SHA-256 integrity verification first, and then run the following command to install MindArmour:
```bash
pip install mindarmour-{version}-cp37-cp37m-linux_{arch}.whl
```
2. Run the following command. If no loading error message such as `No module named 'mindarmour'` is displayed, the installation is successful.
```bash
python -c 'import mindarmour'
```
### Installing Using the Source Code
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindarmour.git -b r0.1
```
2. Run the following command in the root directory of the source code to compile and install MindArmour:
```bash
cd mindarmour
python setup.py install
```
3. Run the following command. If no loading error message such as `No module named 'mindarmour'` is displayed, the installation is successful.
```bash
python -c 'import mindarmour'
```
# MindSpore Installation Guide
This document describes how to quickly install MindSpore on an NVIDIA GPU environment.
<!-- TOC -->
- [MindSpore Installation Guide](#mindspore-installation-guide)
- [Environment Requirements](#environment-requirements)
- [Hardware Requirements](#hardware-requirements)
- [System Requirements and Software Dependencies](#system-requirements-and-software-dependencies)
- [(Optional) Installing Conda](#optional-installing-conda)
- [Installation Guide](#installation-guide)
- [Installing Using Executable Files](#installing-using-executable-files)
- [Installing Using the Source Code](#installing-using-the-source-code)
- [Installation Verification](#installation-verification)
- [Installing MindArmour](#installing-mindarmour)
<!-- /TOC -->
## Environment Requirements
### Hardware Requirements
- Nvidia GPU
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindSpore 0.1.0-alpha | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CUDA 9.2](https://developer.nvidia.com/cuda-92-download-archive) / [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) <br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br> - [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) <br> - [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.4.8-1 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.1/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.64 <br> - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6 <br> - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 <br> - [CUDA 9.2](https://developer.nvidia.com/cuda-92-download-archive) / [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) <br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br> **Installation dependencies:**<br> same as the executable file installation dependencies. |
- On Ubuntu 18.04, GCC 7.3.0 can be installed directly by using the apt command (a sketch follows this list).
- When the network is connected, dependency items in the requirements.txt file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
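A minimal sketch of the apt-based GCC installation mentioned above, assuming the `gcc-7`/`g++-7` packages that Ubuntu 18.04 provides for the GCC 7.x series:
```bash
sudo apt-get update
# On Ubuntu 18.04, the gcc-7 and g++-7 packages provide the GCC 7.x series.
sudo apt-get install -y gcc-7 g++-7
gcc-7 --version
```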
### (Optional) Installing Conda
1. Download the Conda installation package from the following path:
- [X86 Anaconda](https://www.anaconda.com/distribution/) or [X86 Miniconda](https://docs.conda.io/en/latest/miniconda.html)
2. Create and activate the Python environment.
```bash
conda create -n {your_env_name} python=3.7.5
conda activate {your_env_name}
```
> Conda is a powerful Python environment management tool. It is recommended that a beginner read related information on the Internet first.
## Installation Guide
### Installing Using Executable Files
- Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions/en). It is recommended to perform SHA-256 integrity verification first (a minimal sketch follows the command below), and then run the following command to install MindSpore:
```bash
pip install mindspore-{version}-cp37-cp37m-linux_{arch}.whl
```
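As referenced above, the SHA-256 check can be done with the standard `sha256sum` tool; compare the printed value with the hash published on the download page. The file name placeholders mirror the command above:
```bash
sha256sum mindspore-{version}-cp37-cp37m-linux_{arch}.whl
```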
### Installing Using the Source Code
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindspore.git -b r0.1
```
2. Run the following command in the root directory of the source code to compile MindSpore:
```bash
bash build.sh -e gpu -M on -z
```
> - Before running the preceding command, ensure that the paths of the cmake and patch executables have been added to the PATH environment variable.
> - The build.sh script executes the git clone command to obtain the code of third-party dependencies. Ensure that the network settings of Git are correct and available in advance.
> - The default number of compilation threads in build.sh is 8. If the build machine performance is poor, compilation errors may occur. You can add `-j{thread count}` to the command to reduce the number of threads. For example, `bash build.sh -e gpu -M on -z -j4`.
3. Run the following command to install MindSpore:
```bash
chmod +x build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl
pip install build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl
```
## Installation Verification
- After installation, execute the following Python script:
```python
import numpy as np
from mindspore import Tensor
from mindspore.ops import functional as F
import mindspore.context as context
context.set_context(device_target="GPU")
x = Tensor(np.ones([1,3,3,4]).astype(np.float32))
y = Tensor(np.ones([1,3,3,4]).astype(np.float32))
print(F.tensor_add(x, y))
```
- The output should be as follows:
```
[[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]],
[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]],
[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]]]
```
# Installing MindArmour
If you need to conduct AI model security research or enhance the security of the model in your applications, you can install MindArmour.
## Environment Requirements
### System Requirements and Software Dependencies
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
| MindArmour 0.1.0-alpha | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore 0.1.0-alpha <br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.1/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the setup.py file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
## Installation Guide
### Installing Using Executable Files
1. Download the .whl package from the [MindSpore website](https://www.mindspore.cn/versions/en). It is recommended to perform SHA-256 integrity verification first, and then run the following command to install MindArmour:
```bash
pip install mindarmour-{version}-cp37-cp37m-linux_{arch}.whl
```
2. Run the following command. If no loading error message such as `No module named 'mindarmour'` is displayed, the installation is successful.
```bash
python -c 'import mindarmour'
```
### Installing Using the Source Code
1. Download the source code from the code repository.
```bash
git clone https://gitee.com/mindspore/mindarmour.git -b r0.1
```
2. Run the following command in the root directory of the source code to compile and install MindArmour:
```bash
cd mindarmour
python setup.py install
```
3. Run the following command. If no loading error message such as `No module named 'mindarmour'` is displayed, the installation is successful.
```bash
python -c 'import mindarmour'
```
Dataset
Loss Function
Optimizer
# Release List
<!-- TOC -->
- [Release List](#release-list)
- [0.1.0-alpha](#010-alpha)
- [Release Notes](#release-notes)
- [Downloads](#downloads)
- [Tutorials](#tutorials)
- [API](#api)
- [Docs](#docs)
- [master(unstable)](#masterunstable)
<!-- /TOC -->
## 0.1.0-alpha
### Release Notes
<https://gitee.com/mindspore/mindspore/blob/r0.1/RELEASE.md>
### Downloads
| Module Name | Hardware Platform | Operating System | Download Links | SHA-256 |
| --- | --- | --- | --- | --- |
| MindSpore | Ascend910 | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/ascend/ubuntu-x86/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl> | a76df4e96c4cb69b10580fcde2d4ef46b5d426be6d47a3d8fd379c97c3e66638 |
| | | EulerOS-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/ascend/euleros-x86/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl> | 45d4fcb37bf796b3208b7c1ca70dc0db1387a878ef27836d3d445f311c8c02e0 |
| | | EulerOS-aarch64 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/ascend/euleros-aarch64/mindspore-0.1.0-cp37-cp37m-linux_aarch64.whl> | 7daba2d1739ce19d55695460dce5ef044b4d38baad4f5117056e5f77f49a12b4 |
| | GPU CUDA 9.2 | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/gpu/cuda-9.2/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl> | b6e5623135b57b8c262f3e32d97fbe1e20e8c19da185a7aba97b9dc98c7ecda1 |
| | GPU CUDA 10.1 | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/gpu/cuda-10.1/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl> | 43711725cf7e071ca21b5ba25e90d6955789fe3495c62217e70869f52ae20c01 |
| | CPU | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/cpu/ubuntu-x86/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl> | 45c473a97a6cb227e4221117bfb1b3ebe3f2eab938e0b76d5117e6c3127b8e5c |
| MindInsight | Ascend910 | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindInsight/ubuntu/x86_64/mindinsight-0.1.0-cp37-cp37m-linux_x86_64.whl> | 960b6f485ce545ccce98adfb4c62cdea216c9b7851ffdc0669827c53811c3e59 |
| | | EulerOS-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindInsight/euleros/x86_64/mindinsight-0.1.0-cp37-cp37m-linux_x86_64.whl> | 9f1ef04fec09e5b90be4a6223b3bf2943334746c1f5dac37207db4524b64942f |
| | | EulerOS-aarch64 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindInsight/euleros/aarch64/mindinsight-0.1.0-cp37-cp37m-linux_aarch64.whl> | d64207126542571057572f856010a5a8b3362ccd9e5b5c81da5b78b94face5fe |
| MindArmour | Ascend910 | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindArmour/x86_64/mindarmour-0.1.0-cp37-cp37m-linux_x86_64.whl> | 7796b6c114ee4962ce605da59a9bc47390c8910acbac318ecc0598829aad6e8c |
| | | EulerOS-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindArmour/x86_64/mindarmour-0.1.0-cp37-cp37m-linux_x86_64.whl> | 7796b6c114ee4962ce605da59a9bc47390c8910acbac318ecc0598829aad6e8c |
| | | EulerOS-aarch64 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindArmour/aarch64/mindarmour-0.1.0-cp37-cp37m-linux_aarch64.whl> | f354fcdbb3d8b4022fda5a6636e763f8091aca2167dc23e60b7f7b6d710523cb |
| | GPU CUDA 9.2/GPU CUDA 10.1/CPU | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindArmour/x86_64/mindarmour-0.1.0-cp37-cp37m-linux_x86_64.whl> | 7796b6c114ee4962ce605da59a9bc47390c8910acbac318ecc0598829aad6e8c |
### Tutorials
<https://www.mindspore.cn/tutorial/en/0.1.0-alpha/index.html>
### API
<https://www.mindspore.cn/api/en/0.1.0-alpha/index.html>
### Docs
<https://www.mindspore.cn/docs/en/0.1.0-alpha/index.html>
## master(unstable)
### Tutorials
<https://www.mindspore.cn/tutorial/en/master/index.html>
### API
<https://www.mindspore.cn/api/en/master/index.html>
### Docs
<https://www.mindspore.cn/docs/en/master/index.html>
# Release List
<!-- TOC -->
- [Release List](#release-list)
- [0.1.0-alpha](#010-alpha)
- [Release Notes](#release-notes)
- [Downloads](#downloads)
- [Tutorials](#tutorials)
- [API](#api)
- [Docs](#docs)
- [master(unstable)](#masterunstable)
<!-- /TOC -->
## 0.1.0-alpha
### Release Notes
<https://gitee.com/mindspore/mindspore/blob/r0.1/RELEASE.md>
### Downloads
| Module Name | Hardware Platform | Operating System | Download Links | SHA-256 |
| --- | --- | --- | --- | --- |
| MindSpore | Ascend910 | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/ascend/ubuntu-x86/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl> | a76df4e96c4cb69b10580fcde2d4ef46b5d426be6d47a3d8fd379c97c3e66638 |
| | | EulerOS-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/ascend/euleros-x86/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl> | 45d4fcb37bf796b3208b7c1ca70dc0db1387a878ef27836d3d445f311c8c02e0 |
| | | EulerOS-aarch64 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/ascend/euleros-aarch64/mindspore-0.1.0-cp37-cp37m-linux_aarch64.whl> | 7daba2d1739ce19d55695460dce5ef044b4d38baad4f5117056e5f77f49a12b4 |
| | GPU CUDA 9.2 | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/gpu/cuda-9.2/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl> | b6e5623135b57b8c262f3e32d97fbe1e20e8c19da185a7aba97b9dc98c7ecda1 |
| | GPU CUDA 10.1 | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/gpu/cuda-10.1/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl> | 43711725cf7e071ca21b5ba25e90d6955789fe3495c62217e70869f52ae20c01 |
| | CPU | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindSpore/cpu/ubuntu-x86/mindspore-0.1.0-cp37-cp37m-linux_x86_64.whl> | 45c473a97a6cb227e4221117bfb1b3ebe3f2eab938e0b76d5117e6c3127b8e5c |
| MindInsight | Ascend910 | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindInsight/ubuntu/x86_64/mindinsight-0.1.0-cp37-cp37m-linux_x86_64.whl> | 960b6f485ce545ccce98adfb4c62cdea216c9b7851ffdc0669827c53811c3e59 |
| | | EulerOS-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindInsight/euleros/x86_64/mindinsight-0.1.0-cp37-cp37m-linux_x86_64.whl> | 9f1ef04fec09e5b90be4a6223b3bf2943334746c1f5dac37207db4524b64942f |
| | | EulerOS-aarch64 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindInsight/euleros/aarch64/mindinsight-0.1.0-cp37-cp37m-linux_aarch64.whl> | d64207126542571057572f856010a5a8b3362ccd9e5b5c81da5b78b94face5fe |
| MindArmour | Ascend910 | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindArmour/x86_64/mindarmour-0.1.0-cp37-cp37m-linux_x86_64.whl> | 7796b6c114ee4962ce605da59a9bc47390c8910acbac318ecc0598829aad6e8c |
| | | EulerOS-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindArmour/x86_64/mindarmour-0.1.0-cp37-cp37m-linux_x86_64.whl> | 7796b6c114ee4962ce605da59a9bc47390c8910acbac318ecc0598829aad6e8c |
| | | EulerOS-aarch64 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindArmour/aarch64/mindarmour-0.1.0-cp37-cp37m-linux_aarch64.whl> | f354fcdbb3d8b4022fda5a6636e763f8091aca2167dc23e60b7f7b6d710523cb |
| | GPU CUDA 9.2/GPU CUDA 10.1/CPU | Ubuntu-x86 | <https://ms-release.obs.cn-north-4.myhuaweicloud.com/0.1.0-alpha/MindArmour/x86_64/mindarmour-0.1.0-cp37-cp37m-linux_x86_64.whl> | 7796b6c114ee4962ce605da59a9bc47390c8910acbac318ecc0598829aad6e8c |
### Tutorials
<https://www.mindspore.cn/tutorial/zh-CN/0.1.0-alpha/index.html>
### API
<https://www.mindspore.cn/api/zh-CN/0.1.0-alpha/index.html>
### Docs
<https://www.mindspore.cn/docs/zh-CN/0.1.0-alpha/index.html>
## master(unstable)
### Tutorials
<https://www.mindspore.cn/tutorial/zh-CN/master/index.html>
### API
<https://www.mindspore.cn/api/zh-CN/master/index.html>
### Docs
<https://www.mindspore.cn/docs/zh-CN/master/index.html>
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source_zh_cn
BUILDDIR = build_zh_cn
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
sphinx
recommonmark
sphinx-markdown-tables
sphinx_rtd_theme
jieba
Community
=========
Contributing Code
-----------------
If you want to contribute code, please read https://gitee.com/mindspore/mindspore/blob/master/CONTRIBUTING.md .
Contributing Documents
----------------------
If you want to contribute documents, please read https://gitee.com/mindspore/docs/blob/master/CONTRIBUTING_DOC.md .
# Computer Vision Applications
<!-- TOC -->
- [Computer Vision Applications](#computer-vision-applications)
- [Overview](#overview)
- [Image Classification](#image-classification)
- [Task Description and Preparation](#task-description-and-preparation)
- [Downloading the CIFAR-10 Dataset](#downloading-the-cifar-10-dataset)
- [Data Preloading and Preprocessing](#data-preloading-and-preprocessing)
- [Defining the CNN](#defining-the-cnn)
- [Defining the Loss Function and Optimizer](#defining-the-loss-function-and-optimizer)
- [Calling the High-level `Model` API To Train and Save the Model File](#calling-the-high-level-model-api-to-train-and-save-the-model-file)
- [Loading and Validating the Saved Model](#loading-and-validating-the-saved-model)
- [References](#references)
<!-- /TOC -->
## Overview
Computer vision is the most widely researched and mature technology field of deep learning, and is widely used in scenarios such as mobile phone photographing, intelligent security protection, and automated driving. Since AlexNet won the ImageNet competition in 2012, deep learning has greatly promoted the development of the computer vision field. Almost all the most advanced computer vision algorithms are related to deep learning. Deep neural networks can extract image features layer by layer while retaining local invariance, and are widely used in visual tasks such as classification, detection, segmentation, tracking, retrieval, recognition, enhancement, and reconstruction.
This chapter describes how to apply MindSpore to computer vision scenarios based on image classification tasks.
## Image Classification
Image classification is the most basic computer vision application and belongs to the supervised learning category. For example, determine the class of a digital image, such as cat, dog, airplane, or car. The function is as follows:
```python
def classify(image):
label = model(image)
return label
```
The key point is to select a proper model. The model generally refers to a deep convolutional neural network (CNN), such as AlexNet, VGG, GoogLeNet, or ResNet.
MindSpore presets a typical CNN, such as LeNet, which can be directly used by developers. The usage method is as follows:
```python
from mindspore.model_zoo.lenet import LeNet5
network = LeNet5(num_classes)
```
MindSpore supports the following image classification networks: LeNet, AlexNet, and ResNet.
## Task Description and Preparation
![cifar10](images/cifar10.jpg)
Figure 1: CIFAR-10 dataset [1]
Figure 1 shows that the CIFAR-10 dataset contains 60,000 images in 10 classes, with 6,000 images per class: 50,000 images are for training and 10,000 images are for testing. The size of each image is 32 x 32 pixels.
Generally, the training metric for image classification is accuracy, that is, the ratio of correctly predicted examples to the total number of predicted examples.
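As a small numeric illustration of this ratio (the predicted and ground-truth labels below are hypothetical, not taken from the CIFAR-10 task):
```python
import numpy as np

# Accuracy = correctly predicted examples / total predicted examples.
predicted = np.array([3, 5, 5, 0, 1])  # hypothetical predicted class labels
labels = np.array([3, 5, 4, 0, 1])     # hypothetical ground-truth labels
accuracy = np.mean(predicted == labels)
print(accuracy)  # 0.8 -> 4 of 5 predictions are correct
```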
Next, let's use MindSpore to solve the image classification task. The overall process is as follows:
1. Download the CIFAR-10 dataset.
2. Load and preprocess data.
3. Define a convolutional neural network. In this example, the ResNet-50 network is used.
4. Define the loss function and optimizer.
5. Call the high-level `Model` API to train and save the model file.
6. Load the saved model for inference.
> This example is intended for the Ascend 910 AI processor hardware platform. You can download the complete code at <https://gitee.com/mindspore/docs/blob/r0.1/tutorials/tutorial_code/resnet>.
The key parts of the task process code are explained below.
### Downloading the CIFAR-10 Dataset
The CIFAR-10 dataset can be downloaded from [the CIFAR-10 dataset website](https://www.cs.toronto.edu/~kriz/cifar.html). This example uses the data in binary format. In the Linux environment, run the following command to download the dataset:
```shell
wget https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz
```
Run the following command to decompress the dataset:
```shell
tar -zvxf cifar-10-binary.tar.gz
```
### Data Preloading and Preprocessing
1. Load the dataset.
Data can be loaded through the built-in `Cifar10Dataset` API.
> `Cifar10Dataset` reads data in random order. The built-in CIFAR-10 dataset contains images and labels. The default image format is uint8, and the default label data format is uint32. For details, see the description of the `Cifar10Dataset` API.
The data loading code is as follows, where `data_home` indicates the data storage location:
```python
cifar_ds = ds.Cifar10Dataset(data_home)
```
2. Enhance the data.
Data augmentation normalizes data and enriches the diversity of data samples. Common data augmentation operations include cropping, flipping, and color changes. MindSpore invokes the `map` method to perform augmentation operations on images.
```python
resize_height = 224
resize_width = 224
rescale = 1.0 / 255.0
shift = 0.0
# define map operations
random_crop_op = C.RandomCrop((32, 32), (4, 4, 4, 4)) # padding_mode default CONSTANT
random_horizontal_op = C.RandomHorizontalFlip()
resize_op = C.Resize((resize_height, resize_width)) # interpolation default BILINEAR
rescale_op = C.Rescale(rescale, shift)
normalize_op = C.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
changeswap_op = C.HWC2CHW()
type_cast_op = C2.TypeCast(mstype.int32)
c_trans = []
if training:
c_trans = [random_crop_op, random_horizontal_op]
c_trans += [resize_op, rescale_op, normalize_op, changeswap_op]
# apply map operations on images
cifar_ds = cifar_ds.map(input_columns="label", operations=type_cast_op)
cifar_ds = cifar_ds.map(input_columns="image", operations=c_trans)
```
3. Shuffle and batch process the data.
Shuffle data randomly to disorder the data sequence and read data in batches for model training:
```python
# apply repeat operations
cifar_ds = cifar_ds.repeat(repeat_num)
# apply shuffle operations
cifar_ds = cifar_ds.shuffle(buffer_size=10)
# apply batch operations
cifar_ds = cifar_ds.batch(batch_size=args_opt.batch_size, drop_remainder=True)
```
### Defining the CNN
A CNN is the standard algorithm for image classification tasks. It extracts image features through a layered structure and is built by stacking a series of network layers, such as convolutional layers, pooling layers, and activation layers.
ResNet is recommended. First, it is deep enough, with 34, 50, or 101 layers; the deeper the hierarchy, the stronger the representation capability, and the higher the classification accuracy. Second, it is learnable: its residual structure connects lower layers directly to upper layers through shortcut connections, which alleviates the vanishing-gradient problem caused by network depth during backpropagation. In addition, ResNet performs well in terms of recognition accuracy, model size, and parameter quantity.
MindSpore Model Zoo has a built-in ResNet model. In this example, the ResNet-50 network is used. The calling method is as follows:
```python
from mindspore.model_zoo.resnet import resnet50
network = resnet50(class_num=10)
```
For more information about ResNet, see [ResNet Paper](https://arxiv.org/abs/1512.03385).
### Defining the Loss Function and Optimizer
A loss function and an optimizer need to be defined. The loss function is the training objective of deep learning and is also referred to as the objective function. It measures the distance between the logits of a neural network and the labels, and its value is a scalar.
Common loss functions include mean square error, L2 loss, Hinge loss, and cross entropy. Cross entropy is usually used for image classification.
The optimizer is used to solve (train) the neural network. Because of the large scale of neural network parameters, deep learning uses the stochastic gradient descent (SGD) algorithm and its improved variants. MindSpore encapsulates common optimizers, such as `SGD`, `Adam`, and `Momentum`. In this example, the `Momentum` optimizer is used. Generally, two parameters need to be set: `moment` and `weight decay`.
An example of the code for defining the loss function and optimizer in MindSpore is as follows:
```python
# loss function definition
ls = SoftmaxCrossEntropyWithLogits(sparse=True, is_grad=False, reduction="mean")
# optimization definition
opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.01, 0.9)
```
### Calling the High-level `Model` API To Train and Save the Model File
After data preprocessing, network definition, and loss function and optimizer definition are complete, model training can be performed. Model training involves two iterations: multi-round iteration (epoch) of datasets and single-step iteration based on the batch size of datasets. The single-step iteration refers to extracting data from a dataset by batch, inputting the data to a network to calculate a loss function, and then calculating and updating a gradient of training parameters by using an optimizer.
To simplify the training process, MindSpore encapsulates the high-level `Model` API. You can enter the network, loss function, and optimizer to complete the `Model` initialization, and then call the `train` API for training. The `train` API parameters include the number of iterations (`epoch`) and dataset (`dataset`).
Model saving is a process of persisting training parameters. In the `Model` class, the model is saved through a callback function, as shown in the following code. You can set the parameters of the callback function by using `CheckpointConfig`: `save_checkpoint_steps` indicates the interval, in single-step iterations, at which the model is saved, and `keep_checkpoint_max` indicates the maximum number of saved models.
```python
'''
network, loss, optimizer are defined before.
batch_num, epoch_size are training parameters.
'''
model = Model(net, loss_fn=ls, optimizer=opt, metrics={'acc'})
# CheckPoint CallBack definition
config_ck = CheckpointConfig(save_checkpoint_steps=batch_num, keep_checkpoint_max=35)
ckpoint_cb = ModelCheckpoint(prefix="train_resnet_cifar10", directory="./", config=config_ck)
# LossMonitor is used to print loss value on screen
loss_cb = LossMonitor()
model.train(epoch_size, dataset, callbacks=[ckpoint_cb, loss_cb])
```
### Loading and Validating the Saved Model
The trained model file (such as resnet.ckpt) can be used to predict the class of a new image. Call the `load_checkpoint` API to load the model file, and then call the `eval` API of `Model` to predict the new image class.
```python
param_dict = load_checkpoint(args_opt.checkpoint_path)
load_param_into_net(net, param_dict)
eval_dataset = create_dataset(1, training=False)
res = model.eval(eval_dataset)
print("result: ", res)
```
## References
[1] https://www.cs.toronto.edu/~kriz/cifar.html
# Customized Debugging Information
<!-- TOC -->
- [Customized Debugging Information](#customized-debugging-information)
- [Overview](#overview)
- [Introduction to Callback](#introduction-to-callback)
- [Callback Capabilities of MindSpore](#callback-capabilities-of-mindspore)
- [Custom Callback](#custom-callback)
- [MindSpore Metrics](#mindspore-metrics)
- [MindSpore Print Operator](#mindspore-print-operator)
- [Log-related Environment Variables and Configurations](#log-related-environment-variables-and-configurations)
<!-- /TOC -->
## Overview
This section describes how to use the customized capabilities provided by MindSpore, such as callback, metrics, and log printing, to help you quickly debug the training network.
## Introduction to Callback
Callback here is not a function but a class. You can use callback to observe the internal status and related information of the network during training or perform specific actions in a specific period.
For example, you can monitor the loss, save model parameters, dynamically adjust parameters, and terminate training tasks in advance.
### Callback Capabilities of MindSpore
MindSpore provides the callback capabilities to allow users to insert customized operations in a specific phase of training or inference, including:
- Callback functions such as ModelCheckpoint, LossMonitor, and SummaryStep provided by the MindSpore framework
- Custom callback functions
Usage: pass a callback object to the `model.train` method; it can be a single callback or a list of callback objects, for example:
```python
ckpt_cb = ModelCheckpoint()
loss_cb = LossMonitor()
summary_cb = SummaryStep()
model.train(epoch, dataset, callbacks=[ckpt_cb, loss_cb, summary_cb])
```
ModelCheckpoint can save model parameters for retraining or inference.
LossMonitor can output loss information in logs for users to view. In addition, LossMonitor monitors the loss value change during training. When the loss value is `Nan` or `Inf`, the training terminates.
SummaryStep can save the training information to a file for later use.
### Custom Callback
You can customize callback based on the callback base class as required.
The callback base class is defined as follows:
```python
class Callback():
"""Callback base class"""
def begin(self, run_context):
"""Called once before the network executing."""
pass
def epoch_begin(self, run_context):
"""Called before each epoch beginning."""
pass
def epoch_end(self, run_context):
"""Called after each epoch finished."""
pass
def step_begin(self, run_context):
"""Called before each epoch beginning."""
pass
def step_end(self, run_context):
"""Called after each step finished."""
pass
def end(self, run_context):
"""Called once after network training."""
pass
```
The callback mechanism records important information during training and transfers it to the callback object through the dictionary variable `cb_params`.
You can obtain related attributes in each custom callback and perform customized operations. You can also customize other variables and transfer them to the `cb_params` object.
The main attributes of `cb_params` are as follows:
- loss_fn: Loss function
- optimizer: Optimizer
- train_dataset: Training dataset
- cur_epoch_num: Number of current epochs
- cur_step_num: Number of current steps
- batch_num: Number of steps in an epoch
- ...
You can inherit the callback base class to customize a callback object.
The following example describes how to use a custom callback function.
```python
import time

from mindspore.train.callback import Callback

class StopAtTime(Callback):
    def __init__(self, run_time):
        super(StopAtTime, self).__init__()
        self.run_time = run_time*60  # convert the time limit from minutes to seconds
def begin(self, run_context):
cb_params = run_context.original_args()
cb_params.init_time = time.time()
def step_end(self, run_context):
cb_params = run_context.original_args()
epoch_num = cb_params.cur_epoch_num
step_num = cb_params.cur_step_num
        loss = cb_params.net_outputs  # net_outputs holds the loss value of the current step
cur_time = time.time()
if (cur_time - cb_params.init_time) > self.run_time:
print("epoch: ", epoch_num, " step: ", step_num, " loss: ", loss)
run_context.request_stop()
stop_cb = StopAtTime(run_time=10)
model.train(100, dataset, callbacks=stop_cb)
```
The output is as follows:
```
epoch: 20 step: 32 loss: 2.298344373703003
```
This callback function is used to terminate the training within a specified period. You can use the `run_context.original_args()` method to obtain the `cb_params` dictionary, which contains the main attribute information described above.
In addition, you can modify and add values in the dictionary. In the preceding example, an `init_time` object is defined in `begin()` and transferred to the `cb_params` dictionary.
A decision is made at each `step_end`. When the training time is greater than the configured time threshold, a training termination signal will be sent to the `run_context` to terminate the training in advance and the current values of epoch, step, and loss will be printed.
## MindSpore Metrics
After the training is complete, you can use metrics to evaluate the training result.
MindSpore provides multiple metrics, such as `accuracy`, `loss`, `precision`, `recall`, and `F1`.
You can define a metrics dictionary object that contains multiple metrics and transfer them to the `model.eval` interface to verify the training precision.
```python
metrics = {
'accuracy': nn.Accuracy(),
'loss': nn.Loss(),
'precision': nn.Precision(),
'recall': nn.Recall(),
'f1_score': nn.F1()
}
net = ResNet()
loss = CrossEntropyLoss()
opt = Momentum()
model = Model(net, loss_fn=loss, optimizer=opt, metrics=metrics)
ds_eval = create_dataset()
output = model.eval(ds_eval)
```
The `model.eval()` method returns a dictionary that contains the metrics and results transferred to the metrics.
You can also define your own metrics class by inheriting the `Metric` base class and rewriting the `clear`, `update`, and `eval` methods (a sketch appears at the end of this section).
The `accuracy` operator is used as an example to describe the internal implementation principle.
The `accuracy` inherits the `EvaluationBase` base class and rewrites the preceding three methods.
The `clear()` method initializes related calculation parameters in the class.
The `update()` method accepts the predicted value and tag value and updates the internal variables of accuracy.
The `eval()` method calculates related indicators and returns the calculation result.
By invoking the `eval` method of `accuracy`, you will obtain the calculation result.
You can understand how `accuracy` runs by using the following code:
```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor

x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
y = Tensor(np.array([1, 0, 1]))
metric = nn.Accuracy()
metric.clear()
metric.update(x, y)
accuracy = metric.eval()
print('Accuracy is ', accuracy)
```
The output is as follows:
```
Accuracy is 0.6667
```
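As referenced above, here is a minimal sketch of a custom metric. The class name and the mean-absolute-error definition are illustrative, the `Metric` import path is an assumption based on the `nn` namespace used above, and the inputs are assumed to be MindSpore tensors convertible with `asnumpy()`:
```python
import numpy as np
from mindspore.nn.metrics import Metric  # import path is an assumption

class MeanAbsoluteError(Metric):
    """Hypothetical custom metric: mean absolute error between predictions and labels."""
    def __init__(self):
        super(MeanAbsoluteError, self).__init__()
        self.clear()

    def clear(self):
        # Initialize the internal accumulation variables.
        self._abs_error_sum = 0.0
        self._samples_num = 0

    def update(self, *inputs):
        # Accept the predicted value and the label value, and update the internal state.
        y_pred = inputs[0].asnumpy()
        y = inputs[1].asnumpy()
        self._abs_error_sum += np.abs(y_pred - y).sum()
        self._samples_num += y.size

    def eval(self):
        # Calculate and return the final indicator.
        return self._abs_error_sum / self._samples_num
```
An instance of such a class can then be passed in the metrics dictionary to `Model` in the same way as the built-in metrics above.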
## MindSpore Print Operator
The print operator developed by MindSpore outputs the tensors or character strings input by users. Multiple strings, multiple tensors, and a combination of tensors and strings are supported, separated by commas (,).
The print operator is used in the same way as other operators: declare it in `__init__()` and invoke it in `construct()`. The following is an example.
```python
import numpy as np
from mindspore import Tensor
from mindspore.ops import operations as P
import mindspore.nn as nn
import mindspore.context as context
context.set_context(mode=context.GRAPH_MODE)
class PrintDemo(nn.Cell):
def __init__(self):
super(PrintDemo, self).__init__()
self.print = P.Print()
def construct(self, x, y):
self.print('print Tensor x and Tensor y:', x, y)
return x
x = Tensor(np.ones([2, 1]).astype(np.int32))
y = Tensor(np.ones([2, 2]).astype(np.int32))
net = PrintDemo()
output = net(x, y)
```
The output is as follows:
```
print Tensor x and Tensor y:
Tensor shape:[[const vector][2, 1]]Int32
val:[[1]
[1]]
Tensor shape:[[const vector][2, 2]]Int32
val:[[1 1]
[1 1]]
```
## Log-related Environment Variables and Configurations
MindSpore uses glog to output logs. The following environment variables are commonly used:
1. `GLOG_v` specifies the log level. The default value is 2, indicating the WARNING level. The values are as follows: 0: DEBUG; 1: INFO; 2: WARNING; 3: ERROR.
2. `GLOG_logtostderr` controls the log output destination. When it is set to 1, logs are output to the screen; when it is set to 0, logs are output to a file. The default value is 1.
3. `GLOG_log_dir=YourPath` specifies the log output path. If `GLOG_log_dir` is specified and `GLOG_logtostderr` is set to 1, logs are output to the screen and not to a file.
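For example, the following shell configuration (the log directory is an illustrative path) writes INFO-level and higher logs to files instead of the screen:
```bash
mkdir -p /tmp/mindspore_logs             # glog does not create the directory itself
export GLOG_v=1                          # INFO level and above
export GLOG_logtostderr=0                # 0: output logs to a file instead of the screen
export GLOG_log_dir=/tmp/mindspore_logs  # illustrative output directory
```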
# Distributed Training
<!-- TOC -->
- [Distributed Training](#distributed-training)
- [Overview](#overview)
- [Preparations](#preparations)
- [Configuring Distributed Environment Variables](#configuring-distributed-environment-variables)
- [Invoking the Collective Communication Library](#invoking-the-collective-communication-library)
- [Loading Datasets](#loading-datasets)
- [Defining the Network](#defining-the-network)
- [Defining the Loss Function and Optimizer](#defining-the-loss-function-and-optimizer)
- [Defining the Loss Function](#defining-the-loss-function)
- [Defining the Optimizer](#defining-the-optimizer)
- [Training the Network](#training-the-network)
- [Running Test Cases](#running-test-cases)
<!-- /TOC -->
## Overview
MindSpore supports the `DATA_PARALLEL` and `AUTO_PARALLEL` parallel modes. Automatic parallelism is a distributed parallel mode that integrates data parallelism, model parallelism, and hybrid parallelism. It can automatically build cost models and select a parallel mode for users.
Among them:
- Data parallelism: a parallel mode in which data is split into batches across devices.
- Layerwise parallelism: a parallel mode in which parameters are split by channel.
- Hybrid parallelism: a parallel mode that covers both data parallelism and model parallelism.
- Cost model: a model built on the memory computation cost and communication cost, for which an efficient algorithm is designed to find the parallel strategy with the shortest training time.
In this tutorial, we will learn how to train the ResNet-50 network in `DATA_PARALLEL` or `AUTO_PARALLEL` mode on MindSpore.
For sample code, see
<https://gitee.com/mindspore/docs/blob/r0.1/tutorials/tutorial_code/distributed_training/resnet50_distributed_training.py>.
> The current sample is for the Ascend AI processor.
## Preparations
### Configuring Distributed Environment Variables
When distributed training is performed in the lab environment, you need to configure the networking information file for the current multi-card environment. If HUAWEI CLOUD is used, skip this section.
The Ascend 910 AI processor and 1980 AIServer are used as an example. The JSON configuration file of a two-card environment is as follows. In this example, the configuration file is named rank_table.json.
```json
{
"board_id": "0x0000",
"chip_info": "910",
"deploy_mode": "lab",
"group_count": "1",
"group_list": [
{
"device_num": "2",
"server_num": "1",
"group_name": "",
"instance_count": "2",
"instance_list": [
{"devices":[{"device_id":"0","device_ip":"192.1.27.6"}],"rank_id":"0","server_id":"10.155.111.140"},
{"devices":[{"device_id":"1","device_ip":"192.2.27.6"}],"rank_id":"1","server_id":"10.155.111.140"}
]
}
],
"para_plane_nic_location": "device",
"para_plane_nic_name": [
"eth0", "eth1"
],
"para_plane_nic_num": "2",
"status": "completed"
}
```
The following parameters need to be modified based on the actual training environment:
1. `server_num` indicates the number of hosts, and `server_id` indicates the IP address of the local host.
2. `device_num`, `para_plane_nic_num`, and `instance_count` indicate the number of cards.
3. `rank_id` indicates the logical sequence number of a card, which always starts from 0. `device_id` indicates the physical sequence number of a card, that is, its actual sequence number on the host where it is located.
4. `device_ip` indicates the IP address of the NIC. You can run the `cat /etc/hccn.conf` command on the current host to obtain the IP address of the NIC.
5. `para_plane_nic_name` indicates the name of the corresponding NIC.
After the networking information file is ready, add the file path to the environment variable `MINDSPORE_HCCL_CONFIG_PATH`. In addition, the `device_id` information needs to be transferred to the script. In this example, the information is transferred by configuring the environment variable DEVICE_ID.
```bash
export MINDSPORE_HCCL_CONFIG_PATH="./rank_table.json"
export DEVICE_ID=0
```
### Invoking the Collective Communication Library
You need to enable the distributed parameter `enable_hccl` in the `context.set_context()` API, set the `device_id` parameter, and invoke `init()` to complete the initialization operation.
In the sample, the graph mode is used during runtime. On the Ascend AI processor, Huawei Collective Communication Library (HCCL) is used.
```python
import os
from mindspore import context
from mindspore.communication.management import init
if __name__ == "__main__":
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend", enable_hccl=True, device_id=int(os.environ["DEVICE_ID"]))
init()
...
```
`mindspore.communication.management` encapsulates the collective communication APIs provided by the HCCL to help users obtain distributed information. Commonly used APIs include `get_rank` and `get_group_size`, which return the ID of the current card in the cluster and the total number of cards, respectively (a small sketch follows the note below).
> HCCL implements multi-device multi-card communication based on the Da Vinci architecture chip. The restrictions on using the distributed service are as follows:
> 1. In a single-node system, a cluster of 1, 2, 4, or 8 cards is supported. In a multi-node system, a cluster of 8 x N cards is supported.
> 2. Each server has eight NICs: NICs 0 to 3 and NICs 4 to 7 are deployed on two different network planes. For 2-card and 4-card training, the NICs must be interconnected, and clusters cannot be created across network planes.
> 3. The operating system needs to use the symmetric multiprocessing (SMP) mode.
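As referenced above, a minimal sketch of querying the cluster information after initialization (it assumes `init()` succeeds in a properly configured distributed environment):
```python
from mindspore.communication.management import init, get_rank, get_group_size

init()  # initialize the collective communication library first
rank_id = get_rank()          # ID of the current card in the cluster
rank_size = get_group_size()  # total number of cards in the cluster
print("card %d of %d" % (rank_id, rank_size))
```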
## Loading Datasets
During distributed training, data is imported in data parallel mode. The following uses `Cifar10Dataset` as an example to describe how to import the CIFAR-10 dataset in parallel mode, where `data_path` indicates the path of the dataset.
Different from a single-node system, the multi-node system needs to transfer the `num_shards` and `shard_id` parameters to the dataset API, which correspond to the total number of cards and the logical sequence number of the current card, respectively. You are advised to obtain these values through the HCCL APIs.
```python
import mindspore.common.dtype as mstype
import mindspore.dataset as ds
import mindspore.dataset.transforms.c_transforms as C
import mindspore.dataset.transforms.vision.c_transforms as vision
from mindspore.communication.management import get_rank, get_group_size
def create_dataset(repeat_num=1, batch_size=32, rank_id=0, rank_size=1):
resize_height = 224
resize_width = 224
rescale = 1.0 / 255.0
shift = 0.0
# get rank_id and rank_size
rank_id = get_rank()
rank_size = get_group_size()
data_set = ds.Cifar10Dataset(data_path, num_shards=rank_size, shard_id=rank_id)
# define map operations
random_crop_op = vision.RandomCrop((32, 32), (4, 4, 4, 4))
random_horizontal_op = vision.RandomHorizontalFlip()
resize_op = vision.Resize((resize_height, resize_width))
rescale_op = vision.Rescale(rescale, shift)
normalize_op = vision.Normalize((0.4465, 0.4822, 0.4914), (0.2010, 0.1994, 0.2023))
changeswap_op = vision.HWC2CHW()
type_cast_op = C.TypeCast(mstype.int32)
c_trans = [random_crop_op, random_horizontal_op]
c_trans += [resize_op, rescale_op, normalize_op, changeswap_op]
# apply map operations on images
data_set = data_set.map(input_columns="label", operations=type_cast_op)
data_set = data_set.map(input_columns="image", operations=c_trans)
# apply repeat operations
data_set = data_set.repeat(repeat_num)
# apply shuffle operations
data_set = data_set.shuffle(buffer_size=10)
# apply batch operations
data_set = data_set.batch(batch_size=batch_size, drop_remainder=True)
return data_set
```
## Defining the Network
In `DATA_PARALLEL` and `AUTO_PARALLEL` modes, the network is defined in the same way as in a single-node system. For sample code, see
<https://gitee.com/mindspore/docs/blob/r0.1/tutorials/tutorial_code/resnet/resnet.py>.
## Defining the Loss Function and Optimizer
### Defining the Loss Function
In the loss function, `SoftmaxCrossEntropyWithLogits` is expanded into multiple small operators according to its mathematical formula.
Compared with the fused loss, this allows the `AUTO_PARALLEL` mode to search for the optimal parallel strategy operator by operator.
```python
from mindspore.ops import operations as P
from mindspore import Tensor
import mindspore.ops.functional as F
import mindspore.common.dtype as mstype
import mindspore.nn as nn
class SoftmaxCrossEntropyExpand(nn.Cell):
def __init__(self, sparse=False):
super(SoftmaxCrossEntropyExpand, self).__init__()
self.exp = P.Exp()
self.sum = P.ReduceSum(keep_dims=True)
self.onehot = P.OneHot()
self.on_value = Tensor(1.0, mstype.float32)
self.off_value = Tensor(0.0, mstype.float32)
self.div = P.Div()
self.log = P.Log()
self.sum_cross_entropy = P.ReduceSum(keep_dims=False)
self.mul = P.Mul()
self.mul2 = P.Mul()
self.mean = P.ReduceMean(keep_dims=False)
self.sparse = sparse
self.max = P.ReduceMax(keep_dims=True)
self.sub = P.Sub()
def construct(self, logit, label):
logit_max = self.max(logit, -1)
exp = self.exp(self.sub(logit, logit_max))
exp_sum = self.sum(exp, -1)
softmax_result = self.div(exp, exp_sum)
if self.sparse:
label = self.onehot(label, F.shape(logit)[1], self.on_value, self.off_value)
softmax_result_log = self.log(softmax_result)
loss = self.sum_cross_entropy((self.mul(softmax_result_log, label)), -1)
loss = self.mul2(F.scalar_to_array(-1.0), loss)
loss = self.mean(loss, -1)
return loss
```
### Defining the Optimizer
The `Momentum` optimizer is used as the parameter update tool. The definition is the same as that of a single-node system.
```python
from mindspore.nn.optim.momentum import Momentum
lr = 0.01
momentum = 0.9
opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), lr, momentum)
```
## Training the Network
`context.set_auto_parallel_context()` is an API provided for users to set parallel parameters. The parameters are as follows:
- `parallel_mode`: distributed parallel mode. The options are `ParallelMode.DATA_PARALLEL` and `ParallelMode.AUTO_PARALLEL`.
- `mirror_mean`: During backward computation, the framework collects the gradients of parameters in data parallel mode across multiple machines, obtains the global gradient value, and transfers it to the optimizer for update. Setting it to True applies the `allreduce_mean` operation, and setting it to False applies the `allreduce_sum` operation.
In the following example, the parallel mode is set to `AUTO_PARALLEL`. `dataset_sink_mode=False` indicates that the non-sink mode is used. `LossMonitor` can return the loss value through the callback function.
```python
from mindspore.nn.optim.momentum import Momentum
from mindspore.train.callback import LossMonitor
from mindspore.train.model import Model, ParallelMode
from resnet import resnet50
def test_train_cifar(num_classes=10, epoch_size=10):
context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL, mirror_mean=True)
loss_cb = LossMonitor()
dataset = create_dataset(epoch_size)
net = resnet50(32, num_classes)
loss = SoftmaxCrossEntropyExpand(sparse=True)
opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.01, 0.9)
model = Model(net, loss_fn=loss, optimizer=opt)
model.train(epoch_size, dataset, callbacks=[loss_cb], dataset_sink_mode=False)
```
## Running Test Cases
Currently, MindSpore distributed execution uses the single-card single-process running mode. The number of processes must be the same as the number of used cards. Each process creates a folder to save its log and build information. The following is an example of a running script for two-card distributed training:
```bash
#!/bin/bash
export MINDSPORE_HCCL_CONFIG_PATH=./rank_table.json
export RANK_SIZE=2
for((i=0;i<$RANK_SIZE;i++))
do
mkdir device$i
cp ./resnet50_distributed_training.py ./device$i
cd ./device$i
export RANK_ID=$i
export DEVICE_ID=$i
echo "start training for device $i"
env > env$i.log
pytest -s -v ./resnet50_distributed_training.py > log$i 2>&1 &
cd ../
done
```