Commit b99c7a66 authored by bao liang, committed by dailidong

update documents (#740)

* update english documents

* refactor zk client

* update documents

* update zkclient

* update zkclient

* update documents

* add architecture-design

* change i18n

* update i18n

* update english documents

* add architecture-design

* update english documents

* update en-US documents

* add architecture-design

* update demo site
Parent f8f4556b
@@ -37,7 +37,7 @@ Its main objectives are as follows:
Stability | Easy to use | Features | Scalability |
-- | -- | -- | --
Decentralized multi-master and multi-worker | The visual process definition shows key information such as task status, task type, retry times, task running machine, and visual variables at a glance. | Supports pause and recover operations | Supports custom task types
HA is supported by itself | All process definition operations are visualized: drag tasks to draw DAGs, and configure data sources and resources. At the same time, an API mode of operation is provided for third-party systems. | Users on EasyScheduler can achieve a many-to-one or one-to-one mapping relationship through tenants and Hadoop users, which is very important for scheduling big data jobs. | The scheduler uses distributed scheduling, and the overall scheduling capability increases linearly with the scale of the cluster. Master and Worker support dynamic online and offline.
Overload processing: a task queue mechanism; the number of schedulable tasks on a single machine can be flexibly configured; when there are too many tasks, they are cached in the task queue and will not jam the machine. | One-click deployment | Supports traditional shell tasks, and also supports big data platform task scheduling: MR, Spark, SQL (mysql, postgresql, hive, sparksql), Python, Procedure, Sub_Process | |
@@ -62,7 +62,7 @@ Overload processing: Task queue mechanism, the number of schedulable tasks on a
- [**Upgrade document**](https://analysys.github.io/easyscheduler_docs_cn/升级文档.html?_blank "Upgrade document")
- <a href="http://106.75.43.194:8888" target="_blank">Online Demo</a>
For more documentation please refer to the <a href="https://analysys.github.io/easyscheduler_docs_cn/" target="_blank">EasyScheduler online documentation</a>
......
@@ -52,7 +52,7 @@ Easy Scheduler
- [**Upgrade document**](https://analysys.github.io/easyscheduler_docs_cn/升级文档.html?_blank "Upgrade document")
- <a href="http://106.75.43.194:8888" target="_blank">Online Demo</a>
For more documentation please refer to the <a href="https://analysys.github.io/easyscheduler_docs_cn/" target="_blank">EasyScheduler Chinese online documentation</a>
......
Easy Scheduler Release 1.0.1
===
Easy Scheduler 1.0.1 is the second version in the 1.x series. The update is as follows:
- 1. Outlook TLS email support
- 2. Servlet and protobuf jar conflict resolution
......
@@ -28,7 +28,7 @@ A: Support most mailboxes, qq, 163, 126, 139, outlook, aliyun, etc. are supported
## Q: What are the common system variable time parameters and how do I use them?
A: Please refer to 'System parameter' in the system manual.
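For orientation, a minimal sketch of a shell task that uses such time parameters. The variable names `${system.biz.date}` and `${system.datetime}` are taken from the system manual's system-parameter table; treat them as assumptions and verify them against your version:
```
# hypothetical shell-node script using built-in system time parameters
echo "business date (yyyyMMdd, one day before the scheduled date): ${system.biz.date}"
echo "scheduled datetime (yyyyMMddHHmmss): ${system.datetime}"
```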
---
@@ -46,7 +46,7 @@ A: Use **the administrator** to create a Worker group, **specify the Worker group**
## Q: Priority of the task
A: We also support **the priority of processes and tasks**. There are five priority levels: **HIGHEST, HIGH, MEDIUM, LOW and LOWEST**. **You can set the priority between different process instances, or set the priority of different task instances within the same process instance.** For details, please refer to the task priority design in the architecture design.
----
@@ -163,7 +163,7 @@ A: **Note:** **Master monitors Master and Worker services.**
2. If the Worker service is lost, the Master will detect that the Worker service is gone; if there is a Yarn task, the Yarn task will be killed and retried.
Please see the fault-tolerant design in the architecture design for details.
---
@@ -189,7 +189,7 @@ A: Yes, **if the timing start and end time is the same time, then this timing will
A: 1. Task dependencies within a **DAG** are resolved by splitting the DAG **from its zero in-degree** nodes
2. With **DEPENDENT task nodes**, you can achieve cross-process task or process dependencies; please refer to the (DEPENDENT) node design in the system manual.
Note: **Cross-project processes or task dependencies are not supported**
@@ -248,7 +248,7 @@ If it is a Spark task --queue mode specifies the queue
## Q: Master or Worker reports the following alarm
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs/images/master_worker_lack_res.png" width="60%" />
</p>
@@ -258,11 +258,10 @@ A: Change the value of master.properties **master.reserved.memory** under conf
## Q: The hive version is 1.1.0+cdh5.15.0, and the SQL hive task connection is reported incorrectly.
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs/images/cdh_hive_error.png" width="60%" />
</p>
A: Modify the hive entry in the pom as follows:
```
......
@@ -38,7 +38,7 @@ Its main objectives are as follows:
Stability | Easy to use | Features | Scalability |
-- | -- | -- | --
Decentralized multi-master and multi-worker | The visual process definition shows key information such as task status, task type, retry times, task running machine, and visual variables at a glance. | Supports pause and recover operations | Supports custom task types
HA is supported by itself | All process definition operations are visualized: drag tasks to draw DAGs, and configure data sources and resources. At the same time, an API mode of operation is provided for third-party systems. | Users on EasyScheduler can achieve a many-to-one or one-to-one mapping relationship through tenants and Hadoop users, which is very important for scheduling big data jobs. | The scheduler uses distributed scheduling, and the overall scheduling capability increases linearly with the scale of the cluster. Master and Worker support dynamic online and offline.
Overload processing: a task queue mechanism; the number of schedulable tasks on a single machine can be flexibly configured; when there are too many tasks, they are cached in the task queue and will not jam the machine. | One-click deployment | Supports traditional shell tasks, and also supports big data platform task scheduling: MR, Spark, SQL (mysql, postgresql, hive, sparksql), Python, Procedure, Sub_Process | |
@@ -55,17 +55,17 @@ Overload processing: Task queue mechanism, the number of schedulable tasks on a
### Document
- <a href="https://analysys.github.io/easyscheduler_docs/backend-deployment.html" target="_blank">Backend deployment documentation</a>
- <a href="https://analysys.github.io/easyscheduler_docs/frontend-deployment.html" target="_blank">Front-end deployment documentation</a>
- [**User manual**](https://analysys.github.io/easyscheduler_docs/system-manual.html?_blank "User manual")
- [**Upgrade document**](https://analysys.github.io/easyscheduler_docs/upgrade.html?_blank "Upgrade document")
- <a href="http://52.82.13.76:8888" target="_blank">Online Demo</a>
For more documentation please refer to the <a href="https://analysys.github.io/easyscheduler_docs/" target="_blank">EasyScheduler online documentation</a>
### Recent R&D plan
Work plan of Easy Scheduler: [R&D plan](https://github.com/analysys/EasyScheduler/projects/1), where the `In Develop` card holds the features of version 1.1.0 and the TODO card holds what is still to be done (including feature ideas)
......
# Summary
* [Instruction](README.md)
* Frontend Deployment
* [Preparations](frontend-deployment.md#Preparations)
* [Deployment](frontend-deployment.md#Deployment)
* [FAQ](frontend-deployment.md#FAQ)
* Backend Deployment
* [Preparations](backend-deployment.md#Preparations)
* [Deployment](backend-deployment.md#Deployment)
* [Quick Start](quick-start.md#Quick Start)
* System Use Manual
* [Operational Guidelines](system-manual.md#Operational Guidelines)
* [Security](system-manual.md#Security)
* [Monitor center](system-manual.md#Monitor center)
* [Task Node Type and Parameter Setting](system-manual.md#Task Node Type and Parameter Setting)
* [System parameter](system-manual.md#System parameter)
* [Architecture Design](architecture-design.md)
* Front-end development
* [Development environment](frontend-development.md#Development environment)
* [Project directory structure](frontend-development.md#Project directory structure)
* [System function module](frontend-development.md#System function module)
* [Routing and state management](frontend-development.md#Routing and state management)
* [specification](frontend-development.md#specification)
* [interface](frontend-development.md#interface)
* [Extended development](frontend-development.md#Extended development)
* Backend development documentation
* [Environmental requirements](backend-development.md#Environmental requirements)
* [Project compilation](backend-development.md#Project compilation)
* [Interface documentation](http://52.82.13.76:8888/escheduler/doc.html?language=en_US&lang=en)
* FAQ
* [FAQ](EasyScheduler-FAQ.md)
* EasyScheduler upgrade documentation
* [upgrade documentation](upgrade.md)
* History release notes
* [1.1.0 release](1.1.0-release.md)
* [1.0.5 release](1.0.5-release.md)
* [1.0.4 release](1.0.4-release.md)
* [1.0.3 release](1.0.3-release.md)
* [1.0.2 release](1.0.2-release.md)
* [1.0.1 release](1.0.1-release.md)
* [1.0.0 release]
@@ -2,12 +2,12 @@
There are two deployment modes for the backend:
- automatic deployment
- compile the source code and then deploy
## Preparations
Download the latest version of the installation package, download address: [gitee download](https://gitee.com/easyscheduler/EasyScheduler/attach_files/) or [github download](https://github.com/analysys/EasyScheduler/releases); download escheduler-backend-x.x.x.tar.gz (the back end, referred to as escheduler-backend) and escheduler-ui-x.x.x.tar.gz (the front end, referred to as escheduler-ui)
@@ -27,9 +27,9 @@ Download the latest version of the installation package, download address:
#### Preparations 2: Create deployment users
- Deployment users are created on all machines that require deployment scheduling, because the worker service executes jobs via `sudo -u {linux-user}`; deployment users therefore need sudo privileges, and passwordless sudo.
```
vi /etc/sudoers
# For example, the deployment user is an escheduler account
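# a hypothetical sudoers entry granting the escheduler user passwordless sudo
# (an assumption for illustration; adapt it to your security policy):
# escheduler  ALL=(ALL)  NOPASSWD: ALL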
@@ -50,7 +50,7 @@ Configure SSH secret-free login on deployment machines and other installation machines
Execute the following commands to create the database and account
```sql
CREATE DATABASE escheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL PRIVILEGES ON escheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
GRANT ALL PRIVILEGES ON escheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
@@ -65,7 +65,9 @@ Configure SSH secret-free login on deployment machines and other installation machines
spring.datasource.username
spring.datasource.password
```
Execute the scripts for creating tables and importing basic data
```
sh ./script/create_escheduler.sh
```
@@ -100,10 +102,10 @@ install.sh : One-click deployment script
- If you use hdfs-related functions, you need to copy **hdfs-site.xml** and **core-site.xml** to the conf directory
## Deployment
Automated deployment is recommended; experienced users can also deploy from source.
### Automated Deployment
- Install zookeeper tools
@@ -128,7 +130,7 @@ If all services are normal, the automatic deployment is successful
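One way to verify that the services are up is the JDK's `jps` tool; the process names below are assumptions based on the service list in this document and may differ by version:
```
jps
# expected to show one JVM per service, for example (assumed names):
# MasterServer, WorkerServer, ApiApplicationServer, AlertServer, LoggerServer
```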
After successful deployment, logs are stored in the specified folder and can be viewed there.
```
logs/
├── escheduler-alert-server.log
├── escheduler-master-server.log
@@ -137,7 +139,7 @@ After successful deployment, the log can be viewed and stored in a specified folder.
├── escheduler-logger-server.log
```
### Compile source code to deploy
After downloading the release version of the source package, unzip it into the root directory
@@ -152,7 +154,7 @@ After downloading the release version of the source package, unzip it into the root directory
After normal compilation, ./target/escheduler-{version}/ is generated in the current directory
### Start and stop services commonly used in the system (for the purpose of each service, please refer to the System Architecture Design for details)
* stop all services in the cluster
@@ -164,38 +166,38 @@ After normal compilation, ./target/escheduler-{version}/ is generated in the current directory
* start and stop one master server
```
sh ./bin/escheduler-daemon.sh start master-server
sh ./bin/escheduler-daemon.sh stop master-server
```
* start and stop one worker server
```
sh ./bin/escheduler-daemon.sh start worker-server
sh ./bin/escheduler-daemon.sh stop worker-server
```
* start and stop api server
```
sh ./bin/escheduler-daemon.sh start api-server
sh ./bin/escheduler-daemon.sh stop api-server
```
* start and stop logger server
```
sh ./bin/escheduler-daemon.sh start logger-server
sh ./bin/escheduler-daemon.sh stop logger-server
```
* start and stop alert server
```
sh ./bin/escheduler-daemon.sh start alert-server
sh ./bin/escheduler-daemon.sh stop alert-server
```
## Database Upgrade
Database upgrade is a function added in version 1.0.2. The database can be upgraded automatically by executing the following command:
```
......
{
"title": "EasyScheduler",
"author": "",
"description": "Scheduler",
"language": "en-US",
"gitbook": "3.2.3",
"styles": {
"website": "./styles/website.css"
},
"structure": {
"readme": "README.md"
},
"plugins":[
"expandable-chapters",
"insert-logo-link"
],
"pluginsConfig": {
"insert-logo-link": {
"src": "http://geek.analysys.cn/static/upload/236/2019-03-29/379450b4-7919-4707-877c-4d33300377d4.png",
"url": "https://github.com/analysys/EasyScheduler"
}
}
}
\ No newline at end of file
# frontend-deployment
The front-end has three deployment modes: automated deployment, manual deployment and compiled source deployment.
## Preparations
#### Download the installation package
Please download the latest version of the installation package, download address: [gitee](https://gitee.com/easyscheduler/EasyScheduler/attach_files/)
@@ -14,10 +15,11 @@ After downloading escheduler-ui-x.x.x.tar.gz, decompress it with `tar -zxvf escheduler-ui-x.x.x.tar.gz ./`
## Deployment
Choose either of the following two methods; automated deployment is recommended
### Automated Deployment
Edit the installation file `vi install-escheduler-ui.sh` in the `escheduler-ui` directory
@@ -36,7 +38,7 @@ esc_proxy_port="http://192.168.xx.xx:12345"
In this directory, execute `./install-escheduler-ui.sh`
### Manual Deployment
Install the epel source: `yum install epel-release -y`
@@ -44,10 +46,13 @@ Install Nginx `yum install nginx -y`
> #### Nginx configuration file address
```
/etc/nginx/conf.d/default.conf
```
> #### Configuration information (modify as needed)
```
server {
    listen 8888; # access port
@@ -81,7 +86,9 @@ server {
    }
}
```
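Since the hunk above elides most of the server block, here is a minimal sketch of the kind of configuration it describes. The UI root path and the proxied back-end address are illustrative assumptions (the `/escheduler` prefix matches the API URLs used elsewhere in these docs):
```
server {
    listen 8888;                                # access port
    server_name localhost;
    location / {
        root  /opt/escheduler-ui/dist;          # assumed path to the built front-end
        index index.html;
    }
    location /escheduler {
        proxy_pass http://192.168.xx.xx:12345;  # assumed back-end API address (esc_proxy_port)
        proxy_set_header Host $host;
    }
}
```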
> #### Restart the Nginx service
```
systemctl restart nginx
```
@@ -95,9 +102,11 @@ systemctl restart nginx
- status `systemctl status nginx`
## FAQ
#### Upload file size limit
Edit the configuration file `vi /etc/nginx/nginx.conf`
```
# change upload size
client_max_body_size 1024m
......
# Quick Start
* Administrator user login
> Address: 192.168.xx.xx:8888  Username and password: admin/escheduler123
<p align="center">
......
# System Use Manual
## Quick Start
> Refer to [Quick Start](Quick-Start.md)
## Operational Guidelines
### Create a project
@@ -47,7 +42,7 @@
- Click "Save", enter the name of the process definition, the description of the process definition, and set the global parameters.
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs/images/save-definition.png" width="60%" />
</p>
- For other types of nodes, refer to [task node types and parameter settings](#task node types and parameter settings)
@@ -66,13 +61,15 @@
* Notification group: When the process ends or fault tolerance occurs, process information is sent to all members of the notification group by mail.
* Recipient: Enter the mailbox and press Enter to save. When the process ends or fault tolerance occurs, an alert message is sent to the recipient list.
* Cc: Enter the mailbox and press Enter to save. When the process ends or fault tolerance occurs, the alert message is copied to the Cc list.
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs/images/start-process.png" width="60%" />
</p>
* Complement: To run the workflow definition for a specified date range, select the time range of the complement (currently only continuous days are supported), for example the data from May 1 to May 10, as shown in the figure:
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs/images/complement.png" width="60%" />
</p>
> Complement execution mode includes serial execution and parallel execution. In serial mode, the complement is executed sequentially from May 1 to May 10. In parallel mode, the tasks from May 1 to May 10 are executed simultaneously.
@@ -80,8 +77,9 @@
### Timing Process Definition
- Create Timing: "Process Definition -> Timing"
- Choose the start and stop time. Within this range the schedule works normally; beyond it, no more timed workflow instances are produced.
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs/images/timing.png" width="60%" />
</p>
- Add a timer to be executed once a day at 5:00 a.m., as shown below:
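In cron terms, assuming the Quartz-style crontab expressions that the timing dialog accepts (an assumption; check the dialog's format hint), such a schedule would be:
```
# hypothetical Quartz cron expression: once a day at 5:00 a.m.
0 0 5 * * ? *
```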
@@ -211,7 +209,7 @@
</p>
Note: If **kerberos** is turned on, you need to fill in **Principal**
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61784847-0adcee80-ae3d-11e9-8ac7-ba8a13aef90c.png" width="60%" />
</p>
@@ -330,7 +328,7 @@ conf/common/hadoop.properties
<img src="https://user-images.githubusercontent.com/53217792/61841562-c6e2fb80-aec7-11e9-9481-4202d63dab6f.png" width="60%" />
</p>
## Security
- The security module provides queue management, tenant management, user management, warning group management, worker group management, token management, and other functions. It can also authorize resources, data sources, projects, etc.
- Administrator login, default username/password: admin/escheduler123
@@ -433,11 +431,8 @@ conf/common/hadoop.properties
- 2. Select the project button to authorize the project
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs/images/auth-project.png" width="60%" />
</p>
### Monitor center
- Service management mainly monitors and displays the health status and basic information of each service in the system.
@@ -474,7 +469,7 @@ conf/common/hadoop.properties
### Shell
- The shell node: when the worker executes it, a temporary shell script is generated and executed by a Linux user with the same name as the tenant.
> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_SHELL.png) task node in the toolbar onto the palette and double-click the task node as follows:
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61843728-6788e980-aecf-11e9-8006-241a7ec5024b.png" width="60%" />
@@ -506,7 +501,7 @@ conf/common/hadoop.properties
- Dependent nodes are **dependency-checking nodes**. For example, process A depends on the successful execution of process B yesterday, and the dependent node checks whether process B has a successful execution instance yesterday.
> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_DEPENDENT.png) task node in the toolbar onto the palette and double-click the task node as follows:
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61844369-be8fbe00-aed1-11e9-965d-ddb9aeeba9db.png" width="60%" />
@@ -515,26 +510,24 @@ conf/common/hadoop.properties
> Dependent nodes provide logical judgment functions, such as checking whether yesterday's B process was successful or whether the C process was successfully executed.
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs/images/depend-b-and-c.png" width="80%" />
</p>
> For example, process A is a weekly task and processes B and C are daily tasks. Task A requires that tasks B and C be successfully executed every day of the last week, as shown in the figure:
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs/images/depend-week.png" width="80%" />
</p>
> If weekly task A also needs to be executed successfully on Tuesday:
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs/images/depend-last-tuesday.png" width="80%" />
</p>
### PROCEDURE
- The procedure is executed according to the selected data source.
> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_PROCEDURE.png) task node in the toolbar onto the palette and double-click the task node as follows:
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61844464-1af2dd80-aed2-11e9-9486-6cf1b8585aa5.png" width="60%" />
@@ -551,7 +544,7 @@ conf/common/hadoop.properties
</p>
- When executing the query SQL function, you can choose to send the results by mail, as tables or attachments, to designated recipients.
> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_SQL.png) task node in the toolbar onto the palette and double-click the task node as follows:
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61850594-4d5b0580-aee7-11e9-9c9e-1934c91962b9.png" width="60%" />
@@ -570,7 +563,7 @@ conf/common/hadoop.properties
- Through the SPARK node, a Spark program can be executed directly. For the Spark node, the worker uses `spark-submit` to submit the task.
> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_SPARK.png) task node in the toolbar onto the palette and double-click the task node as follows:
>
>
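For orientation, a sketch of the kind of command the worker ends up issuing for such a node; the main class, jar path, and resource sizes are illustrative assumptions:
```
# hypothetical spark-submit invocation assembled by the worker for a Spark node;
# class name, jar path, and resource numbers are assumptions
spark-submit --master yarn --deploy-mode cluster \
  --class com.example.etl.DailyJob \
  --driver-memory 1g --executor-memory 2g --num-executors 2 \
  /path/to/your-app.jar
```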
@@ -595,7 +588,7 @@ Note: JAVA and Scala are just used for identification, no difference. If it's a
- Using the MR node, an MR program can be executed directly. For the MR node, the worker submits the task using `hadoop jar`; a sketch of the equivalent command line follows.
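The jar name, main class, and HDFS paths below are illustrative assumptions:
```
# hypothetical hadoop jar invocation equivalent to what the worker runs for an MR node;
# jar, main class, and paths are assumptions
hadoop jar /path/to/your-mr-job.jar com.example.mr.WordCount \
  /input/path /output/path
```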
> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_MR.png) task node in the toolbar onto the palette and double-click the task node as follows:
1. JAVA program
@@ -631,7 +624,7 @@ Note: JAVA and Scala are just used for identification, no difference. If it's a
> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_PYTHON.png) task node in the toolbar onto the palette and double-click the task node as follows:
<p align="center">
<img src="https://user-images.githubusercontent.com/53217792/61851959-daec2480-aeea-11e9-83fd-3e00a030cb84.png" width="60%" />
@@ -690,9 +683,8 @@ Note: JAVA and Scala are just used for identification, no difference. If it's a
> User-defined parameters are divided into global parameters and local parameters. Global parameters are passed when the process definition and process instance are saved, and can be referenced by the local parameters of any task node in the whole process.
> For example:
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs/images/save-global-parameters.png" width="60%" />
</p>
> global_bizdate is a global parameter, referring to system parameters.
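As a sketch of how such a parameter is consumed, assuming the `${param}` reference syntax used for parameters throughout this manual, a shell node could reference it like this:
```
# hypothetical shell-node script referencing the global parameter above
echo "loading data for business date ${global_bizdate}"
```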
......
@@ -5,7 +5,7 @@
## 1. Preparations
#### Download the installation package
Please download the latest version of the installation package, download address: [gitee download](https://gitee.com/easyscheduler/EasyScheduler/attach_files/) or [github download](https://github.com/analysys/EasyScheduler/releases)
After downloading escheduler-ui-x.x.x.tar.gz, decompress it with `tar -zxvf escheduler-ui-x.x.x.tar.gz ./` and enter the `escheduler-ui` directory
......
@@ -4,7 +4,7 @@
## 1. Preparations
Please download the latest version of the installation package, download address: [gitee download](https://gitee.com/easyscheduler/EasyScheduler/attach_files/) or [github download](https://github.com/analysys/EasyScheduler/releases); download escheduler-backend-x.x.x.tar.gz (the back end, referred to as escheduler-backend) and escheduler-ui-x.x.x.tar.gz (the front end, referred to as escheduler-ui)
#### Preparation 1: Install basic software (please install the required items yourself)
......
@@ -37,13 +37,13 @@
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/project.png" width="60%" />
</p>
* Click "Workflow Definition" -> "Create Workflow Definition" -> put the workflow definition online
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/dag1.png" width="60%" />
</p>
* Run the workflow definition -> click "Workflow Instance" -> click the workflow instance name -> double-click a task node -> view the task execution log
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/task-log.png" width="60%" />
......
@@ -15,16 +15,16 @@
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/project.png" width="60%" />
</p>
> The project home page contains task status statistics, process status statistics, and workflow definition statistics
- Task status statistics: within a specified time range, count the number of task instances that are to-run, failed, running, completed, and succeeded
- Process status statistics: within a specified time range, count the number of workflow instances that are to-run, failed, running, completed, and succeeded
- Workflow definition statistics: count the workflow definitions created by the user and the workflow definitions granted to the user by the administrator
### Create a workflow definition
- Go to the project home page, click "Workflow Definition", and enter the workflow definition list page.
- Click "Create Workflow" to create a new workflow definition.
- Drag a "SHELL" node onto the canvas to add a new shell task.
- Fill in the "Node Name", "Description", and "Script" fields.
- Select the "Task Priority": tasks with a higher level are executed first in the execution queue, and tasks with the same priority are executed in first-in-first-out order.
@@ -45,7 +45,7 @@
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/dag3.png" width="60%" />
</p>
- Click "Save", enter the workflow definition name and description, and set the global parameters; see [custom parameters](#用户自定义参数)
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/dag4.png" width="60%" />
@@ -53,9 +53,9 @@
- For other node types, see [task node types and parameter settings](#任务节点类型和参数设置)
### Run a workflow definition
- **A workflow definition that is not online can be edited but not run**, so first put the workflow online
> Click "Workflow Definition" to return to the workflow definition list, and click the "Online" icon to put the workflow definition online.
> Before taking a workflow definition offline, first take its schedules offline in timing management; only then can the workflow definition be taken offline successfully
@@ -92,8 +92,8 @@
- Putting a schedule online: **a newly created schedule is offline and needs "Timing Management -> Online" to be clicked before it works**
### View workflow instances
> Click "Workflow Instance" to view the workflow instance list.
> Click a workflow name to view the task execution status.
@@ -107,7 +107,7 @@
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/task-log.png" width="60%" />
</p>
> Click a task instance node and click **View History** to see the list of task instances run by this workflow instance
<p align="center">
<img src="https://analysys.github.io/EasyScheduler/zh_CN/images/task_history.png" width="60%" />
@@ -120,14 +120,14 @@
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/instance-list.png" width="60%" />
</p>
* Edit: a terminated process can be edited; when saving after editing, you can choose whether to update the workflow definition.
* Rerun: a terminated process can be executed again.
* Recover failure: for a failed process, a recover-failure operation can be performed, resuming execution from the failed node.
* Stop: a **stop** operation on a running process; the backend first issues `kill` to the worker process and then executes `kill -9`
* Pause: a **pause** operation on a running process; the system status changes to **waiting to execute**, waits for the currently executing task to finish, and pauses the next task to be executed.
* Resume pause: a paused process can be resumed, starting directly from the **paused node**
* Delete: deletes the workflow instance and the task instances under the workflow instance
* Gantt chart: the vertical axis of the Gantt chart is the topological order of the task instances under a workflow instance, and the horizontal axis is the running time of the task instances, as shown:
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/gant-pic.png" width="60%" />
</p>
@@ -343,8 +343,8 @@ conf/common/hadoop.properties
### Create an ordinary user
- Users are divided into **administrator users** and **ordinary users**
  * Administrators have permissions such as **authorization and user management**, but no permission to **create projects and workflow definitions**
  * Ordinary users can **create projects and create, edit, and execute workflow definitions**.
  * Note: **if a user switches tenants, all resources under the user's original tenant are copied to the new tenant**
<p align="center">
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/useredit2.png" width="60%" />
@@ -465,7 +465,7 @@ conf/common/hadoop.properties
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/shell_edit.png" width="60%" />
</p>
- Node name: node names are unique within a workflow definition
- Run flag: indicates whether this node can be scheduled normally; if it does not need to be executed, turn on the forbid-execution switch.
- Description: describes the function of this node
- Failure retry count: the number of times a failed task is resubmitted; supports selection from a drop-down list or manual entry
@@ -482,10 +482,10 @@ conf/common/hadoop.properties
<img src="https://analysys.github.io/easyscheduler_docs_cn/images/subprocess_edit.png" width="60%" />
</p>
- Node name: node names are unique within a workflow definition
- Run flag: indicates whether this node can be scheduled normally
- Description: describes the function of this node
- Child node: selects the workflow definition of a sub-process; via the upper-right corner you can jump to the workflow definition of the selected sub-process
### Dependent (DEPENDENT) node
- A dependent node is a **dependency-checking node**. For example, process A depends on the successful execution of process B yesterday, and the dependent node checks whether process B has a successfully executed instance yesterday.
@@ -658,7 +658,7 @@ conf/common/hadoop.properties
### User-defined parameters
> User-defined parameters are divided into global parameters and local parameters. Global parameters are passed when the workflow definition and workflow instance are saved, and can be referenced by the local parameters of any task node in the whole process.
> For example:
......