Unverified commit eee22fb8, authored by Wing, committed via GitHub

Refine backend-cluster.md (#7009)

Parent: 379c4e1a
# Cluster Management
In many production environments, the backend needs to support high throughput and provide HA to maintain robustness, so cluster management is always required in production.

The backend provides several ways to manage the cluster. Choose the one that best suits your needs.
- [Zookeeper coordinator](#zookeeper-coordinator). Use Zookeeper to let the backend instances detect and communicate with each other.
- [Kubernetes](#kubernetes). When the backend cluster is deployed inside Kubernetes, you can use the k8s native APIs to manage the cluster.
- [Consul](#consul). Use Consul as the backend cluster management implementor to coordinate backend instances.
- [Etcd](#etcd). Use Etcd to coordinate backend instances.
- [Nacos](#nacos). Use Nacos to coordinate backend instances.
In the `application.yml` file, there are default configurations for the aforementioned coordinators under the `cluster` section.
You can specify one of them in the `selector` property to enable it.
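For example, a minimal sketch of selecting a coordinator; the `standalone` default and the `${SW_CLUSTER:...}` environment-variable override syntax follow the usual `application.yml` template and may differ in your version:

```yaml
cluster:
  selector: ${SW_CLUSTER:standalone}  # switch to zookeeper, kubernetes, consul, etcd or nacos
  # one sub-section per coordinator follows; only the selected one takes effect
```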
## Zookeeper coordinator
Zookeeper is a very common and widely used cluster coordinator. Set the **cluster/selector** to **zookeeper** in the yml to enable it.

Required Zookeeper version: 3.4+
```yaml
cluster:
  # ...
```
- `hostPort`, `baseSleepTimeMs` and `maxRetries` are settings of the Zookeeper Curator client.
Note:
- If `Zookeeper ACL` is enabled and `/skywalking` exists, make sure `SkyWalking` has `CREATE`, `READ` and `WRITE` permissions. If `/skywalking` does not exist, it will be created by SkyWalking, and all permissions will be granted to the specified user; at the same time, the znode grants READ permission to anyone.
- If you set `schema` as `digest`, the password of the expression is set in **clear text**.
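For reference, a Zookeeper cluster section typically looks like the sketch below; the environment-variable names and defaults are illustrative and may differ between SkyWalking versions:

```yaml
cluster:
  selector: ${SW_CLUSTER:zookeeper}
  zookeeper:
    nameSpace: ${SW_NAMESPACE:""}
    hostPort: ${SW_CLUSTER_ZK_HOST_PORT:localhost:2181}
    # Retry policy for the Curator client
    baseSleepTimeMs: ${SW_CLUSTER_ZK_SLEEP_TIME:1000} # initial amount of time to wait between retries
    maxRetries: ${SW_CLUSTER_ZK_MAX_RETRIES:3} # max number of times to retry
    # ACL settings
    enableACL: ${SW_ZK_ENABLE_ACL:false} # disabled by default
    schema: ${SW_ZK_SCHEMA:digest} # only the digest schema is supported
    expression: ${SW_ZK_EXPRESSION:skywalking:skywalking} # username:password, in clear text
```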
In some cases, the OAP default gRPC host and port in core are not suitable for internal communication among the OAP nodes.
The following settings are provided to set the host and port manually, based on your own LAN environment:
- internalComHost: the host that is registered; other OAP nodes use it to communicate with the current node.
- internalComPort: the port that is registered; other OAP nodes use it to communicate with the current node.
```yaml
zookeeper:
  # ...
```
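A sketch of where these settings live under the `zookeeper` sub-section; the host and port values below are placeholders for your own LAN address:

```yaml
zookeeper:
  hostPort: ${SW_CLUSTER_ZK_HOST_PORT:localhost:2181}
  # the host and port other OAP nodes use to reach this node
  internalComHost: 172.10.4.10
  internalComPort: 11800
```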
## Kubernetes
This requires the backend cluster to be deployed inside Kubernetes. See the guide in [Deploy in kubernetes](backend-k8s.md).
Set the selector to `kubernetes`.
```yaml
cluster:
  # ...
```
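An illustrative sketch of the Kubernetes sub-section; the namespace, label selector, and UID environment-variable name are examples only, should match your own deployment, and may differ by SkyWalking version:

```yaml
cluster:
  selector: ${SW_CLUSTER:kubernetes}
  kubernetes:
    namespace: ${SW_CLUSTER_K8S_NAMESPACE:default}
    labelSelector: ${SW_CLUSTER_K8S_LABEL:app=collector,release=skywalking}
    uidEnvName: ${SW_CLUSTER_K8S_UID:SKYWALKING_COLLECTOR_UID}
```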
## Consul
Consul has become a popular system, and many companies and developers use it as their service discovery solution. Set the **cluster/selector** to **consul** in the yml to enable it.
```yaml
cluster:
  # other configurations
```
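An illustrative expansion of the section above; the service name, Consul address, and ACL token keys are typical defaults and may differ by version:

```yaml
cluster:
  selector: ${SW_CLUSTER:consul}
  consul:
    serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
    # Consul agent host:port
    hostPort: ${SW_CLUSTER_CONSUL_HOST_PORT:localhost:8500}
    aclToken: ${SW_CLUSTER_CONSUL_ACLTOKEN:""}
```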
Same as with the Zookeeper coordinator, in some cases the OAP default gRPC host and port in core are not suitable for internal communication among the OAP nodes.
The following settings are provided to set the host and port manually, based on your own LAN environment:
- internalComHost: the host that is registered; other OAP nodes use it to communicate with the current node.
- internalComPort: the port that is registered; other OAP nodes use it to communicate with the current node.
## Etcd
Set the **cluster/selector** to **etcd** in the yml to enable it.
```yaml
cluster:
  # other configurations
```
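An illustrative expansion of the section above; the etcd keys have changed across SkyWalking versions (for example, older releases use a `hostPort`-style address while newer ones use `endpoints`), so treat this as a sketch only:

```yaml
cluster:
  selector: ${SW_CLUSTER:etcd}
  etcd:
    serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
    # etcd server address(es)
    hostPort: ${SW_CLUSTER_ETCD_HOST_PORT:localhost:2379}
```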
Same as with the Zookeeper coordinator, in some cases the OAP default gRPC host and port in core are not suitable for internal communication among the OAP nodes.
The following settings are provided to set the host and port manually, based on your own LAN environment:
- internalComHost: the host that is registered; other OAP nodes use it to communicate with the current node.
- internalComPort: the port that is registered; other OAP nodes use it to communicate with the current node.
## Nacos
Set the **cluster/selector** to **nacos** in the yml to enable it.
```yaml
cluster:
  # other configurations
```
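An illustrative expansion of the section above; the service name, Nacos address, and namespace keys are typical defaults and may differ by version:

```yaml
cluster:
  selector: ${SW_CLUSTER:nacos}
  nacos:
    serviceName: ${SW_SERVICE_NAME:"SkyWalking_OAP_Cluster"}
    hostPort: ${SW_CLUSTER_NACOS_HOST_PORT:localhost:8848}
    # Nacos namespace used for OAP cluster registration
    namespace: ${SW_CLUSTER_NACOS_NAMESPACE:"public"}
```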
Nacos supports authentication by username or accessKey; leaving these settings empty means no authentication is needed. The extra config is as follows:
```yaml
nacos:
  username:
  # ...
  secretKey:
```
Same as with the Zookeeper coordinator, in some cases the OAP default gRPC host and port in core are not suitable for internal communication among the OAP nodes.
The following settings are provided to set the host and port manually, based on your own LAN environment:
- internalComHost: the host that is registered; other OAP nodes use it to communicate with the current node.
- internalComPort: the port that is registered; other OAP nodes use it to communicate with the current node.