Unverified commit cf08a2da, authored by Wing, committed by GitHub

refine backend doc (#7451)

Parent: dc055393
# Start up mode
In different deployment tools, such as k8s, you may need different startup modes.
We provide two other optional startup modes.
## Default mode
The default mode carries out initialization tasks as necessary, starts listening, and provides services.
Run `/bin/oapService.sh`(.bat) to start in this mode. This is also applicable when you're using `startup.sh`(.bat) to start.
## Init mode
In this mode, the OAP server starts up to carry out initialization, and then exits.
You could use this mode to initialize your storage (such as ElasticSearch indexes, MySQL, and TiDB tables),
as well as your data.
Run `/bin/oapServiceInit.sh`(.bat) to start in this mode.
## No-init mode
In this mode, the OAP server starts up without carrying out initialization. Instead, it waits for the ElasticSearch indexes or MySQL/TiDB tables to exist, then starts listening and provides services. In other words, this OAP server expects another OAP server to have carried out the initialization.
Run `/bin/oapServiceNoInit.sh`(.bat) to start in this mode.
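For instance, on k8s you might run the init mode once as a Job and start the regular replicas in no-init mode. The manifest below is only a sketch: the image tag, script paths, and resource names are assumptions that should be adapted to your own deployment.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: oap-init                  # one-off storage initialization (illustrative name)
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: oap-init
          image: apache/skywalking-oap-server:8.7.0-es7   # illustrative tag
          command: ["bash", "bin/oapServiceInit.sh"]      # init mode: initialize storage, then exit
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oap
spec:
  replicas: 2
  selector:
    matchLabels:
      app: oap
  template:
    metadata:
      labels:
        app: oap
    spec:
      containers:
        - name: oap
          image: apache/skywalking-oap-server:8.7.0-es7
          command: ["bash", "bin/oapServiceNoInit.sh"]    # no-init mode: wait for storage, then serve
```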
# Backend storage
The SkyWalking storage is pluggable. We have provided the following storage solutions, allowing you to easily use one of them by specifying it as the `selector` in `application.yml`:
```yaml
storage:
  selector: ${SW_STORAGE:elasticsearch7}
```
Natively supported storage:
- H2
- OpenSearch
- ElasticSearch 6, 7
## H2
Activate H2 as storage by setting the storage provider to **H2** In-Memory Databases. This is the default in the distribution package.
Please read `Database URL Overview` in [H2 official document](http://www.h2database.com/html/features.html).
You can set the target to H2 in **Embedded**, **Server** and **Mixed** modes.
Setting fragment example:
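A minimal sketch of such a fragment, assuming the default in-memory H2 settings (the `application.yml` bundled with the distribution is authoritative):

```yaml
storage:
  selector: ${SW_STORAGE:h2}
  h2:
    driver: ${SW_STORAGE_H2_DRIVER:org.h2.jdbcx.JdbcDataSource}
    url: ${SW_STORAGE_H2_URL:jdbc:h2:mem:skywalking-oap-db}
    user: ${SW_STORAGE_H2_USER:sa}
    metadataQueryMaxSize: ${SW_STORAGE_H2_QUERY_MAX_SIZE:5000}
```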
## OpenSearch
Please download the `apache-skywalking-bin-es7.tar.gz` if you want to use OpenSearch.
## ElasticSearch
**NOTE:** Elastic announced through their blog that Elasticsearch will be moving over to a Server Side Public
License (SSPL), which is incompatible with Apache License 2.0. This license change is effective from Elasticsearch
version 7.11. So please choose the suitable ElasticSearch version according to your usage.
**ElasticSearch 6.3.2 or higher is required. HTTP RestHighLevelClient is used to connect to the server.**
- For ElasticSearch 6.3.2 ~ 7.0.0 (excluded), please download `apache-skywalking-bin.tar.gz`.
- For ElasticSearch 7.0.0 ~ 8.0.0 (excluded), please download `apache-skywalking-bin-es7.tar.gz`.
For now, ElasticSearch 6 and ElasticSearch 7 share the same configurations as follows:
```yaml
storage:
  # ...
```
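A trimmed sketch of that shared block follows; the key names match the ElasticSearch storage provider, while the defaults shown are illustrative and should be checked against the bundled `application.yml`.

```yaml
storage:
  selector: ${SW_STORAGE:elasticsearch7}
  elasticsearch7:
    nameSpace: ${SW_NAMESPACE:""}
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:localhost:9200}
    protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
    user: ${SW_ES_USER:""}
    password: ${SW_ES_PASSWORD:""}
    indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1}
    indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:1}
    # bulk writing knobs; tune them to your cluster capacity
    bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:1000}
    flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:10}
    concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2}
```

For ElasticSearch 6, the same keys would sit under an `elasticsearch` section with `selector: ${SW_STORAGE:elasticsearch}`.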
### ElasticSearch 6 with HTTPS SSL encrypted communications
Example:
```yaml
storage:
  # ...
    protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"https"}
    ...
```
- The file at `trustStorePath` is being monitored. Once it is changed, the ElasticSearch client will reconnect. See the sketch below for where these settings sit.
- `trustStorePass` can be changed at runtime through the [**Secrets Management File Of ElasticSearch Authentication**](#secrets-management-file-of-elasticsearch-authentication).
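For reference, the two trust-store settings usually sit next to `protocol` in the same block; the environment-variable names below are assumptions.

```yaml
storage:
  elasticsearch:
    # ...
    trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:"../es_keystore.jks"}
    trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}
    protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"https"}
```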
### Daily Index Step
Daily index step (`storage/elasticsearch/dayStep`, default 1) represents the index creation period. In this period, metrics for several days (the dayStep value) are saved.
In most cases, users don't need to change the value manually, as SkyWalking is designed to observe large scale distributed systems.
But in some cases, users may want to set a long TTL value, such as more than 60 days. However, their ElasticSearch cluster may not be powerful enough due to low traffic in the production environment.
The value could be increased to 5 (or more) if users can ensure that a single index supports the metrics and traces for these days (5 in this case).
For example, if dayStep == 11,
1. Data in [2000-01-01, 2000-01-11] will be merged into the index-20000101.
1. Data in [2000-01-12, 2000-01-22] will be merged into the index-20000112.
`storage/elasticsearch/superDatasetDayStep` overrides the `storage/elasticsearch/dayStep` if the value is positive.
This would affect record-related entities, such as trace segments. In some cases, the size of metrics is much smaller than that of records (traces), so this helps balance the shards in the ElasticSearch cluster.
NOTE: TTL deletion would be affected by these steps. You should add an extra dayStep to your TTL. For example, if you want TTL == 30 days and dayStep == 10, you are recommended to set TTL = 40.
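A sketch of the two step settings discussed above (the environment-variable names are assumptions; the values are illustrative):

```yaml
storage:
  elasticsearch7:
    # one physical index holds `dayStep` days of metrics
    dayStep: ${SW_STORAGE_DAY_STEP:1}
    # overrides dayStep for records (traces, logs) when set to a positive value
    superDatasetDayStep: ${SW_SUPERDATASET_STORAGE_DAY_STEP:-1}
```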
### Secrets Management File Of ElasticSearch Authentication
The value of `secretsManagementFile` should point to the absolute path of the secrets management file.
The file includes username, password, and JKS password of the ElasticSearch server in the properties format.
```properties
user=xxx
password=yyy
trustStorePass=zzz
```
The major difference from using the `user`, `password`, and `trustStorePass` configs in the `application.yaml` file is that the **Secrets Management File** is watched by the OAP server.
Once it is changed manually or through a 3rd party tool, such as [Vault](https://github.com/hashicorp/vault),
the storage provider will use the new username, password, and JKS password to establish the connection and close the old one. If the information exists in the file,
the `user/password` will be overridden.
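To enable this, point the storage provider at the file; a minimal sketch (the environment-variable name is an assumption):

```yaml
storage:
  elasticsearch7:
    # absolute path, e.g. /skywalking/secrets/es-credentials.properties
    secretsManagementFile: ${SW_ES_SECRETS_MANAGEMENT_FILE:""}
```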
### Advanced Configurations For Elasticsearch Index
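As a sketch of what this section covers, the provider exposes shard and replica counts plus an `advanced` pass-through for raw index settings; the key names below should be verified against the bundled `application.yml`.

```yaml
storage:
  elasticsearch7:
    indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1}
    indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:1}
    # raw ElasticSearch index settings passed through as JSON,
    # e.g. {"index.translog.durability":"request","index.refresh_interval":"30s"}
    advanced: ${SW_STORAGE_ES_ADVANCED:""}
```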
### Recommended ElasticSearch server-side configurations
You could add the following configuration to `elasticsearch.yml`, and set the value based on your environment.
```yml
# In the tracing scenario, consider setting at least these values.
thread_pool.write.queue_size: 1000 # Suitable for ElasticSearch 6 and 7
index.max_result_window: 1000000
```
We strongly recommend that you read more about these configurations from ElasticSearch's official document, since they have a direct impact on the performance of ElasticSearch.
### ElasticSearch 7 with Zipkin trace extension
This implementation is very similar to `elasticsearch7`, except that it extends to support Zipkin span storage.
The configurations are largely the same.
```yaml
storage:
  selector: ${SW_STORAGE:zipkin-elasticsearch7}
  # ...
```
### About Namespace
When a namespace is set, all index names in ElasticSearch will use it as a prefix.
## MySQL
Activate MySQL as storage, and set the storage provider to **mysql**.
**NOTE:** MySQL driver is NOT allowed in Apache official distribution and source codes.
Please download MySQL driver on your own. Copy the connection driver jar to `oap-libs`.
```yaml
storage:
  # ...
    dataSource.useServerPrepStmts: ${SW_DATA_SOURCE_USE_SERVER_PREP_STMTS:true}
    metadataQueryMaxSize: ${SW_STORAGE_MYSQL_QUERY_MAX_SIZE:5000}
```
All connection-related settings, including the connection URL, username, and password, are found in `application.yml`.
Only part of the settings are listed here. See the [HikariCP](https://github.com/brettwooldridge/HikariCP) connection pool document for full settings.
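A sketch of commonly tuned pool properties, using HikariCP's `dataSource.*` pass-through convention (the environment-variable names are assumed from the default distribution and the values are illustrative):

```yaml
storage:
  mysql:
    properties:
      jdbcUrl: ${SW_JDBC_URL:"jdbc:mysql://localhost:3306/swtest"}
      dataSource.user: ${SW_DATA_SOURCE_USER:root}
      dataSource.password: ${SW_DATA_SOURCE_PASSWORD:root@1234}
      dataSource.cachePrepStmts: ${SW_DATA_SOURCE_CACHE_PREP_STMTS:true}
      dataSource.prepStmtCacheSize: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_SIZE:250}
      dataSource.prepStmtCacheSqlLimit: ${SW_DATA_SOURCE_PREP_STMT_CACHE_SQL_LIMIT:2048}
```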
## TiDB
TiDB Server 4.0.8 and MySQL Client driver 8.0.13 are the currently tested versions.
Activate TiDB as storage, and set the storage provider to **tidb**.
```yaml
storage:
  # ...
    maxSizeOfArrayColumn: ${SW_STORAGE_MAX_SIZE_OF_ARRAY_COLUMN:20}
    numOfSearchableValuesPerTag: ${SW_STORAGE_NUM_OF_SEARCHABLE_VALUES_PER_TAG:2}
```
All connection-related settings, including the connection URL, username, and password, are found in `application.yml`.
For details on the settings, refer to the configuration of *MySQL* above.
## InfluxDB
InfluxDB storage provides a time-series database as a new storage option.
```yaml
storage:
  # ...
    duration: ${SW_STORAGE_INFLUXDB_DURATION:1000} # the time to wait at most (milliseconds)
    fetchTaskLogMaxSize: ${SW_STORAGE_INFLUXDB_FETCH_TASK_LOG_MAX_SIZE:5000} # the max number of fetch task logs in a request
```
All connection-related settings, including the connection URL, username, and password, are found in `application.yml`. For metadata storage provider settings, refer to the configurations of **H2/MySQL** above.
## PostgreSQL
The PostgreSQL JDBC driver uses version 42.2.18, which supports PostgreSQL 8.2 or newer.
Activate PostgreSQL as storage, and set the storage provider to **postgresql**.
```yaml
storage:
  # ...
    maxSizeOfArrayColumn: ${SW_STORAGE_MAX_SIZE_OF_ARRAY_COLUMN:20}
    numOfSearchableValuesPerTag: ${SW_STORAGE_NUM_OF_SEARCHABLE_VALUES_PER_TAG:2}
```
All connection-related settings, including the connection URL, username, and password, are found in `application.yml`.
Only some of the settings are listed here. See the [HikariCP](https://github.com/brettwooldridge/HikariCP) connection pool document for the full settings.
## More storage extension solutions
Follow the [Storage extension development guide](../../guides/storage-extention.md)
in the [Project Extensions document](../../guides/README.md#project-extensions).
# Telemetry for backend
The OAP backend cluster itself is a distributed stream processing system. To assist the Ops team,
we provide telemetry for the OAP backend itself.
By default, the telemetry is disabled by setting `selector` to `none`, like this:
```yaml
telemetry:
  selector: ${SW_TELEMETRY:none}
  prometheus:
    # ...
    sslCertChainPath: ${SW_TELEMETRY_PROMETHEUS_SSL_CERT_CHAIN_PATH:""}
```
You can set the selector to `prometheus` to enable it. For more information, refer to the details below.
## Self Observability
### Static IP or hostname
SkyWalking supports collecting telemetry data into the OAP backend directly. Users could then check it through the UI or GraphQL API.
Add the following configuration to enable self-observability related modules.
1. Set up prometheus telemetry.
```yaml
telemetry:
  selector: ${SW_TELEMETRY:prometheus}
  prometheus:
    # ...
    port: 1543
```
2. Set up prometheus fetcher.
```yaml
prometheus-fetcher:
  # ...
```
3. Make sure `config/fetcher-prom-rules/self.yaml` exists.
Once you deploy an oap-server cluster, the target host should be replaced with a dedicated IP or hostname. For instance, suppose there are three OAP servers in your cluster, whose hosts are `service1`, `service2`, and `service3` respectively. You should update each `self.yaml` to switch the target host.
service1:
```yaml
staticConfig:
  ...
```
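As a rough sketch (the exact field layout should follow the existing entries in `self.yaml`; everything below is illustrative), the static configuration on `service1` would point at that host's telemetry endpoint, and the other two files would differ only in the host name:

```yaml
staticConfig:
  targets:
    - url: http://service1:1234   # service2 / service3 in the other two files
```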
### Service discovery (k8s)
If you deploy an oap-server cluster on k8s, the oap-server instance (pod) would not have a static IP or hostname. We can leverage [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/getting-started/#kubernetes) to discover the oap-server instance, and scrape & transfer the metrics to OAP [OpenTelemetry receiver](backend-receivers.md#opentelemetry-receiver).
For instructions on how to install SkyWalking on k8s, refer to [Apache SkyWalking Kubernetes](https://github.com/apache/skywalking-kubernetes).
Set this up following these steps:
1. Set up oap-server.
- Set the metrics port.
```
prometheus-port: 1234
```
- Set environment variables.
```
SW_TELEMETRY=prometheus
SW_OTEL_RECEIVER=default
SW_OTEL_RECEIVER_ENABLED_OC_RULES=oap
```
Here is an example to install by Apache SkyWalking Kubernetes:
```
helm -n istio-system install skywalking skywalking \
  --set elasticsearch.replicas=1 \
  ...
```
For the full example for OpenTelemetry Collector configuration and recommended version, you can refer to [otel-collector-oap.yaml](otel-collector-oap.yaml).
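A trimmed sketch of such a collector configuration is shown below: the Prometheus receiver discovers OAP pods via Kubernetes service discovery, and the OpenCensus exporter forwards the metrics to the OAP gRPC port. The pod label, endpoint, and port are assumptions for this sketch.

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'skywalking-oap'
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            # keep only the oap-server pods; the label name is an assumption
            - source_labels: [__meta_kubernetes_pod_label_app]
              regex: skywalking-oap
              action: keep
exporters:
  opencensus:
    endpoint: 'skywalking-oap.istio-system:11800'   # OAP gRPC endpoint; adjust to your install
    insecure: true
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [opencensus]
```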
___
**NOTE**: Since Apr 21, 2021, the **Grafana** project has been relicensed to **AGPL-v3**, and is no longer under the Apache 2.0 license. Check the LICENSE details.
The following Prometheus + Grafana solution is optional, rather than recommended.
## Prometheus
Prometheus is supported as a telemetry implementor, which collects metrics from SkyWalking's backend.
Set the provider to `prometheus`. The endpoint opens at `http://0.0.0.0:1234/` and `http://0.0.0.0:1234/metrics`.
```yaml
telemetry:
  selector: ${SW_TELEMETRY:prometheus}
  prometheus:
    # ...
    port: 1543
```
Set the relevant SSL settings to expose a secure endpoint. Note that the private key file and cert chain file are reloaded once changes are applied to them.
```yaml
telemetry:
  # ...
```
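A sketch of the SSL-related settings; `sslCertChainPath` appears earlier in this document, while the enable flag, key path, host, and port names are assumptions to be checked against the bundled `application.yml`.

```yaml
telemetry:
  selector: ${SW_TELEMETRY:prometheus}
  prometheus:
    host: ${SW_TELEMETRY_PROMETHEUS_HOST:0.0.0.0}
    port: ${SW_TELEMETRY_PROMETHEUS_PORT:1234}
    sslEnabled: ${SW_TELEMETRY_PROMETHEUS_SSL_ENABLED:true}
    sslKeyPath: ${SW_TELEMETRY_PROMETHEUS_SSL_KEY_PATH:""}
    sslCertChainPath: ${SW_TELEMETRY_PROMETHEUS_SSL_CERT_CHAIN_PATH:""}
```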
### Grafana Visualization
Provide the Grafana dashboard settings.
Check [SkyWalking OAP Cluster Monitor Dashboard](grafana-cluster.json) config and [SkyWalking OAP Instance Monitor Dashboard](grafana-instance.json) config.
# Token Authentication
## Supported version
7.0.0+
## Why do we need token authentication after TLS?
TLS is about transport security, which makes sure that a network can be trusted.
On the other hand, token authentication is about monitoring **whether application data can be trusted**.
## Token
In the current version, a token is considered a simple string.
### Set Token
1. Set the token in the `agent.config` file.
```yaml
receiver-sharing-server:
  ······
```
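For reference, the agent side carries the token via the `agent.authentication` property in `agent.config`, and the OAP side must hold the matching value under the sharing server. A minimal sketch (the nesting and the environment-variable name are assumptions):

```yaml
receiver-sharing-server:
  selector: ${SW_RECEIVER_SHARING_SERVER:default}
  default:
    # must equal the token configured on the agents
    authentication: ${SW_AUTHENTICATION:""}
```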
## Authentication failure
The SkyWalking OAP verifies every request from the agent, and only allows requests whose token matches the one configured in `application.yml` to pass through.
If the token does not match, you will see the following log in the agent:
```
org.apache.skywalking.apm.dependencies.io.grpc.StatusRuntimeException: PERMISSION_DENIED
```
## FAQ
### Can I use token authentication instead of TLS?
No, you shouldn't. Of course it's technically possible, but token and TLS are used for untrusted network environments. In these circumstances,
TLS has a higher priority. Tokens can be trusted only under TLS protection, and they can be easily stolen if sent through a non-TLS network.
### Do you support other authentication mechanisms, such as ak/sk?
Not for now. But we welcome contributions on this feature.
# VMs monitoring
SkyWalking leverages Prometheus node-exporter to collect metrics data from the VMs, and leverages OpenTelemetry Collector to transfer the metrics to
[OpenTelemetry receiver](backend-receivers.md#opentelemetry-receiver) and into the [Meter System](./../../concepts-and-designs/meter.md).
We define the VM entity as a `Service` in OAP, and use `vm::` as a prefix to identify it.
## Data flow
1. The Prometheus node-exporter collects metrics data from the VMs.
2. The OpenTelemetry Collector fetches metrics from node-exporter via Prometheus Receiver and pushes metrics to the SkyWalking OAP Server via the OpenCensus gRPC Exporter.
3. The SkyWalking OAP Server parses the expression with [MAL](../../concepts-and-designs/mal.md) to filter/calculate/aggregate and store the results.
## Setup
1. Set up [Prometheus node-exporter](https://prometheus.io/docs/guides/node-exporter/).
2. Set up [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/). Here is an example of the OpenTelemetry Collector configuration: [otel-collector-config.yaml](../../../../test/e2e/e2e-test/docker/promOtelVM/otel-collector-config.yaml).
3. Configure the SkyWalking [OpenTelemetry receiver](backend-receivers.md#opentelemetry-receiver).
## Supported Metrics
| Monitoring Panel | Unit | Metric Name | Description | Data Source |
|------------------|------|-------------|-------------|-------------|
| CPU Average Used | % | meter_vm_cpu_average_used | The percentage usage of the CPU core in each mode | Prometheus node-exporter |
| CPU Load | | meter_vm_cpu_load1<br />meter_vm_cpu_load5<br />meter_vm_cpu_load15 | The CPU 1m / 5m / 15m average load | Prometheus node-exporter |
| Memory RAM | MB | meter_vm_memory_total<br />meter_vm_memory_available<br />meter_vm_memory_used | The RAM statistics, including Total / Available / Used | Prometheus node-exporter |
| Memory Swap | MB | meter_vm_memory_swap_free<br />meter_vm_memory_swap_total | Swap memory statistics, including Free / Total | Prometheus node-exporter |
| File System Mountpoint Usage | % | meter_vm_filesystem_percentage | The percentage usage of the file system at each mount point | Prometheus node-exporter |
| Disk R/W | KB/s | meter_vm_disk_read,meter_vm_disk_written | The disk read and written | Prometheus node-exporter |
| Network Bandwidth Usage | KB/s | meter_vm_network_receive<br />meter_vm_network_transmit | The network receive and transmit | Prometheus node-exporter |
# Zabbix Receiver
The Zabbix receiver accepts metrics in the [Zabbix Agent Active Checks protocol](https://www.zabbix.com/documentation/current/manual/appendix/items/activepassive#active_checks) format into the [Meter System](./../../concepts-and-designs/meter.md).
The Zabbix Agent is based on the GPL-2.0 License.
## Module definition
```yaml
receiver-zabbix:
  selector: ${SW_RECEIVER_ZABBIX:default}
  # ...
```
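A sketch of the module with its listener settings; the key and environment-variable names are assumptions based on the default distribution.

```yaml
receiver-zabbix:
  selector: ${SW_RECEIVER_ZABBIX:default}
  default:
    # bind address and port for the Zabbix Agent active-check protocol
    port: ${SW_RECEIVER_ZABBIX_PORT:10051}
    host: ${SW_RECEIVER_ZABBIX_HOST:0.0.0.0}
    # rule file names (without the .yaml suffix) to load from $CLASSPATH/zabbix-rules
    activeFiles: ${SW_RECEIVER_ZABBIX_ACTIVE_FILES:agent}
```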
## Configuration file
The Zabbix receiver is configured via a configuration file that defines everything related to receiving
from agents, as well as which rule files to load.
The OAP can load the configuration at bootstrap. If the new configuration is not well-formed, the OAP fails to start up. The files
are located at `$CLASSPATH/zabbix-rules`.
The file is written in YAML format, defined by the scheme described below. Square brackets indicate that a parameter is optional.
An example for Zabbix agent configuration could be found [here](../../../../test/e2e/e2e-test/docker/zabbix/zabbix_agentd.conf).
You could find details on Zabbix agent items from [Zabbix Agent documentation](https://www.zabbix.com/documentation/current/manual/config/items/itemtypes/zabbix_agent).
### Configuration file
```yaml
# ...
name: <string>
exp: <string>
```
For more on MAL, please refer to [mal.md](../../concepts-and-designs/mal.md).