Commit 229140f0 authored by zengqiao

init
### Intellij ###
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and Webstorm
*.iml
## Directory-based project format:
.idea/
# if you remove the above rule, at least ignore the following:
# User-specific stuff:
# .idea/workspace.xml
# .idea/tasks.xml
# .idea/dictionaries
# .idea/shelf
# Sensitive or high-churn files:
.idea/dataSources.ids
.idea/dataSources.xml
.idea/sqlDataSources.xml
.idea/dynamic.xml
.idea/uiDesigner.xml
# Mongo Explorer plugin:
.idea/mongoSettings.xml
## File-based project format:
*.ipr
*.iws
## Plugin-specific files:
# IntelliJ
/out/
# mpeltonen/sbt-idea plugin
.idea_modules/
# JIRA plugin
atlassian-ide-plugin.xml
# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties
### Java ###
*.class
# Mobile Tools for Java (J2ME)
.mtj.tmp/
# Package Files #
*.jar
*.war
*.ear
# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
hs_err_pid*
### OSX ###
.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two \r
Icon
# Thumbnails
._*
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
/target
target/
*.log
*.log.*
*.bak
*.vscode
*/.vscode/*
*/.vscode
*/velocity.log*
*/*.log
*/*.log.*
web/node_modules/
web/node_modules/*
workspace.xml
/output/*
.gitversion
*/node_modules/*
*/templates/*
*/out/*
*/dist/*
 Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (C) 2017 Beijing Didi Infinity Technology and Development Co.,Ltd. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
---
![kafka-manager-logo](./docs/assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
## Key Features
### Cluster Monitoring
- Multi-version cluster management, supporting Kafka versions from `0.10.2` to `2.4`;
- Historical and real-time key metrics for Topics, Brokers, and other cluster dimensions;
### Cluster Operations
- Cluster operations, including managing clusters as logical Regions;
- Broker operations, including preferred replica election;
- Topic operations, including creation, query, partition expansion, property modification, data sampling, and migration;
- Consumer group operations, including resetting consume offsets either to a specified time or to specified offsets;
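Resetting a consumer group's offsets to a point in time conceptually means finding, per partition, the earliest offset whose message timestamp is at or after the target time (the Kafka consumer API exposes this as `offsetsForTimes`). A simplified standalone sketch of that lookup, using hypothetical in-memory data rather than this project's API:

```java
import java.util.TreeMap;

// Simplified illustration of "reset consume offsets to a timestamp":
// given a per-partition index of message timestamp -> offset, find the
// earliest offset whose timestamp is at or after the target time.
public class OffsetResetSketch {

    // Hypothetical sample data: three messages on one partition.
    public static TreeMap<Long, Long> sampleIndex() {
        TreeMap<Long, Long> index = new TreeMap<>();
        index.put(1000L, 0L); // message at t=1000ms sits at offset 0
        index.put(2000L, 5L);
        index.put(3000L, 9L);
        return index;
    }

    // Returns the offset to seek to, or null if no message is late enough.
    public static Long offsetForTime(TreeMap<Long, Long> timeToOffset, long targetMs) {
        // ceilingEntry returns the first entry whose key is >= targetMs
        java.util.Map.Entry<Long, Long> e = timeToOffset.ceilingEntry(targetMs);
        return e == null ? null : e.getValue();
    }

    public static void main(String[] args) {
        System.out.println(offsetForTime(sampleIndex(), 1500L)); // prints 5
    }
}
```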
### User Experience
- Separate views for administrator and ordinary users;
- Separate permissions for administrator and ordinary users;
## kafka-manager Architecture
![kafka-manager-arch](./docs/assets/images/common/arch.png)
## Documentation
- [kafka-manager installation guide](./docs/install_cn_guide.md)
- [kafka-manager user guide](./docs/user_cn_guide.md)
## DingTalk Group
![dingding_group](./docs/assets/images/common/dingding_group.jpg)
## Project Members
### Core Members
`iceyuhui`, `liuyaguang`, `limengmonty`, `zhangliangmike`, `nullhuangyiming`, `zengqiao`, `eilenexuzhe`, `huangjiaweihjw`
### External Contributors
`fangjunyu`, `zhoutaiyang`
## License
`kafka-manager` is distributed and used under the `Apache-2.0` license; see the [LICENSE](./LICENSE) file for more information.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.xiaojukeji.kafka</groupId>
    <artifactId>kafka-manager-common</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <parent>
        <artifactId>kafka-manager</artifactId>
        <groupId>com.xiaojukeji.kafka</groupId>
        <version>1.0.0-SNAPSHOT</version>
    </parent>

    <properties>
        <kafka-manager.revision>1.0.0-SNAPSHOT</kafka-manager.revision>
        <maven.test.skip>true</maven.test.skip>
        <downloadSources>true</downloadSources>
        <java_source_version>1.8</java_source_version>
        <java_target_version>1.8</java_target_version>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <file_encoding>UTF-8</file_encoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>commons-beanutils</groupId>
            <artifactId>commons-beanutils</artifactId>
            <version>1.9.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.curator</groupId>
            <artifactId>curator-recipes</artifactId>
            <version>2.10.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.10</artifactId>
        </dependency>
    </dependencies>
</project>
package com.xiaojukeji.kafka.manager.common.constant;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
/**
* @author zengqiao
* @date 20/2/28
*/
public class Constant {
    public static final String KAFKA_MANAGER_INNER_ERROR = "kafka-manager inner error";

    public static final Map<Integer, List<String>> BROKER_METRICS_TYPE_MBEAN_NAME_MAP = new ConcurrentHashMap<>();

    public static final Map<Integer, List<String>> TOPIC_METRICS_TYPE_MBEAN_NAME_MAP = new ConcurrentHashMap<>();

    public static final String COLLECTOR_METRICS_LOGGER = "COLLECTOR_METRICS_LOGGER";

    public static final String API_METRICS_LOGGER = "API_METRICS_LOGGER";
}
package com.xiaojukeji.kafka.manager.common.constant;
public class MetricsType {
    /**
     * Broker traffic details
     */
    public static final int BROKER_FLOW_DETAIL = 0;
    public static final int BROKER_TO_DB_METRICS = 1;          // Broker metrics persisted to the DB
    public static final int BROKER_REAL_TIME_METRICS = 2;      // Broker real-time metrics
    public static final int BROKER_OVER_VIEW_METRICS = 3;      // Broker status overview metrics
    public static final int BROKER_OVER_ALL_METRICS = 4;       // Broker overall status metrics
    public static final int BROKER_ANALYSIS_METRICS = 5;       // Broker analysis metrics
    public static final int BROKER_TOPIC_ANALYSIS_METRICS = 6; // Broker Topic analysis metrics

    /**
     * Topic traffic details
     */
    public static final int TOPIC_FLOW_DETAIL = 100;
    public static final int TOPIC_FLOW_OVERVIEW = 101;
    public static final int TOPIC_METRICS_TO_DB = 102;
}
package com.xiaojukeji.kafka.manager.common.constant;
/**
* @author limeng
* @date 2017/11/21
*/
public enum OffsetStoreLocation {
    ZOOKEEPER("zookeeper"),
    BROKER("broker");

    private final String location;

    OffsetStoreLocation(String location) {
        this.location = location;
    }

    public String getLocation() {
        return location;
    }

    public static OffsetStoreLocation getOffsetStoreLocation(String location) {
        if (location == null) {
            return null;
        }
        for (OffsetStoreLocation offsetStoreLocation : OffsetStoreLocation.values()) {
            if (offsetStoreLocation.location.equals(location)) {
                return offsetStoreLocation;
            }
        }
        return null;
    }
}
package com.xiaojukeji.kafka.manager.common.constant;
public class StatusCode {
    /*
     * kafka-manager status codes: 17000 ~ 17999
     *
     * success            - 0
     * parameter error    - 10000
     * resource not ready - 10001
     */

    /*
     * Agreed-upon status codes
     */
    public static final Integer SUCCESS = 0;
    public static final Integer PARAM_ERROR = 10000;          // parameter error
    public static final Integer RES_UNREADY = 10001;          // resource not ready

    public static final Integer MY_SQL_SELECT_ERROR = 17210;  // MySQL select failed
    public static final Integer MY_SQL_INSERT_ERROR = 17211;  // MySQL insert failed
    public static final Integer MY_SQL_DELETE_ERROR = 17212;  // MySQL delete failed
    public static final Integer MY_SQL_UPDATE_ERROR = 17213;  // MySQL update failed
    public static final Integer MY_SQL_REPLACE_ERROR = 17214; // MySQL replace failed

    public static final Integer OPERATION_ERROR = 17300;      // operation failed

    /**
     * Topic related errors
     */
    public static final Integer TOPIC_EXISTED = 17400;        // Topic already exists

    public static final Integer PARTIAL_SUCESS = 17700;       // operation partially succeeded
}
package com.xiaojukeji.kafka.manager.common.constant.monitor;
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
/**
 * Condition type
 * @author zengqiao
 * @date 19/5/12
 */
public enum MonitorConditionType {
    BIGGER(">", "greater than"),
    EQUAL("=", "equal to"),
    LESS("<", "less than"),
    NOT_EQUAL("!=", "not equal to");

    private final String name;
    private final String message;

    MonitorConditionType(String name, String message) {
        this.name = name;
        this.message = message;
    }

    public static boolean legal(String name) {
        for (MonitorConditionType elem : MonitorConditionType.values()) {
            if (elem.name.equals(name)) {
                return true;
            }
        }
        return false;
    }

    @Override
    public String toString() {
        return "ConditionType{" +
                "name='" + name + '\'' +
                ", message='" + message + '\'' +
                '}';
    }

    public static List<AbstractMap.SimpleEntry<String, String>> toList() {
        List<AbstractMap.SimpleEntry<String, String>> conditionTypeList = new ArrayList<>();
        for (MonitorConditionType elem : MonitorConditionType.values()) {
            conditionTypeList.add(new AbstractMap.SimpleEntry<>(elem.name, elem.message));
        }
        return conditionTypeList;
    }

    /**
     * Evaluate whether operation(data1, data2) holds
     * @param data1 left operand
     * @param data2 right operand
     * @param operation comparison operator: ">", "<", "=" or "!="
     * @author zengqiao
     * @date 19/5/12
     * @return boolean
     */
    public static boolean matchCondition(Double data1, Double data2, String operation) {
        switch (operation) {
            case ">": return data1 > data2;
            case "<": return data1 < data2;
            case "=": return data1.equals(data2);
            case "!=": return !data1.equals(data2);
            default:
        }
        return false;
    }
}
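As a quick standalone check of the comparison semantics above, the snippet below re-implements the same switch so it compiles on its own. Note that `matchCondition` auto-unboxes its `Double` arguments, so `>` and `<` would throw a `NullPointerException` on null inputs; this sketch adds an explicit guard, which is an addition, not behavior of the original.

```java
// Standalone re-implementation of the comparison used by
// MonitorConditionType.matchCondition, with an explicit null guard
// (the original has no such guard and would NPE on null operands).
public class ConditionCheck {
    public static boolean match(Double a, Double b, String op) {
        if (a == null || b == null || op == null) {
            return false; // guard added for the sketch only
        }
        switch (op) {
            case ">":  return a > b;
            case "<":  return a < b;
            case "=":  return a.equals(b);
            case "!=": return !a.equals(b);
            default:   return false; // unknown operator never matches
        }
    }

    public static void main(String[] args) {
        System.out.println(match(2.0, 1.0, ">")); // prints true
    }
}
```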
package com.xiaojukeji.kafka.manager.common.constant.monitor;
/**
* @author zengqiao
* @date 20/3/18
*/
public enum MonitorMatchStatus {
    UNKNOWN(0),
    YES(1),
    NO(2);

    public Integer status;

    MonitorMatchStatus(Integer status) {
        this.status = status;
    }
}
package com.xiaojukeji.kafka.manager.common.constant.monitor;
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
/**
 * Metric type
 * @author zengqiao
 * @date 19/5/12
 */
public enum MonitorMetricsType {
    BYTES_IN("BytesIn", "bytes-in traffic"),
    BYTES_OUT("BytesOut", "bytes-out traffic"),
    LAG("Lag", "consumer group lag");

    private final String name;
    private final String message;

    MonitorMetricsType(String name, String message) {
        this.name = name;
        this.message = message;
    }

    public static boolean legal(String name) {
        for (MonitorMetricsType elem : MonitorMetricsType.values()) {
            if (elem.name.equals(name)) {
                return true;
            }
        }
        return false;
    }

    @Override
    public String toString() {
        return "MetricType{" +
                "name='" + name + '\'' +
                ", message='" + message + '\'' +
                '}';
    }

    public static List<AbstractMap.SimpleEntry<String, String>> toList() {
        List<AbstractMap.SimpleEntry<String, String>> metricTypeList = new ArrayList<>();
        for (MonitorMetricsType elem : MonitorMetricsType.values()) {
            metricTypeList.add(new AbstractMap.SimpleEntry<>(elem.name, elem.message));
        }
        return metricTypeList;
    }

    public String getName() {
        return name;
    }

    public String getMessage() {
        return message;
    }
}
package com.xiaojukeji.kafka.manager.common.constant.monitor;
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
/**
 * Notification type
 * @author huangyiminghappy@163.com
 * @date 2019-05-06
 */
public enum MonitorNotifyType {
    KAFKA_MESSAGE("KAFKA", "send alarms to KAFKA");

    String name;
    String message;

    MonitorNotifyType(String name, String message) {
        this.name = name;
        this.message = message;
    }

    public String getName() {
        return name;
    }

    public String getMessage() {
        return message;
    }

    public static boolean legal(String name) {
        for (MonitorNotifyType elem : MonitorNotifyType.values()) {
            if (elem.name.equals(name)) {
                return true;
            }
        }
        return false;
    }

    @Override
    public String toString() {
        return "NotifyType{" +
                "name='" + name + '\'' +
                ", message='" + message + '\'' +
                '}';
    }

    public static List<AbstractMap.SimpleEntry<String, String>> toList() {
        List<AbstractMap.SimpleEntry<String, String>> notifyTypeList = new ArrayList<>();
        for (MonitorNotifyType elem : MonitorNotifyType.values()) {
            notifyTypeList.add(new AbstractMap.SimpleEntry<>(elem.name, elem.message));
        }
        return notifyTypeList;
    }
}
package com.xiaojukeji.kafka.manager.common.entity;
import kafka.admin.AdminClient;
import java.util.*;
/**
* @author zengqiao
* @date 19/5/14
*/
public class ConsumerMetadata {
    private Set<String> consumerGroupSet = new HashSet<>();
    private Map<String, Set<String>> topicNameConsumerGroupMap = new HashMap<>();
    private Map<String, AdminClient.ConsumerGroupSummary> consumerGroupSummaryMap = new HashMap<>();

    public ConsumerMetadata(Set<String> consumerGroupSet,
                            Map<String, Set<String>> topicNameConsumerGroupMap,
                            Map<String, AdminClient.ConsumerGroupSummary> consumerGroupSummaryMap) {
        this.consumerGroupSet = consumerGroupSet;
        this.topicNameConsumerGroupMap = topicNameConsumerGroupMap;
        this.consumerGroupSummaryMap = consumerGroupSummaryMap;
    }

    public Set<String> getConsumerGroupSet() {
        return consumerGroupSet;
    }

    public Map<String, Set<String>> getTopicNameConsumerGroupMap() {
        return topicNameConsumerGroupMap;
    }

    public Map<String, AdminClient.ConsumerGroupSummary> getConsumerGroupSummaryMap() {
        return consumerGroupSummaryMap;
    }
}
package com.xiaojukeji.kafka.manager.common.entity;
/**
* ConsumerMetrics
* @author tukun
* @date 2015/11/12
*/
public class ConsumerMetrics {
private Long clusterId;
private String topicName;
private String consumerGroup;
private String location;
private Long sumLag;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public String getTopicName() {
return topicName;
}
public void setTopicName(String topicName) {
this.topicName = topicName;
}
public String getConsumerGroup() {
return consumerGroup;
}
public void setConsumerGroup(String consumerGroup) {
this.consumerGroup = consumerGroup;
}
public String getLocation() {
return location;
}
public void setLocation(String location) {
this.location = location;
}
public Long getSumLag() {
return sumLag;
}
public void setSumLag(Long sumLag) {
this.sumLag = sumLag;
}
@Override
public String toString() {
return "ConsumerMetrics{" +
"clusterId=" + clusterId +
", topicName='" + topicName + '\'' +
", consumerGroup='" + consumerGroup + '\'' +
", location='" + location + '\'' +
", sumLag=" + sumLag +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity;
import com.alibaba.fastjson.JSON;
import com.xiaojukeji.kafka.manager.common.constant.StatusCode;
import java.io.Serializable;
/**
* @author huangyiminghappy@163.com
* @date 2019-07-08
*/
public class Result<T> implements Serializable {
private static final long serialVersionUID = -2772975319944108658L;
private T data;
private String message;
private Integer code;
public Result(T data) {
    this.data = data;
    this.code = StatusCode.SUCCESS;
    this.message = "success";
}

public Result() {
    this(null);
}
public Result(Integer code, String message) {
this.message = message;
this.code = code;
}
public Result(Integer code, T data, String message) {
this.data = data;
this.message = message;
this.code = code;
}
public T getData() {
    return this.data;
}

public void setData(T data) {
    this.data = data;
}

public String getMessage() {
    return this.message;
}

public void setMessage(String message) {
    this.message = message;
}

public Integer getCode() {
    return this.code;
}

public void setCode(Integer code) {
    this.code = code;
}

@Override
public String toString() {
    return JSON.toJSONString(this);
}
}
package com.xiaojukeji.kafka.manager.common.entity.annotations;
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
/**
* FieldSelector
* @author huangyiminghappy@163.com
* @date 2019-06-19
*/
@Target(ElementType.FIELD)
@Retention(RUNTIME)
@Documented
public @interface FieldSelector {
    // annotation attributes
    String name() default "";

    int[] types() default {};
}
package com.xiaojukeji.kafka.manager.common.entity.bizenum;
/**
* User role
* @author zengqiao_cn@163.com
* @date 19/4/15
*/
public enum AccountRoleEnum {
UNKNOWN(-1),
NORMAL(0),
SRE(1),
ADMIN(2);
private Integer role;
AccountRoleEnum(Integer role) {
this.role = role;
}
public Integer getRole() {
return role;
}
public static AccountRoleEnum getUserRoleEnum(Integer role) {
for (AccountRoleEnum elem: AccountRoleEnum.values()) {
if (elem.getRole().equals(role)) {
return elem;
}
}
return null;
}
}
package com.xiaojukeji.kafka.manager.common.entity.bizenum;
/**
 * Status of Topic admin operations
 * @author zengqiao
 * @date 19/11/26
 */
public enum AdminTopicStatusEnum {
    SUCCESS(0, "success"),
    REPLACE_DB_FAILED(1, "failed to update the DB"),
    PARAM_NULL_POINTER(2, "parameter error"),
    PARTITION_NUM_ILLEGAL(3, "illegal partition number"),
    BROKER_NUM_NOT_ENOUGH(4, "not enough Brokers"),
    TOPIC_NAME_ILLEGAL(5, "illegal Topic name"),
    TOPIC_EXISTED(6, "Topic already exists"),
    UNKNOWN_TOPIC_PARTITION(7, "unknown Topic"),
    TOPIC_CONFIG_ILLEGAL(8, "illegal Topic config"),
    TOPIC_IN_DELETING(9, "Topic is being deleted"),
    UNKNOWN_ERROR(10, "unknown error");
private Integer code;
private String message;
AdminTopicStatusEnum(Integer code, String message) {
this.code = code;
this.message = message;
}
public Integer getCode() {
return code;
}
public String getMessage() {
return message;
}
}
package com.xiaojukeji.kafka.manager.common.entity.bizenum;
/**
 * Meaning of DB status values
 * @author zengqiao_cn@163.com
 * @date 19/4/15
 */
public enum DBStatusEnum {
    /**
     * logically deleted
     */
    DELETED(-1),
    /**
     * normal
     */
    NORMAL(0),
    /**
     * completed and approved
     */
    PASSED(1);
private Integer status;
DBStatusEnum(Integer status) {
this.status = status;
}
public Integer getStatus() {
return status;
}
public static DBStatusEnum getDBStatusEnum(Integer status) {
for (DBStatusEnum elem: DBStatusEnum.values()) {
if (elem.getStatus().equals(status)) {
return elem;
}
}
return null;
}
}
package com.xiaojukeji.kafka.manager.common.entity.bizenum;
/**
* Operation type
* @author zengqiao
* @date 19/11/21
*/
public enum OperationEnum {
CREATE_TOPIC("create_topic"),
DELETE_TOPIC("delete_topic"),
MODIFY_TOPIC_CONFIG("modify_topic_config"),
EXPAND_TOPIC_PARTITION("expand_topic_partition");
public String message;
OperationEnum(String message) {
this.message = message;
}
}
package com.xiaojukeji.kafka.manager.common.entity.bizenum;
public enum OrderStatusEnum {
WAIT_DEAL(0, "pending"),
PASSED(1, "approved"),
REFUSED(2, "rejected"),
CANCELLED(3, "cancelled");
private Integer code;
private String message;
OrderStatusEnum(Integer code, String message) {
this.code = code;
this.message = message;
}
public Integer getCode() {
return code;
}
public String getMessage() {
return message;
}
}
package com.xiaojukeji.kafka.manager.common.entity.bizenum;
/**
* Ticket type
* @author zengqiao
* @date 19/6/23
*/
public enum OrderTypeEnum {
UNKNOWN(-1),
APPLY_TOPIC(0),
APPLY_PARTITION(1);
private Integer code;
OrderTypeEnum(Integer code) {
this.code = code;
}
public Integer getCode() {
return code;
}
public static OrderTypeEnum getOrderTypeEnum(Integer code) {
for (OrderTypeEnum elem: OrderTypeEnum.values()) {
if (elem.getCode().equals(code)) {
return elem;
}
}
return OrderTypeEnum.UNKNOWN;
}
}
package com.xiaojukeji.kafka.manager.common.entity.bizenum;
/**
 * Preferred replica election status
 * @author zengqiao
 * @date 2017/6/29.
 */
public enum PreferredReplicaElectEnum {
    SUCCESS(0, "success [created successfully | executed successfully]"),
    RUNNING(1, "running"),
    ALREADY_EXIST(2, "task already exists"),
    PARAM_ILLEGAL(3, "parameter error"),
    UNKNOWN(4, "progress unknown");
private Integer code;
private String message;
PreferredReplicaElectEnum(Integer code, String message) {
this.code = code;
this.message = message;
}
public Integer getCode() {
return code;
}
public String getMessage() {
return message;
}
}
package com.xiaojukeji.kafka.manager.common.entity.bizenum;
/**
 * Reassignment status
 * @author zengqiao
 * @date 19/12/29
 */
public enum ReassignmentStatusEnum {
    WAITING(0, "waiting"),
    RUNNING(1, "running"),
    SUCCESS(2, "reassignment succeeded"),
    FAILED(3, "reassignment failed"),
    CANCELED(4, "task cancelled");
private Integer code;
private String message;
ReassignmentStatusEnum(Integer code, String message) {
this.code = code;
this.message = message;
}
public Integer getCode() {
return code;
}
public String getMessage() {
return message;
}
public static boolean triggerTask(Integer status) {
    return WAITING.code.equals(status) || RUNNING.code.equals(status);
}

public static boolean cancelTask(Integer status) {
    return WAITING.code.equals(status);
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto;
/**
* Basic Broker information
* @author zengqiao_cn@163.com
* @date 19/4/8
*/
public class BrokerBasicDTO {
private String host;
private Integer port;
private Integer jmxPort;
private Integer topicNum;
private Integer partitionCount;
private Long startTime;
private Integer leaderCount;
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public Integer getPort() {
return port;
}
public void setPort(Integer port) {
this.port = port;
}
public Integer getJmxPort() {
return jmxPort;
}
public void setJmxPort(Integer jmxPort) {
this.jmxPort = jmxPort;
}
public Integer getTopicNum() {
return topicNum;
}
public void setTopicNum(Integer topicNum) {
this.topicNum = topicNum;
}
public Integer getPartitionCount() {
return partitionCount;
}
public void setPartitionCount(Integer partitionCount) {
this.partitionCount = partitionCount;
}
public Long getStartTime() {
return startTime;
}
public void setStartTime(Long startTime) {
this.startTime = startTime;
}
public Integer getLeaderCount() {
return leaderCount;
}
public void setLeaderCount(Integer leaderCount) {
this.leaderCount = leaderCount;
}
@Override
public String toString() {
return "BrokerBasicDTO{" +
"host='" + host + '\'' +
", port=" + port +
", jmxPort=" + jmxPort +
", topicNum=" + topicNum +
", partitionCount=" + partitionCount +
", startTime=" + startTime +
", leaderCount=" + leaderCount +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto;
import com.xiaojukeji.kafka.manager.common.entity.metrics.BrokerMetrics;
import com.xiaojukeji.kafka.manager.common.entity.zookeeper.BrokerMetadata;
/**
* @author zengqiao
* @date 19/4/21
*/
public class BrokerOverallDTO {
private Integer brokerId;
private String host;
private Integer port;
private Integer jmxPort;
private Long startTime;
private Integer partitionCount;
private Integer underReplicatedPartitions;
private Integer leaderCount;
private Double bytesInPerSec;
public Integer getBrokerId() {
return brokerId;
}
public void setBrokerId(Integer brokerId) {
this.brokerId = brokerId;
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public Integer getPort() {
return port;
}
public void setPort(Integer port) {
this.port = port;
}
public Integer getJmxPort() {
return jmxPort;
}
public void setJmxPort(Integer jmxPort) {
this.jmxPort = jmxPort;
}
public Long getStartTime() {
return startTime;
}
public void setStartTime(Long startTime) {
this.startTime = startTime;
}
public Integer getPartitionCount() {
return partitionCount;
}
public void setPartitionCount(Integer partitionCount) {
this.partitionCount = partitionCount;
}
public Integer getUnderReplicatedPartitions() {
return underReplicatedPartitions;
}
public void setUnderReplicatedPartitions(Integer underReplicatedPartitions) {
this.underReplicatedPartitions = underReplicatedPartitions;
}
public Integer getLeaderCount() {
return leaderCount;
}
public void setLeaderCount(Integer leaderCount) {
this.leaderCount = leaderCount;
}
public Double getBytesInPerSec() {
return bytesInPerSec;
}
public void setBytesInPerSec(Double bytesInPerSec) {
this.bytesInPerSec = bytesInPerSec;
}
@Override
public String toString() {
return "BrokerOverallDTO{" +
"brokerId=" + brokerId +
", host='" + host + '\'' +
", port=" + port +
", jmxPort=" + jmxPort +
", startTime=" + startTime +
", partitionCount=" + partitionCount +
", underReplicatedPartitions=" + underReplicatedPartitions +
", leaderCount=" + leaderCount +
", bytesInPerSec=" + bytesInPerSec +
'}';
}
public static BrokerOverallDTO newInstance(BrokerMetadata brokerMetadata, BrokerMetrics brokerMetrics) {
BrokerOverallDTO brokerOverallDTO = new BrokerOverallDTO();
brokerOverallDTO.setBrokerId(brokerMetadata.getBrokerId());
brokerOverallDTO.setHost(brokerMetadata.getHost());
brokerOverallDTO.setPort(brokerMetadata.getPort());
brokerOverallDTO.setJmxPort(brokerMetadata.getJmxPort());
brokerOverallDTO.setStartTime(brokerMetadata.getTimestamp());
if (brokerMetrics == null) {
return brokerOverallDTO;
}
brokerOverallDTO.setPartitionCount(brokerMetrics.getPartitionCount());
brokerOverallDTO.setLeaderCount(brokerMetrics.getLeaderCount());
brokerOverallDTO.setBytesInPerSec(brokerMetrics.getBytesInPerSec());
brokerOverallDTO.setUnderReplicatedPartitions(brokerMetrics.getUnderReplicatedPartitions());
return brokerOverallDTO;
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto;
import com.xiaojukeji.kafka.manager.common.entity.metrics.BrokerMetrics;
import com.xiaojukeji.kafka.manager.common.entity.zookeeper.BrokerMetadata;
import com.xiaojukeji.kafka.manager.common.entity.bizenum.DBStatusEnum;
/**
* @author zengqiao_cn@163.com
* @date 19/4/21
*/
public class BrokerOverviewDTO {
private Integer brokerId;
private String host;
private Integer port;
private Integer jmxPort;
private Long startTime;
private Double byteIn;
private Double byteOut;
private Integer status;
public Integer getBrokerId() {
return brokerId;
}
public void setBrokerId(Integer brokerId) {
this.brokerId = brokerId;
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public Integer getPort() {
return port;
}
public void setPort(Integer port) {
this.port = port;
}
public Integer getJmxPort() {
return jmxPort;
}
public void setJmxPort(Integer jmxPort) {
this.jmxPort = jmxPort;
}
public Long getStartTime() {
return startTime;
}
public void setStartTime(Long startTime) {
this.startTime = startTime;
}
public Double getByteIn() {
return byteIn;
}
public void setByteIn(Double byteIn) {
this.byteIn = byteIn;
}
public Double getByteOut() {
return byteOut;
}
public void setByteOut(Double byteOut) {
this.byteOut = byteOut;
}
public Integer getStatus() {
return status;
}
public void setStatus(Integer status) {
this.status = status;
}
@Override
public String toString() {
return "BrokerOverviewDTO{" +
"brokerId=" + brokerId +
", host='" + host + '\'' +
", port=" + port +
", jmxPort=" + jmxPort +
", startTime=" + startTime +
", byteIn=" + byteIn +
", byteOut=" + byteOut +
", status=" + status +
'}';
}
public static BrokerOverviewDTO newInstance(BrokerMetadata brokerMetadata, BrokerMetrics brokerMetrics) {
BrokerOverviewDTO brokerOverviewDTO = new BrokerOverviewDTO();
brokerOverviewDTO.setBrokerId(brokerMetadata.getBrokerId());
brokerOverviewDTO.setHost(brokerMetadata.getHost());
brokerOverviewDTO.setPort(brokerMetadata.getPort());
brokerOverviewDTO.setJmxPort(brokerMetadata.getJmxPort());
brokerOverviewDTO.setStartTime(brokerMetadata.getTimestamp());
brokerOverviewDTO.setStatus(DBStatusEnum.NORMAL.getStatus());
if (brokerMetrics == null) {
return brokerOverviewDTO;
}
brokerOverviewDTO.setByteIn(brokerMetrics.getBytesInPerSec());
brokerOverviewDTO.setByteOut(brokerMetrics.getBytesOutPerSec());
return brokerOverviewDTO;
}
}
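Both `newInstance` factories above copy the ZooKeeper metadata unconditionally and bail out early when JMX metrics are missing, leaving the metric fields null. A minimal standalone sketch of this null-guarded factory pattern; the `Metadata`, `Metrics`, and `Overview` classes here are illustrative stand-ins, not project classes:

```java
// Sketch of the null-guarded static factory used by the newInstance methods
// above: metadata fields are always copied, metric fields only when metrics
// were actually fetched. All class names here are illustrative stand-ins.
public class FactorySketch {
    static class Metadata { int brokerId; String host; }
    static class Metrics  { double bytesInPerSec; }
    static class Overview { Integer brokerId; String host; Double bytesInPerSec; }

    static Overview newInstance(Metadata metadata, Metrics metrics) {
        Overview overview = new Overview();
        overview.brokerId = metadata.brokerId;
        overview.host = metadata.host;
        if (metrics == null) {
            // metrics unavailable: return with metric fields left null
            return overview;
        }
        overview.bytesInPerSec = metrics.bytesInPerSec;
        return overview;
    }

    public static void main(String[] args) {
        Metadata metadata = new Metadata();
        metadata.brokerId = 1;
        metadata.host = "broker-1";
        if (newInstance(metadata, null).bytesInPerSec != null) {
            throw new AssertionError("metric field should stay null");
        }
        Metrics metrics = new Metrics();
        metrics.bytesInPerSec = 42.0;
        if (newInstance(metadata, metrics).bytesInPerSec != 42.0) {
            throw new AssertionError("metric field should be copied");
        }
        System.out.println("ok");
    }
}
```

Callers of such factories should treat every metric field as nullable, since a broker can be known to ZooKeeper while its JMX endpoint is unreachable.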
package com.xiaojukeji.kafka.manager.common.entity.dto;
import java.util.Date;
/**
* @author zengqiao
* @date 19/4/22
*/
public class ControllerDTO {
private String clusterName;
private Integer brokerId;
private String host;
private Integer controllerVersion;
private Date controllerTimestamp;
public String getClusterName() {
return clusterName;
}
public void setClusterName(String clusterName) {
this.clusterName = clusterName;
}
public Integer getBrokerId() {
return brokerId;
}
public void setBrokerId(Integer brokerId) {
this.brokerId = brokerId;
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public Integer getControllerVersion() {
return controllerVersion;
}
public void setControllerVersion(Integer controllerVersion) {
this.controllerVersion = controllerVersion;
}
public Date getControllerTimestamp() {
return controllerTimestamp;
}
public void setControllerTimestamp(Date controllerTimestamp) {
this.controllerTimestamp = controllerTimestamp;
}
@Override
public String toString() {
return "ControllerDTO{" +
"clusterName='" + clusterName + '\'' +
", brokerId=" + brokerId +
", host='" + host + '\'' +
", controllerVersion=" + controllerVersion +
", controllerTimestamp=" + controllerTimestamp +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto;
/**
* Topic Offset
* @author zengqiao
* @date 19/6/2
*/
public class PartitionOffsetDTO {
private Integer partitionId;
private Long offset;
private Long timestamp;
public PartitionOffsetDTO() {
}
public PartitionOffsetDTO(Integer partitionId, Long offset) {
this.partitionId = partitionId;
this.offset = offset;
}
public PartitionOffsetDTO(Integer partitionId, Long offset, Long timestamp) {
this.partitionId = partitionId;
this.offset = offset;
this.timestamp = timestamp;
}
public Integer getPartitionId() {
return partitionId;
}
public void setPartitionId(Integer partitionId) {
this.partitionId = partitionId;
}
public Long getOffset() {
return offset;
}
public void setOffset(Long offset) {
this.offset = offset;
}
public Long getTimestamp() {
return timestamp;
}
public void setTimestamp(Long timestamp) {
this.timestamp = timestamp;
}
@Override
public String toString() {
return "PartitionOffsetDTO{" +
"partitionId=" + partitionId +
", offset=" + offset +
", timestamp=" + timestamp +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto;
/**
* @author arthur
* @date 2018/09/03
*/
public class TopicBasicDTO {
private String topicName;
private Integer partitionNum;
private Integer replicaNum;
private Integer brokerNum;
private String remark;
private Long modifyTime;
private Long createTime;
private String region;
private Long retentionTime;
private String principal;
public String getTopicName() {
return topicName;
}
public void setTopicName(String topicName) {
this.topicName = topicName;
}
public Integer getPartitionNum() {
return partitionNum;
}
public void setPartitionNum(Integer partitionNum) {
this.partitionNum = partitionNum;
}
public Integer getReplicaNum() {
return replicaNum;
}
public void setReplicaNum(Integer replicaNum) {
this.replicaNum = replicaNum;
}
public Integer getBrokerNum() {
return brokerNum;
}
public void setBrokerNum(Integer brokerNum) {
this.brokerNum = brokerNum;
}
public String getRemark() {
return remark;
}
public void setRemark(String remark) {
this.remark = remark;
}
public String getRegion() {
return region;
}
public void setRegion(String region) {
this.region = region;
}
public Long getRetentionTime() {
return retentionTime;
}
public void setRetentionTime(Long retentionTime) {
this.retentionTime = retentionTime;
}
public Long getModifyTime() {
return modifyTime;
}
public void setModifyTime(Long modifyTime) {
this.modifyTime = modifyTime;
}
public Long getCreateTime() {
return createTime;
}
public void setCreateTime(Long createTime) {
this.createTime = createTime;
}
public String getPrincipal() {
return principal;
}
public void setPrincipal(String principal) {
this.principal = principal;
}
@Override
public String toString() {
return "TopicBasicDTO{" +
"topicName='" + topicName + '\'' +
", partitionNum=" + partitionNum +
", replicaNum=" + replicaNum +
", brokerNum=" + brokerNum +
", remark='" + remark + '\'' +
", modifyTime=" + modifyTime +
", createTime=" + createTime +
", region='" + region + '\'' +
", retentionTime=" + retentionTime +
", principal='" + principal + '\'' +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto;
public class TopicOverviewDTO {
private Long clusterId;
private String topicName;
private Integer replicaNum;
private Integer partitionNum;
private Double bytesInPerSec;
private Double produceRequestPerSec;
private Long updateTime;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public String getTopicName() {
return topicName;
}
public void setTopicName(String topicName) {
this.topicName = topicName;
}
public Integer getReplicaNum() {
return replicaNum;
}
public void setReplicaNum(Integer replicaNum) {
this.replicaNum = replicaNum;
}
public Integer getPartitionNum() {
return partitionNum;
}
public void setPartitionNum(Integer partitionNum) {
this.partitionNum = partitionNum;
}
public Double getBytesInPerSec() {
return bytesInPerSec;
}
public void setBytesInPerSec(Double bytesInPerSec) {
this.bytesInPerSec = bytesInPerSec;
}
public Double getProduceRequestPerSec() {
return produceRequestPerSec;
}
public void setProduceRequestPerSec(Double produceRequestPerSec) {
this.produceRequestPerSec = produceRequestPerSec;
}
public Long getUpdateTime() {
return updateTime;
}
public void setUpdateTime(Long updateTime) {
this.updateTime = updateTime;
}
@Override
public String toString() {
return "TopicOverviewDTO{" +
"clusterId=" + clusterId +
", topicName='" + topicName + '\'' +
", replicaNum=" + replicaNum +
", partitionNum=" + partitionNum +
", bytesInPerSec=" + bytesInPerSec +
", produceRequestPerSec=" + produceRequestPerSec +
", updateTime=" + updateTime +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto;
import java.io.Serializable;
import java.util.List;
/**
* @author arthur
* @date 2017/6/6.
*/
public class TopicPartitionDTO implements Serializable {
private Integer partitionId;
private Long offset;
private Integer leaderBrokerId;
private Integer preferredBrokerId;
private Integer leaderEpoch;
private List<Integer> replicasBroker;
private List<Integer> isr;
private Boolean underReplicated;
public Integer getPartitionId() {
return partitionId;
}
public void setPartitionId(Integer partitionId) {
this.partitionId = partitionId;
}
public Long getOffset() {
return offset;
}
public void setOffset(Long offset) {
this.offset = offset;
}
public Integer getLeaderBrokerId() {
return leaderBrokerId;
}
public void setLeaderBrokerId(Integer leaderBrokerId) {
this.leaderBrokerId = leaderBrokerId;
}
public Integer getPreferredBrokerId() {
return preferredBrokerId;
}
public void setPreferredBrokerId(Integer preferredBrokerId) {
this.preferredBrokerId = preferredBrokerId;
}
public Integer getLeaderEpoch() {
return leaderEpoch;
}
public void setLeaderEpoch(Integer leaderEpoch) {
this.leaderEpoch = leaderEpoch;
}
public List<Integer> getReplicasBroker() {
return replicasBroker;
}
public void setReplicasBroker(List<Integer> replicasBroker) {
this.replicasBroker = replicasBroker;
}
public List<Integer> getIsr() {
return isr;
}
public void setIsr(List<Integer> isr) {
this.isr = isr;
}
public boolean isUnderReplicated() {
// null-safe unboxing: an unset Boolean means "not under-replicated"
return underReplicated != null && underReplicated;
}
public void setUnderReplicated(boolean underReplicated) {
this.underReplicated = underReplicated;
}
@Override
public String toString() {
return "TopicPartitionDTO{" +
"partitionId=" + partitionId +
", offset=" + offset +
", leaderBrokerId=" + leaderBrokerId +
", preferredBrokerId=" + preferredBrokerId +
", leaderEpoch=" + leaderEpoch +
", replicasBroker=" + replicasBroker +
", isr=" + isr +
", underReplicated=" + underReplicated +
'}';
}
}
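`TopicPartitionDTO` stores `underReplicated` as the wrapper type `Boolean` but exposes it through a primitive `boolean` getter; auto-unboxing a never-set wrapper throws a `NullPointerException`, which is why a null guard matters in getters like `isUnderReplicated`. A self-contained sketch of both variants; the class name is illustrative:

```java
// Sketch of the Boolean-unboxing pitfall behind getters like
// isUnderReplicated(): the field is the wrapper type (nullable),
// the getter returns the primitive. Class name is illustrative.
public class UnboxSketch {
    private Boolean underReplicated;   // wrapper type, defaults to null

    // Unsafe: auto-unboxing a null Boolean throws NullPointerException
    public boolean isUnderReplicatedUnsafe() {
        return underReplicated;
    }

    // Null-safe: treat "never set" as false
    public boolean isUnderReplicated() {
        return underReplicated != null && underReplicated;
    }

    public void setUnderReplicated(Boolean underReplicated) {
        this.underReplicated = underReplicated;
    }

    public static void main(String[] args) {
        UnboxSketch dto = new UnboxSketch();
        try {
            dto.isUnderReplicatedUnsafe();
            throw new AssertionError("expected NullPointerException");
        } catch (NullPointerException expected) {
            // null was unboxed
        }
        if (dto.isUnderReplicated()) {
            throw new AssertionError("unset field should read as false");
        }
        dto.setUnderReplicated(true);
        if (!dto.isUnderReplicated()) {
            throw new AssertionError("set field should read as true");
        }
        System.out.println("ok");
    }
}
```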
package com.xiaojukeji.kafka.manager.common.entity.dto.alarm;
/**
* Alarm notification
* @author zengqiao
* @date 2020-02-14
*/
public class AlarmNotifyDTO {
private Long alarmRuleId;
private String actionTag;
private String message;
public Long getAlarmRuleId() {
return alarmRuleId;
}
public void setAlarmRuleId(Long alarmRuleId) {
this.alarmRuleId = alarmRuleId;
}
public String getActionTag() {
return actionTag;
}
public void setActionTag(String actionTag) {
this.actionTag = actionTag;
}
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
@Override
public String toString() {
return "AlarmNotifyDTO{" +
"alarmRuleId=" + alarmRuleId +
", actionTag='" + actionTag + '\'' +
", message='" + message + '\'' +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto.alarm;
import java.util.Map;
/**
* @author zengqiao
* @date 19/12/16
*/
public class AlarmRuleDTO {
/**
* Alarm rule ID
*/
private Long id;
/**
* Alarm rule name
*/
private String name;
/**
* Number of consecutive times the condition has held
*/
private Integer duration;
/**
* Cluster ID; always present in the filter conditions, so kept as a separate field
*/
private Long clusterId;
/**
* Alarm strategy expression
*/
private AlarmStrategyExpressionDTO strategyExpression;
/**
* Alarm strategy filter conditions
*/
private Map<String, String> strategyFilterMap;
/**
* Alarm strategy actions
*/
private Map<String, AlarmStrategyActionDTO> strategyActionMap;
/**
* Last modified time
*/
private Long gmtModify;
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public Integer getDuration() {
return duration;
}
public void setDuration(Integer duration) {
this.duration = duration;
}
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public AlarmStrategyExpressionDTO getStrategyExpression() {
return strategyExpression;
}
public void setStrategyExpression(AlarmStrategyExpressionDTO strategyExpression) {
this.strategyExpression = strategyExpression;
}
public Map<String, String> getStrategyFilterMap() {
return strategyFilterMap;
}
public void setStrategyFilterMap(Map<String, String> strategyFilterMap) {
this.strategyFilterMap = strategyFilterMap;
}
public Map<String, AlarmStrategyActionDTO> getStrategyActionMap() {
return strategyActionMap;
}
public void setStrategyActionMap(Map<String, AlarmStrategyActionDTO> strategyActionMap) {
this.strategyActionMap = strategyActionMap;
}
public Long getGmtModify() {
return gmtModify;
}
public void setGmtModify(Long gmtModify) {
this.gmtModify = gmtModify;
}
@Override
public String toString() {
return "AlarmRuleDTO{" +
"id=" + id +
", name='" + name + '\'' +
", duration=" + duration +
", clusterId=" + clusterId +
", strategyExpression=" + strategyExpression +
", strategyFilterMap=" + strategyFilterMap +
", strategyActionMap=" + strategyActionMap +
", gmtModify=" + gmtModify +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto.alarm;
/**
* @author zengqiao
* @date 19/12/16
*/
public class AlarmStrategyActionDTO {
private String actionWay; // notification channel, e.g. kafka
private String actionTag;
public String getActionWay() {
return actionWay;
}
public void setActionWay(String actionWay) {
this.actionWay = actionWay;
}
public String getActionTag() {
return actionTag;
}
public void setActionTag(String actionTag) {
this.actionTag = actionTag;
}
@Override
public String toString() {
return "AlarmStrategyActionDTO{" +
"actionWay='" + actionWay + '\'' +
", actionTag='" + actionTag + '\'' +
'}';
}
public boolean legal() {
return actionWay != null && actionTag != null;
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto.alarm;
/**
* Strategy expression
* @author zengqiao
* @date 19/12/16
*/
public class AlarmStrategyExpressionDTO {
private String metric;
private String opt;
private Long threshold;
private Integer duration;
public String getMetric() {
return metric;
}
public void setMetric(String metric) {
this.metric = metric;
}
public String getOpt() {
return opt;
}
public void setOpt(String opt) {
this.opt = opt;
}
public Long getThreshold() {
return threshold;
}
public void setThreshold(Long threshold) {
this.threshold = threshold;
}
public Integer getDuration() {
return duration;
}
public void setDuration(Integer duration) {
this.duration = duration;
}
@Override
public String toString() {
return "AlarmStrategyExpressionDTO{" +
"metric='" + metric + '\'' +
", opt='" + opt + '\'' +
", threshold=" + threshold +
", duration=" + duration +
'}';
}
public boolean legal() {
return metric != null
&& opt != null
&& threshold != null
&& duration != null && duration > 0;
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto.alarm;
/**
* Alarm filter condition
* @author zengqiao
* @date 19/12/16
*/
public class AlarmStrategyFilterDTO {
private String key;
private String value;
public String getKey() {
return key;
}
public void setKey(String key) {
this.key = key;
}
public String getValue() {
return value;
}
public void setValue(String value) {
this.value = value;
}
@Override
public String toString() {
return "AlarmStrategyFilterDTO{" +
"key='" + key + '\'' +
", value='" + value + '\'' +
'}';
}
public boolean legal() {
return key != null && value != null;
}
}
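The `legal()` validators on the alarm strategy classes above all follow the same shape: reject the object when any required field is null (plus a positivity check on `duration` in the expression variant). The idiom collapses to a single boolean expression, sketched standalone here; the class and field names are illustrative stand-ins:

```java
// Sketch of the null-check validation idiom used by the legal() methods
// above, collapsed into a single boolean expression (same behavior).
// The class and field names are illustrative stand-ins.
public class LegalSketch {
    private final String key;
    private final String value;

    public LegalSketch(String key, String value) {
        this.key = key;
        this.value = value;
    }

    // Equivalent to: if (key == null || value == null) return false; return true;
    public boolean legal() {
        return key != null && value != null;
    }

    public static void main(String[] args) {
        if (!new LegalSketch("metric", "bytesIn").legal()) {
            throw new AssertionError("both fields set should be legal");
        }
        if (new LegalSketch(null, "bytesIn").legal()) {
            throw new AssertionError("null key should be illegal");
        }
        System.out.println("ok");
    }
}
```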
package com.xiaojukeji.kafka.manager.common.entity.dto.analysis;
import java.util.List;
/**
* @author zengqiao
* @date 19/12/29
*/
public class AnalysisBrokerDTO {
private Long clusterId;
private Integer brokerId;
private Long baseTime;
private Double bytesIn;
private Double bytesOut;
private Double messagesIn;
private Double totalFetchRequests;
private Double totalProduceRequests;
private List<AnalysisTopicDTO> topicAnalysisVOList;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public Integer getBrokerId() {
return brokerId;
}
public void setBrokerId(Integer brokerId) {
this.brokerId = brokerId;
}
public Long getBaseTime() {
return baseTime;
}
public void setBaseTime(Long baseTime) {
this.baseTime = baseTime;
}
public Double getBytesIn() {
return bytesIn;
}
public void setBytesIn(Double bytesIn) {
this.bytesIn = bytesIn;
}
public Double getBytesOut() {
return bytesOut;
}
public void setBytesOut(Double bytesOut) {
this.bytesOut = bytesOut;
}
public Double getMessagesIn() {
return messagesIn;
}
public void setMessagesIn(Double messagesIn) {
this.messagesIn = messagesIn;
}
public Double getTotalFetchRequests() {
return totalFetchRequests;
}
public void setTotalFetchRequests(Double totalFetchRequests) {
this.totalFetchRequests = totalFetchRequests;
}
public Double getTotalProduceRequests() {
return totalProduceRequests;
}
public void setTotalProduceRequests(Double totalProduceRequests) {
this.totalProduceRequests = totalProduceRequests;
}
public List<AnalysisTopicDTO> getTopicAnalysisVOList() {
return topicAnalysisVOList;
}
public void setTopicAnalysisVOList(List<AnalysisTopicDTO> topicAnalysisVOList) {
this.topicAnalysisVOList = topicAnalysisVOList;
}
@Override
public String toString() {
return "AnalysisBrokerDTO{" +
"clusterId=" + clusterId +
", brokerId=" + brokerId +
", baseTime=" + baseTime +
", bytesIn=" + bytesIn +
", bytesOut=" + bytesOut +
", messagesIn=" + messagesIn +
", totalFetchRequests=" + totalFetchRequests +
", totalProduceRequests=" + totalProduceRequests +
", topicAnalysisVOList=" + topicAnalysisVOList +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto.analysis;
/**
* @author zengqiao
* @date 19/12/29
*/
public class AnalysisTopicDTO {
private String topicName;
private Double bytesIn;
private Double bytesInRate;
private Double bytesOut;
private Double bytesOutRate;
private Double messagesIn;
private Double messagesInRate;
private Double totalFetchRequests;
private Double totalFetchRequestsRate;
private Double totalProduceRequests;
private Double totalProduceRequestsRate;
public String getTopicName() {
return topicName;
}
public void setTopicName(String topicName) {
this.topicName = topicName;
}
public Double getBytesIn() {
return bytesIn;
}
public void setBytesIn(Double bytesIn) {
this.bytesIn = bytesIn;
}
public Double getBytesInRate() {
return bytesInRate;
}
public void setBytesInRate(Double bytesInRate) {
this.bytesInRate = bytesInRate;
}
public Double getBytesOut() {
return bytesOut;
}
public void setBytesOut(Double bytesOut) {
this.bytesOut = bytesOut;
}
public Double getBytesOutRate() {
return bytesOutRate;
}
public void setBytesOutRate(Double bytesOutRate) {
this.bytesOutRate = bytesOutRate;
}
public Double getMessagesIn() {
return messagesIn;
}
public void setMessagesIn(Double messagesIn) {
this.messagesIn = messagesIn;
}
public Double getMessagesInRate() {
return messagesInRate;
}
public void setMessagesInRate(Double messagesInRate) {
this.messagesInRate = messagesInRate;
}
public Double getTotalFetchRequests() {
return totalFetchRequests;
}
public void setTotalFetchRequests(Double totalFetchRequests) {
this.totalFetchRequests = totalFetchRequests;
}
public Double getTotalFetchRequestsRate() {
return totalFetchRequestsRate;
}
public void setTotalFetchRequestsRate(Double totalFetchRequestsRate) {
this.totalFetchRequestsRate = totalFetchRequestsRate;
}
public Double getTotalProduceRequests() {
return totalProduceRequests;
}
public void setTotalProduceRequests(Double totalProduceRequests) {
this.totalProduceRequests = totalProduceRequests;
}
public Double getTotalProduceRequestsRate() {
return totalProduceRequestsRate;
}
public void setTotalProduceRequestsRate(Double totalProduceRequestsRate) {
this.totalProduceRequestsRate = totalProduceRequestsRate;
}
@Override
public String toString() {
return "AnalysisTopicDTO{" +
"topicName='" + topicName + '\'' +
", bytesIn=" + bytesIn +
", bytesInRate=" + bytesInRate +
", bytesOut=" + bytesOut +
", bytesOutRate=" + bytesOutRate +
", messagesIn=" + messagesIn +
", messagesInRate=" + messagesInRate +
", totalFetchRequests=" + totalFetchRequests +
", totalFetchRequestsRate=" + totalFetchRequestsRate +
", totalProduceRequests=" + totalProduceRequests +
", totalProduceRequestsRate=" + totalProduceRequestsRate +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto.consumer;
/**
* @author zengqiao
* @date 20/1/9
*/
public class ConsumeDetailDTO {
private Integer partitionId;
private Long offset;
private Long consumeOffset;
private String consumerId;
public Integer getPartitionId() {
return partitionId;
}
public void setPartitionId(Integer partitionId) {
this.partitionId = partitionId;
}
public Long getOffset() {
return offset;
}
public void setOffset(Long offset) {
this.offset = offset;
}
public Long getConsumeOffset() {
return consumeOffset;
}
public void setConsumeOffset(Long consumeOffset) {
this.consumeOffset = consumeOffset;
}
public String getConsumerId() {
return consumerId;
}
public void setConsumerId(String consumerId) {
this.consumerId = consumerId;
}
@Override
public String toString() {
return "ConsumeDetailDTO{" +
"partitionId=" + partitionId +
", offset=" + offset +
", consumeOffset=" + consumeOffset +
", consumerId='" + consumerId + '\'' +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto.consumer;
import com.xiaojukeji.kafka.manager.common.entity.zookeeper.PartitionState;
import java.util.List;
import java.util.Map;
/**
* Consumer entity
* @author tukun
* @date 2015/11/12
*/
public class ConsumerDTO {
/**
* Consumer group name
*/
private String consumerGroup;
/**
* Consumption type, usually static
*/
private String location;
/**
* Partition state list for each subscribed topic
*/
private Map<String, List<PartitionState>> topicPartitionMap;
public String getConsumerGroup() {
return consumerGroup;
}
public void setConsumerGroup(String consumerGroup) {
this.consumerGroup = consumerGroup;
}
public String getLocation() {
return location;
}
public void setLocation(String location) {
this.location = location;
}
public Map<String, List<PartitionState>> getTopicPartitionMap() {
return topicPartitionMap;
}
public void setTopicPartitionMap(Map<String, List<PartitionState>> topicPartitionMap) {
this.topicPartitionMap = topicPartitionMap;
}
@Override
public String toString() {
return "ConsumerDTO{" +
"consumerGroup='" + consumerGroup + '\'' +
", location='" + location + '\'' +
", topicPartitionMap=" + topicPartitionMap +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.dto.consumer;
import com.xiaojukeji.kafka.manager.common.constant.OffsetStoreLocation;
import java.util.Objects;
/**
* Consumer group information
* @author zengqiao
* @date 19/4/18
*/
public class ConsumerGroupDTO {
private Long clusterId;
private String consumerGroup;
private OffsetStoreLocation offsetStoreLocation;
public ConsumerGroupDTO(Long clusterId, String consumerGroup, OffsetStoreLocation offsetStoreLocation) {
this.clusterId = clusterId;
this.consumerGroup = consumerGroup;
this.offsetStoreLocation = offsetStoreLocation;
}
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public String getConsumerGroup() {
return consumerGroup;
}
public void setConsumerGroup(String consumerGroup) {
this.consumerGroup = consumerGroup;
}
public OffsetStoreLocation getOffsetStoreLocation() {
return offsetStoreLocation;
}
public void setOffsetStoreLocation(OffsetStoreLocation offsetStoreLocation) {
this.offsetStoreLocation = offsetStoreLocation;
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
ConsumerGroupDTO that = (ConsumerGroupDTO) o;
return Objects.equals(clusterId, that.clusterId)
&& Objects.equals(consumerGroup, that.consumerGroup)
&& offsetStoreLocation == that.offsetStoreLocation;
}
@Override
public int hashCode() {
return Objects.hash(clusterId, consumerGroup, offsetStoreLocation);
}
@Override
public String toString() {
return "ConsumerGroupDTO{" +
"clusterId=" + clusterId +
", consumerGroup='" + consumerGroup + '\'' +
", offsetStoreLocation=" + offsetStoreLocation +
'}';
}
}
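`ConsumerGroupDTO` overrides `equals` and `hashCode` over the same three fields, so instances can serve as deduplicating keys in hash-based collections. A standalone sketch of that pairing; `GroupKey` is an illustrative stand-in reduced to two fields, not a project class:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Sketch of the equals/hashCode contract behind ConsumerGroupDTO:
// hashCode must be computed over the same fields equals compares,
// so equal instances collapse to one entry in a HashSet.
public class GroupKey {
    private final Long clusterId;
    private final String consumerGroup;

    public GroupKey(Long clusterId, String consumerGroup) {
        this.clusterId = clusterId;
        this.consumerGroup = consumerGroup;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        GroupKey that = (GroupKey) o;
        // Objects.equals is null-safe, unlike calling equals on the field
        return Objects.equals(clusterId, that.clusterId)
                && Objects.equals(consumerGroup, that.consumerGroup);
    }

    @Override
    public int hashCode() {
        return Objects.hash(clusterId, consumerGroup);
    }

    public static void main(String[] args) {
        Set<GroupKey> groups = new HashSet<>();
        groups.add(new GroupKey(1L, "group-a"));
        groups.add(new GroupKey(1L, "group-a"));   // duplicate, collapses
        groups.add(new GroupKey(2L, "group-a"));
        if (groups.size() != 2) {
            throw new AssertionError("expected duplicates to collapse");
        }
        System.out.println("ok");
    }
}
```

Keeping the two methods in sync matters: if `hashCode` used fields `equals` ignores (or vice versa), equal DTOs could land in different hash buckets and deduplication would silently fail.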
package com.xiaojukeji.kafka.manager.common.entity.metrics;
import com.xiaojukeji.kafka.manager.common.constant.MetricsType;
import com.xiaojukeji.kafka.manager.common.entity.annotations.FieldSelector;
import com.xiaojukeji.kafka.manager.common.entity.po.BaseEntryDO;
/**
* @author zengqiao
* @date 19/11/25
*/
public class BaseMetrics extends BaseEntryDO {
/**
* Bytes in per second: one-minute rate, mean rate, five-minute rate, fifteen-minute rate
*/
@FieldSelector(types = {
MetricsType.BROKER_FLOW_DETAIL,
MetricsType.BROKER_TO_DB_METRICS,
MetricsType.BROKER_REAL_TIME_METRICS,
MetricsType.BROKER_OVER_VIEW_METRICS,
MetricsType.BROKER_ANALYSIS_METRICS,
MetricsType.BROKER_TOPIC_ANALYSIS_METRICS,
MetricsType.TOPIC_FLOW_DETAIL,
MetricsType.TOPIC_FLOW_OVERVIEW,
MetricsType.TOPIC_METRICS_TO_DB
})
protected Double bytesInPerSec = 0.0;
protected Double bytesInPerSecMeanRate = 0.0;
protected Double bytesInPerSecFiveMinuteRate = 0.0;
protected Double bytesInPerSecFifteenMinuteRate = 0.0;
/**
* Bytes out per second: one-minute rate, mean rate, five-minute rate, fifteen-minute rate
*/
@FieldSelector(types = {
MetricsType.BROKER_FLOW_DETAIL,
MetricsType.BROKER_TO_DB_METRICS,
MetricsType.BROKER_REAL_TIME_METRICS,
MetricsType.BROKER_OVER_VIEW_METRICS,
MetricsType.BROKER_ANALYSIS_METRICS,
MetricsType.BROKER_TOPIC_ANALYSIS_METRICS,
MetricsType.TOPIC_FLOW_DETAIL,
MetricsType.TOPIC_METRICS_TO_DB
})
protected Double bytesOutPerSec = 0.0;
protected Double bytesOutPerSecMeanRate = 0.0;
protected Double bytesOutPerSecFiveMinuteRate = 0.0;
protected Double bytesOutPerSecFifteenMinuteRate = 0.0;
/**
* Messages in per second: one-minute rate, mean rate, five-minute rate, fifteen-minute rate
*/
@FieldSelector(types = {
MetricsType.BROKER_FLOW_DETAIL,
MetricsType.BROKER_TO_DB_METRICS,
MetricsType.BROKER_REAL_TIME_METRICS,
MetricsType.BROKER_ANALYSIS_METRICS,
MetricsType.BROKER_TOPIC_ANALYSIS_METRICS,
MetricsType.TOPIC_FLOW_DETAIL,
MetricsType.TOPIC_METRICS_TO_DB
})
protected Double messagesInPerSec = 0.0;
protected Double messagesInPerSecMeanRate = 0.0;
protected Double messagesInPerSecFiveMinuteRate = 0.0;
protected Double messagesInPerSecFifteenMinuteRate = 0.0;
/**
* Bytes rejected per second: one-minute rate, mean rate, five-minute rate, fifteen-minute rate
*/
@FieldSelector(types = {
MetricsType.BROKER_FLOW_DETAIL,
MetricsType.BROKER_TO_DB_METRICS,
MetricsType.BROKER_REAL_TIME_METRICS,
MetricsType.TOPIC_FLOW_DETAIL,
MetricsType.TOPIC_METRICS_TO_DB
})
protected Double bytesRejectedPerSec = 0.0;
protected Double bytesRejectedPerSecMeanRate = 0.0;
protected Double bytesRejectedPerSecFiveMinuteRate = 0.0;
protected Double bytesRejectedPerSecFifteenMinuteRate = 0.0;
/**
* Failed produce requests per second: one-minute rate, mean rate, five-minute rate, fifteen-minute rate
*/
@FieldSelector(types = {
MetricsType.BROKER_FLOW_DETAIL,
MetricsType.BROKER_TO_DB_METRICS,
MetricsType.BROKER_REAL_TIME_METRICS,
MetricsType.TOPIC_FLOW_DETAIL
})
protected Double failProduceRequestPerSec = 0.0;
protected Double failProduceRequestPerSecMeanRate = 0.0;
protected Double failProduceRequestPerSecFiveMinuteRate = 0.0;
protected Double failProduceRequestPerSecFifteenMinuteRate = 0.0;
/**
* Failed fetch requests per second: one-minute rate, mean rate, five-minute rate, fifteen-minute rate
*/
@FieldSelector(types = {
MetricsType.BROKER_FLOW_DETAIL,
MetricsType.BROKER_TO_DB_METRICS,
MetricsType.BROKER_REAL_TIME_METRICS,
MetricsType.TOPIC_FLOW_DETAIL
})
protected Double failFetchRequestPerSec = 0.0;
protected Double failFetchRequestPerSecMeanRate = 0.0;
protected Double failFetchRequestPerSecFiveMinuteRate = 0.0;
protected Double failFetchRequestPerSecFifteenMinuteRate = 0.0;
/**
* Total produce requests per second: one-minute rate, mean rate, five-minute rate, fifteen-minute rate
*/
@FieldSelector(types = {
MetricsType.BROKER_FLOW_DETAIL,
MetricsType.BROKER_ANALYSIS_METRICS,
MetricsType.BROKER_TOPIC_ANALYSIS_METRICS,
MetricsType.TOPIC_FLOW_DETAIL,
MetricsType.TOPIC_METRICS_TO_DB,
MetricsType.TOPIC_FLOW_OVERVIEW
})
protected Double totalProduceRequestsPerSec = 0.0;
protected Double totalProduceRequestsPerSecMeanRate = 0.0;
protected Double totalProduceRequestsPerSecFiveMinuteRate = 0.0;
protected Double totalProduceRequestsPerSecFifteenMinuteRate = 0.0;
/**
* Total fetch requests per second: one-minute rate, mean rate, five-minute rate, fifteen-minute rate
*/
@FieldSelector(types = {
MetricsType.BROKER_FLOW_DETAIL,
MetricsType.BROKER_ANALYSIS_METRICS,
MetricsType.BROKER_TOPIC_ANALYSIS_METRICS,
MetricsType.TOPIC_FLOW_DETAIL
})
protected Double totalFetchRequestsPerSec = 0.0;
protected Double totalFetchRequestsPerSecMeanRate = 0.0;
protected Double totalFetchRequestsPerSecFiveMinuteRate = 0.0;
protected Double totalFetchRequestsPerSecFifteenMinuteRate = 0.0;
public Double getBytesInPerSec() {
return bytesInPerSec;
}
public void setBytesInPerSec(Double bytesInPerSec) {
this.bytesInPerSec = bytesInPerSec;
}
public Double getBytesInPerSecMeanRate() {
return bytesInPerSecMeanRate;
}
public void setBytesInPerSecMeanRate(Double bytesInPerSecMeanRate) {
this.bytesInPerSecMeanRate = bytesInPerSecMeanRate;
}
public Double getBytesInPerSecFiveMinuteRate() {
return bytesInPerSecFiveMinuteRate;
}
public void setBytesInPerSecFiveMinuteRate(Double bytesInPerSecFiveMinuteRate) {
this.bytesInPerSecFiveMinuteRate = bytesInPerSecFiveMinuteRate;
}
public Double getBytesInPerSecFifteenMinuteRate() {
return bytesInPerSecFifteenMinuteRate;
}
public void setBytesInPerSecFifteenMinuteRate(Double bytesInPerSecFifteenMinuteRate) {
this.bytesInPerSecFifteenMinuteRate = bytesInPerSecFifteenMinuteRate;
}
public Double getBytesOutPerSec() {
return bytesOutPerSec;
}
public void setBytesOutPerSec(Double bytesOutPerSec) {
this.bytesOutPerSec = bytesOutPerSec;
}
public Double getBytesOutPerSecMeanRate() {
return bytesOutPerSecMeanRate;
}
public void setBytesOutPerSecMeanRate(Double bytesOutPerSecMeanRate) {
this.bytesOutPerSecMeanRate = bytesOutPerSecMeanRate;
}
public Double getBytesOutPerSecFiveMinuteRate() {
return bytesOutPerSecFiveMinuteRate;
}
public void setBytesOutPerSecFiveMinuteRate(Double bytesOutPerSecFiveMinuteRate) {
this.bytesOutPerSecFiveMinuteRate = bytesOutPerSecFiveMinuteRate;
}
public Double getBytesOutPerSecFifteenMinuteRate() {
return bytesOutPerSecFifteenMinuteRate;
}
public void setBytesOutPerSecFifteenMinuteRate(Double bytesOutPerSecFifteenMinuteRate) {
this.bytesOutPerSecFifteenMinuteRate = bytesOutPerSecFifteenMinuteRate;
}
public Double getMessagesInPerSec() {
return messagesInPerSec;
}
public void setMessagesInPerSec(Double messagesInPerSec) {
this.messagesInPerSec = messagesInPerSec;
}
public Double getMessagesInPerSecMeanRate() {
return messagesInPerSecMeanRate;
}
public void setMessagesInPerSecMeanRate(Double messagesInPerSecMeanRate) {
this.messagesInPerSecMeanRate = messagesInPerSecMeanRate;
}
public Double getMessagesInPerSecFiveMinuteRate() {
return messagesInPerSecFiveMinuteRate;
}
public void setMessagesInPerSecFiveMinuteRate(Double messagesInPerSecFiveMinuteRate) {
this.messagesInPerSecFiveMinuteRate = messagesInPerSecFiveMinuteRate;
}
public Double getMessagesInPerSecFifteenMinuteRate() {
return messagesInPerSecFifteenMinuteRate;
}
public void setMessagesInPerSecFifteenMinuteRate(Double messagesInPerSecFifteenMinuteRate) {
this.messagesInPerSecFifteenMinuteRate = messagesInPerSecFifteenMinuteRate;
}
public Double getBytesRejectedPerSec() {
return bytesRejectedPerSec;
}
public void setBytesRejectedPerSec(Double bytesRejectedPerSec) {
this.bytesRejectedPerSec = bytesRejectedPerSec;
}
public Double getBytesRejectedPerSecMeanRate() {
return bytesRejectedPerSecMeanRate;
}
public void setBytesRejectedPerSecMeanRate(Double bytesRejectedPerSecMeanRate) {
this.bytesRejectedPerSecMeanRate = bytesRejectedPerSecMeanRate;
}
public Double getBytesRejectedPerSecFiveMinuteRate() {
return bytesRejectedPerSecFiveMinuteRate;
}
public void setBytesRejectedPerSecFiveMinuteRate(Double bytesRejectedPerSecFiveMinuteRate) {
this.bytesRejectedPerSecFiveMinuteRate = bytesRejectedPerSecFiveMinuteRate;
}
public Double getBytesRejectedPerSecFifteenMinuteRate() {
return bytesRejectedPerSecFifteenMinuteRate;
}
public void setBytesRejectedPerSecFifteenMinuteRate(Double bytesRejectedPerSecFifteenMinuteRate) {
this.bytesRejectedPerSecFifteenMinuteRate = bytesRejectedPerSecFifteenMinuteRate;
}
public Double getFailProduceRequestPerSec() {
return failProduceRequestPerSec;
}
public void setFailProduceRequestPerSec(Double failProduceRequestPerSec) {
this.failProduceRequestPerSec = failProduceRequestPerSec;
}
public Double getFailProduceRequestPerSecMeanRate() {
return failProduceRequestPerSecMeanRate;
}
public void setFailProduceRequestPerSecMeanRate(Double failProduceRequestPerSecMeanRate) {
this.failProduceRequestPerSecMeanRate = failProduceRequestPerSecMeanRate;
}
public Double getFailProduceRequestPerSecFiveMinuteRate() {
return failProduceRequestPerSecFiveMinuteRate;
}
public void setFailProduceRequestPerSecFiveMinuteRate(Double failProduceRequestPerSecFiveMinuteRate) {
this.failProduceRequestPerSecFiveMinuteRate = failProduceRequestPerSecFiveMinuteRate;
}
public Double getFailProduceRequestPerSecFifteenMinuteRate() {
return failProduceRequestPerSecFifteenMinuteRate;
}
public void setFailProduceRequestPerSecFifteenMinuteRate(Double failProduceRequestPerSecFifteenMinuteRate) {
this.failProduceRequestPerSecFifteenMinuteRate = failProduceRequestPerSecFifteenMinuteRate;
}
public Double getFailFetchRequestPerSec() {
return failFetchRequestPerSec;
}
public void setFailFetchRequestPerSec(Double failFetchRequestPerSec) {
this.failFetchRequestPerSec = failFetchRequestPerSec;
}
public Double getFailFetchRequestPerSecMeanRate() {
return failFetchRequestPerSecMeanRate;
}
public void setFailFetchRequestPerSecMeanRate(Double failFetchRequestPerSecMeanRate) {
this.failFetchRequestPerSecMeanRate = failFetchRequestPerSecMeanRate;
}
public Double getFailFetchRequestPerSecFiveMinuteRate() {
return failFetchRequestPerSecFiveMinuteRate;
}
public void setFailFetchRequestPerSecFiveMinuteRate(Double failFetchRequestPerSecFiveMinuteRate) {
this.failFetchRequestPerSecFiveMinuteRate = failFetchRequestPerSecFiveMinuteRate;
}
public Double getFailFetchRequestPerSecFifteenMinuteRate() {
return failFetchRequestPerSecFifteenMinuteRate;
}
public void setFailFetchRequestPerSecFifteenMinuteRate(Double failFetchRequestPerSecFifteenMinuteRate) {
this.failFetchRequestPerSecFifteenMinuteRate = failFetchRequestPerSecFifteenMinuteRate;
}
public Double getTotalProduceRequestsPerSec() {
return totalProduceRequestsPerSec;
}
public void setTotalProduceRequestsPerSec(Double totalProduceRequestsPerSec) {
this.totalProduceRequestsPerSec = totalProduceRequestsPerSec;
}
public Double getTotalProduceRequestsPerSecMeanRate() {
return totalProduceRequestsPerSecMeanRate;
}
public void setTotalProduceRequestsPerSecMeanRate(Double totalProduceRequestsPerSecMeanRate) {
this.totalProduceRequestsPerSecMeanRate = totalProduceRequestsPerSecMeanRate;
}
public Double getTotalProduceRequestsPerSecFiveMinuteRate() {
return totalProduceRequestsPerSecFiveMinuteRate;
}
public void setTotalProduceRequestsPerSecFiveMinuteRate(Double totalProduceRequestsPerSecFiveMinuteRate) {
this.totalProduceRequestsPerSecFiveMinuteRate = totalProduceRequestsPerSecFiveMinuteRate;
}
public Double getTotalProduceRequestsPerSecFifteenMinuteRate() {
return totalProduceRequestsPerSecFifteenMinuteRate;
}
public void setTotalProduceRequestsPerSecFifteenMinuteRate(Double totalProduceRequestsPerSecFifteenMinuteRate) {
this.totalProduceRequestsPerSecFifteenMinuteRate = totalProduceRequestsPerSecFifteenMinuteRate;
}
public Double getTotalFetchRequestsPerSec() {
return totalFetchRequestsPerSec;
}
public void setTotalFetchRequestsPerSec(Double totalFetchRequestsPerSec) {
this.totalFetchRequestsPerSec = totalFetchRequestsPerSec;
}
public Double getTotalFetchRequestsPerSecMeanRate() {
return totalFetchRequestsPerSecMeanRate;
}
public void setTotalFetchRequestsPerSecMeanRate(Double totalFetchRequestsPerSecMeanRate) {
this.totalFetchRequestsPerSecMeanRate = totalFetchRequestsPerSecMeanRate;
}
public Double getTotalFetchRequestsPerSecFiveMinuteRate() {
return totalFetchRequestsPerSecFiveMinuteRate;
}
public void setTotalFetchRequestsPerSecFiveMinuteRate(Double totalFetchRequestsPerSecFiveMinuteRate) {
this.totalFetchRequestsPerSecFiveMinuteRate = totalFetchRequestsPerSecFiveMinuteRate;
}
public Double getTotalFetchRequestsPerSecFifteenMinuteRate() {
return totalFetchRequestsPerSecFifteenMinuteRate;
}
public void setTotalFetchRequestsPerSecFifteenMinuteRate(Double totalFetchRequestsPerSecFifteenMinuteRate) {
this.totalFetchRequestsPerSecFifteenMinuteRate = totalFetchRequestsPerSecFifteenMinuteRate;
}
}
package com.xiaojukeji.kafka.manager.common.entity.metrics;
import com.xiaojukeji.kafka.manager.common.constant.Constant;
import com.xiaojukeji.kafka.manager.common.constant.MetricsType;
import com.xiaojukeji.kafka.manager.common.entity.annotations.FieldSelector;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;
/**
 * Broker metrics that need to be pulled periodically
* @author tukun
* @date 2015/11/6.
*/
public class BrokerMetrics extends BaseMetrics {
/**
 * Cluster ID
*/
private Long clusterId;
/**
 * Broker ID
*/
private Integer brokerId;
/**
 * Produce requests per second: one-minute rate, mean rate, five-minute rate, and fifteen-minute rate
*/
@FieldSelector(types = {
MetricsType.BROKER_FLOW_DETAIL,
MetricsType.BROKER_TO_DB_METRICS,
MetricsType.BROKER_REAL_TIME_METRICS
})
private Double produceRequestPerSec = 0.0;
private Double produceRequestPerSecMeanRate = 0.0;
private Double produceRequestPerSecFiveMinuteRate = 0.0;
private Double produceRequestPerSecFifteenMinuteRate = 0.0;
/**
 * Fetch requests per second: one-minute rate, mean rate, five-minute rate, and fifteen-minute rate
*/
@FieldSelector(types = {
MetricsType.BROKER_FLOW_DETAIL,
MetricsType.BROKER_TO_DB_METRICS,
MetricsType.BROKER_REAL_TIME_METRICS
})
private Double fetchConsumerRequestPerSec = 0.0;
private Double fetchConsumerRequestPerSecMeanRate = 0.0;
private Double fetchConsumerRequestPerSecFiveMinuteRate = 0.0;
private Double fetchConsumerRequestPerSecFifteenMinuteRate = 0.0;
/**
 * Number of partitions on the broker
*/
@FieldSelector(types = {MetricsType.BROKER_OVER_ALL_METRICS, 5})
private int partitionCount;
/**
 * Number of under-replicated partitions on the broker
*/
@FieldSelector(types = {MetricsType.BROKER_OVER_ALL_METRICS})
private int underReplicatedPartitions;
/**
 * Number of leader partitions on the broker
*/
@FieldSelector(types = {MetricsType.BROKER_OVER_ALL_METRICS, 5})
private int leaderCount;
/**
 * Average request handler idle percent of the broker
*/
@FieldSelector(types = {MetricsType.BROKER_TO_DB_METRICS})
private Double requestHandlerAvgIdlePercent = 0.0;
/**
 * Average network processor idle percent
*/
@FieldSelector(types = {MetricsType.BROKER_TO_DB_METRICS})
private Double networkProcessorAvgIdlePercent = 0.0;
/**
 * Request queue size
*/
@FieldSelector(types = {MetricsType.BROKER_TO_DB_METRICS})
private Integer requestQueueSize = 0;
/**
 * Response queue size
*/
@FieldSelector(types = {MetricsType.BROKER_TO_DB_METRICS})
private Integer responseQueueSize = 0;
/**
 * Log flush rate and time (ms)
*/
@FieldSelector(types = {MetricsType.BROKER_TO_DB_METRICS})
private Double logFlushRateAndTimeMs = 0.0;
/**
 * Produce request total time: mean
*/
@FieldSelector(types = {MetricsType.BROKER_TO_DB_METRICS})
private Double totalTimeProduceMean = 0.0;
/**
 * Produce request total time: 99th percentile
*/
@FieldSelector(types = {MetricsType.BROKER_TO_DB_METRICS})
private Double totalTimeProduce99Th = 0.0;
/**
 * Fetch-consumer request total time: mean
*/
@FieldSelector(types = {MetricsType.BROKER_TO_DB_METRICS})
private Double totalTimeFetchConsumerMean = 0.0;
/**
 * Fetch-consumer request total time: 99th percentile
*/
@FieldSelector(types = {MetricsType.BROKER_TO_DB_METRICS})
private Double totalTimeFetchConsumer99Th = 0.0;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public Integer getBrokerId() {
return brokerId;
}
public void setBrokerId(Integer brokerId) {
this.brokerId = brokerId;
}
public Double getProduceRequestPerSec() {
return produceRequestPerSec;
}
public void setProduceRequestPerSec(Double produceRequestPerSec) {
this.produceRequestPerSec = produceRequestPerSec;
}
public Double getProduceRequestPerSecMeanRate() {
return produceRequestPerSecMeanRate;
}
public void setProduceRequestPerSecMeanRate(Double produceRequestPerSecMeanRate) {
this.produceRequestPerSecMeanRate = produceRequestPerSecMeanRate;
}
public Double getProduceRequestPerSecFiveMinuteRate() {
return produceRequestPerSecFiveMinuteRate;
}
public void setProduceRequestPerSecFiveMinuteRate(Double produceRequestPerSecFiveMinuteRate) {
this.produceRequestPerSecFiveMinuteRate = produceRequestPerSecFiveMinuteRate;
}
public Double getProduceRequestPerSecFifteenMinuteRate() {
return produceRequestPerSecFifteenMinuteRate;
}
public void setProduceRequestPerSecFifteenMinuteRate(Double produceRequestPerSecFifteenMinuteRate) {
this.produceRequestPerSecFifteenMinuteRate = produceRequestPerSecFifteenMinuteRate;
}
public Double getFetchConsumerRequestPerSec() {
return fetchConsumerRequestPerSec;
}
public void setFetchConsumerRequestPerSec(Double fetchConsumerRequestPerSec) {
this.fetchConsumerRequestPerSec = fetchConsumerRequestPerSec;
}
public Double getFetchConsumerRequestPerSecMeanRate() {
return fetchConsumerRequestPerSecMeanRate;
}
public void setFetchConsumerRequestPerSecMeanRate(Double fetchConsumerRequestPerSecMeanRate) {
this.fetchConsumerRequestPerSecMeanRate = fetchConsumerRequestPerSecMeanRate;
}
public Double getFetchConsumerRequestPerSecFiveMinuteRate() {
return fetchConsumerRequestPerSecFiveMinuteRate;
}
public void setFetchConsumerRequestPerSecFiveMinuteRate(Double fetchConsumerRequestPerSecFiveMinuteRate) {
this.fetchConsumerRequestPerSecFiveMinuteRate = fetchConsumerRequestPerSecFiveMinuteRate;
}
public Double getFetchConsumerRequestPerSecFifteenMinuteRate() {
return fetchConsumerRequestPerSecFifteenMinuteRate;
}
public void setFetchConsumerRequestPerSecFifteenMinuteRate(Double fetchConsumerRequestPerSecFifteenMinuteRate) {
this.fetchConsumerRequestPerSecFifteenMinuteRate = fetchConsumerRequestPerSecFifteenMinuteRate;
}
public int getPartitionCount() {
return partitionCount;
}
public void setPartitionCount(int partitionCount) {
this.partitionCount = partitionCount;
}
public int getUnderReplicatedPartitions() {
return underReplicatedPartitions;
}
public void setUnderReplicatedPartitions(int underReplicatedPartitions) {
this.underReplicatedPartitions = underReplicatedPartitions;
}
public int getLeaderCount() {
return leaderCount;
}
public void setLeaderCount(int leaderCount) {
this.leaderCount = leaderCount;
}
public Double getRequestHandlerAvgIdlePercent() {
return requestHandlerAvgIdlePercent;
}
public void setRequestHandlerAvgIdlePercent(Double requestHandlerAvgIdlePercent) {
this.requestHandlerAvgIdlePercent = requestHandlerAvgIdlePercent;
}
public Double getNetworkProcessorAvgIdlePercent() {
return networkProcessorAvgIdlePercent;
}
public void setNetworkProcessorAvgIdlePercent(Double networkProcessorAvgIdlePercent) {
this.networkProcessorAvgIdlePercent = networkProcessorAvgIdlePercent;
}
public Integer getRequestQueueSize() {
return requestQueueSize;
}
public void setRequestQueueSize(Integer requestQueueSize) {
this.requestQueueSize = requestQueueSize;
}
public Integer getResponseQueueSize() {
return responseQueueSize;
}
public void setResponseQueueSize(Integer responseQueueSize) {
this.responseQueueSize = responseQueueSize;
}
public Double getLogFlushRateAndTimeMs() {
return logFlushRateAndTimeMs;
}
public void setLogFlushRateAndTimeMs(Double logFlushRateAndTimeMs) {
this.logFlushRateAndTimeMs = logFlushRateAndTimeMs;
}
public Double getTotalTimeProduceMean() {
return totalTimeProduceMean;
}
public void setTotalTimeProduceMean(Double totalTimeProduceMean) {
this.totalTimeProduceMean = totalTimeProduceMean;
}
public Double getTotalTimeProduce99Th() {
return totalTimeProduce99Th;
}
public void setTotalTimeProduce99Th(Double totalTimeProduce99Th) {
this.totalTimeProduce99Th = totalTimeProduce99Th;
}
public Double getTotalTimeFetchConsumerMean() {
return totalTimeFetchConsumerMean;
}
public void setTotalTimeFetchConsumerMean(Double totalTimeFetchConsumerMean) {
this.totalTimeFetchConsumerMean = totalTimeFetchConsumerMean;
}
public Double getTotalTimeFetchConsumer99Th() {
return totalTimeFetchConsumer99Th;
}
public void setTotalTimeFetchConsumer99Th(Double totalTimeFetchConsumer99Th) {
this.totalTimeFetchConsumer99Th = totalTimeFetchConsumer99Th;
}
private static void initialization(Field[] fields) {
for (Field field : fields) {
FieldSelector annotation = field.getAnnotation(FieldSelector.class);
if (annotation == null) {
continue;
}
String fieldName;
if ("".equals(annotation.name())) {
fieldName = field.getName().substring(0, 1).toUpperCase() + field.getName().substring(1);
} else {
fieldName = annotation.name();
}
for (int type : annotation.types()) {
List<String> list = Constant.BROKER_METRICS_TYPE_MBEAN_NAME_MAP.getOrDefault(type, new ArrayList<>());
list.add(fieldName);
Constant.BROKER_METRICS_TYPE_MBEAN_NAME_MAP.put(type, list);
}
}
}
public static List<String> getFieldNameList(int metricsType){
synchronized (BrokerMetrics.class) {
if (Constant.BROKER_METRICS_TYPE_MBEAN_NAME_MAP.isEmpty()) {
initialization(BrokerMetrics.class.getDeclaredFields());
initialization(BaseMetrics.class.getDeclaredFields());
}
}
return Constant.BROKER_METRICS_TYPE_MBEAN_NAME_MAP.getOrDefault(metricsType, new ArrayList<>());
}
}
package com.xiaojukeji.kafka.manager.common.entity.metrics;
import com.xiaojukeji.kafka.manager.common.constant.Constant;
import com.xiaojukeji.kafka.manager.common.entity.annotations.FieldSelector;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;
public class TopicMetrics extends BaseMetrics {
/**
 * Cluster ID
*/
private Long clusterId;
/**
 * Topic name
*/
private String topicName;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public String getTopicName() {
return topicName;
}
public void setTopicName(String topicName) {
this.topicName = topicName;
}
private static void initialization(Field[] fields) {
for (Field field : fields) {
FieldSelector annotation = field.getAnnotation(FieldSelector.class);
if (annotation == null) {
continue;
}
String fieldName;
if ("".equals(annotation.name())) {
String name = field.getName();
fieldName = name.substring(0, 1).toUpperCase() + name.substring(1);
} else {
fieldName = annotation.name();
}
for (int type : annotation.types()) {
List<String> list = Constant.TOPIC_METRICS_TYPE_MBEAN_NAME_MAP.getOrDefault(type, new ArrayList<>());
list.add(fieldName);
Constant.TOPIC_METRICS_TYPE_MBEAN_NAME_MAP.put(type, list);
}
}
}
public static List<String> getFieldNameList(int type){
synchronized (TopicMetrics.class) {
if (Constant.TOPIC_METRICS_TYPE_MBEAN_NAME_MAP.isEmpty()) {
initialization(TopicMetrics.class.getDeclaredFields());
initialization(BaseMetrics.class.getDeclaredFields());
}
}
return Constant.TOPIC_METRICS_TYPE_MBEAN_NAME_MAP.getOrDefault(type, new ArrayList<>());
}
}
package com.xiaojukeji.kafka.manager.common.entity.po;
/**
* @author zengqiao
* @date 19/5/3
*/
public class AccountDO extends BaseDO {
private String username;
private String password;
private Integer role;
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public Integer getRole() {
return role;
}
public void setRole(Integer role) {
this.role = role;
}
@Override
public String toString() {
return "AccountDO{" +
"username='" + username + '\'' +
", password='" + password + '\'' +
", role=" + role +
", id=" + id +
", status=" + status +
", gmtCreate=" + gmtCreate +
", gmtModify=" + gmtModify +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.po;
public class AlarmRuleDO extends BaseDO {
private String alarmName;
private String strategyExpressions;
private String strategyFilters;
private String strategyActions;
private String principals;
public String getAlarmName() {
return alarmName;
}
public void setAlarmName(String alarmName) {
this.alarmName = alarmName;
}
public String getStrategyExpressions() {
return strategyExpressions;
}
public void setStrategyExpressions(String strategyExpressions) {
this.strategyExpressions = strategyExpressions;
}
public String getStrategyFilters() {
return strategyFilters;
}
public void setStrategyFilters(String strategyFilters) {
this.strategyFilters = strategyFilters;
}
public String getStrategyActions() {
return strategyActions;
}
public void setStrategyActions(String strategyActions) {
this.strategyActions = strategyActions;
}
public String getPrincipals() {
return principals;
}
public void setPrincipals(String principals) {
this.principals = principals;
}
@Override
public String toString() {
return "AlarmRuleDO{" +
"alarmName='" + alarmName + '\'' +
", strategyExpressions='" + strategyExpressions + '\'' +
", strategyFilters='" + strategyFilters + '\'' +
", strategyActions='" + strategyActions + '\'' +
", principals='" + principals + '\'' +
", id=" + id +
", status=" + status +
", gmtCreate=" + gmtCreate +
", gmtModify=" + gmtModify +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.po;
import java.util.Date;
/**
* @author arthur
* @date 2017/7/25.
*/
public class BaseDO {
protected Long id;
protected Integer status;
protected Date gmtCreate;
protected Date gmtModify;
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public Integer getStatus() {
return status;
}
public void setStatus(Integer status) {
this.status = status;
}
public Date getGmtCreate() {
return gmtCreate;
}
public void setGmtCreate(Date gmtCreate) {
this.gmtCreate = gmtCreate;
}
public Date getGmtModify() {
return gmtModify;
}
public void setGmtModify(Date gmtModify) {
this.gmtModify = gmtModify;
}
@Override
public String toString() {
return "BaseDO{" +
"id=" + id +
", status=" + status +
", gmtCreate=" + gmtCreate +
", gmtModify=" + gmtModify +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.po;
import java.util.Date;
/**
* @author zengqiao
* @date 19/11/25
*/
public abstract class BaseEntryDO {
protected Long id;
protected Date gmtCreate;
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public Date getGmtCreate() {
return gmtCreate;
}
public void setGmtCreate(Date gmtCreate) {
this.gmtCreate = gmtCreate;
}
@Override
public String toString() {
return "BaseEntryDO{" +
"id=" + id +
", gmtCreate=" + gmtCreate +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.po;
/**
* @author zengqiao
* @date 19/4/3
*/
public class BrokerDO extends BaseDO {
private Long clusterId;
private Integer brokerId;
private String host;
private Integer port;
private Long timestamp;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public Integer getBrokerId() {
return brokerId;
}
public void setBrokerId(Integer brokerId) {
this.brokerId = brokerId;
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public Integer getPort() {
return port;
}
public void setPort(Integer port) {
this.port = port;
}
public Long getTimestamp() {
return timestamp;
}
public void setTimestamp(Long timestamp) {
this.timestamp = timestamp;
}
@Override
public String toString() {
return "BrokerDO{" +
"clusterId=" + clusterId +
", brokerId=" + brokerId +
", host='" + host + '\'' +
", port=" + port +
", timestamp=" + timestamp +
", id=" + id +
", status=" + status +
", gmtCreate=" + gmtCreate +
", gmtModify=" + gmtModify +
'}';
}
}
package com.xiaojukeji.kafka.manager.common.entity.po;
public class TopicFavoriteDO extends BaseDO{
private String username;
private Long clusterId;
private String topicName;
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public String getTopicName() {
return topicName;
}
public void setTopicName(String topicName) {
this.topicName = topicName;
}
@Override
public String toString() {
return "TopicFavoriteDO{" +
"username='" + username + '\'' +
", clusterId=" + clusterId +
", topicName='" + topicName + '\'' +
", id=" + id +
", status=" + status +
", gmtCreate=" + gmtCreate +
", gmtModify=" + gmtModify +
'}';
}
}