Unverified · Commit f70f4348 authored by 李民, committed by GitHub

Merge branch 'master' into v2.2.1_ldap

......@@ -5,66 +5,89 @@
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
## 1 Product Overview
Didi Logi-KafkaManager grew out of years of Kafka operating practice inside Didi. It is a shared, multi-tenant Kafka platform built for Kafka users and Kafka operators, focused on core scenarios such as operations management, monitoring & alerting, and resource governance, and proven on large clusters with massive data volumes. It reached 90% internal user satisfaction and has commercial partnerships with several well-known companies.
### 1.1 Live Demo
- Demo: http://117.51.146.109:8080 (username/password: admin/admin)
### 1.2 Experience Maps
Unlike similar products, which mostly offer a single administrator view, Didi Logi-KafkaManager provides role-based, scenario-specific experience maps: **a user map, an ops map, and a governance map**
#### 1.2.1 User Experience Map
- Tenancy application: apply for an application (App) as the username in Kafka, with AppID+password used for authentication
- Cluster resources: apply and use on demand; use the shared clusters provided by the platform, or request a dedicated cluster for an application
- Topic creation & permissions: create Topics under an application (App), or apply for read/write permissions on other Topics
- Topic operations: data sampling, quota adjustment, partition requests, and more
- Latency statistics: per-stage produce/consume latency for a Topic, with monitoring of percentile performance metrics
- Offset reset: reset consumer offsets to a given timestamp or a given offset
#### 1.2.2 Ops Experience Map
- Multi-version cluster management: supports versions from `0.10.2` to `2.x`
- Metrics monitoring: historical and real-time key metrics across Topics, Brokers, and more, with a health-score system
- Region management: groups of Brokers form Regions, which define the unit of resource partitioning; logical clusters are then split by business line and assurance level
- Broker operations: preferred-replica election and more
- Topic operations: creation, query, expansion, property changes, migration, offlining, and more
#### 1.2.3 Governance Experience Map
- Resource governance: codified governance methods; frequent issues such as hot Topic partitions and insufficient partitions are handled with distilled, expert-level practices
- Ticketing workflow: Topic creation, quota adjustment, partition requests, and similar operations go through approval by operations staff, regulating resource usage and keeping the platform stable
- Cost control: Topic and cluster resources are applied for and used on demand; costs are computed from traffic, helping companies build a big-data cost accounting system
### 1.3 Core Strengths
- Efficient monitoring: many core metrics with percentile statistics and a rich set of monitoring reports help users and operators locate problems quickly
- Convenient management: Regions define the unit of cluster resource partitioning and logical clusters are split by assurance level, enabling resource isolation and scalability along with strong control over the server side
- Expert governance: built on years of operating practice inside Didi, with codified governance methods and a health-score system targeting frequent issues such as hot partitions and insufficient partitions
- Ecosystem integration: integrated with Didi's Nightingale (n9e) monitoring & alerting system, adding alerting, cluster deployment, and cluster upgrade capabilities to form an ops ecosystem with distilled expert services
### 1.4 Didi Logi-KafkaManager Architecture
![kafka-manager-arch](https://img-ys011.didistatic.com/static/dicloudpub/do1_xgDHNDLj2ChKxctSuf72)
## 2 Documentation
### 2.1 Product Docs
- [Didi Logi-KafkaManager installation guide](docs/install_guide/install_guide_cn.md)
- [Didi Logi-KafkaManager cluster onboarding](docs/user_guide/add_cluster/add_cluster.md)
- [Didi Logi-KafkaManager user guide](docs/user_guide/user_guide_cn.md)
- [Didi Logi-KafkaManager FAQ](docs/user_guide/faq.md)
### 2.2 Community Articles
- [Product introduction on the Didi Cloud site](https://www.didiyun.com/production/logi-KafkaManager.html)
- [Seven years in the making: the Didi Logi logging service suite](https://mp.weixin.qq.com/s/-KQp-Qo3WKEOc9wIR2iFnw)
- [Didi Logi-KafkaManager: a one-stop Kafka monitoring and management platform](https://mp.weixin.qq.com/s/9qSZIkqCnU6u9nLMvOOjIQ)
- [The open-source journey of Didi Logi-KafkaManager](https://xie.infoq.cn/article/0223091a99e697412073c0d64)
- [Didi Logi-KafkaManager video tutorial series](https://mp.weixin.qq.com/s/9X7gH0tptHPtfjPPSdGO8g)
- [Kafka in practice (15): a study of Didi's open-source Kafka management platform Logi-KafkaManager, by 叶子叶来](https://blog.csdn.net/yezonggang/article/details/113106244)
## 3 DingTalk Group for Didi Logi Open-Source Users
![dingding_group](./docs/assets/images/common/dingding_group.jpg)
DingTalk group ID: 32821440
## 4 OCE Certification
OCE is a certification mechanism and exchange platform built for production users of Didi Logi-KafkaManager. OCE companies receive better technical support, such as dedicated tech salons, one-on-one exchanges with the team, and a dedicated Q&A group. If Logi-KafkaManager is running in production at your company, [come and join](http://obsuite.didiyun.com/open/openAuth).
## 5 Project Members
### 5.1 Core Members
`iceyuhui`, `liuyaguang`, `limengmonty`, `zhangliangmike`, `nullhuangyiming`, `zengqiao`, `eilenexuzhe`, `huangjiaweihjw`, `zhaoyinrui`, `marzkonglingxu`, `joysunchao`
### 5.2 External Contributors
`fangjunyu`, `zhoutaiyang`
## 6 License
`kafka-manager` is distributed and used under the `Apache-2.0` license; see the [LICENSE file](./LICENSE) for details.
......@@ -4,7 +4,7 @@ cd $workspace
## constant
OUTPUT_DIR=./output
KM_VERSION=2.3.0
APP_NAME=kafka-manager
APP_DIR=${APP_NAME}-${KM_VERSION}
......
FROM openjdk:8-jdk-alpine3.9
LABEL author="yangvipguang"
ENV VERSION 2.1.0
ENV JAR_PATH kafka-manager-web/target
COPY $JAR_PATH/kafka-manager-web-$VERSION-SNAPSHOT.jar /tmp/app.jar
COPY $JAR_PATH/application.yml /km/
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories
RUN apk add --no-cache --virtual .build-deps \
font-adobe-100dpi \
ttf-dejavu \
fontconfig \
curl \
apr \
apr-util \
apr-dev \
tomcat-native \
&& apk del .build-deps
ENV AGENT_HOME /opt/agent/
WORKDIR /tmp
COPY docker-depends/config.yaml $AGENT_HOME
COPY docker-depends/jmx_prometheus_javaagent-0.14.0.jar $AGENT_HOME
ENV JAVA_AGENT="-javaagent:$AGENT_HOME/jmx_prometheus_javaagent-0.14.0.jar=9999:$AGENT_HOME/config.yaml"
ENV JAVA_HEAP_OPTS="-Xms1024M -Xmx1024M -Xmn100M "
ENV JAVA_OPTS="-verbose:gc \
-XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintHeapAtGC -Xloggc:/tmp/gc.log -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps \
-XX:MaxMetaspaceSize=256M -XX:+DisableExplicitGC -XX:+UseStringDeduplication \
-XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:-UseContainerSupport"
#-Xlog:gc -Xlog:gc* -Xlog:gc+heap=trace -Xlog:safepoint
EXPOSE 8080 9999
ENTRYPOINT ["sh","-c","java -jar $JAVA_HEAP_OPTS $JAVA_OPTS /tmp/app.jar --spring.config.location=/km/application.yml"]
## Prometheus JMX monitoring is disabled by default; to enable it, uncomment the ENTRYPOINT below and comment out the default ENTRYPOINT above.
## ENTRYPOINT ["sh","-c","java -jar $JAVA_AGENT $JAVA_HEAP_OPTS $JAVA_OPTS /tmp/app.jar --spring.config.location=/km/application.yml"]
---
startDelaySeconds: 0
ssl: false
lowercaseOutputName: false
lowercaseOutputLabelNames: false
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
apiVersion: v2
name: didi-km
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "didi-km.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "didi-km.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "didi-km.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "didi-km.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}
{{/*
Expand the name of the chart.
*/}}
{{- define "didi-km.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "didi-km.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "didi-km.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "didi-km.labels" -}}
helm.sh/chart: {{ include "didi-km.chart" . }}
{{ include "didi-km.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "didi-km.selectorLabels" -}}
app.kubernetes.io/name: {{ include "didi-km.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "didi-km.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "didi-km.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: km-cm
data:
  application.yml: |
    server:
      port: 8080
      tomcat:
        accept-count: 1000
        max-connections: 10000
        max-threads: 800
        min-spare-threads: 100
    spring:
      application:
        name: kafkamanager
      datasource:
        kafka-manager:
          jdbc-url: jdbc:mysql://xxxxx:3306/kafka-manager?characterEncoding=UTF-8&serverTimezone=GMT%2B8&useSSL=false
          username: admin
          password: admin
          driver-class-name: com.mysql.jdbc.Driver
      main:
        allow-bean-definition-overriding: true
      profiles:
        active: dev
      servlet:
        multipart:
          max-file-size: 100MB
          max-request-size: 100MB
    logging:
      config: classpath:logback-spring.xml
    custom:
      idc: cn
      jmx:
        max-conn: 20
      store-metrics-task:
        community:
          broker-metrics-enabled: true
          topic-metrics-enabled: true
        didi:
          app-topic-metrics-enabled: false
          topic-request-time-metrics-enabled: false
          topic-throttled-metrics: false
        save-days: 7
    # task-related switches
    task:
      op:
        sync-topic-enabled: false # periodically sync Topics not yet persisted to the DB
    account:
      ldap:
    kcm:
      enabled: false
      storage:
        base-url: http://127.0.0.1
      n9e:
        base-url: http://127.0.0.1:8004
        user-token: 12345678
        timeout: 300
        account: root
        script-file: kcm_script.sh
    monitor:
      enabled: false
      n9e:
        nid: 2
        user-token: 1234567890
        mon:
          base-url: http://127.0.0.1:8032
        sink:
          base-url: http://127.0.0.1:8006
        rdb:
          base-url: http://127.0.0.1:80
    notify:
      kafka:
        cluster-id: 95
        topic-name: didi-kafka-notify
      order:
        detail-url: http://127.0.0.1
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "didi-km.fullname" . }}
labels:
{{- include "didi-km.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "didi-km.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "didi-km.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "didi-km.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8080
protocol: TCP
- name: jmx-metrics
containerPort: 9999
protocol: TCP
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "didi-km.fullname" . }}
labels:
{{- include "didi-km.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "didi-km.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "didi-km.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "didi-km.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "didi-km.fullname" . }}
labels:
{{- include "didi-km.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "didi-km.selectorLabels" . | nindent 4 }}
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "didi-km.serviceAccountName" . }}
labels:
{{- include "didi-km.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "didi-km.fullname" . }}-test-connection"
labels:
{{- include "didi-km.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "didi-km.fullname" . }}:{{ .Values.service.port }}']
restartPolicy: Never
# Default values for didi-km.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: docker.io/yangvipguang/km
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: "v18"
imagePullSecrets: []
nameOverride: ""
fullnameOverride: "km"
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 8080
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources:
# Default resource limits and requests for the KM container; adjust them to your
# environment, or remove them to leave scheduling unconstrained.
limits:
cpu: 50m
memory: 2048Mi
requests:
cpu: 10m
memory: 200Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
## Troubleshooting JMX Connection Failures
Once a cluster is successfully added to Logi-KafkaManager, its Broker list becomes visible. If, at that point, real-time traffic for Topics or Brokers cannot be displayed, the cause is most likely a JMX connection problem.
The steps below walk through the checks one by one.
### 1. Symptoms & Causes
**Case 1: JMX is not enabled**
If JMX is not enabled, go straight to `2. Solution` to see how to enable it.
![check_jmx_opened](./assets/connect_jmx_failed/check_jmx_opened.jpg)
**Case 2: Incorrect configuration**
Even with the `JMX` port enabled, an incorrect configuration can still cause connection failures. Common causes include:
- Incorrect `JMX` configuration: see `2. Solution`
- A firewall or network restriction: run `telnet` from another machine on the same network to check connectivity
- Username/password authentication is required: see `3. Solution: Authenticated JMX`
Example error logs:
```
# Error 1: the real IP appears in the message, so the JMX configuration itself is most likely wrong.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:192.168.0.1 port:9999.
java.rmi.ConnectException: Connection refused to host: 192.168.0.1; nested exception is:
# Error 2: 127.0.0.1 appears in the message, so the machine's hostname configuration is probably wrong.
2021-01-27 10:06:20.730 ERROR 50901 --- [ics-Thread-1-62] c.x.k.m.c.utils.jmx.JmxConnectorWrap : JMX connect exception, host:127.0.0.1 port:9999.
java.rmi.ConnectException: Connection refused to host: 127.0.0.1;; nested exception is:
```
### 2. Solution
This section only covers a fairly generic fix; if you know a better approach, please share it.
Edit the `kafka-server-start.sh` file:
```
# Add the JMX port configuration inside this block
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    export JMX_PORT=9999  # add this line; the value does not have to be 9999
fi
```
&nbsp;
Edit the `kafka-run-class.sh` file:
```
# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
    KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${IP of this machine}"
fi

# JMX port to use
if [ $JMX_PORT ]; then
    KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
fi
```
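After applying the changes above and restarting the broker, it helps to confirm that the JMX port actually accepts TCP connections from the KafkaManager host. A minimal sketch; the broker address is a placeholder:

```python
import socket

def jmx_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable host
        return False

# Placeholder broker address; 9999 matches the JMX_PORT configured above.
# print(jmx_port_reachable("192.168.0.1", 9999))
```

If this returns False from the KafkaManager host but True on the broker itself, suspect the `java.rmi.server.hostname` setting or a firewall.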
### 3. Solution: Authenticated JMX
If you jumped straight to this section, read the previous section, `2. Solution`, first to make sure the basic `JMX` configuration is correct.
If the JMX configuration itself is fine but the connection still fails because of authentication, the method below can be used.
**This part of the backend has only recently been completed and may not be fully polished; feel free to reach out with any problems.**
Backend versions `2.2.0+` of `Logi-KafkaManager` support authenticated `JMX` connections, but there is no UI for this yet, so the `JMX` credentials have to be written into the `jmx_properties` field of the `cluster` table.
The value is a `json`-formatted string, for example:
```json
{
    "maxConn": 10,        # max JMX connections from KM to a single Broker
    "username": "xxxxx",  # username
    "password": "xxxx",   # password
    "openSSL": true       # true enables SSL, false disables it
}
```
&nbsp;
Example SQL:
```sql
UPDATE cluster SET jmx_properties='{ "maxConn": 10, "username": "xxxxx", "password": "xxxx", "openSSL": false }' where id={xxx};
```
\ No newline at end of file
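Since the annotated JSON above is only illustrative (real JSON allows neither comments nor a trailing comma), it can be safer to generate the `jmx_properties` value programmatically. A sketch; `build_jmx_properties` is a hypothetical helper, not part of the project:

```python
import json

def build_jmx_properties(max_conn: int, username: str, password: str, open_ssl: bool) -> str:
    """Serialize the jmx_properties JSON stored in the cluster table."""
    return json.dumps({
        "maxConn": max_conn,   # max JMX connections per Broker
        "username": username,
        "password": password,
        "openSSL": open_ssl,
    })

# print(build_jmx_properties(10, "xxxxx", "xxxx", False))
```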
---
![kafka-manager-logo](../../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
# Upgrading to version `2.3.0`
Version `2.3.0` adds a description field to the `gateway_config` table, so the following SQL must be executed to add the column.
```sql
ALTER TABLE `gateway_config`
ADD COLUMN `description` TEXT NULL COMMENT '描述信息' AFTER `version`;
```
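Before and after running the migration, the column can be verified from the output of `SHOW COLUMNS FROM gateway_config`. A small pure helper over the fetched rows (hypothetical; it assumes MySQL's SHOW COLUMNS row shape, whose first field is the column name):

```python
def has_column(show_columns_rows, column: str) -> bool:
    """Check SHOW COLUMNS output (a sequence of tuples, name first) for a column."""
    return any(row[0] == column for row in show_columns_rows)

# Example with rows as a DB cursor would return them:
rows = [("id",), ("type",), ("name",), ("value",), ("version",), ("description",)]
# has_column(rows, "description") is True once the migration has run
```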
......@@ -15,7 +15,7 @@
Because `MySQL 8` and `MySQL 5.7` cannot both be supported at the same time, the code currently defaults to `MySQL 5.7`.
To use `MySQL 8`, follow the steps below to make a few small code changes.
- Step 1. Change the MySQL driver class in application.yml
......
......@@ -203,7 +203,8 @@ CREATE TABLE `gateway_config` (
`type` varchar(128) NOT NULL DEFAULT '' COMMENT '配置类型',
`name` varchar(128) NOT NULL DEFAULT '' COMMENT '配置名称',
`value` text COMMENT '配置值',
`version` bigint(20) unsigned NOT NULL DEFAULT '1' COMMENT '版本信息',
`description` text COMMENT '描述信息',
`create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
`modify_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
PRIMARY KEY (`id`),
......
......@@ -5,16 +5,26 @@
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
# Cluster Onboarding
## Key Concepts
For large clusters and complex business scenarios, the concepts of Region and logical cluster are introduced:
- Region: a group of Brokers managed as one unit; Regions define how resources are partitioned, improving scalability and isolation, so a misbehaving Topic cannot affect a large share of the Brokers
- Logical cluster: composed of one or more Regions, making it easier to manage a large cluster by business line and assurance level
![op_cluster_arch](assets/op_cluster_arch.png)
Onboarding a cluster takes three steps:
1. Attach the physical cluster: fill in the broker addresses, security protocol, and other settings to attach the real physical cluster
2. Create a Region: group some of the Brokers into a Region
3. Create a logical cluster: compose it from Regions, split by business line and assurance level
![op_cluster_flow](assets/op_cluster_flow.png)
**Note: steps 2 and 3 are required because ordinary users only see logical clusters; without these steps, ordinary users would see nothing.**
## 1. Attaching the Physical Cluster
......@@ -36,4 +46,4 @@
![op_add_logical_cluster](assets/op_add_logical_cluster.jpg)
Fill in the logical cluster information as shown above, then click OK to finish creating the logical cluster.
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
......
......@@ -9,18 +9,41 @@
# FAQ
- 0. Fixing broken images on GitHub
- 1. No cluster to choose when applying for a Topic, creating an alert, etc.?
- 2. What are logical clusters & Regions for?
- 3. Login failures?
- 4. No traffic data on the pages?
- 5. How to integrate Nightingale (n9e) monitoring & alerting?
- 6. How to use `MySQL 8`?
- 7. How to fix `Jmx` connection failures?
- 8. The `topic biz data not exist` error and how to handle it
---
### 0. Fixing broken images on GitHub

Run `ping github.com` on your local machine to obtain the IP address of `github.com`, then add that IP to your `/etc/hosts` file.
For example:
```shell
# Add the following line to /etc/hosts
140.82.113.3 github.com
```
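Since the IP above changes over time, the mapping can also be generated instead of hard-coded. A sketch; `hosts_entry` is a hypothetical helper, and resolving `github.com` requires working DNS:

```python
import socket

def hosts_entry(hostname: str) -> str:
    """Resolve hostname (IPv4) and format a line suitable for /etc/hosts."""
    ip = socket.gethostbyname(hostname)
    return f"{ip}\t{hostname}"

# print(hosts_entry("github.com"))  # append the printed line to /etc/hosts
```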
---
### 1. No cluster to choose when applying for a Topic, creating an alert, etc.?
This is caused by a missing logical cluster. The Topic management, monitoring & alerting, and cluster management tabs all use the ordinary-user view, and ordinary users only see logical clusters, so operations under these tabs require a logical cluster to exist.
To create one, see:
- the [kafka-manager cluster onboarding](docs/user_guide/add_cluster/add_cluster.md) guide; both the Region and the logical cluster must be added.
---
......@@ -29,7 +52,7 @@
Their main purpose is managing large clusters & hiding cluster details.
- Logical cluster: groups a cluster's Brokers by business line, simplifying management;
- Region: Topics are created at Region granularity, reducing the number of connections between Brokers;
---
......@@ -43,7 +66,7 @@
- 1. Check whether `Broker JMX` is enabled correctly.

If it is not yet enabled, search online for how to enable it, or see: [JMX configuration & troubleshooting](../dev_guide/connect_jmx_failed.md)
![helpcenter](./assets/faq/jmx_check.jpg)
......@@ -53,7 +76,7 @@
- 3. Database time-zone problems.

Check the MySQL topic table for data; if there is data, then verify that the configured time zone is correct.
---
......@@ -66,3 +89,23 @@
### 6. How to use `MySQL 8`?
- See the [kafka-manager with `MySQL 8`](../dev_guide/use_mysql_8.md) guide.
---
### 7. How to fix `Jmx` connection failures?
- See the [JMX configuration & troubleshooting](../dev_guide/connect_jmx_failed.md) guide.
---
### 8. The `topic biz data not exist` error and how to handle it
**Cause**
This error can appear during permission approval. It means the Topic's business metadata is not stored in the DB; more specifically, the Topic does not belong to any application. Attaching the ownerless Topic to an application fixes it.
**Fix**
Under `运维管控 -> 集群列表 -> Topic信息`, edit the Topic whose permissions are being requested and assign it to an application.
The above handles a single Topic; if many Topics need initializing, add a configuration in configuration management to periodically sync ownerless Topics. See: [Dynamic configuration management, section 1: scheduled Topic sync](../dev_guide/dynamic_config_manager.md)
---
![kafka-manager-logo](../assets/images/common/logo_name.png)
**A one-stop `Apache Kafka` cluster metrics monitoring and operations management platform**
---
# Resource Application Guide
## Terminology
- Application (App): the account in Kafka, identified by AppID+password
- Cluster: use a shared cluster provided by the platform, or apply for a dedicated cluster for an application
- Topic: apply to create a Topic, or apply for produce/consume permissions on other Topics; produce/consume requests are authenticated with Topic+AppID
![production_consumption_flow](assets/resource_apply/production_consumption_flow.png)
## Applying for an Application
An application (App) is the account in Kafka, identified by AppID+password. Produce/consume requests on a Topic are authenticated with Topic+AppID.
A user applies for an application; once operations staff approve it, the AppID and secret are issued.
## Applying for a Cluster
The platform's shared clusters can be used; for stronger isolation, stability, or produce/consume throughput requirements, a dedicated cluster can be requested for an application.
## Applying for a Topic
- Users can create Topics under an application they have applied for. After creation, the application owner has produce/consume and management permissions on the Topic by default.
- Users can also apply for produce/consume permissions on other Topics; once the owner of the Topic's application approves, the permissions are granted.
......@@ -104,5 +104,10 @@
<groupId>javax.servlet</groupId>
<artifactId>javax.servlet-api</artifactId>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
</dependency>
</dependencies>
</project>
\ No newline at end of file
package com.xiaojukeji.kafka.manager.common.bizenum;
/**
* Status of expired Topics
* @author zengqiao
* @date 21/01/25
*/
public enum TopicExpiredStatusEnum {
ALREADY_NOTIFIED_AND_DELETED(-2, "已通知, 已下线"),
ALREADY_NOTIFIED_AND_CAN_DELETE(-1, "已通知, 可下线"),
ALREADY_EXPIRED_AND_WAIT_NOTIFY(0, "已过期, 待通知"),
ALREADY_NOTIFIED_AND_WAIT_RESPONSE(1, "已通知, 待反馈"),
;
private int status;
private String message;
TopicExpiredStatusEnum(int status, String message) {
this.status = status;
this.message = message;
}
public int getStatus() {
return status;
}
public String getMessage() {
return message;
}
}
......@@ -97,7 +97,7 @@ public class Result<T> implements Serializable {
return result;
}
public static <T> Result<T> buildGatewayFailure(String message) {
Result<T> result = new Result<T>();
result.setCode(ResultStatus.GATEWAY_INVALID_REQUEST.getCode());
result.setMessage(message);
......@@ -105,6 +105,14 @@ public class Result<T> implements Serializable {
return result;
}
public static <T> Result<T> buildFailure(String message) {
Result<T> result = new Result<T>();
result.setCode(ResultStatus.FAIL.getCode());
result.setMessage(message);
result.setData(null);
return result;
}
public static Result buildFrom(ResultStatus resultStatus) {
Result result = new Result();
result.setCode(resultStatus.getCode());
......
......@@ -12,125 +12,101 @@ public enum ResultStatus {
SUCCESS(Constant.SUCCESS, "success"),
LOGIN_FAILED(1, "login failed, please check username and password"),
FAIL(1, "操作失败"),
/**
* Operation errors [1000, 2000)
* ------------------------------------------------------------------------------------------
*/
MYSQL_ERROR(1000, "operate database failed"),
CONNECT_ZOOKEEPER_FAILED(1000, "connect zookeeper failed"),
READ_ZOOKEEPER_FAILED(1000, "read zookeeper failed"),
READ_JMX_FAILED(1000, "read jmx failed"),
OPERATION_FAILED(1401, "operation failed"),
OPERATION_FORBIDDEN(1402, "operation forbidden"),
API_CALL_EXCEED_LIMIT(1403, "api call exceed limit"),
USER_WITHOUT_AUTHORITY(1404, "user without authority"),
CHANGE_ZOOKEEPER_FORBIDDEN(1405, "change zookeeper forbidden"),
// 内部依赖错误 —— Kafka特定错误, [1000, 1100)
BROKER_NUM_NOT_ENOUGH(1000, "broker not enough"),
CONTROLLER_NOT_ALIVE(1000, "controller not alive"),
CLUSTER_METADATA_ERROR(1000, "cluster metadata error"),
TOPIC_CONFIG_ERROR(1000, "topic config error"),
TOPIC_OPERATION_PARAM_NULL_POINTER(1450, "参数错误"),
TOPIC_OPERATION_PARTITION_NUM_ILLEGAL(1451, "分区数错误"),
TOPIC_OPERATION_BROKER_NUM_NOT_ENOUGH(1452, "Broker数不足错误"),
TOPIC_OPERATION_TOPIC_NAME_ILLEGAL(1453, "Topic名称非法"),
TOPIC_OPERATION_TOPIC_EXISTED(1454, "Topic已存在"),
TOPIC_OPERATION_UNKNOWN_TOPIC_PARTITION(1455, "Topic未知"),
TOPIC_OPERATION_TOPIC_CONFIG_ILLEGAL(1456, "Topic配置错误"),
TOPIC_OPERATION_TOPIC_IN_DELETING(1457, "Topic正在删除"),
TOPIC_OPERATION_UNKNOWN_ERROR(1458, "未知错误"),
/**
* Parameter errors [2000, 3000)
* ------------------------------------------------------------------------------------------
*/
CALL_CLUSTER_TASK_AGENT_FAILED(1000, " call cluster task agent failed"),
CALL_MONITOR_SYSTEM_ERROR(1000, " call monitor-system failed"),
PARAM_ILLEGAL(2000, "param illegal"),
CG_LOCATION_ILLEGAL(2001, "consumer group location illegal"),
ORDER_ALREADY_HANDLED(2002, "order already handled"),
APP_ID_OR_PASSWORD_ILLEGAL(2003, "app or password illegal"),
SYSTEM_CODE_ILLEGAL(2004, "system code illegal"),
CLUSTER_TASK_HOST_LIST_ILLEGAL(2005, "主机列表错误,请检查主机列表"),
JSON_PARSER_ERROR(2006, "json parser error"),
BROKER_NUM_NOT_ENOUGH(2050, "broker not enough"),
CONTROLLER_NOT_ALIVE(2051, "controller not alive"),
CLUSTER_METADATA_ERROR(2052, "cluster metadata error"),
TOPIC_CONFIG_ERROR(2053, "topic config error"),
/**
* Errors caused by problems in external systems, [7000, 8000)
* ------------------------------------------------------------------------------------------
*/
PARAM_ILLEGAL(1400, "param illegal"),
OPERATION_FAILED(1401, "operation failed"),
OPERATION_FORBIDDEN(1402, "operation forbidden"),
API_CALL_EXCEED_LIMIT(1403, "api call exceed limit"),
// 资源不存在
CLUSTER_NOT_EXIST(10000, "cluster not exist"),
BROKER_NOT_EXIST(10000, "broker not exist"),
TOPIC_NOT_EXIST(10000, "topic not exist"),
PARTITION_NOT_EXIST(10000, "partition not exist"),
ACCOUNT_NOT_EXIST(10000, "account not exist"),
APP_NOT_EXIST(1000, "app not exist"),
ORDER_NOT_EXIST(1000, "order not exist"),
CONFIG_NOT_EXIST(1000, "config not exist"),
IDC_NOT_EXIST(1000, "idc not exist"),
TASK_NOT_EXIST(1110, "task not exist"),
AUTHORITY_NOT_EXIST(1000, "authority not exist"),
MONITOR_NOT_EXIST(1110, "monitor not exist"),
QUOTA_NOT_EXIST(1000, "quota not exist, please check clusterId, topicName and appId"),
// 资源不存在, 已存在, 已被使用
RESOURCE_NOT_EXIST(1200, "资源不存在"),
RESOURCE_ALREADY_EXISTED(1200, "资源已经存在"),
RESOURCE_NAME_DUPLICATED(1200, "资源名称重复"),
RESOURCE_ALREADY_USED(1000, "资源早已被使用"),
RESOURCE_NOT_EXIST(7100, "资源不存在"),
CLUSTER_NOT_EXIST(7101, "cluster not exist"),
BROKER_NOT_EXIST(7102, "broker not exist"),
TOPIC_NOT_EXIST(7103, "topic not exist"),
PARTITION_NOT_EXIST(7104, "partition not exist"),
ACCOUNT_NOT_EXIST(7105, "account not exist"),
APP_NOT_EXIST(7106, "app not exist"),
ORDER_NOT_EXIST(7107, "order not exist"),
CONFIG_NOT_EXIST(7108, "config not exist"),
IDC_NOT_EXIST(7109, "idc not exist"),
TASK_NOT_EXIST(7110, "task not exist"),
AUTHORITY_NOT_EXIST(7111, "authority not exist"),
MONITOR_NOT_EXIST(7112, "monitor not exist"),
QUOTA_NOT_EXIST(7113, "quota not exist, please check clusterId, topicName and appId"),
CONSUMER_GROUP_NOT_EXIST(7114, "consumerGroup not exist"),
TOPIC_BIZ_DATA_NOT_EXIST(7115, "topic biz data not exist, please sync topic to db"),
// resource already exists
RESOURCE_ALREADY_EXISTED(7200, "资源已经存在"),
TOPIC_ALREADY_EXIST(7201, "topic already existed"),
// duplicate resource name
RESOURCE_NAME_DUPLICATED(7300, "资源名称重复"),
// resource already in use
RESOURCE_ALREADY_USED(7400, "资源早已被使用"),
/**
* Errors caused by problems in external systems, [8000, 9000)
* ------------------------------------------------------------------------------------------
*/
CG_LOCATION_ILLEGAL(10000, "consumer group location illegal"),
ORDER_ALREADY_HANDLED(1000, "order already handled"),
APP_ID_OR_PASSWORD_ILLEGAL(1000, "app or password illegal"),
SYSTEM_CODE_ILLEGAL(1000, "system code illegal"),
MYSQL_ERROR(8010, "operate database failed"),
CLUSTER_TASK_HOST_LIST_ILLEGAL(1000, "主机列表错误,请检查主机列表"),
ZOOKEEPER_CONNECT_FAILED(8020, "zookeeper connect failed"),
ZOOKEEPER_READ_FAILED(8021, "zookeeper read failed"),
ZOOKEEPER_WRITE_FAILED(8022, "zookeeper write failed"),
ZOOKEEPER_DELETE_FAILED(8023, "zookeeper delete failed"),
// 调用集群任务里面的agent失败
CALL_CLUSTER_TASK_AGENT_FAILED(8030, "call cluster task agent failed"),
// 调用监控系统失败
CALL_MONITOR_SYSTEM_ERROR(8040, "call monitor-system failed"),
// 存储相关的调用失败
STORAGE_UPLOAD_FILE_FAILED(8050, "upload file failed"),
STORAGE_FILE_TYPE_NOT_SUPPORT(8051, "File type not support"),
STORAGE_DOWNLOAD_FILE_FAILED(8052, "download file failed"),
///////////////////////////////////////////////////////////////
USER_WITHOUT_AUTHORITY(1000, "user without authority"),
JSON_PARSER_ERROR(1000, "json parser error"),
TOPIC_OPERATION_PARAM_NULL_POINTER(2, "参数错误"),
TOPIC_OPERATION_PARTITION_NUM_ILLEGAL(3, "分区数错误"),
TOPIC_OPERATION_BROKER_NUM_NOT_ENOUGH(4, "Broker数不足错误"),
TOPIC_OPERATION_TOPIC_NAME_ILLEGAL(5, "Topic名称非法"),
TOPIC_OPERATION_TOPIC_EXISTED(6, "Topic已存在"),
TOPIC_OPERATION_UNKNOWN_TOPIC_PARTITION(7, "Topic未知"),
TOPIC_OPERATION_TOPIC_CONFIG_ILLEGAL(8, "Topic配置错误"),
TOPIC_OPERATION_TOPIC_IN_DELETING(9, "Topic正在删除"),
TOPIC_OPERATION_UNKNOWN_ERROR(10, "未知错误"),
TOPIC_EXIST_CONNECT_CANNOT_DELETE(10, "topic exist connect cannot delete"),
EXIST_TOPIC_CANNOT_DELETE(10, "exist topic cannot delete"),
/**
* 工单
*/
CHANGE_ZOOKEEPER_FORBIDEN(100, "change zookeeper forbidden"),
// APP_EXIST_TOPIC_AUTHORITY_CANNOT_DELETE(1000, "app exist topic authority cannot delete"),
;
private int code;
......
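A note for reviewers: the renumbered constants above group errors by range — [7100, 7200) resource missing, [7200, 7300) already exists, [7300, 7400) name duplicated, [7400, 7500) already used, and [8000, 9000) failures caused by external systems. A minimal sketch of range-based classification (illustrative only, not the project's actual API):

```java
// Illustrative sketch: classify a ResultStatus-style code by its range.
// The ranges mirror the constants above; this class itself is hypothetical.
class ResultCodeRange {
    static boolean isResourceMissing(int code) {
        return code >= 7100 && code < 7200;   // e.g. TOPIC_NOT_EXIST(7103)
    }
    static boolean isAlreadyExists(int code) {
        return code >= 7200 && code < 7300;   // e.g. TOPIC_ALREADY_EXIST(7201)
    }
    static boolean isExternalSystemError(int code) {
        return code >= 8000 && code < 9000;   // e.g. ZOOKEEPER_READ_FAILED(8021)
    }
}
```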
......@@ -23,6 +23,8 @@ public class ClusterDetailDTO {
private String securityProperties;
private String jmxProperties;
private Integer status;
private Date gmtCreate;
......@@ -103,6 +105,14 @@ public class ClusterDetailDTO {
this.securityProperties = securityProperties;
}
public String getJmxProperties() {
return jmxProperties;
}
public void setJmxProperties(String jmxProperties) {
this.jmxProperties = jmxProperties;
}
public Integer getStatus() {
return status;
}
......@@ -176,8 +186,9 @@ public class ClusterDetailDTO {
", bootstrapServers='" + bootstrapServers + '\'' +
", kafkaVersion='" + kafkaVersion + '\'' +
", idc='" + idc + '\'' +
", mode=" + mode +
", securityProperties='" + securityProperties + '\'' +
", jmxProperties='" + jmxProperties + '\'' +
", status=" + status +
", gmtCreate=" + gmtCreate +
", gmtModify=" + gmtModify +
......
package com.xiaojukeji.kafka.manager.common.entity.ao.config;
/**
* @author zengqiao
* @date 20/9/7
*/
public class SinkTopicRequestTimeMetricsConfig {
private Long clusterId;
private String topicName;
private Long startId;
private Long step;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public String getTopicName() {
return topicName;
}
public void setTopicName(String topicName) {
this.topicName = topicName;
}
public Long getStartId() {
return startId;
}
public void setStartId(Long startId) {
this.startId = startId;
}
public Long getStep() {
return step;
}
public void setStep(Long step) {
this.step = step;
}
@Override
public String toString() {
return "SinkTopicRequestTimeMetricsConfig{" +
"clusterId=" + clusterId +
", topicName='" + topicName + '\'' +
", startId=" + startId +
", step=" + step +
'}';
}
}
\ No newline at end of file
package com.xiaojukeji.kafka.manager.common.entity.dto.op;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import java.util.List;
/**
* @author zengqiao
* @date 21/01/24
*/
@JsonIgnoreProperties(ignoreUnknown = true)
@ApiModel(description="优选为Controller的候选者")
public class ControllerPreferredCandidateDTO {
@ApiModelProperty(value="集群ID")
private Long clusterId;
@ApiModelProperty(value="优选为controller的BrokerId")
private List<Integer> brokerIdList;
public Long getClusterId() {
return clusterId;
}
public void setClusterId(Long clusterId) {
this.clusterId = clusterId;
}
public List<Integer> getBrokerIdList() {
return brokerIdList;
}
public void setBrokerIdList(List<Integer> brokerIdList) {
this.brokerIdList = brokerIdList;
}
@Override
public String toString() {
return "ControllerPreferredCandidateDTO{" +
"clusterId=" + clusterId +
", brokerIdList=" + brokerIdList +
'}';
}
}
......@@ -102,12 +102,11 @@ public class ClusterDTO {
'}';
}
public boolean legal() {
if (ValidateUtils.isNull(clusterName)
|| ValidateUtils.isNull(zookeeper)
|| ValidateUtils.isNull(idc)
|| ValidateUtils.isNull(bootstrapServers)) {
return false;
}
return true;
......
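The tightened `legal()` above fail-fasts when any required field is null. A standalone sketch of the same check — `isNull` here is an assumption standing in for `ValidateUtils.isNull`, which is defined elsewhere:

```java
// Standalone sketch of ClusterDTO.legal()'s fail-fast validation.
class ClusterLegalSketch {
    // stand-in for ValidateUtils.isNull (assumed null/empty semantics)
    static boolean isNull(String s) {
        return s == null || s.isEmpty();
    }
    static boolean legal(String clusterName, String zookeeper,
                         String idc, String bootstrapServers) {
        // any missing required field makes the DTO illegal
        if (isNull(clusterName) || isNull(zookeeper)
                || isNull(idc) || isNull(bootstrapServers)) {
            return false;
        }
        return true;
    }
}
```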
......@@ -17,6 +17,8 @@ public class GatewayConfigDO {
private Long version;
private String description;
private Date createTime;
private Date modifyTime;
......@@ -61,6 +63,14 @@ public class GatewayConfigDO {
this.version = version;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public Date getCreateTime() {
return createTime;
}
......@@ -85,6 +95,7 @@ public class GatewayConfigDO {
", name='" + name + '\'' +
", value='" + value + '\'' +
", version=" + version +
", description='" + description + '\'' +
", createTime=" + createTime +
", modifyTime=" + modifyTime +
'}';
......
......@@ -28,7 +28,7 @@ public class ExpiredTopicVO {
@ApiModelProperty(value = "负责人")
private String principals;
@ApiModelProperty(value = "状态, -1:已通知可下线, 0:过期待通知, 1+:已通知待反馈")
private Integer status;
public Long getClusterId() {
......
......@@ -26,6 +26,9 @@ public class GatewayConfigVO {
@ApiModelProperty(value="版本")
private Long version;
@ApiModelProperty(value="描述说明")
private String description;
@ApiModelProperty(value="创建时间")
private Date createTime;
......@@ -72,6 +75,14 @@ public class GatewayConfigVO {
this.version = version;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public Date getCreateTime() {
return createTime;
}
......@@ -96,6 +107,7 @@ public class GatewayConfigVO {
", name='" + name + '\'' +
", value='" + value + '\'' +
", version=" + version +
", description='" + description + '\'' +
", createTime=" + createTime +
", modifyTime=" + modifyTime +
'}';
......
......@@ -60,6 +60,13 @@ public class JsonUtils {
return JSON.parseObject(src, clazz);
}
public static <T> List<T> stringToArrObj(String src, Class<T> clazz) {
if (ValidateUtils.isBlank(src)) {
return null;
}
return JSON.parseArray(src, clazz);
}
public static List<TopicConnectionDO> parseTopicConnections(Long clusterId, JSONObject jsonObject, long postTime) {
List<TopicConnectionDO> connectionDOList = new ArrayList<>();
for (String clientType: jsonObject.keySet()) {
......
......@@ -2,6 +2,7 @@ package com.xiaojukeji.kafka.manager.common.utils;
import org.apache.commons.lang.StringUtils;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Set;
......@@ -11,6 +12,20 @@ import java.util.Set;
* @date 20/4/16
*/
public class ValidateUtils {
/**
* 任意一个为空, 则返回true
*/
public static boolean anyNull(Object... objects) {
return Arrays.stream(objects).anyMatch(ValidateUtils::isNull);
}
/**
* 是空字符串或者空
*/
public static boolean anyBlank(String... strings) {
return Arrays.stream(strings).anyMatch(StringUtils::isBlank);
}
/**
* 为空
*/
......
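The new `anyNull`/`anyBlank` helpers short-circuit on the first failing argument. A self-contained sketch mirroring the stream-based implementation above, with the `StringUtils.isBlank` dependency inlined so the example stands alone:

```java
import java.util.Arrays;
import java.util.Objects;

// Sketch of ValidateUtils' new varargs helpers; the blank check is an
// inline stand-in for org.apache.commons.lang.StringUtils.isBlank.
class AnyCheckSketch {
    // true if any argument is null
    static boolean anyNull(Object... objects) {
        return Arrays.stream(objects).anyMatch(Objects::isNull);
    }
    // true if any argument is null or whitespace-only
    static boolean anyBlank(String... strings) {
        return Arrays.stream(strings)
                .anyMatch(s -> s == null || s.trim().isEmpty());
    }
}
```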
......@@ -79,7 +79,7 @@ public class JmxConnectorWrap {
try {
Map<String, Object> environment = new HashMap<String, Object>();
if (!ValidateUtils.isBlank(this.jmxConfig.getUsername()) && !ValidateUtils.isBlank(this.jmxConfig.getPassword())) {
environment.put(JMXConnector.CREDENTIALS, Arrays.asList(this.jmxConfig.getUsername(), this.jmxConfig.getPassword()));
}
if (jmxConfig.isOpenSSL() != null && this.jmxConfig.isOpenSSL()) {
environment.put(Context.SECURITY_PROTOCOL, "ssl");
......
......@@ -33,7 +33,9 @@ public class ZkPathUtil {
private static final String D_METRICS_CONFIG_ROOT_NODE = CONFIG_ROOT_NODE + ZOOKEEPER_SEPARATOR + "KafkaExMetrics";
public static final String D_CONFIG_EXTENSION_ROOT_NODE = CONFIG_ROOT_NODE + ZOOKEEPER_SEPARATOR + "extension";
public static final String D_CONTROLLER_CANDIDATES = D_CONFIG_EXTENSION_ROOT_NODE + ZOOKEEPER_SEPARATOR + "candidates";
public static String getBrokerIdNodePath(Integer brokerId) {
return BROKER_IDS_ROOT + ZOOKEEPER_SEPARATOR + String.valueOf(brokerId);
......@@ -111,6 +113,10 @@ public class ZkPathUtil {
}
public static String getKafkaExtraMetricsPath(Integer brokerId) {
return D_METRICS_CONFIG_ROOT_NODE + ZOOKEEPER_SEPARATOR + brokerId;
}
public static String getControllerCandidatePath(Integer brokerId) {
return D_CONTROLLER_CANDIDATES + ZOOKEEPER_SEPARATOR + brokerId;
}
}
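The relocated `D_CONTROLLER_CANDIDATES` now derives from the shared extension root. A sketch of the resulting path layout — note that `"/config"` as the config root is an assumption, since `CONFIG_ROOT_NODE` is defined outside this hunk:

```java
// Hypothetical path layout for candidate controllers; "/config" is an
// assumed value for ZkPathUtil's CONFIG_ROOT_NODE (not shown in the diff).
class ZkCandidatePathSketch {
    static final String SEP = "/";
    static final String CONFIG_ROOT = "/config";                      // assumption
    static final String EXTENSION_ROOT = CONFIG_ROOT + SEP + "extension";
    static final String CANDIDATES_ROOT = EXTENSION_ROOT + SEP + "candidates";

    // one znode per candidate broker under .../extension/candidates
    static String controllerCandidatePath(int brokerId) {
        return CANDIDATES_ROOT + SEP + brokerId;
    }
}
```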
package com.xiaojukeji.kafka.manager.common.utils;
import org.junit.Assert;
import org.junit.Test;
import java.util.HashMap;
import java.util.Map;
public class JsonUtilsTest {
@Test
public void testMapToJsonString() {
Map<String, Object> map = new HashMap<>();
map.put("key", "value");
map.put("int", 1);
String expectRes = "{\"key\":\"value\",\"int\":1}";
Assert.assertEquals(expectRes, JsonUtils.toJSONString(map));
}
}
......@@ -94,6 +94,9 @@ import 'antd/es/divider/style';
import Upload from 'antd/es/upload';
import 'antd/es/upload/style';
import Transfer from 'antd/es/transfer';
import 'antd/es/transfer/style';
import TimePicker from 'antd/es/time-picker';
import 'antd/es/time-picker/style';
......@@ -142,5 +145,6 @@ export {
TimePicker,
RangePickerValue,
Badge,
Popover
Popover,
Transfer
};
......@@ -25,7 +25,7 @@
.editor{
height: 100%;
position: absolute;
left: -12%;
width: 120%;
}
}
......@@ -21,24 +21,12 @@ class Monacoeditor extends React.Component<IEditorProps> {
public state = {
placeholder: '',
};
public async componentDidMount() {
const { value, onChange } = this.props;
const format: any = await format2json(value);
this.editor = monaco.editor.create(this.ref, {
value: format.result || value,
language: 'json',
lineNumbers: 'off',
scrollBeyondLastLine: false,
......@@ -48,7 +36,7 @@ class Monacoeditor extends React.Component<IEditorProps> {
minimap: {
enabled: false,
},
automaticLayout: true, // 自动布局
glyphMargin: true, // 字形边缘 {},[]
// useTabStops: false,
// formatOnPaste: true,
......
......@@ -130,6 +130,8 @@ class XForm extends React.Component<IXFormProps> {
this.renderFormItem(formItem),
)}
{formItem.renderExtraElement ? formItem.renderExtraElement() : null}
{/* 添加保存时间提示文案 */}
{formItem.attrs?.prompttype ? <span style={{ color: "#cccccc", fontSize: '12px', lineHeight: '20px', display: 'block' }}>{formItem.attrs.prompttype}</span> : null}
</Form.Item>
);
})}
......
......@@ -67,7 +67,7 @@ export const timeMonthStr = 'YYYY/MM';
// tslint:disable-next-line:max-line-length
export const indexUrl ={
indexUrl:'https://github.com/didi/Logi-KafkaManager/blob/master/docs/user_guide/kafka_metrics_desc.md', // 指标说明
cagUrl:'https://github.com/didi/Logi-KafkaManager/blob/master/docs/user_guide/add_cluster/add_cluster.md', // 集群接入指南 Cluster access Guide
}
......
......@@ -100,7 +100,7 @@ export class ClusterConsumer extends SearchAndFilterContainer {
<div className="k-row">
<ul className="k-tab">
<li>{this.props.tab}</li>
{this.renderSearch('', '请输入消费组名称')}
</ul>
<Table
columns={this.columns}
......
......@@ -2,7 +2,8 @@
import * as React from 'react';
import { SearchAndFilterContainer } from 'container/search-filter';
import { Table, Button, Popconfirm, Modal, Transfer, notification } from 'component/antd';
import { observer } from 'mobx-react';
import { pagination } from 'constants/table';
import Url from 'lib/url-parser';
......@@ -16,8 +17,12 @@ import { timeFormat } from 'constants/strategy';
export class ClusterController extends SearchAndFilterContainer {
public clusterId: number;
public state: any = {
searchKey: '',
searchCandidateKey: '',
isCandidateModel: false,
mockData: [],
targetKeys: [],
};
constructor(props: any) {
......@@ -37,14 +42,25 @@ export class ClusterController extends SearchAndFilterContainer {
return data;
}
public getCandidateData<T extends IController>(origin: T[]) {
let data: T[] = origin;
let { searchCandidateKey } = this.state;
searchCandidateKey = (searchCandidateKey + '').trim().toLowerCase();
data = searchCandidateKey ? origin.filter((item: IController) =>
(item.host !== undefined && item.host !== null) && item.host.toLowerCase().includes(searchCandidateKey as string),
) : origin;
return data;
}
// 候选controller
public renderCandidateController() {
const columns = [
{
title: 'BrokerId',
dataIndex: 'brokerId',
key: 'brokerId',
width: '20%',
sorter: (a: IController, b: IController) => b.brokerId - a.brokerId,
render: (r: string, t: IController) => {
return (
......@@ -57,7 +73,7 @@ export class ClusterController extends SearchAndFilterContainer {
title: 'BrokerHost',
key: 'host',
dataIndex: 'host',
width: '20%',
// render: (r: string, t: IController) => {
// return (
// <a href={`${this.urlPrefix}/admin/broker-detail?clusterId=${this.clusterId}&brokerId=${t.brokerId}`}>{r}
......@@ -65,6 +81,77 @@ export class ClusterController extends SearchAndFilterContainer {
// );
// },
},
{
title: 'Broker状态',
key: 'status',
dataIndex: 'status',
width: '20%',
render: (r: number, t: IController) => {
return (
<span>{r === 1 ? '不在线' : '在线'}</span>
);
},
},
{
title: '创建时间',
dataIndex: 'startTime',
key: 'startTime',
width: '25%',
sorter: (a: IController, b: IController) => b.timestamp - a.timestamp,
render: (t: number) => moment(t).format(timeFormat),
},
{
title: '操作',
dataIndex: 'operation',
key: 'operation',
width: '15%',
render: (r: string, t: IController) => {
return (
<Popconfirm
title="确定删除?"
onConfirm={() => this.deleteCandidateCancel(t)}
cancelText="取消"
okText="确认"
>
<a>删除</a>
</Popconfirm>
);
},
},
];
return (
<Table
columns={columns}
dataSource={this.getCandidateData(admin.controllerCandidate)}
pagination={pagination}
rowKey="key"
/>
);
}
public renderController() {
const columns = [
{
title: 'BrokerId',
dataIndex: 'brokerId',
key: 'brokerId',
width: '30%',
sorter: (a: IController, b: IController) => b.brokerId - a.brokerId,
render: (r: string, t: IController) => {
return (
<a href={`${this.urlPrefix}/admin/broker-detail?clusterId=${this.clusterId}&brokerId=${t.brokerId}`}>{r}
</a>
);
},
},
{
title: 'BrokerHost',
key: 'host',
dataIndex: 'host',
width: '30%',
},
{
title: '变更时间',
dataIndex: 'timestamp',
......@@ -87,16 +174,104 @@ export class ClusterController extends SearchAndFilterContainer {
public componentDidMount() {
admin.getControllerHistory(this.clusterId);
admin.getCandidateController(this.clusterId);
admin.getBrokersMetadata(this.clusterId);
}
public addController = () => {
this.setState({ isCandidateModel: true, targetKeys: [] })
}
public addCandidateChange = (targetKeys: any) => {
this.setState({ targetKeys })
}
public handleCandidateCancel = () => {
this.setState({ isCandidateModel: false });
}
public handleCandidateOk = () => {
const brokerIdList = this.state.targetKeys.map((item: any) => {
return admin.brokersMetadata[item].brokerId;
});
admin.addCandidateController(this.clusterId, brokerIdList).then(data => {
notification.success({ message: '新增成功' });
admin.getCandidateController(this.clusterId);
}).catch(err => {
notification.error({ message: '新增失败' });
})
this.setState({ isCandidateModel: false, targetKeys: [] });
}
public deleteCandidateCancel = (target: any) => {
admin.deleteCandidateCancel(this.clusterId, [target.brokerId]).then(() => {
notification.success({ message: '删除成功' });
});
this.setState({ isCandidateModel: false });
}
public renderAddCandidateController() {
const filterControllerCandidate = admin.brokersMetadata.filter((item: any) => {
return !admin.filtercontrollerCandidate.includes(item.brokerId);
});
return (
<Modal
title="新增候选Controller"
visible={this.state.isCandidateModel}
// okText="确认"
// cancelText="取消"
maskClosable={false}
// onOk={() => this.handleCandidateOk()}
onCancel={() => this.handleCandidateCancel()}
footer={<>
<Button style={{ width: '60px' }} onClick={() => this.handleCandidateCancel()}>取消</Button>
<Button disabled={this.state.targetKeys.length > 0 ? false : true} style={{ width: '60px' }} type="primary" onClick={() => this.handleCandidateOk()}>确定</Button>
</>
}
>
<Transfer
dataSource={filterControllerCandidate}
targetKeys={this.state.targetKeys}
render={item => item.host}
onChange={(targetKeys) => this.addCandidateChange(targetKeys)}
titles={['未选', '已选']}
locale={{
itemUnit: '',
itemsUnit: '',
}}
listStyle={{
width: "45%",
}}
/>
</Modal>
);
}
public render() {
return (
<div className="k-row">
<ul className="k-tab">
<li>
<span>候选Controller</span>
<span style={{ display: 'inline-block', color: "#a7a8a9", fontSize: '12px', marginLeft: '15px' }}>Controller将会优先从以下Broker中选举</span>
</li>
<div style={{ display: 'flex' }}>
<div style={{ marginRight: '15px' }}>
<Button onClick={() => this.addController()} type='primary'>新增候选Controller</Button>
</div>
{this.renderSearch('', '请查找Host', 'searchCandidateKey')}
</div>
</ul>
{this.renderCandidateController()}
<ul className="k-tab" style={{ marginTop: '10px' }}>
<li>{this.props.tab}</li>
{this.renderSearch('', '请输入Host')}
</ul>
{this.renderController()}
{this.renderAddCandidateController()}
</div>
);
}
......
......@@ -94,4 +94,10 @@
.region-prompt{
font-weight: bold;
text-align: center;
}
.asd{
display: flex;
justify-content: space-around;
align-items: center;
}
\ No newline at end of file
......@@ -32,9 +32,9 @@ export class ClusterDetail extends React.Component {
}
public render() {
let content = {} as IMetaData;
content = admin.basicInfo ? admin.basicInfo : content;
return (
<>
<PageHeader
className="detail topic-detail-header"
......@@ -46,7 +46,7 @@ export class ClusterDetail extends React.Component {
<ClusterOverview basicInfo={content} />
</TabPane>
<TabPane tab="Topic信息" key="2">
<ClusterTopic tab={'Topic信息'} />
</TabPane>
<TabPane tab="Broker信息" key="3">
<ClusterBroker tab={'Broker信息'} basicInfo={content} />
......@@ -60,11 +60,11 @@ export class ClusterDetail extends React.Component {
<TabPane tab="逻辑集群信息" key="6">
<LogicalCluster tab={'逻辑集群信息'} basicInfo={content} />
</TabPane>
<TabPane tab="Controller信息" key="7">
<ClusterController tab={'Controller变更历史'} />
</TabPane>
<TabPane tab="限流信息" key="8">
<CurrentLimiting tab={'限流信息'} />
</TabPane>
</Tabs>
</>
......
......@@ -12,6 +12,7 @@ import { urlPrefix } from 'constants/left-menu';
import { indexUrl } from 'constants/strategy'
import { region } from 'store';
import './index.less';
import Monacoeditor from 'component/editor/monacoEditor';
import { getAdminClusterColumns } from '../config';
const { confirm } = Modal;
......@@ -132,6 +133,25 @@ export class ClusterList extends SearchAndFilterContainer {
"security.protocol": "SASL_PLAINTEXT",
"sasl.mechanism": "PLAIN",
"sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\\"xxxxxx\\" password=\\"xxxxxx\\";"
}`,
rows: 8,
},
},
{
key: 'jmxProperties',
label: 'JMX认证',
type: 'text_area',
rules: [{
required: false,
message: '请输入JMX认证',
}],
attrs: {
placeholder: `请输入JMX认证,例如:
{
"maxConn": 10, #KM对单台Broker对最大连接数
"username": "xxxxx", #用户名
"password": "xxxxx", #密码
"openSSL": true #开启SSL,true表示开启SSL,false表示关闭
}`,
rows: 8,
},
......
import * as React from 'react';
import { IUser, IUploadFile, IConfigure, IConfigGateway, IMetaData } from 'types/base-type';
import { users } from 'store/users';
import { version } from 'store/version';
import { showApplyModal, showApplyModalModifyPassword, showModifyModal, showConfigureModal, showConfigGatewayModal } from 'container/modal/admin';
import { Popconfirm, Tooltip } from 'component/antd';
import { admin } from 'store/admin';
import { cellStyle } from 'constants/table';
......@@ -27,6 +27,7 @@ export const getUserColumns = () => {
return (
<span className="table-operation">
<a onClick={() => showApplyModal(record)}>编辑</a>
<a onClick={() => showApplyModalModifyPassword(record)}>修改密码</a>
<Popconfirm
title="确定删除?"
onConfirm={() => users.deleteUser(record.username)}
......@@ -184,6 +185,87 @@ export const getConfigureColumns = () => {
return columns;
};
// 网关配置
export const getConfigColumns = () => {
const columns = [
{
title: '配置类型',
dataIndex: 'type',
key: 'type',
width: '25%',
ellipsis: true,
sorter: (a: IConfigGateway, b: IConfigGateway) => a.type.charCodeAt(0) - b.type.charCodeAt(0),
},
{
title: '配置键',
dataIndex: 'name',
key: 'name',
width: '15%',
ellipsis: true,
sorter: (a: IConfigGateway, b: IConfigGateway) => a.name.charCodeAt(0) - b.name.charCodeAt(0),
},
{
title: '配置值',
dataIndex: 'value',
key: 'value',
width: '20%',
ellipsis: true,
sorter: (a: IConfigGateway, b: IConfigGateway) => a.value.charCodeAt(0) - b.value.charCodeAt(0),
render: (t: string) => {
return t.slice(0, 1) === '{' && t.slice(-1) === '}' ? JSON.stringify(JSON.parse(t), null, 4) : t;
},
},
{
title: '修改时间',
dataIndex: 'modifyTime',
key: 'modifyTime',
width: '15%',
sorter: (a: IConfigGateway, b: IConfigGateway) => b.modifyTime - a.modifyTime,
render: (t: number) => moment(t).format(timeFormat),
},
{
title: '版本号',
dataIndex: 'version',
key: 'version',
width: '10%',
ellipsis: true,
sorter: (a: IConfigGateway, b: IConfigGateway) => b.version.charCodeAt(0) - a.version.charCodeAt(0),
},
{
title: '描述信息',
dataIndex: 'description',
key: 'description',
width: '20%',
ellipsis: true,
onCell: () => ({
style: {
maxWidth: 180,
...cellStyle,
},
}),
},
{
title: '操作',
width: '10%',
render: (text: string, record: IConfigGateway) => {
return (
<span className="table-operation">
<a onClick={() => showConfigGatewayModal(record)}>编辑</a>
<Popconfirm
title="确定删除?"
onConfirm={() => admin.deleteConfigGateway({ id: record.id })}
cancelText="取消"
okText="确认"
>
<a>删除</a>
</Popconfirm>
</span>);
},
},
];
return columns;
};
const renderClusterHref = (value: number | string, item: IMetaData, key: number) => {
return ( // 0 暂停监控--不可点击 1 监控中---可正常点击
<>
......
......@@ -3,11 +3,11 @@ import { SearchAndFilterContainer } from 'container/search-filter';
import { Table, Button, Spin } from 'component/antd';
import { admin } from 'store/admin';
import { observer } from 'mobx-react';
import { IConfigure, IConfigGateway } from 'types/base-type';
import { users } from 'store/users';
import { pagination } from 'constants/table';
import { getConfigureColumns, getConfigColumns } from './config';
import { showConfigureModal, showConfigGatewayModal } from 'container/modal/admin';
@observer
export class ConfigureManagement extends SearchAndFilterContainer {
......@@ -17,7 +17,12 @@ export class ConfigureManagement extends SearchAndFilterContainer {
};
public componentDidMount() {
if (this.props.isShow) {
admin.getGatewayList();
admin.getGatewayType();
} else {
admin.getConfigure();
}
}
public getData<T extends IConfigure>(origin: T[]) {
......@@ -34,15 +39,34 @@ export class ConfigureManagement extends SearchAndFilterContainer {
return data;
}
public getGatewayData<T extends IConfigGateway>(origin: T[]) {
let data: T[] = origin;
let { searchKey } = this.state;
searchKey = (searchKey + '').trim().toLowerCase();
data = searchKey ? origin.filter((item: IConfigGateway) =>
((item.name !== undefined && item.name !== null) && item.name.toLowerCase().includes(searchKey as string))
|| ((item.value !== undefined && item.value !== null) && item.value.toLowerCase().includes(searchKey as string))
|| ((item.description !== undefined && item.description !== null) &&
item.description.toLowerCase().includes(searchKey as string)),
) : origin;
return data;
}
public renderTable() {
return (
<Spin spinning={users.loading}>
{this.props.isShow ? <Table
rowKey="key"
columns={getConfigColumns()}
dataSource={this.getGatewayData(admin.configGatewayList)}
pagination={pagination}
/> : <Table
rowKey="key"
columns={getConfigureColumns()}
dataSource={this.getData(admin.configureList)}
pagination={pagination}
/>}
</Spin>
);
......@@ -53,7 +77,7 @@ export class ConfigureManagement extends SearchAndFilterContainer {
<ul>
{this.renderSearch('', '请输入配置键、值或描述')}
<li className="right-btn-1">
<Button type="primary" onClick={() => this.props.isShow ? showConfigGatewayModal() : showConfigureModal()}>增加配置</Button>
</li>
</ul>
);
......
......@@ -6,6 +6,7 @@ import { curveKeys, CURVE_KEY_MAP, PERIOD_RADIO_MAP, PERIOD_RADIO } from './conf
import moment = require('moment');
import { observer } from 'mobx-react';
import { timeStampStr } from 'constants/strategy';
import { adminMonitor } from 'store/admin-monitor';
@observer
export class DataCurveFilter extends React.Component {
......@@ -21,6 +22,7 @@ export class DataCurveFilter extends React.Component {
}
public refreshAll = () => {
adminMonitor.setRequestId(null);
Object.keys(curveKeys).forEach((c: curveKeys) => {
const { typeInfo, curveInfo: option } = CURVE_KEY_MAP.get(c);
const { parser } = typeInfo;
......@@ -32,7 +34,7 @@ export class DataCurveFilter extends React.Component {
return (
<>
<Radio.Group onChange={this.radioChange} defaultValue={curveInfo.periodKey}>
{PERIOD_RADIO.map(p => <Radio.Button key={p.key} value={p.key}>{p.label}</Radio.Button>)}
</Radio.Group>
<DatePicker.RangePicker
format={timeStampStr}
......
......@@ -13,17 +13,20 @@ export class PlatformManagement extends React.Component {
public render() {
return (
<>
<Tabs activeKey={location.hash.substr(1) || '1'} type="card" onChange={handleTabKey}>
<TabPane tab="应用管理" key="1">
<AdminAppList />
</TabPane>
<TabPane tab="用户管理" key="2">
<UserManagement />
</TabPane>
<TabPane tab="平台配置" key="3">
<ConfigureManagement isShow={false} />
</TabPane>
<TabPane tab="网关配置" key="4">
<ConfigureManagement isShow={true} />
</TabPane>
</Tabs>
</>
);
}
......
......@@ -29,7 +29,7 @@ export class UserManagement extends SearchAndFilterContainer {
searchKey = (searchKey + '').trim().toLowerCase();
data = searchKey ? origin.filter((item: IUser) =>
(item.username !== undefined && item.username !== null) && item.username.toLowerCase().includes(searchKey as string)) : origin;
return data;
}
......
import * as React from 'react';
import { alarm } from 'store/alarm';
import { IMonitorGroups } from 'types/base-type';
import { getValueFromLocalStorage, setValueToLocalStorage, deleteValueFromLocalStorage } from 'lib/local-storage';
import { VirtualScrollSelect } from '../../../component/virtual-scroll-select';
interface IAlarmSelectProps {
......@@ -36,6 +36,10 @@ export class AlarmSelect extends React.Component<IAlarmSelectProps> {
onChange && onChange(params);
}
public componentWillUnmount() {
deleteValueFromLocalStorage('monitorGroups');
}
public render() {
const { value, isDisabled } = this.props;
return (
......
......@@ -9,6 +9,7 @@ import { pagination } from 'constants/table';
import { urlPrefix } from 'constants/left-menu';
import { alarm } from 'store/alarm';
import 'styles/table-filter.less';
import { Link } from 'react-router-dom';
@observer
export class AlarmList extends SearchAndFilterContainer {
......@@ -24,7 +25,7 @@ export class AlarmList extends SearchAndFilterContainer {
if (app.active !== '-1' || searchKey !== '') {
data = origin.filter(d =>
((d.name !== undefined && d.name !== null) && d.name.toLowerCase().includes(searchKey as string)
|| ((d.operator !== undefined && d.operator !== null) && d.operator.toLowerCase().includes(searchKey as string)))
&& (app.active === '-1' || d.appId === (app.active + '')),
);
} else {
......@@ -55,9 +56,7 @@ export class AlarmList extends SearchAndFilterContainer {
{this.renderSearch('名称:', '请输入告警规则或者操作人')}
<li className="right-btn-1">
<Button type="primary">
<Link to={`/alarm/add`}>新增规则</Link>
</Button>
</li>
</>
......@@ -68,6 +67,9 @@ export class AlarmList extends SearchAndFilterContainer {
if (!alarm.monitorStrategies.length) {
alarm.getMonitorStrategies();
}
if (!app.data.length) {
app.getAppList();
}
}
public render() {
......
......@@ -91,7 +91,7 @@ export class MyCluster extends SearchAndFilterContainer {
],
formData: {},
visible: true,
title: <div><span>申请集群</span><a className='applicationDocument' href="https://github.com/didi/Logi-KafkaManager/blob/master/docs/user_guide/resource_apply.md" target='_blank'>资源申请文档</a></div>,
okText: '确认',
onSubmit: (value: any) => {
value.idc = region.currentRegion;
......
......@@ -117,12 +117,12 @@ class DataMigrationFormTable extends React.Component<IFormTableProps> {
key: 'maxThrottle',
editable: true,
}, {
title: '迁移后Topic保存时间(h)',
dataIndex: 'reassignRetentionTime',
key: 'reassignRetentionTime',
editable: true,
}, {
title: 'Topic保存时间(h)',
dataIndex: 'retentionTime',
key: 'retentionTime', // originalRetentionTime
width: '132px',
......
......@@ -61,6 +61,7 @@ export const showEditClusterTopic = (item: IClusterTopics) => {
attrs: {
placeholder: '请输入保存时间',
suffix: '小时',
prompttype:'修改保存时间,预计一分钟左右生效!'
},
},
{
......
......@@ -158,26 +158,26 @@ export const createMigrationTasks = () => {
},
{
key: 'originalRetentionTime',
label: 'Topic保存时间',
rules: [{
required: true,
message: '请输入原Topic保存时间',
}],
attrs: {
disabled: true,
placeholder: '请输入原Topic保存时间',
suffix: '小时',
},
},
{
key: 'reassignRetentionTime',
label: '迁移后Topic保存时间',
rules: [{
required: true,
message: '请输入迁移后Topic保存时间',
}],
attrs: {
placeholder: '请输入迁移后Topic保存时间',
suffix: '小时',
},
},
......
......@@ -24,26 +24,111 @@ export const showApplyModal = (record?: IUser) => {
value: +item,
})),
rules: [{ required: true, message: '请选择角色' }],
},
],
formData: record || {},
visible: true,
title: record ? '修改用户' : '新增用户',
onSubmit: (value: IUser) => {
if (record) {
return users.modfiyUser(value);
}
return users.addUser(value).then(() => {
message.success('操作成功');
});
},
};
if (!record) {
const formMap: any = xFormModal.formMap;
formMap.splice(2, 0, {
key: 'password',
label: '密码',
type: FormItemType.inputPassword,
rules: [{ required: true, message: '请输入密码' }],
});
}
wrapper.open(xFormModal);
};
export const showApplyModalModifyPassword = (record: IUser) => {
const xFormModal:any = {
formMap: [
{
key: 'newPassword',
label: '新密码',
type: FormItemType.inputPassword,
rules: [
{
required: true,
message: '请输入新密码',
}
],
attrs:{
onChange:(e:any)=>{
users.setNewPassWord(e.target.value)
}
}
},
{
key: 'confirmPassword',
label: '确认密码',
type: FormItemType.inputPassword,
rules: [
{
required: true,
message: '请确认密码',
validator: (rule: any, value: any, callback: any) => {
// Check that the confirmation matches the new password
if (users.newPassWord) {
if (value !== users.newPassWord) {
rule.message = '两次密码输入不一致';
callback('两次密码输入不一致');
} else {
callback();
}
} else if (!value) {
rule.message = '请确认密码';
callback('请确认密码');
} else {
callback();
}
},
}
],
},
],
formData: record || {},
visible: true,
title: '修改密码',
onSubmit: (value: IUser) => {
const params: any = {
username: record?.username,
password: value.confirmPassword,
role: record?.role,
};
return users.modfiyUser(params).then(() => {
message.success('操作成功');
});
},
}
wrapper.open(xFormModal);
};
import * as React from 'react';
import { notification, Select } from 'component/antd';
import { IUploadFile, IConfigure, IConfigGateway } from 'types/base-type';
import { version } from 'store/version';
import { admin } from 'store/admin';
import { wrapper } from 'store';
......@@ -97,8 +97,8 @@ const updateFormModal = (type: number) => {
formMap[2].attrs = {
accept: version.fileSuffix,
},
// tslint:disable-next-line:no-unused-expression
wrapper.ref && wrapper.ref.updateFormMap$(formMap, wrapper.xFormWrapper.formData, true);
}
};
......@@ -157,8 +157,8 @@ export const showModifyModal = (record: IUploadFile) => {
export const showConfigureModal = async (record?: IConfigure) => {
if (record) {
const result: any = await format2json(record.configValue);
record.configValue = result.result || record.configValue;
}
const xFormModal = {
formMap: [
......@@ -193,10 +193,69 @@ export const showConfigureModal = async (record?: IConfigure) => {
return admin.editConfigure(value).then(data => {
notification.success({ message: '编辑配置成功' });
});
} else {
return admin.addNewConfigure(value).then(data => {
notification.success({ message: '新建配置成功' });
});
}
},
};
wrapper.open(xFormModal);
};
export const showConfigGatewayModal = async (record?: IConfigGateway) => {
const xFormModal = {
formMap: [
{
key: 'type',
label: '配置类型',
rules: [{ required: true, message: '请选择配置类型' }],
type: 'select',
options: admin.gatewayType.map((item: any, index: number) => ({
key: index,
label: item.configName,
value: item.configType,
})),
attrs: {
disabled: !!record,
}
}, {
key: 'name',
label: '配置键',
rules: [{ required: true, message: '请输入配置键' }],
attrs: {
disabled: !!record,
},
}, {
key: 'value',
label: '配置值',
type: 'text_area',
rules: [{
required: true,
message: '请输入配置值',
}],
}, {
key: 'description',
label: '描述',
type: 'text_area',
rules: [{ required: true, message: '请输入备注' }],
},
],
formData: record || {},
visible: true,
isWaitting: true,
title: `${record ? '编辑配置' : '新建配置'}`,
onSubmit: async (params: IConfigGateway) => {
if (record) {
params.id = record.id;
return admin.editConfigGateway(params).then(data => {
notification.success({ message: '编辑配置成功' });
});
}
return admin.addNewConfigGateway(params).then(data => {
notification.success({ message: '新建配置成功' });
});
},
};
wrapper.open(xFormModal);
......
......@@ -85,7 +85,7 @@ export const showEditModal = (record?: IAppItem, from?: string, isDisabled?: boo
],
formData: record,
visible: true,
title: isDisabled ? '详情' : record ? '编辑' : <div><span>应用申请</span><a className='applicationDocument' href="https://github.com/didi/Logi-KafkaManager/blob/master/docs/user_guide/resource_apply.md" target='_blank'>资源申请文档</a></div>,
// customRenderElement: isDisabled ? '' : record ? '' : <span className="tips">集群资源充足时,预计1分钟自动审批通过</span>,
isWaitting: true,
onSubmit: (value: IAppItem) => {
......
......@@ -20,14 +20,14 @@ export interface IRenderData {
}
export const migrationModal = (renderData: IRenderData[]) => {
const xFormWrapper = {
type: 'drawer',
visible: true,
width: 1000,
title: '新建迁移任务',
customRenderElement: <WrappedDataMigrationFormTable data={renderData} />,
nofooter: true,
noform: true,
};
wrapper.open(xFormWrapper as IXFormWrapper);
};
......@@ -75,8 +75,8 @@ export const showApprovalModal = (info: IOrderInfo, status: number, from?: strin
// }],
rules: [{
required: true,
message: '请输入大于0小于10000的整数',
pattern: /^[1-9]\d{0,3}$/,
}],
}, {
key: 'species',
......
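Range rules like the one in this hunk are easy to get subtly wrong: the message promises an integer greater than 0 and less than 10000, so the pattern must also reject decimals and leading zeros. A hedged sketch of one anchored pattern that does this (the variable name is hypothetical):

```typescript
// Assumed requirement: an integer in the open range (0, 10000).
// [1-9] forbids a leading zero, \d{0,3} allows up to three more digits,
// and the anchors leave no room for a sign or a decimal point,
// so the pattern matches exactly the strings '1' through '9999'.
const positiveIntBelow10000 = /^[1-9]\d{0,3}$/;

const samples = ['1', '12', '9999', '0', '10000', '1.5', '+7'];
const accepted = samples.filter((s) => positiveIntBelow10000.test(s));
console.log(accepted); // → [ '1', '12', '9999' ]
```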
......@@ -88,7 +88,7 @@ export const applyTopic = () => {
],
formData: {},
visible: true,
title: '申请Topic',
title: <div><span>申请Topic</span><a className='applicationDocument' href="https://github.com/didi/Logi-KafkaManager/blob/master/docs/user_guide/resource_apply.md" target='_blank'>资源申请文档</a></div>,
okText: '确认',
// customRenderElement: <span className="tips">集群资源充足时,预计1分钟自动审批通过</span>,
isWaitting: true,
......
......@@ -126,7 +126,7 @@ export class SearchAndFilterContainer extends React.Component<any, ISearchAndFil
);
}
public renderSearch(text?: string, placeholder?: string, keyName: string = 'searchKey') {
const value = this.state[keyName] as string;
return (
<li className="render-box">
......
......@@ -101,7 +101,9 @@ export class ConnectInformation extends SearchAndFilterContainer {
<>
<div className="k-row" >
<ul className="k-tab">
<li>
连接信息 <span style={{ color: '#a7a8a9', fontSize: '12px', padding: '0 15px' }}>展示近20分钟的连接信息</span>
</li>
{this.renderSearch('', '请输入连接信息', 'searchKey')}
</ul>
{this.renderConnectionInfo(this.getData(topic.connectionInfo))}
......
......@@ -138,7 +138,7 @@ export class GroupID extends SearchAndFilterContainer {
public renderConsumerDetails() {
const consumerGroup = this.consumerGroup;
const columns: any = [{
title: 'Partition ID',
dataIndex: 'partitionId',
key: 'partitionId',
......@@ -179,7 +179,8 @@ export class GroupID extends SearchAndFilterContainer {
<>
<div className="details-box">
<b>{consumerGroup}</b>
<div style={{ display: 'flex' }}>
{this.renderSearch('', '请输入Consumer ID')}
<Button onClick={this.backToPage}>返回</Button>
<Button onClick={this.updateDetailsStatus}>刷新</Button>
<Button onClick={() => this.showResetOffset()}>重置Offset</Button>
......@@ -187,7 +188,7 @@ export class GroupID extends SearchAndFilterContainer {
</div>
<Table
columns={columns}
dataSource={this.getDetailData(topic.consumeDetails)}
rowKey="key"
pagination={pagination}
/>
......@@ -214,7 +215,12 @@ export class GroupID extends SearchAndFilterContainer {
dataIndex: 'location',
key: 'location',
width: '34%',
}, {
title: '状态',
dataIndex: 'state',
key: 'state',
width: '34%',
}
];
return (
<>
......@@ -236,7 +242,17 @@ export class GroupID extends SearchAndFilterContainer {
data = searchKey ? origin.filter((item: IConsumerGroups) =>
(item.consumerGroup !== undefined && item.consumerGroup !== null) && item.consumerGroup.toLowerCase().includes(searchKey as string),
) : origin;
return data;
}
public getDetailData<T extends IConsumeDetails>(origin: T[]) {
let data: T[] = origin;
let { searchKey } = this.state;
searchKey = (searchKey + '').trim().toLowerCase();
data = searchKey ? origin.filter((item: IConsumeDetails) =>
(item.clientId !== undefined && item.clientId !== null) && item.clientId.toLowerCase().includes(searchKey as string),
) : origin;
return data;
}
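`getData` and `getDetailData` differ only in which field they match on. A minimal generic sketch of the same case-insensitive, null-tolerant substring filter (the helper name is hypothetical):

```typescript
// Hypothetical generic form of the two search filters above: trim and
// lower-case the key, then keep rows whose chosen field is non-null and
// contains it as a substring. An empty key returns the list unchanged.
function filterByField<T>(rows: T[], field: keyof T, rawKey: string): T[] {
  const searchKey = (rawKey + '').trim().toLowerCase();
  if (!searchKey) {
    return rows;
  }
  return rows.filter((row) => {
    const value = row[field];
    return value !== undefined && value !== null
      && String(value).toLowerCase().includes(searchKey);
  });
}

const details = [
  { clientId: 'Consumer-A', partitionId: 0 },
  { clientId: 'consumer-b', partitionId: 1 },
  { clientId: null, partitionId: 2 },
];
console.log(filterByField(details, 'clientId', ' CONSUMER ').length); // → 2
```

A single helper like this would also remove the near-duplication between the two component methods.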
......
......@@ -71,32 +71,32 @@ class ResetOffset extends React.Component<any> {
const { getFieldDecorator } = this.props.form;
const { typeValue, offsetValue } = this.state;
return (
<>
<Alert message="重置消费偏移前,请先关闭客户端,否则会重置失败 !!!" type="warning" showIcon={true} />
<Alert message="关闭客户端后,请等待一分钟之后再重置消费偏移 !!!" type="warning" showIcon={true} />
<div className="o-container">
<Form labelAlign="left" onSubmit={this.handleSubmit} >
<Radio.Group onChange={this.onChangeType} value={typeValue}>
<Radio value="time"><span className="title-con">重置到指定时间</span></Radio>
<Row>
<Col span={26}>
<Form.Item label="" >
<Radio.Group
onChange={this.onChangeOffset}
value={offsetValue}
disabled={typeValue === 'partition'}
defaultValue="offset"
className="mr-10"
>
<Radio.Button value="offset">最新offset</Radio.Button>
<Radio.Button value="custom">自定义</Radio.Button>
</Radio.Group>
{typeValue === 'time' && offsetValue === 'custom' &&
getFieldDecorator('timestamp', {
rules: [{ required: false, message: '' }],
initialValue: moment(),
})(
<DatePicker
showTime={true}
format={timeMinute}
......@@ -109,7 +109,7 @@ class ResetOffset extends React.Component<any> {
</Col>
</Row>
<Radio value="partition"><span className="title-con">重置指定分区及偏移</span></Radio>
</Radio.Group>
<Row>
<Form.Item>
<Row>
......
import fetch, { formFetch } from './fetch';
import { IUploadFile, IUser, IQuotaModelItem, ILimitsItem, ITopic, IOrderParams, ISample, IMigration, IExecute, IEepand, IUtils, ITopicMetriceParams, IRegister, IEditTopic, IExpand, IDeleteTopic, INewRegions, INewLogical, IRebalance, INewBulidEnums, ITrigger, IApprovalOrder, IMonitorSilences, IConfigure, IConfigGateway, IBatchApproval } from 'types/base-type';
import { IRequestParams } from 'types/alarm';
import { apiCache } from 'lib/api-cache';
......@@ -442,6 +442,34 @@ export const deleteConfigure = (configKey: string) => {
});
};
export const getGatewayList = () => {
return fetch(`/rd/gateway-configs`);
};
export const getGatewayType = () => {
return fetch(`/op/gateway-configs/type-enums`);
};
export const addNewConfigGateway = (params: IConfigGateway) => {
return fetch(`/op/gateway-configs`, {
method: 'POST',
body: JSON.stringify(params),
});
};
export const editConfigGateway = (params: IConfigGateway) => {
return fetch(`/op/gateway-configs`, {
method: 'PUT',
body: JSON.stringify(params),
});
};
export const deleteConfigGateway = (params: IConfigGateway) => {
return fetch(`/op/gateway-configs`, {
method: 'DELETE',
body: JSON.stringify(params),
});
};
export const getDataCenter = () => {
return fetch(`/normal/configs/idc`);
};
......@@ -530,6 +558,23 @@ export const getControllerHistory = (clusterId: number) => {
return fetch(`/rd/clusters/${clusterId}/controller-history`);
};
export const getCandidateController = (clusterId: number) => {
return fetch(`/rd/clusters/${clusterId}/controller-preferred-candidates`);
};
export const addCandidateController = (params: any) => {
return fetch(`/op/cluster-controller/preferred-candidates`, {
method: 'POST',
body: JSON.stringify(params),
});
};
export const deleteCandidateCancel = (params: any) => {
return fetch(`/op/cluster-controller/preferred-candidates`, {
method: 'DELETE',
body: JSON.stringify(params),
});
};
/**
* 运维管控 broker
*/
......
......@@ -77,7 +77,7 @@ export const getControlMetricOption = (type: IOptionType, data: IClusterMetrics[
name = '';
data.map(item => {
item.messagesInPerSec = item.messagesInPerSec !== null ? Number(item.messagesInPerSec.toFixed(2)) : null;
});
break;
case 'brokerNum':
case 'topicNum':
......@@ -224,7 +224,7 @@ export const getClusterMetricOption = (type: IOptionType, record: IClusterMetric
name = '';
data.map(item => {
item.messagesInPerSec = item.messagesInPerSec !== null ? Number(item.messagesInPerSec.toFixed(2)) : null;
});
break;
default:
const { name: unitName, data: xData } = dealFlowData(metricTypeMap[type], data);
......@@ -248,8 +248,8 @@ export const getClusterMetricOption = (type: IOptionType, record: IClusterMetric
const unitSeries = item.data[item.seriesName] !== null ? Number(item.data[item.seriesName]) : null;
// tslint:disable-next-line:max-line-length
result += '<span style="display:inline-block;margin-right:0px;border-radius:10px;width:9px;height:9px;background-color:' + item.color + '"></span>';
if ((item.data.produceThrottled && item.seriesName === 'appIdBytesInPerSec')
|| (item.data.consumeThrottled && item.seriesName === 'appIdBytesOutPerSec')) {
return result += item.seriesName + ': ' + unitSeries + '(被限流)' + '<br>';
}
return result += item.seriesName + ': ' + unitSeries + '<br>';
......@@ -317,7 +317,7 @@ export const getMonitorMetricOption = (seriesName: string, data: IMetricPoint[])
if (ele.name === item.seriesName) {
// tslint:disable-next-line:max-line-length
result += '<span style="display:inline-block;margin-right:0px;border-radius:10px;width:9px;height:9px;background-color:' + item.color + '"></span>';
return result += item.seriesName + ': ' + (item.data.value === null ? '' : item.data.value.toFixed(2)) + '<br>';
}
});
});
......
......@@ -3,6 +3,11 @@ import { observable, action } from 'mobx';
import { getBrokersMetricsHistory } from 'lib/api';
import { IClusterMetrics } from 'types/base-type';
const STATUS = {
PENDING: 'pending',
REJECT: 'reject',
FULLFILLED: 'fullfilled',
};
class AdminMonitor {
@observable
public currentClusterId = null as number;
......@@ -33,33 +38,42 @@ class AdminMonitor {
@action.bound
public setBrokersChartsData(data: IClusterMetrics[]) {
this.brokersMetricsHistory = data;
this.setRequestId(null);
this.setRequestId(STATUS.FULLFILLED);
Promise.all(this.taskQueue).then(() => {
this.setRequestId(null);
this.taskQueue = [];
});
return data;
}
public taskQueue = [] as any[];
public getBrokersMetricsList = async (startTime: string, endTime: string) => {
if (this.requestId) {
// Poll the shared request status on a timer for each queued caller
const p = new Promise((res, rej) => {
const timer = window.setInterval(() => {
if (this.requestId === STATUS.REJECT) {
rej(this.brokersMetricsHistory);
window.clearInterval(timer);
} else if (this.requestId === STATUS.FULLFILLED) {
res(this.brokersMetricsHistory);
window.clearInterval(timer);
}
}, 800); // TODO: the timer-based polling could be replaced with direct promise chaining
});
this.taskQueue.push(p);
return p;
}
this.setRequestId(STATUS.PENDING);
return getBrokersMetricsHistory(this.currentClusterId, this.currentBrokerId, startTime, endTime)
.then(this.setBrokersChartsData).catch(() => this.setRequestId(STATUS.REJECT));
}
public getBrokersChartsData = async (startTime: string, endTime: string, reload?: boolean) => {
if (this.brokersMetricsHistory && !reload) {
return new Promise(res => res(this.brokersMetricsHistory));
}
return this.getBrokersMetricsList(startTime, endTime);
}
}
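The request-status bookkeeping above (PENDING queues callers, FULLFILLED resolves them all from the cached history) is essentially request coalescing. A simplified hedged sketch of the idea (class name hypothetical), resolving queued callers directly when the shared promise settles instead of polling on a timer:

```typescript
// Sketch of the coalescing pattern used by AdminMonitor. While one fetch is
// in flight, later callers are queued; when it settles, every queued caller
// is resolved from the shared cached result, so the backend sees one request.
type Status = 'idle' | 'pending' | 'fulfilled';

class CoalescedFetcher<T> {
  private status: Status = 'idle';
  private cache: T | null = null;
  private waiters: Array<(value: T) => void> = [];

  constructor(private readonly fetcher: () => Promise<T>) {}

  public get(): Promise<T> {
    const cached = this.cache;
    if (this.status === 'fulfilled' && cached !== null) {
      return Promise.resolve(cached); // already settled: serve the cache
    }
    if (this.status === 'pending') {
      // A request is in flight: queue instead of issuing a duplicate fetch.
      return new Promise((resolve) => this.waiters.push(resolve));
    }
    this.status = 'pending';
    return this.fetcher().then((data) => {
      this.status = 'fulfilled';
      this.cache = data;
      this.waiters.forEach((resolve) => resolve(data));
      this.waiters = [];
      return data;
    });
  }
}
```

Error handling (the REJECT branch) is omitted for brevity; a production version would also flush the queue on failure, as the store does.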
......
import { observable, action } from 'mobx';
import { INewBulidEnums, ILabelValue, IClusterReal, IOptionType, IClusterMetrics, IClusterTopics, IKafkaFiles, IMetaData, IConfigure, IConfigGateway, IBrokerData, IOffset, IController, IBrokersBasicInfo, IBrokersStatus, IBrokersTopics, IBrokersPartitions, IBrokersAnalysis, IAnalysisTopicVO, IBrokersMetadata, IBrokersRegions, IThrottles, ILogicalCluster, INewRegions, INewLogical, ITaskManage, IPartitionsLocation, ITaskType, ITasksEnums, ITasksMetaData, ITaskStatusDetails, IKafkaRoles, IEnumsMap, IStaffSummary, IBill, IBillDetail } from 'types/base-type';
import {
deleteCluster,
getBasicInfo,
......@@ -12,7 +12,12 @@ import {
getConfigure,
addNewConfigure,
editConfigure,
addNewConfigGateway,
deleteConfigure,
getGatewayList,
getGatewayType,
editConfigGateway,
deleteConfigGateway,
getDataCenter,
getClusterBroker,
getClusterConsumer,
......@@ -49,6 +54,9 @@ import {
getStaffSummary,
getBillStaffSummary,
getBillStaffDetail,
getCandidateController,
addCandidateController,
deleteCandidateCancel
} from 'lib/api';
import { getControlMetricOption, getClusterMetricOption } from 'lib/line-charts-config';
......@@ -59,6 +67,7 @@ import { transBToMB } from 'lib/utils';
import moment from 'moment';
import { timestore } from './time';
import { message } from 'component/antd';
class Admin {
@observable
......@@ -97,6 +106,12 @@ class Admin {
@observable
public configureList: IConfigure[] = [];
@observable
public configGatewayList: IConfigGateway[] = [];
@observable
public gatewayType: any[] = [];
@observable
public dataCenterList: string[] = [];
......@@ -142,6 +157,12 @@ class Admin {
@observable
public controllerHistory: IController[] = [];
@observable
public controllerCandidate: IController[] = [];
@observable
public filtercontrollerCandidate: string = '';
@observable
public brokersPartitions: IBrokersPartitions[] = [];
......@@ -152,7 +173,7 @@ class Admin {
public brokersAnalysisTopic: IAnalysisTopicVO[] = [];
@observable
public brokersMetadata: IBrokersMetadata[] | any = [];
@observable
public brokersRegions: IBrokersRegions[] = [];
......@@ -206,10 +227,10 @@ class Admin {
public kafkaRoles: IKafkaRoles[];
@observable
public controlType: IOptionType = 'byteIn/byteOut';
@observable
public type: IOptionType = 'byteIn/byteOut';
@observable
public currentClusterId = null as number;
......@@ -241,7 +262,7 @@ class Admin {
@action.bound
public setClusterRealTime(data: IClusterReal) {
this.clusterRealData = data;
this.getRealClusterLoading(false);
}
......@@ -284,7 +305,7 @@ class Admin {
return {
...item,
label: item.fileName,
value: item.fileName + ',' + item.fileMd5,
};
}));
}
......@@ -306,6 +327,20 @@ class Admin {
}) : [];
}
@action.bound
public setConfigGatewayList(data: IConfigGateway[]) {
this.configGatewayList = data ? data.map((item, index) => {
item.key = index;
return item;
}) : [];
}
@action.bound
public setConfigGatewayType(data: any) {
this.setLoading(false);
this.gatewayType = data || [];
}
@action.bound
public setDataCenter(data: string[]) {
this.dataCenterList = data || [];
......@@ -335,6 +370,17 @@ class Admin {
}) : [];
}
@action.bound
public setCandidateController(data: IController[]) {
this.controllerCandidate = data ? data.map((item, index) => {
item.key = index;
return item;
}) : [];
this.filtercontrollerCandidate = data ? data.map((item) => item.brokerId).join(',') : '';
}
@action.bound
public setBrokersBasicInfo(data: IBrokersBasicInfo) {
this.brokersBasicInfo = data;
......@@ -356,10 +402,10 @@ class Admin {
this.replicaStatus = data.brokerReplicaStatusList.slice(1);
this.bytesInStatus.forEach((item, index) => {
this.peakValueList.push({ name: peakValueMap[index], value: item });
});
this.replicaStatus.forEach((item, index) => {
this.copyValueList.push({ name: copyValueMap[index], value: item });
});
}
......@@ -415,16 +461,16 @@ class Admin {
}
@action.bound
public setBrokersMetadata(data: IBrokersMetadata[]|any) {
this.brokersMetadata = data ? data.map((item:any, index:any) => {
item.key = index;
return {
...item,
text: `${item.host} (BrokerID:${item.brokerId})`,
label: item.host,
value: item.brokerId,
};
}) : [];
}
@action.bound
......@@ -461,9 +507,9 @@ class Admin {
@action.bound
public setLogicalClusters(data: ILogicalCluster[]) {
this.logicalClusters = data ? data.map((item, index) => {
item.key = index;
return item;
}) : [];
}
@action.bound
......@@ -474,25 +520,25 @@ class Admin {
@action.bound
public setClustersThrottles(data: IThrottles[]) {
this.clustersThrottles = data ? data.map((item, index) => {
item.key = index;
return item;
}) : [];
}
@action.bound
public setPartitionsLocation(data: IPartitionsLocation[]) {
this.partitionsLocation = data ? data.map((item, index) => {
item.key = index;
return item;
}) : [];
}
@action.bound
public setTaskManagement(data: ITaskManage[]) {
this.taskManagement = data ? data.map((item, index) => {
item.key = index;
return item;
}) : [];
}
@action.bound
......@@ -568,7 +614,7 @@ class Admin {
return deleteCluster(clusterId).then(() => this.getMetaData(true));
}
public getPeakFlowChartData(value: ILabelValue[], map: string[]) {
return getPieChartOption(value, map);
}
......@@ -627,6 +673,30 @@ class Admin {
deleteConfigure(configKey).then(() => this.getConfigure());
}
public getGatewayList() {
getGatewayList().then(this.setConfigGatewayList);
}
public getGatewayType() {
this.setLoading(true);
getGatewayType().then(this.setConfigGatewayType);
}
public addNewConfigGateway(params: IConfigGateway) {
return addNewConfigGateway(params).then(() => this.getGatewayList());
}
public editConfigGateway(params: IConfigGateway) {
return editConfigGateway(params).then(() => this.getGatewayList());
}
public deleteConfigGateway(params: any) {
return deleteConfigGateway(params).then(() => this.getGatewayList());
}
public getDataCenter() {
getDataCenter().then(this.setDataCenter);
}
......@@ -643,6 +713,20 @@ class Admin {
return getControllerHistory(clusterId).then(this.setControllerHistory);
}
public getCandidateController(clusterId: number) {
return getCandidateController(clusterId).then((data) => this.setCandidateController(data));
}

public addCandidateController(clusterId: number, brokerIdList: any) {
return addCandidateController({ clusterId, brokerIdList }).then(() => this.getCandidateController(clusterId));
}

public deleteCandidateCancel(clusterId: number, brokerIdList: any) {
return deleteCandidateCancel({ clusterId, brokerIdList }).then(() => this.getCandidateController(clusterId));
}
public getBrokersBasicInfo(clusterId: number, brokerId: number) {
return getBrokersBasicInfo(clusterId, brokerId).then(this.setBrokersBasicInfo);
}
......
......@@ -181,6 +181,7 @@ class Alarm {
public modifyMonitorStrategy(params: IRequestParams) {
return modifyMonitorStrategy(params).then(() => {
message.success('操作成功');
window.location.href = `${urlPrefix}/alarm`;
}).finally(() => this.setLoading(false));
}
......
......@@ -19,6 +19,9 @@ export class Users {
@observable
public staff: IStaff[] = [];
@observable
public newPassWord: any = null;
@action.bound
public setAccount(data: IUser) {
setCookie([{ key: 'role', value: `${data.role}`, time: 1 }]);
......@@ -42,6 +45,11 @@ export class Users {
this.loading = value;
}
@action.bound
public setNewPassWord(value: string) {
this.newPassWord = value;
}
public getAccount() {
getAccount().then(this.setAccount);
}
......
......@@ -190,6 +190,7 @@ export interface IUser {
chineseName?: string;
department?: string;
key?: number;
confirmPassword?: string;
}
export interface IOffset {
......@@ -486,6 +487,17 @@ export interface IConfigure {
key?: number;
}
export interface IConfigGateway {
id: number;
key?: number;
modifyTime: number;
name: string;
value: string;
version: string;
type: string;
description: string;
}
export interface IEepand {
brokerIdList: number[];
clusterId: number;
......@@ -650,8 +662,10 @@ export interface IBrokerData {
export interface IController {
brokerId: number;
host: string;
timestamp: number;
version: number;
timestamp?: number;
version?: number;
startTime?: number;
status?: number;
key?: number;
}
......
......@@ -15,10 +15,7 @@ import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata
import com.xiaojukeji.kafka.manager.common.zookeeper.ZkConfigImpl;
import com.xiaojukeji.kafka.manager.dao.ControllerDao;
import com.xiaojukeji.kafka.manager.common.utils.jmx.JmxConnectorWrap;
import com.xiaojukeji.kafka.manager.dao.TopicDao;
import com.xiaojukeji.kafka.manager.dao.gateway.AuthorityDao;
import com.xiaojukeji.kafka.manager.service.service.JmxService;
import com.xiaojukeji.kafka.manager.service.utils.ConfigUtils;
import com.xiaojukeji.kafka.manager.service.zookeeper.*;
import com.xiaojukeji.kafka.manager.service.service.ClusterService;
import com.xiaojukeji.kafka.manager.common.zookeeper.ZkPathUtil;
......@@ -49,15 +46,6 @@ public class PhysicalClusterMetadataManager {
@Autowired
private ClusterService clusterService;
@Autowired
private ConfigUtils configUtils;
@Autowired
private TopicDao topicDao;
@Autowired
private AuthorityDao authorityDao;
private final static Map<Long, ClusterDO> CLUSTER_MAP = new ConcurrentHashMap<>();
private final static Map<Long, ControllerData> CONTROLLER_DATA_MAP = new ConcurrentHashMap<>();
......@@ -133,7 +121,7 @@ public class PhysicalClusterMetadataManager {
zkConfig.watchChildren(ZkPathUtil.BROKER_IDS_ROOT, brokerListener);
// Add the Topic state listener
TopicStateListener topicListener = new TopicStateListener(clusterDO.getId(), zkConfig, topicDao, authorityDao);
TopicStateListener topicListener = new TopicStateListener(clusterDO.getId(), zkConfig);
topicListener.init();
zkConfig.watchChildren(ZkPathUtil.BROKER_TOPICS_ROOT, topicListener);
......
......@@ -4,6 +4,7 @@ import com.xiaojukeji.kafka.manager.common.entity.Result;
import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.ao.ClusterDetailDTO;
import com.xiaojukeji.kafka.manager.common.entity.ao.cluster.ControllerPreferredCandidate;
import com.xiaojukeji.kafka.manager.common.entity.dto.op.ControllerPreferredCandidateDTO;
import com.xiaojukeji.kafka.manager.common.entity.vo.normal.cluster.ClusterNameDTO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterDO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.ClusterMetricsDO;
......@@ -43,7 +44,7 @@ public interface ClusterService {
ClusterNameDTO getClusterName(Long logicClusterId);
ResultStatus deleteById(Long clusterId);
ResultStatus deleteById(Long clusterId, String operator);
/**
* Get the brokers preferred as controller candidates
......@@ -51,4 +52,20 @@ public interface ClusterService {
* @return candidate broker list wrapped in a Result
*/
Result<List<ControllerPreferredCandidate>> getControllerPreferredCandidates(Long clusterId);
/**
* Add brokers to the preferred controller candidate list
* @param clusterId cluster ID
* @param brokerIdList list of broker IDs
* @return operation result
*/
Result addControllerPreferredCandidates(Long clusterId, List<Integer> brokerIdList);
/**
* Remove brokers from the preferred controller candidate list
* @param clusterId cluster ID
* @param brokerIdList list of broker IDs
* @return operation result
*/
Result deleteControllerPreferredCandidates(Long clusterId, List<Integer> brokerIdList);
}
package com.xiaojukeji.kafka.manager.service.service;
import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum;
import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum;
import com.xiaojukeji.kafka.manager.common.entity.dto.rd.OperateRecordDTO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.OperateRecordDO;
import java.util.List;
import java.util.Map;
/**
* @author zhongyuankai
......@@ -12,5 +15,7 @@ import java.util.List;
public interface OperateRecordService {
int insert(OperateRecordDO operateRecordDO);
int insert(String operator, ModuleEnum module, String resourceName, OperateEnum operate, Map<String, String> content);
List<OperateRecordDO> queryByCondt(OperateRecordDTO dto);
}
......@@ -26,4 +26,20 @@ public interface ZookeeperService {
* @return 操作结果
*/
Result<List<Integer>> getControllerPreferredCandidates(Long clusterId);
/**
* Add a broker to the preferred controller candidate list
* @param clusterId cluster ID
* @param brokerId broker ID
* @return operation result
*/
Result addControllerPreferredCandidate(Long clusterId, Integer brokerId);
/**
* Remove a broker from the preferred controller candidate list
* @param clusterId cluster ID
* @param brokerId broker ID
* @return operation result
*/
Result deleteControllerPreferredCandidate(Long clusterId, Integer brokerId);
}
......@@ -17,7 +17,7 @@ public interface AppService {
* @param appDO appDO
* @return int
*/
ResultStatus addApp(AppDO appDO);
ResultStatus addApp(AppDO appDO, String operator);
/**
* 删除数据
......
......@@ -60,10 +60,8 @@ public class AppServiceImpl implements AppService {
@Autowired
private OperateRecordService operateRecordService;
@Override
-    public ResultStatus addApp(AppDO appDO) {
+    public ResultStatus addApp(AppDO appDO, String operator) {
try {
if (appDao.insert(appDO) < 1) {
LOGGER.warn("class=AppServiceImpl||method=addApp||AppDO={}||msg=add fail,{}",appDO,ResultStatus.MYSQL_ERROR.getMessage());
......@@ -75,6 +73,15 @@ public class AppServiceImpl implements AppService {
kafkaUserDO.setOperation(OperationStatusEnum.CREATE.getCode());
kafkaUserDO.setUserType(0);
kafkaUserDao.insert(kafkaUserDO);
Map<String, String> content = new HashMap<>();
content.put("appId", appDO.getAppId());
content.put("name", appDO.getName());
content.put("applicant", appDO.getApplicant());
content.put("password", appDO.getPassword());
content.put("principals", appDO.getPrincipals());
content.put("description", appDO.getDescription());
operateRecordService.insert(operator, ModuleEnum.APP, appDO.getName(), OperateEnum.ADD, content);
} catch (DuplicateKeyException e) {
LOGGER.error("class=AppServiceImpl||method=addApp||errMsg={}||appDO={}|", e.getMessage(), appDO, e);
return ResultStatus.RESOURCE_ALREADY_EXISTED;
......@@ -141,6 +148,12 @@ public class AppServiceImpl implements AppService {
appDO.setDescription(dto.getDescription());
if (appDao.updateById(appDO) > 0) {
Map<String, String> content = new HashMap<>();
content.put("appId", appDO.getAppId());
content.put("name", appDO.getName());
content.put("principals", appDO.getPrincipals());
content.put("description", appDO.getDescription());
operateRecordService.insert(operator, ModuleEnum.APP, appDO.getName(), OperateEnum.EDIT, content);
return ResultStatus.SUCCESS;
}
} catch (DuplicateKeyException e) {
......
......@@ -221,13 +221,24 @@ public class GatewayConfigServiceImpl implements GatewayConfigService {
if (ValidateUtils.isNull(oldGatewayConfigDO)) {
return Result.buildFrom(ResultStatus.RESOURCE_NOT_EXIST);
}
if (!oldGatewayConfigDO.getName().equals(newGatewayConfigDO.getName())
|| !oldGatewayConfigDO.getType().equals(newGatewayConfigDO.getType())
|| ValidateUtils.isBlank(newGatewayConfigDO.getValue())) {
return Result.buildFrom(ResultStatus.PARAM_ILLEGAL);
}
-        newGatewayConfigDO.setVersion(oldGatewayConfigDO.getVersion() + 1);
-        if (gatewayConfigDao.updateById(oldGatewayConfigDO) > 0) {
+        // Fetch the existing configs of the same type; the new version must exceed all of them
+        List<GatewayConfigDO> gatewayConfigDOList = gatewayConfigDao.getByConfigType(newGatewayConfigDO.getType());
+        Long version = 1L;
+        for (GatewayConfigDO elem: gatewayConfigDOList) {
+            if (elem.getVersion() > version) {
+                version = elem.getVersion() + 1L;
+            }
+        }
+        newGatewayConfigDO.setVersion(version);
+        if (gatewayConfigDao.updateById(newGatewayConfigDO) > 0) {
return Result.buildSuc();
}
return Result.buildFrom(ResultStatus.MYSQL_ERROR);
......
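The version computation introduced in `GatewayConfigServiceImpl.updateById` can be exercised in isolation. The helper below is an illustrative sketch, not part of the codebase; it mirrors the loop's behavior exactly: the new version is one greater than the current maximum, and it stays at 1 when no existing version exceeds 1.

```java
import java.util.Arrays;
import java.util.List;

public class NextVersionSketch {
    // Mirrors the loop above: scan existing versions and pick max + 1;
    // if every existing version is <= 1, the result remains 1.
    public static long nextVersion(List<Long> existingVersions) {
        long version = 1L;
        for (Long v : existingVersions) {
            if (v > version) {
                version = v + 1L;
            }
        }
        return version;
    }

    public static void main(String[] args) {
        System.out.println(nextVersion(Arrays.asList(3L, 7L, 5L))); // 8
        System.out.println(nextVersion(Arrays.asList(1L)));         // 1
    }
}
```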
package com.xiaojukeji.kafka.manager.service.service.impl;
import com.xiaojukeji.kafka.manager.common.bizenum.DBStatusEnum;
import com.xiaojukeji.kafka.manager.common.constant.Constant;
import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum;
import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum;
import com.xiaojukeji.kafka.manager.common.entity.Result;
import com.xiaojukeji.kafka.manager.common.entity.ResultStatus;
import com.xiaojukeji.kafka.manager.common.entity.ao.ClusterDetailDTO;
......@@ -16,10 +17,7 @@ import com.xiaojukeji.kafka.manager.dao.ClusterMetricsDao;
import com.xiaojukeji.kafka.manager.dao.ControllerDao;
import com.xiaojukeji.kafka.manager.service.cache.LogicalClusterMetadataManager;
import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
-import com.xiaojukeji.kafka.manager.service.service.ClusterService;
-import com.xiaojukeji.kafka.manager.service.service.ConsumerService;
-import com.xiaojukeji.kafka.manager.service.service.RegionService;
-import com.xiaojukeji.kafka.manager.service.service.ZookeeperService;
+import com.xiaojukeji.kafka.manager.service.service.*;
import com.xiaojukeji.kafka.manager.service.utils.ConfigUtils;
import org.apache.zookeeper.ZooKeeper;
import org.slf4j.Logger;
......@@ -66,15 +64,24 @@ public class ClusterServiceImpl implements ClusterService {
@Autowired
private ZookeeperService zookeeperService;
@Autowired
private OperateRecordService operateRecordService;
@Override
public ResultStatus addNew(ClusterDO clusterDO, String operator) {
if (ValidateUtils.isNull(clusterDO) || ValidateUtils.isNull(operator)) {
return ResultStatus.PARAM_ILLEGAL;
}
if (!isZookeeperLegal(clusterDO.getZookeeper())) {
-            return ResultStatus.CONNECT_ZOOKEEPER_FAILED;
+            return ResultStatus.ZOOKEEPER_CONNECT_FAILED;
}
try {
Map<String, String> content = new HashMap<>();
content.put("zk address", clusterDO.getZookeeper());
content.put("bootstrap servers", clusterDO.getBootstrapServers());
content.put("security properties", clusterDO.getSecurityProperties());
content.put("jmx properties", clusterDO.getJmxProperties());
operateRecordService.insert(operator, ModuleEnum.CLUSTER, clusterDO.getClusterName(), OperateEnum.ADD, content);
if (clusterDao.insert(clusterDO) <= 0) {
LOGGER.error("add new cluster failed, clusterDO:{}.", clusterDO);
return ResultStatus.MYSQL_ERROR;
......@@ -102,8 +109,14 @@ public class ClusterServiceImpl implements ClusterService {
if (!originClusterDO.getZookeeper().equals(clusterDO.getZookeeper())) {
            // Changing the ZK address is not allowed
-            return ResultStatus.CHANGE_ZOOKEEPER_FORBIDEN;
+            return ResultStatus.CHANGE_ZOOKEEPER_FORBIDDEN;
}
Map<String, String> content = new HashMap<>();
content.put("cluster id", clusterDO.getId().toString());
content.put("security properties", clusterDO.getSecurityProperties());
content.put("jmx properties", clusterDO.getJmxProperties());
operateRecordService.insert(operator, ModuleEnum.CLUSTER, clusterDO.getClusterName(), OperateEnum.EDIT, content);
clusterDO.setStatus(originClusterDO.getStatus());
return updateById(clusterDO);
}
......@@ -202,7 +215,7 @@ public class ClusterServiceImpl implements ClusterService {
if (zk != null) {
zk.close();
}
-        } catch (Throwable t) {
+        } catch (Exception e) {
return false;
}
}
......@@ -255,12 +268,15 @@ public class ClusterServiceImpl implements ClusterService {
}
@Override
-    public ResultStatus deleteById(Long clusterId) {
+    public ResultStatus deleteById(Long clusterId, String operator) {
List<RegionDO> regionDOList = regionService.getByClusterId(clusterId);
if (!ValidateUtils.isEmptyList(regionDOList)) {
return ResultStatus.OPERATION_FORBIDDEN;
}
try {
Map<String, String> content = new HashMap<>();
content.put("cluster id", clusterId.toString());
operateRecordService.insert(operator, ModuleEnum.CLUSTER, String.valueOf(clusterId), OperateEnum.DELETE, content);
if (clusterDao.deleteById(clusterId) <= 0) {
LOGGER.error("delete cluster failed, clusterId:{}.", clusterId);
return ResultStatus.MYSQL_ERROR;
......@@ -274,8 +290,9 @@ public class ClusterServiceImpl implements ClusterService {
private ClusterDetailDTO getClusterDetailDTO(ClusterDO clusterDO, Boolean needDetail) {
if (ValidateUtils.isNull(clusterDO)) {
-            return null;
+            return new ClusterDetailDTO();
}
ClusterDetailDTO dto = new ClusterDetailDTO();
dto.setClusterId(clusterDO.getId());
dto.setClusterName(clusterDO.getClusterName());
......@@ -284,6 +301,7 @@ public class ClusterServiceImpl implements ClusterService {
dto.setKafkaVersion(physicalClusterMetadataManager.getKafkaVersionFromCache(clusterDO.getId()));
dto.setIdc(configUtils.getIdc());
dto.setSecurityProperties(clusterDO.getSecurityProperties());
dto.setJmxProperties(clusterDO.getJmxProperties());
dto.setStatus(clusterDO.getStatus());
dto.setGmtCreate(clusterDO.getGmtCreate());
dto.setGmtModify(clusterDO.getGmtModify());
......@@ -322,4 +340,39 @@ public class ClusterServiceImpl implements ClusterService {
}
return Result.buildSuc(controllerPreferredCandidateList);
}
@Override
public Result addControllerPreferredCandidates(Long clusterId, List<Integer> brokerIdList) {
if (ValidateUtils.isNull(clusterId) || ValidateUtils.isEmptyList(brokerIdList)) {
return Result.buildFrom(ResultStatus.PARAM_ILLEGAL);
}
// Each broker being added must currently be alive
for (Integer brokerId: brokerIdList) {
if (!PhysicalClusterMetadataManager.isBrokerAlive(clusterId, brokerId)) {
return Result.buildFrom(ResultStatus.BROKER_NOT_EXIST);
}
Result result = zookeeperService.addControllerPreferredCandidate(clusterId, brokerId);
if (result.failed()) {
return result;
}
}
return Result.buildSuc();
}
@Override
public Result deleteControllerPreferredCandidates(Long clusterId, List<Integer> brokerIdList) {
if (ValidateUtils.isNull(clusterId) || ValidateUtils.isEmptyList(brokerIdList)) {
return Result.buildFrom(ResultStatus.PARAM_ILLEGAL);
}
for (Integer brokerId: brokerIdList) {
Result result = zookeeperService.deleteControllerPreferredCandidate(clusterId, brokerId);
if (result.failed()) {
return result;
}
}
return Result.buildSuc();
}
}
......@@ -129,7 +129,7 @@ public class ConsumerServiceImpl implements ConsumerService {
}
summary.setState(consumerGroupSummary.state());
-        java.util.Iterator<scala.collection.immutable.List<AdminClient.ConsumerSummary>> it = JavaConversions.asJavaIterator(consumerGroupSummary.consumers().iterator());
+        Iterator<scala.collection.immutable.List<AdminClient.ConsumerSummary>> it = JavaConversions.asJavaIterator(consumerGroupSummary.consumers().iterator());
while (it.hasNext()) {
List<AdminClient.ConsumerSummary> consumerSummaryList = JavaConversions.asJavaList(it.next());
for (AdminClient.ConsumerSummary consumerSummary: consumerSummaryList) {
......
package com.xiaojukeji.kafka.manager.service.service.impl;
import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum;
import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum;
import com.xiaojukeji.kafka.manager.common.entity.dto.rd.OperateRecordDTO;
import com.xiaojukeji.kafka.manager.common.entity.pojo.OperateRecordDO;
import com.xiaojukeji.kafka.manager.common.utils.JsonUtils;
import com.xiaojukeji.kafka.manager.common.utils.ValidateUtils;
import com.xiaojukeji.kafka.manager.dao.OperateRecordDao;
import com.xiaojukeji.kafka.manager.service.service.OperateRecordService;
......@@ -10,6 +13,7 @@ import org.springframework.stereotype.Service;
import java.util.Date;
import java.util.List;
import java.util.Map;
/**
* @author zhongyuankai
......@@ -25,6 +29,17 @@ public class OperateRecordServiceImpl implements OperateRecordService {
return operateRecordDao.insert(operateRecordDO);
}
@Override
public int insert(String operator, ModuleEnum module, String resourceName, OperateEnum operate, Map<String, String> content) {
OperateRecordDO operateRecordDO = new OperateRecordDO();
operateRecordDO.setOperator(operator);
operateRecordDO.setModuleId(module.getCode());
operateRecordDO.setResource(resourceName);
operateRecordDO.setOperateId(operate.getCode());
operateRecordDO.setContent(JsonUtils.toJSONString(content));
return insert(operateRecordDO);
}
@Override
public List<OperateRecordDO> queryByCondt(OperateRecordDTO dto) {
return operateRecordDao.queryByCondt(
......
package com.xiaojukeji.kafka.manager.service.service.impl;
import com.xiaojukeji.kafka.manager.common.bizenum.KafkaClientEnum;
import com.xiaojukeji.kafka.manager.common.bizenum.ModuleEnum;
import com.xiaojukeji.kafka.manager.common.bizenum.OperateEnum;
import com.xiaojukeji.kafka.manager.common.bizenum.TopicAuthorityEnum;
import com.xiaojukeji.kafka.manager.common.constant.KafkaConstant;
import com.xiaojukeji.kafka.manager.common.constant.KafkaMetricsCollections;
import com.xiaojukeji.kafka.manager.common.constant.TopicCreationConstant;
import com.xiaojukeji.kafka.manager.common.entity.Result;
......@@ -80,6 +83,9 @@ public class TopicManagerServiceImpl implements TopicManagerService {
@Autowired
private RegionService regionService;
@Autowired
private OperateRecordService operateRecordService;
@Override
public List<TopicDO> listAll() {
try {
......@@ -293,6 +299,10 @@ public class TopicManagerServiceImpl implements TopicManagerService {
Map<String, TopicDO> topicMap) {
List<TopicDTO> dtoList = new ArrayList<>();
for (String topicName: PhysicalClusterMetadataManager.getTopicNameList(clusterDO.getId())) {
if (topicName.equals(KafkaConstant.COORDINATOR_TOPIC_NAME) || topicName.equals(KafkaConstant.TRANSACTION_TOPIC_NAME)) {
continue;
}
LogicalClusterDO logicalClusterDO = logicalClusterMetadataManager.getTopicLogicalCluster(
clusterDO.getId(),
topicName
......@@ -336,6 +346,12 @@ public class TopicManagerServiceImpl implements TopicManagerService {
if (ValidateUtils.isNull(topicDO)) {
return ResultStatus.TOPIC_NOT_EXIST;
}
Map<String, Object> content = new HashMap<>(2);
content.put("clusterId", clusterId);
content.put("topicName", topicName);
recordOperation(content, topicName, operator);
topicDO.setDescription(description);
if (topicDao.updateByName(topicDO) > 0) {
return ResultStatus.SUCCESS;
......@@ -359,6 +375,12 @@ public class TopicManagerServiceImpl implements TopicManagerService {
return ResultStatus.APP_NOT_EXIST;
}
Map<String, Object> content = new HashMap<>(4);
content.put("clusterId", clusterId);
content.put("topicName", topicName);
content.put("appId", appId);
recordOperation(content, topicName, operator);
TopicDO topicDO = topicDao.getByTopicName(clusterId, topicName);
if (ValidateUtils.isNull(topicDO)) {
// The topic record does not exist yet, so insert it
......@@ -389,6 +411,16 @@ public class TopicManagerServiceImpl implements TopicManagerService {
return ResultStatus.MYSQL_ERROR;
}
private void recordOperation(Map<String, Object> content, String topicName, String operator) {
OperateRecordDO operateRecordDO = new OperateRecordDO();
operateRecordDO.setModuleId(ModuleEnum.TOPIC.getCode());
operateRecordDO.setOperateId(OperateEnum.EDIT.getCode());
operateRecordDO.setResource(topicName);
operateRecordDO.setContent(JsonUtils.toJSONString(content));
operateRecordDO.setOperator(operator);
operateRecordService.insert(operateRecordDO);
}
@Override
public int deleteByTopicName(Long clusterId, String topicName) {
try {
......
......@@ -53,7 +53,7 @@ public class ZookeeperServiceImpl implements ZookeeperService {
}
ZkConfigImpl zkConfig = PhysicalClusterMetadataManager.getZKConfig(clusterId);
if (ValidateUtils.isNull(zkConfig)) {
-            return Result.buildFrom(ResultStatus.CONNECT_ZOOKEEPER_FAILED);
+            return Result.buildFrom(ResultStatus.ZOOKEEPER_CONNECT_FAILED);
}
try {
......@@ -68,6 +68,60 @@ public class ZookeeperServiceImpl implements ZookeeperService {
} catch (Exception e) {
LOGGER.error("class=ZookeeperServiceImpl||method=getControllerPreferredCandidates||clusterId={}||errMsg={}", clusterId, e.getMessage());
}
-        return Result.buildFrom(ResultStatus.READ_ZOOKEEPER_FAILED);
+        return Result.buildFrom(ResultStatus.ZOOKEEPER_READ_FAILED);
}
@Override
public Result addControllerPreferredCandidate(Long clusterId, Integer brokerId) {
if (ValidateUtils.isNull(clusterId)) {
return Result.buildFrom(ResultStatus.PARAM_ILLEGAL);
}
ZkConfigImpl zkConfig = PhysicalClusterMetadataManager.getZKConfig(clusterId);
if (ValidateUtils.isNull(zkConfig)) {
return Result.buildFrom(ResultStatus.ZOOKEEPER_CONNECT_FAILED);
}
try {
if (zkConfig.checkPathExists(ZkPathUtil.getControllerCandidatePath(brokerId))) {
// The node already exists, so nothing to do
return Result.buildSuc();
}
if (!zkConfig.checkPathExists(ZkPathUtil.D_CONFIG_EXTENSION_ROOT_NODE)) {
zkConfig.setOrCreatePersistentNodeStat(ZkPathUtil.D_CONFIG_EXTENSION_ROOT_NODE, "");
}
if (!zkConfig.checkPathExists(ZkPathUtil.D_CONTROLLER_CANDIDATES)) {
zkConfig.setOrCreatePersistentNodeStat(ZkPathUtil.D_CONTROLLER_CANDIDATES, "");
}
zkConfig.setOrCreatePersistentNodeStat(ZkPathUtil.getControllerCandidatePath(brokerId), "");
return Result.buildSuc();
} catch (Exception e) {
LOGGER.error("class=ZookeeperServiceImpl||method=addControllerPreferredCandidate||clusterId={}||brokerId={}||errMsg={}||", clusterId, brokerId, e.getMessage());
}
return Result.buildFrom(ResultStatus.ZOOKEEPER_WRITE_FAILED);
}
@Override
public Result deleteControllerPreferredCandidate(Long clusterId, Integer brokerId) {
if (ValidateUtils.isNull(clusterId)) {
return Result.buildFrom(ResultStatus.PARAM_ILLEGAL);
}
ZkConfigImpl zkConfig = PhysicalClusterMetadataManager.getZKConfig(clusterId);
if (ValidateUtils.isNull(zkConfig)) {
return Result.buildFrom(ResultStatus.ZOOKEEPER_CONNECT_FAILED);
}
try {
if (!zkConfig.checkPathExists(ZkPathUtil.getControllerCandidatePath(brokerId))) {
return Result.buildSuc();
}
zkConfig.delete(ZkPathUtil.getControllerCandidatePath(brokerId));
return Result.buildSuc();
} catch (Exception e) {
LOGGER.error("class=ZookeeperServiceImpl||method=deleteControllerPreferredCandidate||clusterId={}||brokerId={}||errMsg={}||", clusterId, brokerId, e.getMessage());
}
return Result.buildFrom(ResultStatus.ZOOKEEPER_DELETE_FAILED);
}
}
\ No newline at end of file
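The add/delete paths in `ZookeeperServiceImpl` above share one idempotent pattern: check existence first, create any missing parent nodes, and treat "already there" / "already gone" as success. A minimal in-memory sketch of that pattern follows; a plain `Set` stands in for ZooKeeper, which the real code reaches through `ZkConfigImpl`.

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentNodeStoreSketch {
    private final Set<String> nodes = new HashSet<>(); // stand-in for znodes

    // Create the node and every missing ancestor; succeeds if already present.
    public boolean add(String path) {
        StringBuilder prefix = new StringBuilder();
        for (String part : path.substring(1).split("/")) {
            prefix.append('/').append(part);
            nodes.add(prefix.toString());
        }
        return true;
    }

    // Delete only the leaf node; succeeds even if it was never there.
    public boolean delete(String path) {
        nodes.remove(path);
        return true;
    }

    public boolean exists(String path) {
        return nodes.contains(path);
    }
}
```

Because both operations report success on a no-op, callers can retry freely without distinguishing "created" from "already existed".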
......@@ -44,7 +44,7 @@ public class TopicCommands {
);
        // Generate the replica assignment
-        scala.collection.Map<Object, scala.collection.Seq<Object>> replicaAssignment =
+        scala.collection.Map<Object, Seq<Object>> replicaAssignment =
AdminUtils.assignReplicasToBrokers(
convert2BrokerMetadataSeq(brokerIdList),
partitionNum,
......@@ -177,7 +177,7 @@ public class TopicCommands {
)
);
-        Map<TopicAndPartition, scala.collection.Seq<Object>> existingAssignJavaMap =
+        Map<TopicAndPartition, Seq<Object>> existingAssignJavaMap =
                JavaConversions.asJavaMap(existingAssignScalaMap);
        // Merge the new partitions' assignment with the existing one
Map<Object, Seq<Object>> targetMap = new HashMap<>();
......
......@@ -5,8 +5,6 @@ import com.xiaojukeji.kafka.manager.common.zookeeper.znode.brokers.TopicMetadata
import com.xiaojukeji.kafka.manager.common.zookeeper.StateChangeListener;
import com.xiaojukeji.kafka.manager.common.zookeeper.ZkConfigImpl;
import com.xiaojukeji.kafka.manager.common.zookeeper.ZkPathUtil;
import com.xiaojukeji.kafka.manager.dao.TopicDao;
import com.xiaojukeji.kafka.manager.dao.gateway.AuthorityDao;
import com.xiaojukeji.kafka.manager.service.cache.PhysicalClusterMetadataManager;
import com.xiaojukeji.kafka.manager.service.cache.ThreadPool;
import org.apache.zookeeper.data.Stat;
......@@ -24,28 +22,17 @@ import java.util.concurrent.*;
* @date 20/5/14
*/
public class TopicStateListener implements StateChangeListener {
-    private final static Logger LOGGER = LoggerFactory.getLogger(TopicStateListener.class);
+    private static final Logger LOGGER = LoggerFactory.getLogger(TopicStateListener.class);
private Long clusterId;
private ZkConfigImpl zkConfig;
private TopicDao topicDao;
private AuthorityDao authorityDao;
public TopicStateListener(Long clusterId, ZkConfigImpl zkConfig) {
this.clusterId = clusterId;
this.zkConfig = zkConfig;
}
public TopicStateListener(Long clusterId, ZkConfigImpl zkConfig, TopicDao topicDao, AuthorityDao authorityDao) {
this.clusterId = clusterId;
this.zkConfig = zkConfig;
this.topicDao = topicDao;
this.authorityDao = authorityDao;
}
@Override
public void init() {
try {
......@@ -53,7 +40,7 @@ public class TopicStateListener implements StateChangeListener {
FutureTask[] taskList = new FutureTask[topicNameList.size()];
for (int i = 0; i < topicNameList.size(); i++) {
String topicName = topicNameList.get(i);
-                taskList[i] = new FutureTask(new Callable() {
+                taskList[i] = new FutureTask(new Callable<Object>() {
@Override
public Object call() throws Exception {
processTopicAdded(topicName);
......@@ -65,7 +52,6 @@ public class TopicStateListener implements StateChangeListener {
} catch (Exception e) {
LOGGER.error("init topics metadata failed, clusterId:{}.", clusterId, e);
}
return;
}
@Override
......@@ -92,8 +78,6 @@ public class TopicStateListener implements StateChangeListener {
private void processTopicDelete(String topicName) {
LOGGER.warn("delete topic, clusterId:{} topicName:{}.", clusterId, topicName);
PhysicalClusterMetadataManager.removeTopicMetadata(clusterId, topicName);
topicDao.removeTopicInCache(clusterId, topicName);
authorityDao.removeAuthorityInCache(clusterId, topicName);
}
private void processTopicAdded(String topicName) {
......@@ -122,4 +106,4 @@ public class TopicStateListener implements StateChangeListener {
LOGGER.error("add topic failed, clusterId:{} topicMetadata:{}.", clusterId, topicMetadata, e);
}
}
}
\ No newline at end of file
}
......@@ -22,6 +22,4 @@ public interface TopicDao {
List<TopicDO> listAll();
TopicDO getTopic(Long clusterId, String topicName, String appId);
TopicDO removeTopicInCache(Long clusterId, String topicName);
}
\ No newline at end of file
......@@ -16,8 +16,6 @@ public interface AppDao {
*/
int insert(AppDO appDO);
int insertIgnoreGatewayDB(AppDO appDO);
/**
* 删除appId
* @param appName App名称
......@@ -60,6 +58,4 @@ public interface AppDao {
* @return int
*/
int updateById(AppDO appDO);
List<AppDO> listNewAll();
}
\ No newline at end of file
......@@ -15,8 +15,6 @@ public interface AuthorityDao {
*/
int insert(AuthorityDO authorityDO);
int replaceIgnoreGatewayDB(AuthorityDO authorityDO);
/**
* 获取权限
* @param clusterId 集群id
......@@ -38,7 +36,5 @@ public interface AuthorityDao {
Map<String, Map<Long, Map<String, AuthorityDO>>> getAllAuthority();
void removeAuthorityInCache(Long clusterId, String topicName);
int deleteAuthorityByTopic(Long clusterId, String topicName);
}
......@@ -2,6 +2,7 @@ package com.xiaojukeji.kafka.manager.dao.gateway.impl;
import com.xiaojukeji.kafka.manager.common.entity.pojo.gateway.AppDO;
import com.xiaojukeji.kafka.manager.dao.gateway.AppDao;
import com.xiaojukeji.kafka.manager.task.Constant;
import org.mybatis.spring.SqlSessionTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
......@@ -21,7 +22,7 @@ public class AppDaoImpl implements AppDao {
    /**
     * Timestamp of the most recent App update reflected in the cache
     */
-    private static Long APP_CACHE_LATEST_UPDATE_TIME = 0L;
+    private static volatile long APP_CACHE_LATEST_UPDATE_TIME = Constant.START_TIMESTAMP;
private static final Map<String, AppDO> APP_MAP = new ConcurrentHashMap<>();
@Override
......@@ -29,11 +30,6 @@ public class AppDaoImpl implements AppDao {
return sqlSession.insert("AppDao.insert", appDO);
}
@Override
public int insertIgnoreGatewayDB(AppDO appDO) {
return sqlSession.insert("AppDao.insert", appDO);
}
@Override
public int deleteByName(String appName) {
return sqlSession.delete("AppDao.deleteByName", appName);
......@@ -66,7 +62,12 @@ public class AppDaoImpl implements AppDao {
}
private void updateTopicCache() {
-        Long timestamp = System.currentTimeMillis();
+        long timestamp = System.currentTimeMillis();
+        if (timestamp <= APP_CACHE_LATEST_UPDATE_TIME + 1000) {
+            // Requests within one second of the last refresh skip the DB
+            return;
}
Date afterTime = new Date(APP_CACHE_LATEST_UPDATE_TIME);
List<AppDO> doList = sqlSession.selectList("AppDao.listAfterTime", afterTime);
......@@ -76,19 +77,22 @@ public class AppDaoImpl implements AppDao {
    /**
     * Refresh the App cache
     */
-    synchronized private void updateTopicCache(List<AppDO> doList, Long timestamp) {
+    private synchronized void updateTopicCache(List<AppDO> doList, long timestamp) {
if (doList == null || doList.isEmpty() || APP_CACHE_LATEST_UPDATE_TIME >= timestamp) {
// Ignore this refresh when it carries no data or is already stale
return;
}
if (APP_CACHE_LATEST_UPDATE_TIME == Constant.START_TIMESTAMP) {
APP_MAP.clear();
}
for (AppDO elem: doList) {
APP_MAP.put(elem.getAppId(), elem);
}
APP_CACHE_LATEST_UPDATE_TIME = timestamp;
}
-    @Override
-    public List<AppDO> listNewAll() {
-        return sqlSession.selectList("AppDao.listNewAll");
+    public static void resetCache() {
+        APP_CACHE_LATEST_UPDATE_TIME = Constant.START_TIMESTAMP;
}
}
\ No newline at end of file
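The incremental cache in `AppDaoImpl` combines three ideas: a `volatile` watermark timestamp, a short suppression window so hot paths stay off the DB, and a `synchronized` merge that discards stale refreshes. The sketch below is a self-contained illustration of that pattern under simplifying assumptions; the DB query is replaced by a hypothetical loader function, and `START_TIMESTAMP` is assumed to be a zero sentinel.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class IncrementalCacheSketch<V> {
    private static final long START_TIMESTAMP = 0L;            // assumed sentinel value
    private volatile long latestUpdateTime = START_TIMESTAMP;  // refresh watermark
    private final Map<String, V> cache = new ConcurrentHashMap<>();

    // loader returns entries modified after the watermark, keyed by id
    public void refresh(Function<Long, Map<String, V>> loader) {
        long now = System.currentTimeMillis();
        if (now <= latestUpdateTime + 1000) {
            return; // refreshed within the last second: skip the DB
        }
        merge(loader.apply(latestUpdateTime), now);
    }

    private synchronized void merge(Map<String, V> delta, long timestamp) {
        if (delta == null || delta.isEmpty() || latestUpdateTime >= timestamp) {
            return; // nothing new, or a newer refresh already landed
        }
        cache.putAll(delta);
        latestUpdateTime = timestamp;
    }

    public V get(String key) {
        return cache.get(key);
    }
}
```

Keeping the watermark `volatile` lets readers see the latest refresh without locking, while the `synchronized` merge serializes writers so an older query result can never overwrite a newer one.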