Commit df8a0185 authored by peng.xu

(shards): add all missing changes after cherry-pick from xupeng's branch

Parent 013566de
.git
.gitignore
.env
.coverage
.dockerignore
cov_html/
.pytest_cache
__pycache__
*/__pycache__
*.md
*.yml
*.yaml
version: "2.3"
services:
milvus_wr:
runtime: nvidia
restart: always
image: milvusdb/milvus:0.5.0-d102119-ede20b
volumes:
- /tmp/milvus/db:/opt/milvus/db
milvus_ro:
runtime: nvidia
restart: always
image: milvusdb/milvus:0.5.0-d102119-ede20b
volumes:
- /tmp/milvus/db:/opt/milvus/db
- ./ro_server.yml:/opt/milvus/conf/server_config.yaml
jaeger:
restart: always
image: jaegertracing/all-in-one:1.14
ports:
- "0.0.0.0:5775:5775/udp"
- "0.0.0.0:16686:16686"
- "0.0.0.0:9441:9441"
environment:
COLLECTOR_ZIPKIN_HTTP_PORT: 9411
mishards:
restart: always
image: milvusdb/mishards
ports:
- "0.0.0.0:19531:19531"
- "0.0.0.0:19532:19532"
volumes:
- /tmp/milvus/db:/tmp/milvus/db
# - /tmp/mishards_env:/source/mishards/.env
command: ["python", "mishards/main.py"]
environment:
FROM_EXAMPLE: 'true'
DEBUG: 'true'
SERVER_PORT: 19531
WOSERVER: tcp://milvus_wr:19530
DISCOVERY_PLUGIN_PATH: static
DISCOVERY_STATIC_HOSTS: milvus_wr,milvus_ro
TRACER_CLASS_NAME: jaeger
TRACING_SERVICE_NAME: mishards-demo
TRACING_REPORTING_HOST: jaeger
TRACING_REPORTING_PORT: 5775
depends_on:
- milvus_wr
- milvus_ro
- jaeger
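As a quick sketch of bringing this demo topology up (assuming the compose file above is saved as `docker-compose.yml`; paths and ports are taken from the file itself):

```bash
# Create the shared db path, then start the writable node, the read-only
# node, Jaeger, and the mishards proxy in the background.
mkdir -p /tmp/milvus/db
docker-compose up -d

# mishards listens on 19531 (SERVER_PORT above); the Jaeger UI is on 16686.
docker-compose ps
```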
server_config:
address: 0.0.0.0 # milvus server ip address (IPv4)
port: 19530 # port range: 1025 ~ 65534
deploy_mode: cluster_readonly # deployment type: single, cluster_readonly, cluster_writable
time_zone: UTC+8
db_config:
primary_path: /opt/milvus # path used to store data and meta
secondary_path: # path used to store data only, split by semicolon
backend_url: sqlite://:@:/ # URI format: dialect://username:password@host:port/database
# Keep 'dialect://:@:/', and replace other texts with real values
# Replace 'dialect' with 'mysql' or 'sqlite'
insert_buffer_size: 4 # GB, maximum insert buffer size allowed
# sum of insert_buffer_size and cpu_cache_capacity cannot exceed total memory
preload_table: # preload data at startup, '*' means load all tables, empty value means no preload
# you can specify preload tables like this: table1,table2,table3
metric_config:
enable_monitor: false # enable monitoring or not
collector: prometheus # prometheus
prometheus_config:
port: 8080 # port prometheus uses to fetch metrics
cache_config:
cpu_cache_capacity: 16 # GB, CPU memory used for cache
cpu_cache_threshold: 0.85 # percentage of data that will be kept when cache cleanup is triggered
gpu_cache_capacity: 4 # GB, GPU memory used for cache
gpu_cache_threshold: 0.85 # percentage of data that will be kept when cache cleanup is triggered
cache_insert_data: false # whether to load inserted data into cache
engine_config:
use_blas_threshold: 20 # if nq < use_blas_threshold, use SSE; faster, but response times fluctuate
# if nq >= use_blas_threshold, use OpenBLAS; slower, but response times are stable
resource_config:
search_resources: # define the GPUs used for search computation, valid value: gpux
- gpu0
index_build_device: gpu0 # GPU used for building index
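This file is the read-only variant mounted as `ro_server.yml` in the compose example above. A hedged sketch of deriving the writable node's config, assuming the two differ only in `deploy_mode`:

```bash
# Hypothetical: switch deploy_mode for the writable node; all other keys
# can stay identical to the read-only config shown above.
sed 's/deploy_mode: cluster_readonly/deploy_mode: cluster_writable/' \
    ro_server.yml > wr_server.yml
```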
This document is a gentle introduction to Milvus Cluster that does not require understanding complex distributed-systems concepts. It provides instructions for setting up, testing, and operating a cluster, without going into the details covered in the Milvus Cluster specification, simply describing how the system behaves from the user's point of view.
It also describes the availability and consistency characteristics of Milvus Cluster from the end user's point of view, stated in an easy-to-understand way.
If you plan to run a serious Milvus Cluster deployment, the more formal specification is suggested reading, even if not strictly required. It is a good idea to start with this document, experiment with Milvus Cluster for a while, and only later read the specification.
## Milvus Cluster Introduction
### Infrastructure
* Kubernetes cluster with NVIDIA GPU nodes
* NVIDIA Docker installed in the cluster
### Required Docker Images
* Milvus Server: ```registry.zilliz.com/milvus/engine:${version>=0.3.1}```
* Milvus Celery Apps: ```registry.zilliz.com/milvus/celery-apps:${version>=v0.2.1}```
### Cluster Capabilities
* Milvus Cluster provides a way to run a Milvus installation in which query requests are automatically sharded across multiple Milvus read-only nodes.
* Milvus Cluster provides availability during partitions; in practical terms, it can continue operating when some nodes fail or cannot communicate.
### Metastore
Milvus supports the following metadata backends:
* SQLite3: single mode only.
* MySQL: single or cluster mode.
* ETCD: `TODO`
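For instance, a running Kubernetes deployment's metadata backend can be checked from its configmap (resource names are taken from the Kubernetes resources section below):

```bash
# Print the configured backend_url (sqlite://... or mysql://...) of the
# read-only servers.
kubectl get configmap -n milvus milvus-roserver-configmap -o yaml | grep backend_url
```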
### Storage
Milvus supports two storage backends:
* Local filesystem: convenient to use and deploy, but not reliable.
* S3/OSS: reliable, but requires extra configuration and an external storage service.
### Message Queue
Milvus supports several MQ backends:
* Redis
* Rabbitmq
* MySQL/PG/MongoDB
### Cache
* Milvus supports `Redis` as the cache backend. To reduce system complexity, we recommend also using `Redis` as the MQ backend.
### Workflow
* Milvus Cluster uses Celery as its workflow scheduler.
* Milvus Cluster workflow computation nodes can be scaled.
* Milvus Cluster contains exactly one workflow monitor node, which tracks the status of the computation nodes and makes scheduling decisions.
* Milvus Cluster supports different workflow result backends; we recommend `Redis` as the result backend for performance reasons.
### Write-only Node
* Milvus can be configured in write-only mode.
* Currently, Milvus Cluster provides only one write-only node.
### Read-only Node
* Milvus can be configured in read-only mode.
* Milvus Cluster automatically shards incoming query requests across multiple read-only nodes.
* Milvus Cluster supports scaling of read-only nodes, as sketched below.
* Milvus Cluster provides a practical solution to avoid performance degradation during cluster rebalancing.
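A minimal sketch of the scaling mentioned above, using the StatefulSet name from the Kubernetes resources section below (the helper script at the end of this document wraps the same command with rollback handling):

```bash
# Scale the read-only servers to 3 replicas and watch the pods come up.
kubectl scale sts milvus-ro-servers -n milvus --replicas=3
kubectl get pods -n milvus -w
```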
### Proxy
* Milvus Cluster communicates with clients through a proxy.
* Milvus Cluster supports proxy scaling.
### Monitor
* Milvus Cluster supports metrics monitoring with Prometheus.
* Milvus Cluster supports workflow task monitoring with Flower.
* Milvus Cluster supports cluster monitoring with any monitoring tool in the Kubernetes ecosystem.
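As a sketch, the Prometheus integration can be smoke-tested directly, assuming `enable_monitor` is set to `true` and the standard `/metrics` path (the port comes from `prometheus_config.port` in the server config above):

```bash
# Fetch the metrics a Prometheus collector would scrape from a Milvus server.
curl http://<milvus-server-host>:8080/metrics
```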
## Milvus Cluster Kubernetes Resources
### PersistentVolumeClaim
* DB PersistentVolumeClaim: `milvus-db-disk`
* Log PersistentVolumeClaim: `milvus-log-disk`
* MySQL PersistentVolumeClaim: `milvus-mysql-disk`
* Redis PersistentVolumeClaim: `milvus-redis-disk`
### ConfigMap
* Celery workflow configmap: `milvus-celery-configmap`::`milvus_celery_config.yml`
* Proxy configmap: `milvus-proxy-configmap`::`milvus_proxy_config.yml`
* Read-only nodes configmap: `milvus-roserver-configmap`::`config.yml`, `milvus-roserver-configmap`::`log.conf`
* Write-only nodes configmap: `milvus-woserver-configmap`::`config.yml`, `milvus-woserver-configmap`::`log.conf`
* MySQL configmap: `milvus-mysql-configmap`::`milvus_mysql_config.yml`
### Services
* MySQL service: `milvus-mysql`
* Redis service: `milvus-redis`
* Proxy service: `milvus-proxy-servers`
* Read-only servers service: `milvus-ro-servers`
* Write-only servers service: `milvus-wo-servers`
### StatefulSet
* Readonly stateful servers: `milvus-ro-servers`
### Deployment
* Workflow monitor: `milvus-monitor`
* Workflow workers: `milvus-workers`
* Write-only servers: `milvus-wo-servers`
* Proxy: `milvus-proxy`
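Once applied, the inventory above can be verified with standard `kubectl` queries, e.g.:

```bash
# List the claims, configmaps, services, stateful sets, and deployments
# created in the milvus namespace.
kubectl get pvc,configmap,svc,statefulset,deployment -n milvus
```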
## Milvus Cluster Configuration
### Write-only server:
```milvus-woserver-configmap::config.yml:
server_config.mode: cluster
db_config.db_backend_url: mysql://${user}:${password}@milvus-mysql/${dbname}
```
### Read-only server:
```milvus-roserver-configmap::config.yml:
server_config.mode: read_only
db_config.db_backend_url: mysql://${user}:${password}@milvus-mysql/${dbname}
```
### Celery workflow:
```milvus-celery-configmap::milvus_celery_config.yml:
DB_URI=mysql+mysqlconnector://${user}:${password}@milvus-mysql/${dbname}
```
### Proxy:
```milvus-proxy-configmap::milvus_proxy_config.yml:
```
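The proxy config file is left empty here. A hypothetical example of its contents, with variable names taken from the docker-compose example and the proxy Deployment elsewhere in this commit (values are placeholders):

```bash
# Hypothetical milvus_proxy_config.yml, mounted at /source/mishards/.env.
DEBUG=False
SERVER_PORT=19530
WOSERVER=tcp://milvus-wo-servers:19530
SD_ROSERVER_POD_PATT=.*-ro-servers-.*
```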
kind: Service
apiVersion: v1
metadata:
name: milvus-mysql
namespace: milvus
spec:
type: ClusterIP
selector:
app: milvus
tier: mysql
ports:
- protocol: TCP
port: 3306
targetPort: 3306
name: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: milvus-mysql
namespace: milvus
spec:
selector:
matchLabels:
app: milvus
tier: mysql
replicas: 1
template:
metadata:
labels:
app: milvus
tier: mysql
spec:
containers:
- name: milvus-mysql
image: mysql:5.7
imagePullPolicy: IfNotPresent
# lifecycle:
# postStart:
# exec:
# command: ["/bin/sh", "-c", "mysql -h milvus-mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e \"CREATE DATABASE IF NOT EXISTS ${DATABASE};\"; \
# mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e \"GRANT ALL PRIVILEGES ON ${DATABASE}.* TO 'root'@'%';\""]
env:
- name: MYSQL_ROOT_PASSWORD
value: milvusroot
- name: DATABASE
value: milvus
ports:
- name: mysql-port
containerPort: 3306
volumeMounts:
- name: milvus-mysql-disk
mountPath: /data
subPath: mysql
- name: milvus-mysql-configmap
mountPath: /etc/mysql/mysql.conf.d/mysqld.cnf
subPath: milvus_mysql_config.yml
volumes:
- name: milvus-mysql-disk
persistentVolumeClaim:
claimName: milvus-mysql-disk
- name: milvus-mysql-configmap
configMap:
name: milvus-mysql-configmap
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: milvus-db-disk
namespace: milvus
spec:
accessModes:
- ReadWriteMany
storageClassName: default
resources:
requests:
storage: 50Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: milvus-log-disk
namespace: milvus
spec:
accessModes:
- ReadWriteMany
storageClassName: default
resources:
requests:
storage: 50Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: milvus-mysql-disk
namespace: milvus
spec:
accessModes:
- ReadWriteMany
storageClassName: default
resources:
requests:
storage: 50Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: milvus-redis-disk
namespace: milvus
spec:
accessModes:
- ReadWriteOnce
storageClassName: default
resources:
requests:
storage: 5Gi
kind: Service
apiVersion: v1
metadata:
name: milvus-proxy-servers
namespace: milvus
spec:
type: LoadBalancer
selector:
app: milvus
tier: proxy
ports:
- name: tcp
protocol: TCP
port: 19530
targetPort: 19530
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: milvus-proxy
namespace: milvus
spec:
selector:
matchLabels:
app: milvus
tier: proxy
replicas: 1
template:
metadata:
labels:
app: milvus
tier: proxy
spec:
containers:
- name: milvus-proxy
image: milvusdb/mishards:0.1.0-rc0
imagePullPolicy: Always
command: ["python", "mishards/main.py"]
resources:
limits:
memory: "3Gi"
cpu: "4"
requests:
memory: "2Gi"
ports:
- name: tcp
containerPort: 5000
env:
# - name: SQL_ECHO
# value: "True"
- name: DEBUG
value: "False"
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MILVUS_CLIENT
value: "False"
- name: LOG_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: LOG_PATH
value: /var/log/milvus
- name: SD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SD_ROSERVER_POD_PATT
value: ".*-ro-servers-.*"
volumeMounts:
- name: milvus-proxy-configmap
mountPath: /source/mishards/.env
subPath: milvus_proxy_config.yml
- name: milvus-log-disk
mountPath: /var/log/milvus
subPath: proxylog
# imagePullSecrets:
# - name: regcred
volumes:
- name: milvus-proxy-configmap
configMap:
name: milvus-proxy-configmap
- name: milvus-log-disk
persistentVolumeClaim:
claimName: milvus-log-disk
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pods-list
rules:
- apiGroups: [""]
resources: ["pods", "events"]
verbs: ["list", "get", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pods-list
subjects:
- kind: ServiceAccount
name: default
namespace: milvus
roleRef:
kind: ClusterRole
name: pods-list
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: milvus-ro-servers
namespace: milvus
spec:
type: ClusterIP
selector:
app: milvus
tier: ro-servers
ports:
- protocol: TCP
port: 19530
targetPort: 19530
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: milvus-ro-servers
namespace: milvus
spec:
serviceName: "milvus-ro-servers"
replicas: 1
template:
metadata:
labels:
app: milvus
tier: ro-servers
spec:
terminationGracePeriodSeconds: 11
containers:
- name: milvus-ro-server
image: milvusdb/milvus:0.5.0-d102119-ede20b
imagePullPolicy: Always
ports:
- containerPort: 19530
resources:
limits:
memory: "16Gi"
cpu: "8.0"
requests:
memory: "14Gi"
volumeMounts:
- name: milvus-db-disk
mountPath: /var/milvus
subPath: dbdata
- name: milvus-roserver-configmap
mountPath: /opt/milvus/conf/server_config.yaml
subPath: config.yml
- name: milvus-roserver-configmap
mountPath: /opt/milvus/conf/log_config.conf
subPath: log.conf
# imagePullSecrets:
# - name: regcred
# tolerations:
# - key: "worker"
# operator: "Equal"
# value: "performance"
# effect: "NoSchedule"
volumes:
- name: milvus-roserver-configmap
configMap:
name: milvus-roserver-configmap
- name: milvus-db-disk
persistentVolumeClaim:
claimName: milvus-db-disk
kind: Service
apiVersion: v1
metadata:
name: milvus-wo-servers
namespace: milvus
spec:
type: ClusterIP
selector:
app: milvus
tier: wo-servers
ports:
- protocol: TCP
port: 19530
targetPort: 19530
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: milvus-wo-servers
namespace: milvus
spec:
selector:
matchLabels:
app: milvus
tier: wo-servers
replicas: 1
template:
metadata:
labels:
app: milvus
tier: wo-servers
spec:
containers:
- name: milvus-wo-server
image: milvusdb/milvus:0.5.0-d102119-ede20b
imagePullPolicy: Always
ports:
- containerPort: 19530
resources:
limits:
memory: "5Gi"
cpu: "1.0"
requests:
memory: "4Gi"
volumeMounts:
- name: milvus-db-disk
mountPath: /var/milvus
subPath: dbdata
- name: milvus-woserver-configmap
mountPath: /opt/milvus/conf/server_config.yaml
subPath: config.yml
- name: milvus-woserver-configmap
mountPath: /opt/milvus/conf/log_config.conf
subPath: log.conf
# imagePullSecrets:
# - name: regcred
# tolerations:
# - key: "worker"
# operator: "Equal"
# value: "performance"
# effect: "NoSchedule"
volumes:
- name: milvus-woserver-configmap
configMap:
name: milvus-woserver-configmap
- name: milvus-db-disk
persistentVolumeClaim:
claimName: milvus-db-disk
#!/bin/bash
UL=`tput smul`
NOUL=`tput rmul`
BOLD=`tput bold`
NORMAL=`tput sgr0`
RED='\033[0;31m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
ENDC='\033[0m'
function showHelpMessage () {
echo -e "${BOLD}Usage:${NORMAL} ${RED}$0${ENDC} [option...] {cleanup${GREEN}|${ENDC}baseup${GREEN}|${ENDC}appup${GREEN}|${ENDC}appdown${GREEN}|${ENDC}allup}" >&2
echo
echo " -h, --help show help message"
echo " ${BOLD}cleanup, delete all resources${NORMAL}"
echo " ${BOLD}baseup, start all required base resources${NORMAL}"
echo " ${BOLD}appup, start all pods${NORMAL}"
echo " ${BOLD}appdown, remove all pods${NORMAL}"
echo " ${BOLD}allup, start all base resources and pods${NORMAL}"
echo " ${BOLD}scale-proxy, scale proxy${NORMAL}"
echo " ${BOLD}scale-ro-server, scale readonly servers${NORMAL}"
echo " ${BOLD}scale-worker, scale calculation workers${NORMAL}"
}
function showscaleHelpMessage () {
echo -e "${BOLD}Usage:${NORMAL} ${RED}$0 $1${ENDC} [option...] {1|2|3|4|...}" >&2
echo
echo " -h, --help show help message"
echo " ${BOLD}number, (int) target scale number"
}
function PrintScaleSuccessMessage() {
echo -e "${BLUE}${BOLD}Successfully Scaled: ${1} --> ${2}${ENDC}"
}
function PrintPodStatusMessage() {
echo -e "${BOLD}${1}${NORMAL}"
}
timeout=60
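# setUpMysql parses the MySQL user, password, and database name out of
# db_config.backend_url (mysql://user:password@host/dbname) published in
# milvus-roserver-configmap, then creates the database and grants privileges,
# polling up to $timeout seconds for each step to take effect.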
function setUpMysql () {
mysqlUserName=$(kubectl describe configmap -n milvus milvus-roserver-configmap |
grep backend_url |
awk '{print $2}' |
awk '{split($0, level1, ":");
split(level1[2], level2, "/");
print level2[3]}')
mysqlPassword=$(kubectl describe configmap -n milvus milvus-roserver-configmap |
grep backend_url |
awk '{print $2}' |
awk '{split($0, level1, ":");
split(level1[3], level3, "@");
print level3[1]}')
mysqlDBName=$(kubectl describe configmap -n milvus milvus-roserver-configmap |
grep backend_url |
awk '{print $2}' |
awk '{split($0, level1, ":");
split(level1[4], level4, "/");
print level4[2]}')
mysqlContainer=$(kubectl get pods -n milvus | grep milvus-mysql | awk '{print $1}')
kubectl exec -n milvus $mysqlContainer -- mysql -h milvus-mysql -u$mysqlUserName -p$mysqlPassword -e "CREATE DATABASE IF NOT EXISTS $mysqlDBName;"
checkDBExists=$(kubectl exec -n milvus $mysqlContainer -- mysql -h milvus-mysql -u$mysqlUserName -p$mysqlPassword -e "SELECT schema_name FROM information_schema.schemata WHERE schema_name = '$mysqlDBName';" | grep -o $mysqlDBName | wc -l)
counter=0
while [ $checkDBExists -lt 1 ]; do
sleep 1
let counter=counter+1
if [ $counter == $timeout ]; then
echo "Creating MySQL database $mysqlDBName timeout"
return 1
fi
checkDBExists=$(kubectl exec -n milvus $mysqlContainer -- mysql -h milvus-mysql -u$mysqlUserName -p$mysqlPassword -e "SELECT schema_name FROM information_schema.schemata WHERE schema_name = '$mysqlDBName';" | grep -o $mysqlDBName | wc -l)
done;
kubectl exec -n milvus $mysqlContainer -- mysql -h milvus-mysql -u$mysqlUserName -p$mysqlPassword -e "GRANT ALL PRIVILEGES ON $mysqlDBName.* TO '$mysqlUserName'@'%';"
kubectl exec -n milvus $mysqlContainer -- mysql -h milvus-mysql -u$mysqlUserName -p$mysqlPassword -e "FLUSH PRIVILEGES;"
checkGrant=$(kubectl exec -n milvus $mysqlContainer -- mysql -h milvus-mysql -u$mysqlUserName -p$mysqlPassword -e "SHOW GRANTS for $mysqlUserName;" | grep -o "GRANT ALL PRIVILEGES ON \`$mysqlDBName\`\.\*" | wc -l)
counter=0
while [ $checkGrant -lt 1 ]; do
sleep 1
let counter=counter+1
if [ $counter == $timeout ]; then
echo "Granting all privileges on $mysqlDBName to $mysqlUserName timeout"
return 1
fi
checkGrant=$(kubectl exec -n milvus $mysqlContainer -- mysql -h milvus-mysql -u$mysqlUserName -p$mysqlPassword -e "SHOW GRANTS for $mysqlUserName;" | grep -o "GRANT ALL PRIVILEGES ON \`$mysqlDBName\`\.\*" | wc -l)
done;
}
function checkStatefulServers() {
stateful_replicas=$(kubectl describe statefulset -n milvus milvus-ro-servers | grep "Replicas:" | awk '{print $2}')
stateful_running_pods=$(kubectl describe statefulset -n milvus milvus-ro-servers | grep "Pods Status:" | awk '{print $3}')
counter=0
prev=$stateful_running_pods
PrintPodStatusMessage "Running milvus-ro-servers Pods: $stateful_running_pods/$stateful_replicas"
while [ $stateful_replicas != $stateful_running_pods ]; do
echo -e "${YELLOW}Wait another 1 sec --- ${counter}${ENDC}"
sleep 1;
let counter=counter+1
if [ $counter -eq $timeout ]; then
return 1;
fi
stateful_running_pods=$(kubectl describe statefulset -n milvus milvus-ro-servers | grep "Pods Status:" | awk '{print $3}')
if [ $stateful_running_pods -ne $prev ]; then
PrintPodStatusMessage "Running milvus-ro-servers Pods: $stateful_running_pods/$stateful_replicas"
fi
prev=$stateful_running_pods
done;
return 0;
}
function checkDeployment() {
deployment_name=$1
replicas=$(kubectl describe deployment -n milvus $deployment_name | grep "Replicas:" | awk '{print $2}')
running=$(kubectl get pods -n milvus | grep $deployment_name | grep Running | wc -l)
counter=0
prev=$running
PrintPodStatusMessage "Running $deployment_name Pods: $running/$replicas"
while [ $replicas != $running ]; do
echo -e "${YELLOW}Wait another 1 sec --- ${counter}${ENDC}"
sleep 1;
let counter=counter+1
if [ $counter == $timeout ]; then
return 1
fi
running=$(kubectl get pods -n milvus | grep "$deployment_name" | grep Running | wc -l)
if [ $running -ne $prev ]; then
PrintPodStatusMessage "Running $deployment_name Pods: $running/$replicas"
fi
prev=$running
done
}
function startDependencies() {
kubectl apply -f milvus_data_pvc.yaml
kubectl apply -f milvus_configmap.yaml
kubectl apply -f milvus_auxiliary.yaml
counter=0
while [ $(kubectl get pvc -n milvus | grep Bound | wc -l) != 4 ]; do
sleep 1;
let counter=counter+1
if [ $counter == $timeout ]; then
echo "baseup timeout"
return 1
fi
done
checkDeployment "milvus-mysql"
}
function startApps() {
counter=0
errmsg=""
echo -e "${GREEN}${BOLD}Checking required resouces...${NORMAL}${ENDC}"
while [ $counter -lt $timeout ]; do
sleep 1;
if [ $(kubectl get pvc -n milvus 2>/dev/null | grep Bound | wc -l) != 4 ]; then
echo -e "${YELLOW}No pvc. Wait another sec... $counter${ENDC}";
errmsg='No pvc';
let counter=counter+1;
continue
fi
if [ $(kubectl get configmap -n milvus 2>/dev/null | grep milvus | wc -l) != 4 ]; then
echo -e "${YELLOW}No configmap. Wait another sec... $counter${ENDC}";
errmsg='No configmap';
let counter=counter+1;
continue
fi
if [ $(kubectl get ep -n milvus 2>/dev/null | grep milvus-mysql | awk '{print $2}') == "<none>" ]; then
echo -e "${YELLOW}No mysql. Wait another sec... $counter${ENDC}";
errmsg='No mysql';
let counter=counter+1;
continue
fi
# if [ $(kubectl get ep -n milvus 2>/dev/null | grep milvus-redis | awk '{print $2}') == "<none>" ]; then
# echo -e "${NORMAL}${YELLOW}No redis. Wait another sec... $counter${ENDC}";
# errmsg='No redis';
# let counter=counter+1;
# continue
# fi
break;
done
if [ $counter -ge $timeout ]; then
echo -e "${RED}${BOLD}Start APP Error: $errmsg${NORMAL}${ENDC}"
exit 1;
fi
echo -e "${GREEN}${BOLD}Setup requried database ...${NORMAL}${ENDC}"
setUpMysql
if [ $? -ne 0 ]; then
echo -e "${RED}${BOLD}Setup MySQL database timeout${NORMAL}${ENDC}"
exit 1
fi
echo -e "${GREEN}${BOLD}Start servers ...${NORMAL}${ENDC}"
kubectl apply -f milvus_stateful_servers.yaml
kubectl apply -f milvus_write_servers.yaml
checkStatefulServers
if [ $? -ne 0 ]; then
echo -e "${RED}${BOLD}Starting milvus-ro-servers timeout${NORMAL}${ENDC}"
exit 1
fi
checkDeployment "milvus-wo-servers"
if [ $? -ne 0 ]; then
echo -e "${RED}${BOLD}Starting milvus-wo-servers timeout${NORMAL}${ENDC}"
exit 1
fi
echo -e "${GREEN}${BOLD}Start rolebinding ...${NORMAL}${ENDC}"
kubectl apply -f milvus_rbac.yaml
echo -e "${GREEN}${BOLD}Start proxies ...${NORMAL}${ENDC}"
kubectl apply -f milvus_proxy.yaml
checkDeployment "milvus-proxy"
if [ $? -ne 0 ]; then
echo -e "${RED}${BOLD}Starting milvus-proxy timeout${NORMAL}${ENDC}"
exit 1
fi
# echo -e "${GREEN}${BOLD}Start flower ...${NORMAL}${ENDC}"
# kubectl apply -f milvus_flower.yaml
# checkDeployment "milvus-flower"
# if [ $? -ne 0 ]; then
# echo -e "${RED}${BOLD}Starting milvus-flower timeout${NORMAL}${ENDC}"
# exit 1
# fi
}
function removeApps () {
# kubectl delete -f milvus_flower.yaml 2>/dev/null
kubectl delete -f milvus_proxy.yaml 2>/dev/null
kubectl delete -f milvus_stateful_servers.yaml 2>/dev/null
kubectl delete -f milvus_write_servers.yaml 2>/dev/null
kubectl delete -f milvus_rbac.yaml 2>/dev/null
# kubectl delete -f milvus_monitor.yaml 2>/dev/null
}
function scaleDeployment() {
deployment_name=$1
subcommand=$2
des=$3
case $des in
-h|--help|"")
showscaleHelpMessage $subcommand
exit 3
;;
esac
cur=$(kubectl get deployment -n milvus $deployment_name |grep $deployment_name |awk '{split($2, status, "/"); print status[2];}')
echo -e "${GREEN}Current Running ${BOLD}$cur ${GREEN}${deployment_name}, Scaling to ${BOLD}$des ...${ENDC}";
scalecmd="kubectl scale deployment -n milvus ${deployment_name} --replicas=${des}"
${scalecmd}
if [ $? -ne 0 ]; then
echo -e "${RED}${BOLD}Scale Error: ${GREEN}${scalecmd}${ENDC}"
exit 1
fi
checkDeployment $deployment_name
if [ $? -ne 0 ]; then
echo -e "${RED}${BOLD}Scale ${deployment_name} timeout${NORMAL}${ENDC}"
scalecmd="kubectl scale deployment -n milvus ${deployment_name} --replicas=${cur}"
${scalecmd}
if [ $? -ne 0 ]; then
echo -e "${RED}${BOLD}Scale Rollback Error: ${GREEN}${scalecmd}${ENDC}"
exit 2
fi
echo -e "${BLUE}${BOLD}Scale Rollback to ${cur}${ENDC}"
exit 1
fi
PrintScaleSuccessMessage $cur $des
}
function scaleROServers() {
subcommand=$1
des=$2
case $des in
-h|--help|"")
showscaleHelpMessage $subcommand
exit 3
;;
esac
cur=$(kubectl get statefulset -n milvus milvus-ro-servers |tail -n 1 |awk '{split($2, status, "/"); print status[2];}')
echo -e "${GREEN}Current Running ${BOLD}$cur ${GREEN}Readonly Servers, Scaling to ${BOLD}$des ...${ENDC}";
scalecmd="kubectl scale sts milvus-ro-servers -n milvus --replicas=${des}"
${scalecmd}
if [ $? -ne 0 ]; then
echo -e "${RED}${BOLD}Scale Error: ${GREEN}${scalecmd}${ENDC}"
exit 1
fi
checkStatefulServers
if [ $? -ne 0 ]; then
echo -e "${RED}${BOLD}Scale milvus-ro-servers timeout${NORMAL}${ENDC}"
scalecmd="kubectl scale sts milvus-ro-servers -n milvus --replicas=${cur}"
${scalecmd}
if [ $? -ne 0 ]; then
echo -e "${RED}${BOLD}Scale Rollback Error: ${GREEN}${scalecmd}${ENDC}"
exit 2
fi
echo -e "${BLUE}${BOLD}Scale Rollback to ${cur}${ENDC}"
exit 1
fi
PrintScaleSuccessMessage $cur $des
}
case "$1" in
cleanup)
kubectl delete -f . 2>/dev/null
echo -e "${BLUE}${BOLD}All resources are removed${NORMAL}${ENDC}"
;;
appdown)
removeApps;
echo -e "${BLUE}${BOLD}All pods are removed${NORMAL}${ENDC}"
;;
baseup)
startDependencies;
echo -e "${BLUE}${BOLD}All pvc, configmap and services up${NORMAL}${ENDC}"
;;
appup)
startApps;
echo -e "${BLUE}${BOLD}All pods up${NORMAL}${ENDC}"
;;
allup)
startDependencies;
sleep 2
startApps;
echo -e "${BLUE}${BOLD}All resources and pods up${NORMAL}${ENDC}"
;;
scale-ro-server)
scaleROServers $1 $2
;;
scale-proxy)
scaleDeployment "milvus-proxy" $1 $2
;;
-h|--help|*)
showHelpMessage
;;
esac
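# Hypothetical usage, assuming this script is saved as cluster.sh:
#   ./cluster.sh baseup             # create PVCs, configmaps, and MySQL
#   ./cluster.sh appup              # start servers, RBAC, and proxies
#   ./cluster.sh scale-ro-server 3  # scale read-only servers to 3 replicas
#   ./cluster.sh appdown            # remove all pods
#   ./cluster.sh cleanup            # delete all resources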
import fire
from sqlalchemy import and_
from mishards import db, settings
@@ -12,17 +11,6 @@ class DBHandler:
def drop_all(cls):
db.drop_all()
@classmethod
def fun(cls, tid):
from mishards.factories import TablesFactory, TableFilesFactory, Tables
f = db.Session.query(Tables).filter(and_(
Tables.table_id == tid,
Tables.state != Tables.TO_DELETE)
).first()
print(f)
# f1 = TableFilesFactory()
if __name__ == '__main__':
db.init_db(settings.DefaultConfig.SQLALCHEMY_DATABASE_URI)
@@ -6,7 +6,7 @@ SERVER_TEST_PORT=19888
#SQLALCHEMY_DATABASE_URI=mysql+pymysql://root:root@127.0.0.1:3306/milvus?charset=utf8mb4
SQLALCHEMY_DATABASE_URI=sqlite:////tmp/milvus/db/meta.sqlite?check_same_thread=False
SQL_ECHO=True
SQL_ECHO=False
#SQLALCHEMY_DATABASE_TEST_URI=mysql+pymysql://root:root@127.0.0.1:3306/milvus?charset=utf8mb4
SQLALCHEMY_DATABASE_TEST_URI=sqlite:////tmp/milvus/db/meta.sqlite?check_same_thread=False
@@ -13,6 +13,7 @@ else:
DEBUG = env.bool('DEBUG', False)
MAX_RETRY = env.int('MAX_RETRY', 3)
LOG_LEVEL = env.str('LOG_LEVEL', 'DEBUG' if DEBUG else 'INFO')
LOG_PATH = env.str('LOG_PATH', '/tmp/mishards')
@@ -22,9 +23,6 @@ TIMEZONE = env.str('TIMEZONE', 'UTC')
from utils.logger_helper import config
config(LOG_LEVEL, LOG_PATH, LOG_NAME, TIMEZONE)
TIMEOUT = env.int('TIMEOUT', 60)
MAX_RETRY = env.int('MAX_RETRY', 3)
SERVER_PORT = env.int('SERVER_PORT', 19530)
SERVER_TEST_PORT = env.int('SERVER_TEST_PORT', 19530)
WOSERVER = env.str('WOSERVER')
@@ -69,12 +67,3 @@ class TestingConfig(DefaultConfig):
SQL_ECHO = env.bool('SQL_TEST_ECHO', False)
TRACER_CLASS_NAME = env.str('TRACER_CLASS_TEST_NAME', '')
ROUTER_CLASS_NAME = env.str('ROUTER_CLASS_TEST_NAME', 'FileBasedHashRingRouter')
if __name__ == '__main__':
import logging
logger = logging.getLogger(__name__)
logger.debug('DEBUG')
logger.info('INFO')
logger.warn('WARN')
logger.error('ERROR')