add: snowflake email kafka; refactor: redis connect

This commit is contained in:
wwweww
2026-02-25 01:16:13 +08:00
parent fdbcde13b2
commit 300058ad01
67 changed files with 3596 additions and 139 deletions
+184
@@ -0,0 +1,184 @@
# Operator Installation and Example Usage
This document covers two ways to install the Strimzi Operator and the MongoDB Community Operator:
- Helm install
- kubectl install
> The example resource files live in `deploy/example` and default to the `juwan` namespace.
> Make sure your Operator can watch `juwan` first; otherwise change the namespace or adjust the Operator's watch scope.
## 1) Strimzi Operator (Kafka)
### 1.1 Install with Helm
```bash
kubectl create namespace kafka
helm repo add strimzi https://strimzi.io/charts/
helm repo update
helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator -n kafka
```
### 1.2 Install with kubectl
```bash
kubectl create namespace kafka
kubectl apply -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
```
### 1.3 Verify the installation
```bash
kubectl get pods -n kafka
kubectl get crd | grep kafka.strimzi.io
```
### 1.4 Apply the Kafka example
```bash
kubectl create namespace juwan
kubectl apply -f deploy/example/kafka-strimzi-example.yaml
kubectl get kafka,kafkatopic,kafkauser -n juwan
```
## 2) MongoDB Community Operator
### 2.1 Install with Helm
```bash
kubectl create namespace mongodb
helm repo add mongodb https://mongodb.github.io/helm-charts
helm repo update
helm install mongodb-kubernetes-operator mongodb/community-operator -n mongodb
```
### 2.2 Install with kubectl
```bash
kubectl create namespace mongodb
kubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-kubernetes-operator/master/config/crd/bases/mongodbcommunity.mongodb.com_mongodbcommunity.yaml
kubectl apply -k https://github.com/mongodb/mongodb-kubernetes-operator/config/rbac/
kubectl apply -k https://github.com/mongodb/mongodb-kubernetes-operator/config/manager/
```
### 2.3 Verify the installation
```bash
kubectl get pods -n mongodb
kubectl get crd | grep mongodbcommunity.mongodb.com
```
### 2.4 Apply the MongoDB example
```bash
kubectl create namespace juwan
kubectl apply -f deploy/example/mongodb-community-example.yaml
kubectl get mongodbcommunity -n juwan
```
## 3) MongoDB: Sentinel-Style HA and Sharded Clusters
### 3.1 About "sentinel clusters"
MongoDB has no standalone sentinel component like Redis Sentinel.
High availability in MongoDB is built into the **Replica Set** (automatic failover and recovery).
In a MongoDB context, a "sentinel cluster" therefore maps to a highly available replica set.
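A replica set can elect a primary only while a strict majority of members is reachable, which is why member counts should stay odd. A quick shell sketch of the arithmetic:

```bash
# An n-member replica set needs floor(n/2)+1 voting members to elect a primary,
# so it tolerates n - (floor(n/2)+1) member failures.
for n in 3 4 5 7; do
  majority=$(( n / 2 + 1 ))
  echo "members=$n majority=$majority tolerates=$(( n - majority ))"
done
```

Note that 4 members tolerate no more failures than 3, so an even member count only adds cost.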
### 3.2 The MongoDB sentinel equivalent: a highly available replica set
This repository provides an HA replica-set YAML: `deploy/example/mongodb-ha-replicaset-example.yaml`
```bash
kubectl create namespace juwan
kubectl apply -f deploy/example/mongodb-ha-replicaset-example.yaml
kubectl get mongodbcommunity -n juwan
```
Check the replica-set status (exec into any one of the Pods):
```bash
kubectl get pods -n juwan
kubectl exec -it -n juwan <mongodb-pod-name> -- mongosh --eval "rs.status()"
```
Production recommendations:
- Keep an odd number of members (3/5/7)
- Use persistent volumes (PVCs), not ephemeral disks
- Spread members across availability zones (anti-affinity)
- Enable backups and monitoring
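For the anti-affinity point, the Community Operator exposes a `statefulSet` override (the same mechanism the HA example uses for volume claims); a sketch, where the selector label and `topologyKey` are assumptions to adapt to your cluster:

```yaml
# Sketch: spread replica-set members across zones (merge into the MongoDBCommunity spec).
# The matchLabels value and topologyKey below are assumptions; match them to your cluster.
spec:
  statefulSet:
    spec:
      template:
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchLabels:
                      app: juwan-mongodb-ha-svc # hypothetical pod label
                  topologyKey: topology.kubernetes.io/zone
```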
### 3.3 MongoDB sharded-cluster architecture (Sharded Cluster)
A sharded cluster has three layers:
- Config Server replica set (holds shard metadata; 3 nodes recommended)
- Shard replica sets (each shard is itself a replica set; 3 nodes per shard recommended)
- Mongos (routing layer; the single entry point exposed to applications)
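Applications talk only to the mongos layer; mongos itself locates the metadata through a `--configdb` string of the form `<config-rs-name>/<host:port>,...`. A small shell sketch pulling that string apart (hostnames follow the example manifests):

```bash
# The --configdb value handed to mongos: "<rs name>/<member list>"
configdb="cfg-rs/cfg-0.cfg-svc.juwan.svc.cluster.local:27019,cfg-1.cfg-svc.juwan.svc.cluster.local:27019,cfg-2.cfg-svc.juwan.svc.cluster.local:27019"
rs_name="${configdb%%/*}"                                      # text before the first "/"
members=$(( $(echo "${configdb#*/}" | tr ',' '\n' | wc -l) ))  # count comma-separated hosts
echo "replica set: $rs_name"    # cfg-rs
echo "config servers: $members" # 3
```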
### 3.4 Building the sharded cluster (kubectl)
> Note: the MongoDB Community Operator focuses on replica-set management. In community practice, sharded clusters are usually hand-assembled (StatefulSet/Service) and initialized with mongosh.
This repository provides a base sharded-cluster YAML: `deploy/example/mongodb-sharded-cluster-example.yaml`
```bash
kubectl create namespace juwan
kubectl apply -f deploy/example/mongodb-sharded-cluster-example.yaml
kubectl get pods,svc -n juwan
```
1) Deploy the Config Server replica set (3 nodes)
- StatefulSet + headless Service running `mongod --configsvr --replSet cfg-rs`
2) Deploy the shard replica sets (e.g. `shard1-rs` / `shard2-rs`, 3 nodes each)
- StatefulSet + headless Service running `mongod --shardsvr --replSet <shard-rs-name>`
3) Deploy the Mongos routing layer
- A Deployment running `mongos --configdb cfg-rs/<cfg-0>:27019,<cfg-1>:27019,<cfg-2>:27019`
4) Initialize each replica set
```bash
# Initialize the Config Server RS
kubectl exec -it -n juwan <cfg-pod-0> -- mongosh --port 27019 --eval 'rs.initiate({_id:"cfg-rs",configsvr:true,members:[{_id:0,host:"cfg-0.cfg-svc.juwan.svc.cluster.local:27019"},{_id:1,host:"cfg-1.cfg-svc.juwan.svc.cluster.local:27019"},{_id:2,host:"cfg-2.cfg-svc.juwan.svc.cluster.local:27019"}]})'
# Initialize the shard1 RS
kubectl exec -it -n juwan <shard1-pod-0> -- mongosh --port 27018 --eval 'rs.initiate({_id:"shard1-rs",members:[{_id:0,host:"shard1-0.shard1-svc.juwan.svc.cluster.local:27018"},{_id:1,host:"shard1-1.shard1-svc.juwan.svc.cluster.local:27018"},{_id:2,host:"shard1-2.shard1-svc.juwan.svc.cluster.local:27018"}]})'
# Initialize the shard2 RS
kubectl exec -it -n juwan <shard2-pod-0> -- mongosh --port 27018 --eval 'rs.initiate({_id:"shard2-rs",members:[{_id:0,host:"shard2-0.shard2-svc.juwan.svc.cluster.local:27018"},{_id:1,host:"shard2-1.shard2-svc.juwan.svc.cluster.local:27018"},{_id:2,host:"shard2-2.shard2-svc.juwan.svc.cluster.local:27018"}]})'
```
5) Register the shards through Mongos and enable sharding
```bash
kubectl exec -it -n juwan <mongos-pod-name> -- mongosh --port 27017 --eval 'sh.addShard("shard1-rs/shard1-0.shard1-svc.juwan.svc.cluster.local:27018,shard1-1.shard1-svc.juwan.svc.cluster.local:27018,shard1-2.shard1-svc.juwan.svc.cluster.local:27018")'
kubectl exec -it -n juwan <mongos-pod-name> -- mongosh --port 27017 --eval 'sh.addShard("shard2-rs/shard2-0.shard2-svc.juwan.svc.cluster.local:27018,shard2-1.shard2-svc.juwan.svc.cluster.local:27018,shard2-2.shard2-svc.juwan.svc.cluster.local:27018")'
kubectl exec -it -n juwan <mongos-pod-name> -- mongosh --port 27017 --eval 'sh.enableSharding("appdb")'
kubectl exec -it -n juwan <mongos-pod-name> -- mongosh --port 27017 --eval 'sh.shardCollection("appdb.user_events", {"userId": "hashed"})'
```
6) Verify the sharding status
```bash
kubectl exec -it -n juwan <mongos-pod-name> -- mongosh --port 27017 --eval 'sh.status()'
```
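Every member address in the commands above follows the StatefulSet DNS pattern `<pod>.<headless-service>.<namespace>.svc.cluster.local:<port>`; as a sanity check, the `sh.addShard` argument for shard1 can be generated from that pattern:

```bash
# Rebuild the "rs/host:port,host:port,..." string that sh.addShard expects for shard1
rs=shard1-rs; svc=shard1-svc; ns=juwan; port=27018
members=""
for i in 0 1 2; do
  members="${members}shard1-${i}.${svc}.${ns}.svc.cluster.local:${port},"
done
addshard_arg="${rs}/${members%,}"   # strip the trailing comma
echo "$addshard_arg"
```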
## 4) Uninstall (optional)
### Strimzi (Helm install)
```bash
helm uninstall strimzi-kafka-operator -n kafka
```
### MongoDB Operator (Helm install)
```bash
helm uninstall mongodb-kubernetes-operator -n mongodb
```
+80
@@ -0,0 +1,80 @@
# Strimzi Kafka cluster example
# Prerequisite: the Strimzi Operator is installed and can watch this namespace.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: juwan-kafka
  namespace: juwan # example application namespace
spec:
  kafka:
    version: 3.9.0 # Kafka broker version
    replicas: 1 # fine for development; use >= 3 in production
    listeners:
      - name: plain
        port: 9092
        type: internal # in-cluster access only
        tls: false # plaintext listener, convenient for internal debugging
      - name: tls
        port: 9093
        type: internal
        tls: true # TLS listener, recommended for application traffic
    config:
      # single-replica fault-tolerance settings (development only)
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
    storage:
      type: ephemeral # ephemeral storage, lost when Pods are recreated; use persistent-claim in production
  zookeeper:
    replicas: 1 # fine for development; use >= 3 in production
    storage:
      type: ephemeral
  # enable the Topic/User Operators for declarative management of topics and accounts
  entityOperator:
    topicOperator: {}
    userOperator: {}
---
# application topic example
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: user-events # user-event topic
  namespace: juwan
  labels:
    strimzi.io/cluster: juwan-kafka # links the topic to its Kafka cluster
spec:
  partitions: 3 # partition count determines consumer parallelism
  replicas: 1 # replica count; development-only example
  config:
    retention.ms: 604800000 # 7 days
    segment.bytes: 1073741824 # 1 GiB
---
# Kafka user and ACL example
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: app-producer # application-side producer account
  namespace: juwan
  labels:
    strimzi.io/cluster: juwan-kafka
spec:
  authentication:
    type: tls # generates a TLS client-certificate Secret
authorization:
type: simple
acls:
- resource:
type: topic
name: user-events
patternType: literal
operations:
- Read
- Write
- resource:
type: group
name: app-consumer-group
patternType: literal
operations:
- Read
@@ -0,0 +1,36 @@
# Example password Secret for the MongoDB application user (use a stronger value, or an external secret manager)
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-app-user-password
  namespace: juwan # example application namespace
type: Opaque
stringData:
  password: ChangeMe123456 # plaintext example, for demonstration only
---
# MongoDB Community Operator custom resource example
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: juwan-mongodb
  namespace: juwan
spec:
  members: 3 # replica-set member count; keep it odd in production
  type: ReplicaSet
  version: "7.0.12" # MongoDB version
  security:
    authentication:
      modes:
        - SCRAM # enable username/password authentication
  users:
    - name: app-user # application account
      db: admin
      passwordSecretRef:
        name: mongodb-app-user-password # references the Secret above
      roles:
        - name: readWrite
          db: appdb # grants read/write on the appdb database
      scramCredentialsSecretName: app-user-scram # credentials Secret generated by the Operator
  additionalMongodConfig:
    # example: enable WiredTiger journal compression
    storage.wiredTiger.engineConfig.journalCompressor: zlib
@@ -0,0 +1,46 @@
# MongoDB high-availability (replica set) example
# Note: MongoDB has no Redis Sentinel component; the replica set itself is the HA mechanism.
apiVersion: v1
kind: Secret
metadata:
name: mongodb-ha-app-user-password
namespace: juwan
type: Opaque
stringData:
password: ChangeMe_ReallyStrongPassword
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: juwan-mongodb-ha
namespace: juwan
spec:
members: 3
type: ReplicaSet
version: "7.0.12"
  # enable persistence in production (adjust storageClassName to your cluster)
statefulSet:
spec:
volumeClaimTemplates:
- metadata:
name: data-volume
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 20Gi
security:
authentication:
modes:
- SCRAM
users:
- name: app-user
db: admin
passwordSecretRef:
name: mongodb-ha-app-user-password
roles:
- name: readWrite
db: appdb
scramCredentialsSecretName: app-user-scram
additionalMongodConfig:
storage.wiredTiger.engineConfig.journalCompressor: zlib
@@ -0,0 +1,218 @@
# Minimal MongoDB sharded-cluster example (Config RS + 2 shard RS + Mongos)
# Usage:
# 1) apply this file
# 2) run rs.initiate / sh.addShard / sh.enableSharding as described in the docs
# Note: this example focuses on structure; for production add resource limits, anti-affinity, PDBs, backups, and monitoring.
---
apiVersion: v1
kind: Service
metadata:
name: cfg-svc
namespace: juwan
spec:
clusterIP: None
selector:
app: mongo-cfg
ports:
- name: mongo
port: 27019
targetPort: 27019
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cfg
namespace: juwan
spec:
serviceName: cfg-svc
replicas: 3
selector:
matchLabels:
app: mongo-cfg
template:
metadata:
labels:
app: mongo-cfg
spec:
containers:
- name: mongod
image: mongo:7.0
args:
[
"--configsvr",
"--replSet",
"cfg-rs",
"--port",
"27019",
"--bind_ip_all",
]
ports:
- containerPort: 27019
name: mongo
volumeMounts:
- name: data
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
name: shard1-svc
namespace: juwan
spec:
clusterIP: None
selector:
app: mongo-shard1
ports:
- name: mongo
port: 27018
targetPort: 27018
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: shard1
namespace: juwan
spec:
serviceName: shard1-svc
replicas: 3
selector:
matchLabels:
app: mongo-shard1
template:
metadata:
labels:
app: mongo-shard1
spec:
containers:
- name: mongod
image: mongo:7.0
args:
[
"--shardsvr",
"--replSet",
"shard1-rs",
"--port",
"27018",
"--bind_ip_all",
]
ports:
- containerPort: 27018
name: mongo
volumeMounts:
- name: data
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
name: shard2-svc
namespace: juwan
spec:
clusterIP: None
selector:
app: mongo-shard2
ports:
- name: mongo
port: 27018
targetPort: 27018
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: shard2
namespace: juwan
spec:
serviceName: shard2-svc
replicas: 3
selector:
matchLabels:
app: mongo-shard2
template:
metadata:
labels:
app: mongo-shard2
spec:
containers:
- name: mongod
image: mongo:7.0
args:
[
"--shardsvr",
"--replSet",
"shard2-rs",
"--port",
"27018",
"--bind_ip_all",
]
ports:
- containerPort: 27018
name: mongo
volumeMounts:
- name: data
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
name: mongos
namespace: juwan
spec:
selector:
app: mongos
ports:
- name: mongo
port: 27017
targetPort: 27017
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongos
namespace: juwan
spec:
replicas: 2
selector:
matchLabels:
app: mongos
template:
metadata:
labels:
app: mongos
spec:
containers:
- name: mongos
image: mongo:7.0
args:
- "mongos"
- "--configdb"
- "cfg-rs/cfg-0.cfg-svc.juwan.svc.cluster.local:27019,cfg-1.cfg-svc.juwan.svc.cluster.local:27019,cfg-2.cfg-svc.juwan.svc.cluster.local:27019"
- "--bind_ip_all"
- "--port"
- "27017"
ports:
- containerPort: 27017
name: mongo
@@ -1,11 +1,11 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: db-dx-init-script
namespace: juwan
labels:
app: db-dx-init-script
data:
init-extensions-sql: |
create extension if not exists "uuid-ossp";
create extension if not exists "pg_trgm";
@@ -1,38 +1,38 @@
apiVersion: v1
kind: Namespace
metadata:
name: juwan
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: juwan
name: find-endpoints
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: discov-endpoints
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch"]
- apiGroups: ["discovery.k8s.io"]
resources: ["endpointslices"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: find-endpoints-discov-endpoints
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: discov-endpoints
subjects:
- kind: ServiceAccount
name: find-endpoints
namespace: juwan
+33
@@ -0,0 +1,33 @@
apiVersion: v1
kind: Service
metadata:
  name: snowflake-svc
  namespace: juwan
spec:
  clusterIP: None
  selector:
    app: snowflake
  ports:
    - port: 9000
      targetPort: 9000
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: snowflake
namespace: juwan
spec:
serviceName: snowflake-svc
replicas: 3
selector:
matchLabels:
app: snowflake
template:
metadata:
labels:
app: snowflake
spec:
containers:
- name: snowflake
image:
+75
@@ -0,0 +1,75 @@
# apiVersion: kafka.strimzi.io/v1
# kind: KafkaNodePool
# metadata:
# name: kafka-pool
# namespace: kafka
# labels:
# strimzi.io/cluster: my-cluster
# spec:
# replicas: 3
# roles:
# - controller
# - broker
# storage:
# type: jbod
# volumes:
# - id: 0
# type: persistent-claim
# size: 100Gi
# deleteClaim: false
# resources:
# requests:
# memory: 2Gi
# cpu: "1"
# limits:
# memory: 4Gi
# cpu: "2"
# ---
apiVersion: kafka.strimzi.io/v1
kind: KafkaNodePool
metadata:
name: controller-pool
namespace: kafka
labels:
strimzi.io/cluster: my-cluster
spec:
replicas: 3
roles:
- controller
storage:
type: persistent-claim
size: 10Gi
deleteClaim: false
resources:
requests:
memory: 1Gi
cpu: "0.5"
limits:
memory: 2Gi
cpu: "1"
---
apiVersion: kafka.strimzi.io/v1
kind: KafkaNodePool
metadata:
name: broker-pool
namespace: kafka
labels:
strimzi.io/cluster: my-cluster
spec:
replicas: 3
roles:
- broker
storage:
type: jbod
volumes:
- id: 0
type: persistent-claim
size: 100Gi
deleteClaim: false
resources:
requests:
memory: 2Gi
cpu: "1"
limits:
memory: 4Gi
cpu: "2"
+44
@@ -0,0 +1,44 @@
apiVersion: kafka.strimzi.io/v1
kind: Kafka
metadata:
name: my-cluster
namespace: kafka
annotations:
strimzi.io/kraft: enabled
strimzi.io/node-pools: enabled
spec:
kafka:
version: 4.0.1
metadataVersion: 4.0-IV0
listeners:
- name: plain
port: 9092
type: internal
tls: false
- name: tls
port: 9093
type: internal
tls: true
config:
offsets.topic.replication.factor: 3
transaction.state.log.replication.factor: 3
transaction.state.log.min.isr: 2
default.replication.factor: 3
min.insync.replicas: 2
entityOperator:
topicOperator:
resources:
requests:
memory: 512Mi
cpu: "0.2"
limits:
memory: 512Mi
cpu: "0.5"
userOperator:
resources:
requests:
memory: 512Mi
cpu: "0.2"
limits:
memory: 512Mi
cpu: "0.5"
+13
@@ -0,0 +1,13 @@
apiVersion: kafka.strimzi.io/v1
kind: KafkaTopic
metadata:
name: email-task
namespace: kafka
labels:
strimzi.io/cluster: my-cluster
spec:
partitions: 3
replicas: 3
config:
retention.ms: 604800000
segment.bytes: 1073741824
+43 -2
@@ -11,7 +11,11 @@ metadata:
rules:
- apiGroups: [""]
resources:
- nodes
- pods
- pods/log
- services
- endpoints
- namespaces
verbs: ["get", "list", "watch"]
---
@@ -50,6 +54,14 @@ data:
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: replace
source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
target_label: app
regex: (.+)
- action: replace
source_labels: [__meta_kubernetes_pod_label_app]
target_label: app
regex: (.+)
- action: replace
source_labels: [__meta_kubernetes_pod_node_name]
target_label: node
@@ -63,9 +75,29 @@ data:
source_labels: [__meta_kubernetes_pod_container_name]
target_label: container
- action: replace
source_labels: [__meta_kubernetes_pod_uid]
source_labels: [__meta_kubernetes_pod_uid, __meta_kubernetes_pod_container_name]
separator: /
target_label: __path__
replacement: /var/log/pods/*$1/*/*.log
replacement: /var/log/pods/*$1/*.log
- job_name: kubernetes-pods-static
pipeline_stages:
- regex:
source: filename
expression: /var/log/pods/(?P<namespace>[^_]+)_(?P<pod>[^_]+)_[^/]+/(?P<container>[^/]+)/[0-9]+\.log
- regex:
source: pod
expression: ^(?P<app>.+?)(?:-[a-f0-9]{8,10}-[a-z0-9]{5}|-[0-9]+)?$
- labels:
namespace:
pod:
container:
app:
static_configs:
- targets:
- localhost
labels:
job: kubernetes-pods
__path__: /var/log/pods/*/*/*.log
---
apiVersion: apps/v1
kind: DaemonSet
@@ -87,6 +119,9 @@ spec:
containers:
- name: promtail
image: grafana/promtail:2.9.6
securityContext:
runAsUser: 0
runAsGroup: 0
args:
- "-config.file=/etc/promtail/promtail.yaml"
volumeMounts:
@@ -97,6 +132,9 @@ spec:
- name: varlog
mountPath: /var/log
readOnly: true
- name: dockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: config
configMap:
@@ -106,3 +144,6 @@ spec:
- name: varlog
hostPath:
path: /var/log
- name: dockercontainers
hostPath:
path: /var/lib/docker/containers
+119
@@ -0,0 +1,119 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: email-api
namespace: juwan
labels:
app: email-api
spec:
replicas: 3
revisionHistoryLimit: 5
selector:
matchLabels:
app: email-api
template:
metadata:
labels:
app: email-api
spec:
serviceAccountName: find-endpoints
containers:
- name: email-api
image: email
ports:
- containerPort: 8888
env:
- name: KAFKA_BROKER
value: "my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092"
- name: REDIS_M_HOST
value: "user-redis-master.juwan:6379"
- name: REDIS_S_HOST
value: "user-redis-replica.juwan:6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: user-redis
key: password
readinessProbe:
tcpSocket:
port: 8888
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 8888
initialDelaySeconds: 15
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
limits:
cpu: 1000m
memory: 1024Mi
volumeMounts:
- name: timezone
mountPath: /etc/localtime
volumes:
- name: timezone
hostPath:
path: /usr/share/zoneinfo/Asia/Shanghai
---
apiVersion: v1
kind: Service
metadata:
name: email-api-svc
namespace: juwan
spec:
ports:
- port: 8888
targetPort: 8888
selector:
app: email-api
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: email-api-hpa-c
namespace: juwan
labels:
app: email-api-hpa-c
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: email-api
minReplicas: 3
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 80
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: email-api-hpa-m
namespace: juwan
labels:
app: email-api-hpa-m
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: email-api
minReplicas: 3
maxReplicas: 10
metrics:
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
+100
@@ -0,0 +1,100 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: email-consumer
namespace: juwan
labels:
app: email-consumer
spec:
replicas: 3
revisionHistoryLimit: 5
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
selector:
matchLabels:
app: email-consumer
template:
metadata:
labels:
app: email-consumer
spec:
serviceAccountName: find-endpoints
containers:
- name: email-consumer
image: 103.236.53.208:4418/library/email-consumer@sha256:6fe8a3a57310a5e79feecc4bf38ac2c5b8c58a7f200f104f7bf4707b9db5fc13
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
volumeMounts:
- name: timezone
mountPath: /etc/localtime
volumes:
- name: timezone
hostPath:
path: /usr/share/zoneinfo/Asia/Shanghai
---
apiVersion: v1
kind: Service
metadata:
name: email-consumer-svc
namespace: juwan
spec:
ports:
- port: 8080
targetPort: 8080
selector:
app: email-consumer
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: email-consumer-hpa-c
namespace: juwan
labels:
app: email-consumer-hpa-c
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: email-consumer
minReplicas: 1
maxReplicas: 3
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 80
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: email-consumer-hpa-m
namespace: juwan
labels:
app: email-consumer-hpa-m
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: email-consumer
minReplicas: 1
maxReplicas: 3
metrics:
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
+107
@@ -0,0 +1,107 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: snowflake
namespace: juwan
labels:
app: snowflake
spec:
replicas: 3
revisionHistoryLimit: 5
selector:
matchLabels:
app: snowflake
template:
metadata:
labels:
app: snowflake
spec:
serviceAccountName: find-endpoints
containers:
- name: snowflake
image: 103.236.53.208:4418/library/snowflake@sha256:1679cf94b69f426eec5d2f960ffb153bb7dbcd3bcaf0286261a43756384a86b3
ports:
- containerPort: 8080
readinessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 15
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
limits:
cpu: 1000m
memory: 1024Mi
volumeMounts:
- name: timezone
mountPath: /etc/localtime
volumes:
- name: timezone
hostPath:
path: /usr/share/zoneinfo/Asia/Shanghai
---
apiVersion: v1
kind: Service
metadata:
name: snowflake-svc
namespace: juwan
spec:
ports:
- port: 8080
targetPort: 8080
selector:
app: snowflake
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: snowflake-hpa-c
namespace: juwan
labels:
app: snowflake-hpa-c
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: snowflake
minReplicas: 3
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 80
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: snowflake-hpa-m
namespace: juwan
labels:
app: snowflake-hpa-m
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: snowflake
minReplicas: 3
maxReplicas: 10
metrics:
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
+65 -11
@@ -29,18 +29,35 @@ spec:
]
containers:
- name: user-rpc
image: user-rpc:v1
image: 103.236.53.208:4418/library/user-rpc@sha256:57746256905acb5757153aef536ebfd19338b7f935f01ba1f538fbfd0a12f6f5
ports:
- containerPort: 9001
- containerPort: 4001
env:
- name: DB_URI
- name: DB_PORT
valueFrom:
secretKeyRef:
name: user-db-app
key: uri
- name: REDIS_HOST
value: "user-redis.juwan:6379"
key: port
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: user-db-app
key: password
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: user-db-app
key: username
- name: DB_NAME
valueFrom:
secretKeyRef:
name: user-db-app
key: dbname
- name: REDIS_M_HOST
value: "user-redis-master.juwan:6379"
- name: REDIS_S_HOST
value: "user-redis-replica.juwan:6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
@@ -143,9 +160,9 @@ spec:
type: Utilization
averageUtilization: 80
---
# Redis Cluster
# Redis master-replica replication
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisCluster
kind: RedisReplication
metadata:
name: user-redis
namespace: juwan
@@ -161,9 +178,10 @@ spec:
limits:
cpu: 500m
memory: 512Mi
redisSecret:
name: user-redis
key: password
redisSecret:
name: user-redis
key: password
redisExporter:
enabled: true
image: quay.io/opstree/redis-exporter:latest
@@ -172,7 +190,43 @@ spec:
runAsUser: 1000
fsGroup: 1000
storage:
size: 1Gi
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
# Sentinel monitoring
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisSentinel
metadata:
name: user-redis-sentinel
namespace: juwan
spec:
clusterSize: 3
kubernetesConfig:
image: quay.io/opstree/redis-sentinel:v7.0.12
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
redisSentinelConfig:
redisReplicationName: user-redis
masterGroupName: mymaster
redisPort: "6379"
quorum: "2"
downAfterMilliseconds: "5000"
failoverTimeout: "10000"
parallelSyncs: "1"
---
# PostgreSQL cluster
+9 -1
@@ -1,3 +1,11 @@
kubectl create secret generic user-redis \
--from-literal=password=$(openssl rand -base64 12) \
--namespace juwan
--namespace juwan
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm upgrade --install -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml prometheus-community prometheus-community/kube-prometheus-stack
kubectl apply -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.22/releases/cnpg-1.22.0.yaml
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts
helm install redis-operator ot-helm/redis-operator
kubectl create namespace kafka
kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka