Merge branch 'main' into jenkins/init

This commit is contained in:
wwweww
2026-05-03 14:53:31 +00:00
60 changed files with 6062 additions and 15162 deletions
+35 -42
@@ -4,63 +4,54 @@
- Docker (buildx required)
- Python 3 (for the build script)
- Git (with submodules; run `git submodule update --init --recursive` on first checkout)
## Usage
```bash
cd deploy/dev
# 1. Build all images (8 in parallel by default; tune via BAKE_BATCH_SIZE)
python3 build.py
# 2. Start everything
docker compose up -d
# 3. Open through the gateway
open http://127.0.0.1:18080
```
The build script scans every `api`, `rpc`, `mq`, and `adapter` entry point under `app/` plus `frontend/`, builds all images in parallel with `docker buildx bake`, and produces `juwan/<service>-<type>:dev` and `juwan/frontend:dev`.
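As a rough illustration of what the `BAKE_BATCH_SIZE` knob controls, the batching can be sketched like this (hypothetical logic, not the actual `build.py`):

```python
import os

# Split discovered bake targets into batches of BAKE_BATCH_SIZE (default 8),
# mirroring how build.py caps build parallelism. Illustrative only.
def batches(targets, size=None):
    size = size or int(os.environ.get("BAKE_BATCH_SIZE", "8"))
    return [targets[i:i + size] for i in range(0, len(targets), size)]

print(batches([f"svc{i}" for i in range(10)]))  # two batches: 8 targets, then 2
```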
End-to-end API tests go through the gateway at `http://127.0.0.1:18080`; ports `18801-18814` are direct per-service ports that bypass the auth chain.
The frontend is vendored as a submodule into compose and served to the browser through Envoy's same-origin fallback route, so it needs no dedicated port.
Chat WebSocket goes through the gateway at `ws://127.0.0.1:18080/ws/chat`; WebTransport uses `18443/udp` and the `/wt/chat` entry point.
To start only a subset of services:
```bash
docker compose up -d postgres redis snowflake player-rpc player-api
```
## Port mappings
| Service | Host port(s) | Notes |
| ---------------- | ----------------------- | ---------------------------------- |
| Envoy Gateway | 18080 | Browser entry point: `/api/*`, `/ws/*`, and frontend static assets are all served from here |
| Redis | 16379 | Shared sessions and verification codes |
| MongoDB | 27017 | chat message persistence |
| Kafka | 19092 | email-mq task queue |
| ratelimit | 18081, 16070 | Rate-limiting service |
| users-api | 18801 | Direct debug port; bypasses the auth chain |
| player-api | 18802 | |
| game-api | 18803 | |
| shop-api | 18804 | |
| order-api | 18805 | |
| wallet-api | 18806 | |
| community-api | 18807 | |
| objectstory-api | 18808 | |
| email-api | 18809 | |
| chat-api | 18810, 18889, 18443/udp | |
| review-api | 18811 | |
| dispute-api | 18812 | |
| notification-api | 18813 | |
| search-api | 18814 | |
The 11 per-domain PostgreSQL instances (`users-db`, `player-db`, `game-db`, `shop-db`, `order-db`, `wallet-db`, `community-db`, `review-db`, `dispute-db`, `notification-db`, `search-db`) and the `frontend` container expose no host ports; they reach each other only via DNS on the compose-internal network.
## Environment variables
@@ -74,17 +65,19 @@ docker compose up -d postgres redis snowflake player-rpc player-api
| ADMIN_PASSWORD | Administrator password | admin123 |
| ADMIN_EMAIL | Administrator email | admin@juwan.dev |
The default admin has the fixed ID `100000`, is enrolled in all three roles (consumer, booster, shop owner), and ships with demo data (shop, services, wallet, posts), so it can drive full end-to-end integration testing out of the box.
## Authentication
Login and registration issue a `JToken` cookie via `users-api`. `envoy-gateway` validates the JWT and injects auth headers, `authz-adapter` performs a second session-state check, and backend services consume only headers such as `x-auth-user-id`.
Write endpoints require first calling `GET /healthz` to obtain the `__Host-XSRF-TOKEN` and `__Host-XSRF-GUARD` cookies, then echoing the token in the `xsrf-token` request header.
Registration and password reset both require calling the verification-code endpoint first to obtain a `requestId`, which must then be sent in the `X-Request-Id` header on the follow-up request.
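The cookie-to-header handoff for write requests can be sketched as follows (a hypothetical client helper, not part of the repo; cookie and header names are the ones documented above):

```python
# Turn the XSRF cookie issued by GET /healthz into the request header that
# write endpoints expect. Illustrative client-side logic only.
def xsrf_headers(cookies: dict) -> dict:
    token = cookies.get("__Host-XSRF-TOKEN")
    if token is None:
        raise ValueError("call GET /healthz first to obtain the XSRF cookies")
    return {"xsrf-token": token}

print(xsrf_headers({"__Host-XSRF-TOKEN": "abc123", "__Host-XSRF-GUARD": "g1"}))
```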
## Database initialization
On first startup, each per-domain PostgreSQL instance runs the DDL mounted from `desc/sql/<domain>/` and imports demo data from `deploy/dev/fixture/<domain>.sql`. For a full reset, delete the volumes and restart:
```bash
docker compose down -v
docker compose up -d
```
-4
@@ -1,4 +0,0 @@
HARBOR_REGISTRY=harbor.example.com
HARBOR_PROJECT=juwan
IMAGE_NAME=st-1-example
IMAGE_TAG=latest
-48
@@ -1,48 +0,0 @@
# Docker server deployment (Gitea Actions)
This setup replaces Jenkins:
1. On `push`, Gitea Actions builds the image and pushes it to Harbor.
2. The same workflow then connects to the server over SSH and runs `docker compose pull && docker compose up -d` to roll out the update.
## 1) Server preparation
Install on the target server:
- Docker Engine
- the Docker Compose plugin (`docker compose version` works)
Then make sure the deploy user has docker permissions:
```bash
sudo usermod -aG docker <deploy-user>
```
## 2) Gitea repository Secrets
Configure the following Secrets in the repository:
- `HARBOR_REGISTRY`: e.g. `harbor.example.com`
- `HARBOR_PROJECT`: e.g. `juwan`
- `HARBOR_USERNAME`
- `HARBOR_PASSWORD`
- `DEPLOY_HOST`: server address
- `DEPLOY_PORT`: optional, defaults to `22`
- `DEPLOY_USER`: SSH user on the server
- `DEPLOY_SSH_KEY`: private key contents (PEM)
- `DEPLOY_PATH`: optional, defaults to `/opt/st-1-example`
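How these secrets combine into the deploy step can be sketched as follows (a hypothetical helper mirroring the workflow's SSH step, not the actual workflow code):

```python
# Render the SSH deploy invocation from the repository Secrets above.
# DEPLOY_PORT and DEPLOY_PATH fall back to their documented defaults.
def deploy_command(secrets: dict) -> list:
    remote = (
        f"cd {secrets.get('DEPLOY_PATH', '/opt/st-1-example')} "
        "&& docker compose pull && docker compose up -d"
    )
    return [
        "ssh",
        "-p", secrets.get("DEPLOY_PORT", "22"),
        f"{secrets['DEPLOY_USER']}@{secrets['DEPLOY_HOST']}",
        remote,
    ]

print(deploy_command({"DEPLOY_HOST": "203.0.113.10", "DEPLOY_USER": "deploy"}))
```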
## 3) Trigger rules
- Build & push: `main/master/dev/feature/**`
- Auto deploy: `main/master` only
To change the branch rules, edit:
- `.gitea/workflows/build-push-harbor.yml`
## 4) Ports and services
Compose file: `deploy/docker/docker-compose.yml`
Default mapping: `8888:8888`; service name: `st-example`
-9
@@ -1,9 +0,0 @@
services:
  st-example:
    image: ${HARBOR_REGISTRY}/${HARBOR_PROJECT}/${IMAGE_NAME}:${IMAGE_TAG:-latest}
    container_name: st-example
    restart: always
    ports:
      - "8888:8888"
    environment:
      TZ: Asia/Shanghai
-184
@@ -1,184 +0,0 @@
# Operator installation and example usage
This document covers two ways to install the Strimzi Operator and the MongoDB Community Operator:
- with Helm
- with kubectl
> Example resource files live under `deploy/example` and default to the `juwan` namespace.
> Make sure your Operator can watch `juwan` first; otherwise change the namespace or adjust the Operator's watch scope.
## 1) Strimzi Operator (Kafka)
### 1.1 Install with Helm
```bash
kubectl create namespace kafka
helm repo add strimzi https://strimzi.io/charts/
helm repo update
helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator -n kafka
```
### 1.2 Install with kubectl
```bash
kubectl create namespace kafka
kubectl apply -f https://strimzi.io/install/latest?namespace=kafka -n kafka
```
### 1.3 Verify the installation
```bash
kubectl get pods -n kafka
kubectl get crd | grep kafka.strimzi.io
```
### 1.4 Apply the Kafka example
```bash
kubectl create namespace juwan
kubectl apply -f deploy/example/kafka-strimzi-example.yaml
kubectl get kafka,kafkatopic,kafkauser -n juwan
```
## 2) MongoDB Community Operator
### 2.1 Install with Helm
```bash
kubectl create namespace mongodb
helm repo add mongodb https://mongodb.github.io/helm-charts
helm repo update
helm install mongodb-kubernetes-operator mongodb/community-operator -n mongodb
```
### 2.2 Install with kubectl
```bash
kubectl create namespace mongodb
kubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-kubernetes-operator/master/config/crd/bases/mongodbcommunity.mongodb.com_mongodbcommunity.yaml
kubectl apply -k https://github.com/mongodb/mongodb-kubernetes-operator/config/rbac/
kubectl apply -k https://github.com/mongodb/mongodb-kubernetes-operator/config/manager/
```
### 2.3 Verify the installation
```bash
kubectl get pods -n mongodb
kubectl get crd | grep mongodbcommunity.mongodb.com
```
### 2.4 Apply the MongoDB example
```bash
kubectl create namespace juwan
kubectl apply -f deploy/example/mongodb-community-example.yaml
kubectl get mongodbcommunity -n juwan
```
## 3) MongoDB: "sentinel" HA and sharded clusters
### 3.1 About the term "sentinel cluster"
MongoDB has no standalone sentinel component like Redis Sentinel.
High availability is built into **replica sets** (automatic failover and recovery).
In a MongoDB context, a "sentinel cluster" therefore maps to an HA replica set.
### 3.2 The MongoDB "sentinel equivalent": an HA replica set
This repository ships an HA replica-set manifest: `deploy/example/mongodb-ha-replicaset-example.yaml`
```bash
kubectl create namespace juwan
kubectl apply -f deploy/example/mongodb-ha-replicaset-example.yaml
kubectl get mongodbcommunity -n juwan
```
Check the replica-set status (exec into any pod):
```bash
kubectl get pods -n juwan
kubectl exec -it -n juwan <mongodb-pod-name> -- mongosh --eval "rs.status()"
```
Production recommendations:
- keep an odd number of members (3/5/7)
- use persistent volumes (PVCs), not ephemeral disks
- schedule members across availability zones (anti-affinity)
- enable backups and monitoring
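The odd-member recommendation follows from election math: electing a primary requires a strict majority of members, so going from 3 to 4 members adds cost but no extra failure tolerance. A quick arithmetic sketch:

```python
# A replica set of n members needs a strict majority (n // 2 + 1) to elect a
# primary, so it tolerates n - (n // 2 + 1) member failures.
def majority(n: int) -> int:
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    return n - majority(n)

for n in (3, 4, 5, 7):
    print(f"{n} members -> survives {tolerated_failures(n)} failure(s)")
```

Three and four members each survive only one failure, which is why the odd sizes 3/5/7 are the recommended configurations.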
### 3.3 Sharded cluster architecture
A sharded cluster has three tiers:
- Config Server replica set (stores shard metadata; 3 nodes recommended)
- shard replica sets (each shard is itself a replica set; 3 nodes per shard recommended)
- mongos (routing tier; the unified entry point exposed to applications)
### 3.4 Building a sharded cluster (kubectl)
> Note: the MongoDB Community Operator is primarily for replica-set management. In community practice, sharded clusters are usually hand-orchestrated (StatefulSet/Service) and initialized with mongosh.
This repository ships a baseline sharded-cluster manifest: `deploy/example/mongodb-sharded-cluster-example.yaml`
```bash
kubectl create namespace juwan
kubectl apply -f deploy/example/mongodb-sharded-cluster-example.yaml
kubectl get pods,svc -n juwan
```
1) Deploy the Config Server replica set (3 nodes)
- StatefulSet + headless Service running `mongod --configsvr --replSet cfg-rs`
2) Deploy the shard replica sets (e.g. `shard1-rs` and `shard2-rs`, 3 nodes each)
- StatefulSet + headless Service running `mongod --shardsvr --replSet <shard-rs-name>`
3) Deploy the mongos routing tier
- a Deployment running `mongos --configdb cfg-rs/<cfg-0>:27019,<cfg-1>:27019,<cfg-2>:27019`
4) Initialize each replica set
```bash
# Initialize the Config Server replica set
kubectl exec -it -n juwan <cfg-pod-0> -- mongosh --port 27019 --eval 'rs.initiate({_id:"cfg-rs",configsvr:true,members:[{_id:0,host:"cfg-0.cfg-svc.juwan.svc.cluster.local:27019"},{_id:1,host:"cfg-1.cfg-svc.juwan.svc.cluster.local:27019"},{_id:2,host:"cfg-2.cfg-svc.juwan.svc.cluster.local:27019"}]})'
# Initialize the shard1 replica set
kubectl exec -it -n juwan <shard1-pod-0> -- mongosh --port 27018 --eval 'rs.initiate({_id:"shard1-rs",members:[{_id:0,host:"shard1-0.shard1-svc.juwan.svc.cluster.local:27018"},{_id:1,host:"shard1-1.shard1-svc.juwan.svc.cluster.local:27018"},{_id:2,host:"shard1-2.shard1-svc.juwan.svc.cluster.local:27018"}]})'
# Initialize the shard2 replica set
kubectl exec -it -n juwan <shard2-pod-0> -- mongosh --port 27018 --eval 'rs.initiate({_id:"shard2-rs",members:[{_id:0,host:"shard2-0.shard2-svc.juwan.svc.cluster.local:27018"},{_id:1,host:"shard2-1.shard2-svc.juwan.svc.cluster.local:27018"},{_id:2,host:"shard2-2.shard2-svc.juwan.svc.cluster.local:27018"}]})'
```
5) Register the shards through mongos and enable sharding
```bash
kubectl exec -it -n juwan <mongos-pod-name> -- mongosh --port 27017 --eval 'sh.addShard("shard1-rs/shard1-0.shard1-svc.juwan.svc.cluster.local:27018,shard1-1.shard1-svc.juwan.svc.cluster.local:27018,shard1-2.shard1-svc.juwan.svc.cluster.local:27018")'
kubectl exec -it -n juwan <mongos-pod-name> -- mongosh --port 27017 --eval 'sh.addShard("shard2-rs/shard2-0.shard2-svc.juwan.svc.cluster.local:27018,shard2-1.shard2-svc.juwan.svc.cluster.local:27018,shard2-2.shard2-svc.juwan.svc.cluster.local:27018")'
kubectl exec -it -n juwan <mongos-pod-name> -- mongosh --port 27017 --eval 'sh.enableSharding("appdb")'
kubectl exec -it -n juwan <mongos-pod-name> -- mongosh --port 27017 --eval 'sh.shardCollection("appdb.user_events", {"userId": "hashed"})'
```
6) Verify the sharding status
```bash
kubectl exec -it -n juwan <mongos-pod-name> -- mongosh --port 27017 --eval 'sh.status()'
```
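The long member host lists in the commands above follow a fixed pattern (`<sts-name>-<ordinal>.<headless-svc>.<namespace>.svc.cluster.local:<port>`), so they can be generated rather than typed by hand. A sketch (hypothetical helper, not part of the repo):

```python
# Build the "rs-name/host:port,host:port,..." string used by the rs.initiate
# member lists and sh.addShard, from the StatefulSet naming convention.
def rs_connection_string(rs_name, sts, svc, port, replicas=3, namespace="juwan"):
    hosts = ",".join(
        f"{sts}-{i}.{svc}.{namespace}.svc.cluster.local:{port}"
        for i in range(replicas)
    )
    return f"{rs_name}/{hosts}"

print(rs_connection_string("cfg-rs", "cfg", "cfg-svc", 27019))
```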
## 4) Uninstall (optional)
### Strimzi (Helm install)
```bash
helm uninstall strimzi-kafka-operator -n kafka
```
### MongoDB Operator (Helm install)
```bash
helm uninstall mongodb-kubernetes-operator -n mongodb
```
-80
@@ -1,80 +0,0 @@
# Strimzi Kafka cluster example
# Prerequisite: the Strimzi Operator is installed and has permission to watch this namespace.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: juwan-kafka
  namespace: juwan # example application namespace
spec:
  kafka:
    version: 3.9.0 # Kafka broker version
    replicas: 1 # fine for dev; use >= 3 in production
    listeners:
      - name: plain
        port: 9092
        type: internal # cluster-internal access only
        tls: false # plaintext listener, convenient for in-cluster debugging
      - name: tls
        port: 9093
        type: internal
        tls: true # TLS listener, recommended for application traffic
    config:
      # single-replica fault-tolerance settings (dev only)
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
    storage:
      type: ephemeral # ephemeral storage, lost on pod rebuild; use persistent-claim in production
  zookeeper:
    replicas: 1 # fine for dev; use >= 3 in production
    storage:
      type: ephemeral
  # enable the Topic/User Operators for declarative topic and account management
  entityOperator:
    topicOperator: {}
    userOperator: {}
---
# example application topic
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: user-events # user-event topic
  namespace: juwan
  labels:
    strimzi.io/cluster: juwan-kafka # name of the owning Kafka cluster
spec:
  partitions: 3 # partition count determines consumer parallelism
  replicas: 1 # replica count; dev-only example
  config:
    retention.ms: 604800000 # 7 days
    segment.bytes: 1073741824 # 1 GiB
---
# Kafka user and ACL example
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: app-producer # application-side producer account
  namespace: juwan
  labels:
    strimzi.io/cluster: juwan-kafka
spec:
  authentication:
    type: tls # generates a Secret with TLS certificate credentials
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: user-events
          patternType: literal
        operations:
          - Read
          - Write
      - resource:
          type: group
          name: app-consumer-group
          patternType: literal
        operations:
          - Read
@@ -1,36 +0,0 @@
# Example password for the MongoDB application user (change it to something
# stronger, or wire it up to external secret management)
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-app-user-password
  namespace: juwan # example application namespace
type: Opaque
stringData:
  password: ChangeMe123456 # plaintext example, for demonstration only
---
# MongoDB Community Operator custom resource example
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: juwan-mongodb
  namespace: juwan
spec:
  members: 3 # replica-set member count; keep it odd in production
  type: ReplicaSet
  version: "7.0.12" # MongoDB version
  security:
    authentication:
      modes:
        - SCRAM # enable username/password authentication
  users:
    - name: app-user # application account
      db: admin
      passwordSecretRef:
        name: mongodb-app-user-password # references the Secret above
      roles:
        - name: readWrite
          db: appdb # grants read/write on appdb
      scramCredentialsSecretName: app-user-scram # credentials Secret generated by the Operator
  additionalMongodConfig:
    # example: use zlib for WiredTiger journal compression
    storage.wiredTiger.engineConfig.journalCompressor: zlib
@@ -1,46 +0,0 @@
# MongoDB high-availability (replica set) example
# Note: MongoDB has no Redis Sentinel component; the replica set IS its HA mechanism.
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-ha-app-user-password
  namespace: juwan
type: Opaque
stringData:
  password: ChangeMe_ReallyStrongPassword
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: juwan-mongodb-ha
  namespace: juwan
spec:
  members: 3
  type: ReplicaSet
  version: "7.0.12"
  # enable persistence in production (adjust storageClassName to your cluster)
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 20Gi
  security:
    authentication:
      modes:
        - SCRAM
  users:
    - name: app-user
      db: admin
      passwordSecretRef:
        name: mongodb-ha-app-user-password
      roles:
        - name: readWrite
          db: appdb
      scramCredentialsSecretName: app-user-scram
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
@@ -1,218 +0,0 @@
# Minimal MongoDB sharded-cluster example (config RS + 2 shard RS + mongos)
# Usage:
#   1) apply this file
#   2) run rs.initiate / sh.addShard / sh.enableSharding as documented
# Note: this example demonstrates structure only; for production, add resource
# limits, anti-affinity, PDBs, backups, and monitoring.
---
apiVersion: v1
kind: Service
metadata:
  name: cfg-svc
  namespace: juwan
spec:
  clusterIP: None
  selector:
    app: mongo-cfg
  ports:
    - name: mongo
      port: 27019
      targetPort: 27019
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cfg
  namespace: juwan
spec:
  serviceName: cfg-svc
  replicas: 3
  selector:
    matchLabels:
      app: mongo-cfg
  template:
    metadata:
      labels:
        app: mongo-cfg
    spec:
      containers:
        - name: mongod
          image: mongo:7.0
          args:
            [
              "--configsvr",
              "--replSet",
              "cfg-rs",
              "--port",
              "27019",
              "--bind_ip_all",
            ]
          ports:
            - containerPort: 27019
              name: mongo
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: shard1-svc
  namespace: juwan
spec:
  clusterIP: None
  selector:
    app: mongo-shard1
  ports:
    - name: mongo
      port: 27018
      targetPort: 27018
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: shard1
  namespace: juwan
spec:
  serviceName: shard1-svc
  replicas: 3
  selector:
    matchLabels:
      app: mongo-shard1
  template:
    metadata:
      labels:
        app: mongo-shard1
    spec:
      containers:
        - name: mongod
          image: mongo:7.0
          args:
            [
              "--shardsvr",
              "--replSet",
              "shard1-rs",
              "--port",
              "27018",
              "--bind_ip_all",
            ]
          ports:
            - containerPort: 27018
              name: mongo
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: shard2-svc
  namespace: juwan
spec:
  clusterIP: None
  selector:
    app: mongo-shard2
  ports:
    - name: mongo
      port: 27018
      targetPort: 27018
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: shard2
  namespace: juwan
spec:
  serviceName: shard2-svc
  replicas: 3
  selector:
    matchLabels:
      app: mongo-shard2
  template:
    metadata:
      labels:
        app: mongo-shard2
    spec:
      containers:
        - name: mongod
          image: mongo:7.0
          args:
            [
              "--shardsvr",
              "--replSet",
              "shard2-rs",
              "--port",
              "27018",
              "--bind_ip_all",
            ]
          ports:
            - containerPort: 27018
              name: mongo
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongos
  namespace: juwan
spec:
  selector:
    app: mongos
  ports:
    - name: mongo
      port: 27017
      targetPort: 27017
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongos
  namespace: juwan
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mongos
  template:
    metadata:
      labels:
        app: mongos
    spec:
      containers:
        - name: mongos
          image: mongo:7.0
          args:
            - "mongos"
            - "--configdb"
            - "cfg-rs/cfg-0.cfg-svc.juwan.svc.cluster.local:27019,cfg-1.cfg-svc.juwan.svc.cluster.local:27019,cfg-2.cfg-svc.juwan.svc.cluster.local:27019"
            - "--bind_ip_all"
            - "--port"
            - "27017"
          ports:
            - containerPort: 27017
              name: mongo
-11
@@ -1,11 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-dx-init-script
  namespace: juwan
  labels:
    app: db-dx-init-script
data:
  init-extensions-sql: |
    create extension if not exists "uuid-ossp";
    create extension if not exists "pg_trgm";