Compare commits

41 Commits

Author SHA1 Message Date
zetaloop 8cf1af8019 feat: add gitea actions cd workflow, drop old harbor one
cd / discover (push) Waiting to run
cd / build (push) Blocked by required conditions
cd / rollout (push) Blocked by required conditions
2026-05-06 15:12:50 +08:00
zetaloop 2e4454ded3 fix(k01): apply-schema list clusters after wait to avoid race
build-and-push-harbor / docker-build-push (push) Waiting to run
2026-05-06 14:21:44 +08:00
zetaloop 92f8344cc2 fix(k01): apply-schema drop owned objects and wait all clusters
2026-05-06 14:02:53 +08:00
zetaloop 68bdb9797b fix(k01): apply-schema use TCP+PGPASSWORD for CNPG peer-auth bypass
2026-05-06 13:21:23 +08:00
zetaloop 50f0846e11 fix(k01): raise redpanda memory to 400Mi/500Mi
2026-05-06 13:17:39 +08:00
zetaloop d6c59b59d4 fix(k01): drop additionalRedpandaCmdFlags to avoid conflict with chart-generated flags
2026-05-06 12:47:25 +08:00
zetaloop fe744ae6c4 fix(k01): enable redpanda developer_mode to skip 1GB memory minimum check
2026-05-06 12:22:45 +08:00
zetaloop e43d2467da fix(k01): drop chartRef.chartVersion from Redpanda CR
2026-05-06 12:13:55 +08:00
zetaloop a25086dcdf fix(k01): relax redpanda operator probes and raise memory limits
2026-05-06 12:05:16 +08:00
zetaloop 6fc320656b feat(k01): replace strimzi kafka with redpanda
2026-05-06 11:39:47 +08:00
zetaloop 1deb5dbdb2 fix(k01): resource requests based on actual usage
2026-05-06 10:24:59 +08:00
zetaloop 6341d746da fix: fuck mongo
2026-05-06 09:48:02 +08:00
zetaloop 50f079d86c fix(k01): replicate operator shell script and wrap exec with setarch for mongod
2026-05-06 09:06:04 +08:00
zetaloop 01f8bc1729 fix(k01): wrap mongod entrypoint with setarch uname-2.6 to bypass kernel 6.19+ guard
2026-05-06 08:32:46 +08:00
zetaloop ed3f80ca73 fix(k01): force glibc pthread rseq for mongo to bypass tcmalloc crash on kernel 6.19+
2026-05-06 07:41:58 +08:00
zetaloop 4d4a16ba1b feat(k01): bump mongo redis images and workaround mongo tcmalloc segfault on newer kernels
2026-05-06 07:18:24 +08:00
zetaloop 92822e9da8 fix(k01): rewrite strimzi namespace via sed before applying manifest 2026-05-06 07:17:40 +08:00
zetaloop 4a8e04d444 fix(k01): shrink postgres redis kafka memory requests
2026-05-06 05:22:32 +08:00
zetaloop b9ff1f043d fix(k01): correct redis wait condition and serialize ratelimit on rl-redis
2026-05-06 04:49:37 +08:00
zetaloop a3174d16d0 fix(k01): scope teardown cleanup to business resources only
2026-05-06 04:38:55 +08:00
zetaloop 4ee866da95 feat(k01): add teardown script for clean reset of data and service layers
2026-05-06 04:32:36 +08:00
zetaloop 513d0dbac2 fix(k01): lower cpu requests so all infra and services fit on 1 vcpu node 2026-05-06 04:32:25 +08:00
zetaloop bc8c5ad152 fix(k01): avoid set -e exit on arithmetic post-increment in apply-infra
2026-05-06 03:52:41 +08:00
zetaloop 8dac0b8d76 fix(k01): apply infra cr documents one by one to avoid scheduler storm
2026-05-06 03:44:54 +08:00
zetaloop da43d9b8f7 fix(k01): set production-grade memory limits across all workloads
2026-05-06 03:24:31 +08:00
zetaloop c575b53843 refactor(k01): flatten directory layout and split deployment into five scripts
2026-05-06 02:37:25 +08:00
zetaloop 8ba8c7ca20 fix(k01): bump kafka crd apiversion to v1 for strimzi 1.0.0
2026-05-06 01:28:07 +08:00
zetaloop 95f3608b4b fix(k01): quote env value with spaces for shell source compatibility
2026-05-06 01:01:24 +08:00
zetaloop 45ade5a6a0 fix(k01): merge secrets.sh into install-k3s.sh and lower operator resource requests
2026-05-06 00:52:10 +08:00
zetaloop 4d93678046 fix(k01): lower operator resource requests to fit single-vcpu node
2026-05-06 00:30:05 +08:00
zetaloop 430cc63eb2 fix(k01): use server-side apply for cnpg and strimzi manifests
2026-05-05 13:35:11 +08:00
zetaloop d9a41c9831 chore: point frontend submodule to new gitea instance
2026-05-05 12:51:39 +08:00
zetaloop e92bdf30d9 feat(k01): add agent join mode to install-k3s.sh 2026-05-05 12:29:56 +08:00
zetaloop c456f3e296 fix(snowflake): support per-replica WorkerId via env for multi-instance StatefulSet 2026-05-05 12:29:47 +08:00
zetaloop cba510c675 docs(k01): fix readme 2026-05-05 12:22:14 +08:00
zetaloop 8697569b81 docs(k01): rewrite readme following center host documentation style 2026-05-05 12:11:30 +08:00
zetaloop 20ca50c127 feat(deploy): add k01 business cluster manifests for k3s with cnpg, strimzi, redis and mongodb operators 2026-05-05 12:08:10 +08:00
zetaloop 2d4dc236e9 fix(center/zot): drop unworkable healthcheck for distroless image 2026-05-05 11:08:04 +08:00
zetaloop f3b12f30f0 feat(center/caddy): add WebTransport reverse-proxy passthrough via PR #7669 fork 2026-05-05 10:54:37 +08:00
zetaloop 2a41969771 feat(deploy): add center host docker compose stack for git, registry and s3 hosting 2026-05-05 08:45:58 +08:00
wwweww d1ff2661d1 fix(k8s): lower cpu/memory requests for dev environment (50m/128Mi) 2026-05-04 13:59:15 +08:00
80 changed files with 5461 additions and 266 deletions
-150
@@ -1,150 +0,0 @@
name: build-and-push-harbor
on:
  push:
    branches:
      - main
      - master
      - dev
      - "feature/**"
  workflow_dispatch:
jobs:
  docker-build-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: http://103.236.53.208:3000/actions/checkout@v4
      - name: Set image tags
        id: vars
        run: |
          echo "short_sha=${GITHUB_SHA::7}" >> "$GITHUB_OUTPUT"
          echo "date_tag=$(date +%Y%m%d%H%M%S)" >> "$GITHUB_OUTPUT"
      # Debug step: verify the secrets are readable (prints only string lengths, never the password)
      - name: Debug Secrets
        run: |
          echo "Registry: ${{ secrets.HARBOR_REGISTRY }}"
          echo "User length: ${#HARBOR_USERNAME}"
          echo "Pass length: ${#HARBOR_PASSWORD}"
        env:
          HARBOR_USERNAME: ${{ secrets.HARBOR_USERNAME }}
          HARBOR_PASSWORD: ${{ secrets.HARBOR_PASSWORD }}
      - name: Login Harbor
        env:
          HARBOR_REGISTRY: ${{ secrets.HARBOR_REGISTRY }}
          HARBOR_USERNAME: ${{ secrets.HARBOR_USERNAME }}
          HARBOR_PASSWORD: ${{ secrets.HARBOR_PASSWORD }}
        run: |
          # Try to log in; docker prints detailed output on failure
          echo "$HARBOR_PASSWORD" | docker login "$HARBOR_REGISTRY" -u "$HARBOR_USERNAME" --password-stdin
      - name: Build and Push Monorepo Services
        env:
          HARBOR_REGISTRY: ${{ secrets.HARBOR_REGISTRY }}
          HARBOR_PROJECT: ${{ secrets.HARBOR_PROJECT }}
          SHORT_SHA: ${{ steps.vars.outputs.short_sha }}
          DATE_TAG: ${{ steps.vars.outputs.date_tag }}
        shell: bash
        run: |
          set -euo pipefail
          # 1. Define the microservice roots to walk (per the project layout:
          #    app/<service>/<type>; only the api, rpc and mq types matter here)
          echo "🔍 Scanning Go-Zero microservices..."
          # Find every api/rpc/mq directory under app
          find app -mindepth 2 -maxdepth 2 -type d \( -name "api" -o -name "rpc" -o -name "mq" \) | sort | while read -r service_dir; do
            # service_dir example: app/community/api
            service_type=$(basename "$service_dir") # api
            parent_dir=$(dirname "$service_dir")    # app/community
            service_name=$(basename "$parent_dir")  # community
            # 2. Locate the entrypoint (main.go or <service_name>.go):
            #    the .go file in this directory containing "package main"
            entry_file=$(grep -l "package main" "$service_dir"/*.go | head -n 1 || true)
            if [[ -z "$entry_file" ]]; then
              echo "⚠️ Skipping $service_dir: no 'package main' entry file found"
              continue
            fi
            # 3. Locate the config file (etc/*.yaml)
            config_file=$(ls "$service_dir/etc/"*.yaml 2>/dev/null | head -n 1 || true)
            if [[ -z "$config_file" ]]; then
              echo "⚠️ Warning $service_name-$service_type: no etc/*.yaml config found; the container may fail to start"
              config_name="config.yaml" # fallback
            else
              config_name=$(basename "$config_file")
            fi
            # Image names: community-api, user-rpc, etc.
            image_name="${service_name}-${service_type}"
            image_ref="$HARBOR_REGISTRY/$HARBOR_PROJECT/$image_name"
            echo "----------------------------------------------------"
            echo "🚀 Build target: $image_name"
            echo "📂 Entry file: $entry_file"
            echo "📄 Config file: $config_name"
            echo "----------------------------------------------------"
            # 4. Generate a Dockerfile on the fly (tuned for the monorepo).
            #    Key point: COPY . . copies the whole repo root, because
            #    Go-Zero services typically depend on ../../common
            cat <<EOF > Dockerfile.tmp
          FROM golang:alpine AS builder
          # Optimization: use the Aliyun Alpine mirror
          RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories && \
              apk update --no-cache && apk add --no-cache tzdata
          WORKDIR /build
          # Optimization: download dependencies first to exploit layer caching
          ENV GOPROXY=https://goproxy.cn,direct
          COPY go.mod go.sum ./
          RUN go mod download
          # Copy all sources (resolves the common dependency)
          COPY . .
          # Build
          RUN go build -ldflags="-s -w" -o /app/main $entry_file
          # Runtime image
          FROM scratch
          COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
          COPY --from=builder /usr/share/zoneinfo/Asia/Shanghai /usr/share/zoneinfo/Asia/Shanghai
          ENV TZ=Asia/Shanghai
          WORKDIR /app
          COPY --from=builder /app/main /app/main
          # Copy the etc config
          COPY $service_dir/etc /app/etc
          # Start command
          CMD ["./main", "-f", "etc/$config_name"]
          EOF
            # 5. Run docker build
            docker build -f Dockerfile.tmp -t "$image_ref:$SHORT_SHA" .
            # Add the other tags
            docker tag "$image_ref:$SHORT_SHA" "$image_ref:$DATE_TAG"
            docker tag "$image_ref:$SHORT_SHA" "$image_ref:latest"
            # 6. Push
            echo "📤 Pushing images..."
            docker push "$image_ref:$SHORT_SHA"
            docker push "$image_ref:$DATE_TAG"
            docker push "$image_ref:latest"
            echo "✅ $image_name done"
            rm -f Dockerfile.tmp
          done
+143
@@ -0,0 +1,143 @@
name: cd
on:
  push:
    branches: [main]
  workflow_dispatch:
env:
  REGISTRY: registry.juwan.xhttp.zip
  REPO: juwan
jobs:
  discover:
    runs-on: ubuntu-latest
    outputs:
      targets: ${{ steps.list.outputs.targets }}
      short_sha: ${{ steps.list.outputs.short_sha }}
    steps:
      - uses: actions/checkout@v4
      - id: list
        shell: bash
        run: |
          set -euo pipefail
          echo "short_sha=${GITHUB_SHA::7}" >> "$GITHUB_OUTPUT"
          python3 - <<'PY' >> "$GITHUB_OUTPUT"
          import json, os
          NAME_OVERRIDE = {
              "users": ("users", "user"),
              "user_verifications": ("user_verifications", "user-verifications"),
          }
          STATEFULSETS = {"snowflake-rpc": "snowflake"}
          targets = []
          for svc in sorted(os.listdir("app")):
              svc_dir = f"app/{svc}"
              if not os.path.isdir(svc_dir):
                  continue
              for sub in sorted(os.listdir(svc_dir)):
                  d = f"{svc_dir}/{sub}"
                  if not os.path.isdir(d) or sub not in ("api", "rpc", "mq", "adapter"):
                      continue
                  img_pre, wl_pre = NAME_OVERRIDE.get(svc, (svc, svc))
                  image = f"{img_pre}-{sub}"
                  workload = STATEFULSETS.get(image, f"{wl_pre}-{sub}")
                  targets.append({"image": image, "dir": d, "workload": workload})
          print("targets=" + json.dumps(targets))
          PY
  build:
    needs: discover
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      max-parallel: 1
      matrix:
        target: ${{ fromJson(needs.discover.outputs.targets) }}
    steps:
      - uses: actions/checkout@v4
      - name: Setup Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: Generate Dockerfile
        shell: bash
        run: |
          set -euo pipefail
          dir='${{ matrix.target.dir }}'
          entry=$(grep -l "package main" "$dir"/*.go | head -n1)
          # Fall back to config.yaml when etc/ has no yaml file
          cfg=$(find "$dir/etc" -maxdepth 1 -name '*.yaml' 2>/dev/null | head -n1)
          cfg=$(basename "${cfg:-config.yaml}")
          cat > Dockerfile.build <<EOF
          FROM golang:1.25-alpine AS builder
          WORKDIR /build
          ENV CGO_ENABLED=0 GOOS=linux
          COPY go.mod go.sum ./
          RUN --mount=type=cache,target=/go/pkg/mod go mod download
          COPY . .
          RUN --mount=type=cache,target=/go/pkg/mod \
              --mount=type=cache,target=/root/.cache/go-build \
              go build -ldflags="-s -w" -o /app/main $entry
          FROM alpine:3.21
          RUN apk add --no-cache ca-certificates tzdata
          ENV TZ=Asia/Shanghai
          WORKDIR /app
          COPY --from=builder /app/main /app/main
          COPY $dir/etc /app/etc
          CMD ["./main", "-f", "etc/$cfg"]
          EOF
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          file: Dockerfile.build
          push: true
          tags: |
            ${{ env.REGISTRY }}/${{ env.REPO }}/${{ matrix.target.image }}:${{ needs.discover.outputs.short_sha }}
            ${{ env.REGISTRY }}/${{ env.REPO }}/${{ matrix.target.image }}:latest
          cache-from: type=registry,ref=${{ env.REGISTRY }}/${{ env.REPO }}/buildcache:${{ matrix.target.image }}
          cache-to: type=registry,ref=${{ env.REGISTRY }}/${{ env.REPO }}/buildcache:${{ matrix.target.image }},mode=max
  rollout:
    needs: [discover, build]
    runs-on: ubuntu-latest
    steps:
      - name: Install kubectl
        run: |
          curl -sLo /usr/local/bin/kubectl \
            "https://dl.k8s.io/release/$(curl -sL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
          chmod +x /usr/local/bin/kubectl
      - name: Rollout k01
        env:
          KUBECONFIG_B64: ${{ secrets.K01_KUBECONFIG }}
          SHA_TAG: ${{ needs.discover.outputs.short_sha }}
          TARGETS: ${{ needs.discover.outputs.targets }}
        shell: bash
        run: |
          set -euo pipefail
          mkdir -p ~/.kube
          echo "$KUBECONFIG_B64" | base64 -d > ~/.kube/config
          chmod 600 ~/.kube/config
          python3 <<'PY'
          import json, subprocess, os
          reg = os.environ["REGISTRY"] + "/" + os.environ["REPO"]
          for t in json.loads(os.environ["TARGETS"]):
              img = t["image"]
              wl = t["workload"]
              kind = "statefulset" if wl == "snowflake" else "deployment"
              ref = f"{reg}/{img}:{os.environ['SHA_TAG']}"
              cmd = ["kubectl", "-n", "juwan", "set", "image", f"{kind}/{wl}", f"{img}={ref}"]
              print(" ".join(cmd))
              subprocess.run(cmd, check=False)
          PY
+5 -1
@@ -119,7 +119,11 @@ dist
# End of https://mrkandreev.name/snippets/gitignore-generator/#Node
DockerFile
/app/*/api/Dockerfile
/app/*/rpc/Dockerfile
/app/*/mq/Dockerfile
/app/*/adapter/Dockerfile
/app/*/test/Dockerfile
.idea
# Go compiled binaries
+1 -1
@@ -1,3 +1,3 @@
[submodule "frontend"]
path = frontend
url = http://103.236.53.208:3000/juwan/juwan-frontend.git
url = https://git.juwan.xhttp.zip/juwan/juwan-frontend.git
+2 -2
@@ -2,5 +2,5 @@ Name: snowflake.rpc
ListenOn: 0.0.0.0:8080
Snowflake:
  DatacenterId: 1
  WorkerId: 0
  DatacenterId: ${SNOWFLAKE_DATACENTER_ID}
  WorkerId: ${SNOWFLAKE_WORKER_ID}
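The diff above swaps the hard-coded Snowflake ids for env placeholders. As a sketch of the per-replica wiring this enables (assumed logic, not shown in this diff): StatefulSet pod names end in an ordinal, so an entrypoint can derive a unique WorkerId from the pod-name suffix.

```shell
# POD_NAME would normally come from the downward API / hostname;
# a sample value is used here so the snippet is self-contained.
POD_NAME="${POD_NAME:-snowflake-2}"
export SNOWFLAKE_DATACENTER_ID="${SNOWFLAKE_DATACENTER_ID:-1}"
export SNOWFLAKE_WORKER_ID="${POD_NAME##*-}"   # ordinal suffix, e.g. 2
echo "datacenter=${SNOWFLAKE_DATACENTER_ID} worker=${SNOWFLAKE_WORKER_ID}"
```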
+2
@@ -0,0 +1,2 @@
GITEA_DOMAIN=git.juwan.xhttp.zip
RUNNER_TOKEN=
+4
@@ -0,0 +1,4 @@
/secrets/
/garage/garage.toml
/zot/htpasswd
/.env
+100
@@ -0,0 +1,100 @@
# Center host deployment
Zot (container registry), Garage (object storage), Gitea (code + Actions runner) and Caddy (HTTPS reverse proxy + business entrypoint) all run on a single center machine under Docker Compose. The business services run on a separate k01 machine; public traffic is reverse-proxied by Caddy on center to the envoy-gateway NodePort on k01.
Reference host: center (Vultr High Frequency / 1 vCPU / 1 GB RAM / 32 GB NVMe).
## Prerequisites
- Docker Engine and Compose v2
- `apache2-utils` (provides the `htpasswd` command, used to generate the bcrypt password for Zot)
- DNS: `git` / `registry` / `s3` / `juwan` A records all pointing at 66.135.5.101, DNS-only (grey cloud, direct connection)
- Firewall inbound rules allowing TCP 80, 443 and UDP 443
## First deployment
```bash
cd deploy/center
# Generate all random passwords and tokens; render garage.toml / zot.htpasswd / .env
bash init.sh
# Start Caddy + Zot + Garage + Gitea
docker compose up -d caddy zot garage gitea
# Create the Gitea admin user
docker compose exec -u git gitea gitea admin user create \
  --username admin \
  --email admin@juwan.xhttp.zip \
  --password "$(cat secrets/gitea-admin-password)" \
  --admin --must-change-password=false
# Open https://git.juwan.xhttp.zip in a browser
# → Site Administration → Actions → Runners → generate a runner token
# → write the token into RUNNER_TOKEN in .env
# → back in the terminal, run:
docker compose up -d runner
# Initialize Garage: create the layout, two buckets and an access key
bash garage/bootstrap.sh
```
`bootstrap.sh` prints the S3 connection info at the end; keep the `S3_ACCESS_KEY` / `S3_SECRET_KEY` for the `objectstory-rpc` service and the CNPG backup configuration on k01.
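The printed block can be captured into a file and loaded as environment variables for later use. A minimal sketch, assuming the output shape shown in `garage/bootstrap.sh` (the key values below are placeholders, not real credentials):

```shell
# Simulate the tail of bootstrap.sh's output (placeholder values).
cat > /tmp/s3.env <<'EOF'
S3_ENDPOINT=https://s3.juwan.xhttp.zip
S3_REGION=garage
S3_ACCESS_KEY=GK_PLACEHOLDER
S3_SECRET_KEY=SECRET_PLACEHOLDER
S3_BUCKET_NAME=juwan-objectstory
EOF
set -a          # export every variable sourced below
. /tmp/s3.env
set +a
echo "endpoint=$S3_ENDPOINT bucket=$S3_BUCKET_NAME"
```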
## Entrypoints
| Subdomain | Content |
| -------------------------- | --------------------------------------------------------------------- |
| `git.juwan.xhttp.zip` | Gitea code hosting |
| `registry.juwan.xhttp.zip` | Zot image registry + built-in zui browser |
| `s3.juwan.xhttp.zip` | Garage S3 API |
| `juwan.xhttp.zip` | Business frontend, proxied by Caddy to the k01 envoy-gateway NodePort |
## Credentials and authentication
`init.sh` writes all passwords into the `secrets/` directory (mode 600, excluded via `.gitignore`). `garage/garage.toml` and `zot/htpasswd` are rendered from templates and likewise untracked.
### Zot
Anonymous users can browse zui and `docker pull` images. Pushing or deleting requires login:
```bash
docker login registry.juwan.xhttp.zip -u admin -p "$(cat secrets/zot-admin-password)"
```
### Gitea
Self-registration is disabled by default. After logging in, the admin creates new users with:
```bash
docker compose exec -u git gitea gitea admin user create \
  --username NAME --email MAIL --password PASS
```
## Runner
Job containers are started through the host's `/var/run/docker.sock`. When a workflow says `runs-on: ubuntu-latest`, the runner pulls `gitea/runner-images:ubuntu-latest-slim` as the ephemeral job environment. `docker build` inside that container talks to the host dockerd, so the resulting images can be pushed straight to the local Zot.
## Routine maintenance
```bash
docker compose restart           # restart everything
docker compose logs -f caddy     # Caddy logs (includes ACME info)
docker compose logs -f runner    # Runner logs (includes job output)
# Full reset: remove all Compose volumes and the local files generated by init.sh
docker compose down -v
rm -rf secrets garage/garage.toml zot/htpasswd
```
Docker volumes holding the persistent data:
| Volume | Content |
| -------------------- | ---------------------------------- |
| `juwan-caddy-data` | ACME certificates |
| `juwan-caddy-config` | Caddy autosaved config |
| `juwan-zot-data` | Container image layers |
| `juwan-garage-meta` | Garage metadata |
| `juwan-garage-data` | S3 object data |
| `juwan-gitea-data` | Git repositories and SQLite database |
| `juwan-runner-data` | Runner registration state |
+78
@@ -0,0 +1,78 @@
{
	email admin@juwan.xhttp.zip
	servers {
		enable_webtransport
	}
}
(common_log) {
	log {
		output stdout
		format console {
			time_format common_log
			time_local
		}
	}
}
(stream_proxy) {
	flush_interval -1
	transport http {
		read_timeout 0
		write_timeout 0
		response_header_timeout 0
	}
}
git.juwan.xhttp.zip {
	import common_log
	request_body {
		max_size 2GB
	}
	reverse_proxy http://gitea:3000 {
		import stream_proxy
	}
}
registry.juwan.xhttp.zip {
	import common_log
	request_body {
		max_size 2GB
	}
	reverse_proxy http://zot:5000 {
		import stream_proxy
	}
}
s3.juwan.xhttp.zip {
	import common_log
	request_body {
		max_size 5GB
	}
	reverse_proxy http://garage:3900 {
		import stream_proxy
	}
}
juwan.xhttp.zip {
	import common_log
	handle /wt/* {
		reverse_proxy https://140.82.15.92:8443 {
			transport http {
				versions 3
				tls_insecure_skip_verify
			}
		}
	}
	handle {
		reverse_proxy http://140.82.15.92:30080 {
			lb_policy round_robin
			health_uri /healthz
			health_interval 10s
			fail_duration 30s
			import stream_proxy
		}
	}
}
+10
@@ -0,0 +1,10 @@
FROM caddy:2.11.2-builder-alpine AS builder
# Build Caddy with PR #7669 (the experimental WebTransport reverse-proxy
# passthrough), taken from tomholford's fork branch webtransport-reverse-proxy.
RUN xcaddy build \
--with github.com/caddyserver/caddy/v2=github.com/tomholford/caddy/v2@webtransport-reverse-proxy
FROM caddy:2.11.2-alpine
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
+113
@@ -0,0 +1,113 @@
services:
  # ==================== Reverse proxy ====================
  caddy:
    build:
      context: ./caddy
    image: juwan-center/caddy:wt
    container_name: juwan-caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy-data:/data
      - caddy-config:/config
    depends_on:
      - gitea
      - zot
      - garage
  # ==================== Container registry ====================
  zot:
    image: ghcr.io/project-zot/zot:v2.1.16
    container_name: juwan-zot
    restart: unless-stopped
    command: ["serve", "/etc/zot/config.json"]
    volumes:
      - ./zot/config.json:/etc/zot/config.json:ro
      - ./zot/htpasswd:/etc/zot/htpasswd:ro
      - zot-data:/var/lib/registry
    expose:
      - "5000"
  # ==================== S3 object storage ====================
  garage:
    image: dxflrs/garage:v2.3.0
    container_name: juwan-garage
    restart: unless-stopped
    command: ["/garage", "server"]
    volumes:
      - ./garage/garage.toml:/etc/garage.toml:ro
      - garage-meta:/var/lib/garage/meta
      - garage-data:/var/lib/garage/data
    expose:
      - "3900"
      - "3901"
      - "3902"
      - "3903"
  # ==================== Git service ====================
  gitea:
    image: docker.gitea.com/gitea:1.26.1
    container_name: juwan-gitea
    restart: unless-stopped
    environment:
      USER_UID: "1000"
      USER_GID: "1000"
      GITEA__database__DB_TYPE: sqlite3
      GITEA__server__DOMAIN: ${GITEA_DOMAIN}
      GITEA__server__ROOT_URL: https://${GITEA_DOMAIN}/
      GITEA__server__PROTOCOL: http
      GITEA__server__HTTP_PORT: "3000"
      GITEA__server__DISABLE_SSH: "true"
      GITEA__service__DISABLE_REGISTRATION: "true"
      GITEA__security__INSTALL_LOCK: "true"
      GITEA__actions__ENABLED: "true"
    volumes:
      - gitea-data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    expose:
      - "3000"
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:3000/api/healthz >/dev/null || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 5
      start_period: 30s
  # ==================== CI/CD runner ====================
  runner:
    image: gitea/act_runner:0.6.1
    container_name: juwan-runner
    restart: unless-stopped
    environment:
      GITEA_INSTANCE_URL: http://gitea:3000
      GITEA_RUNNER_REGISTRATION_TOKEN: ${RUNNER_TOKEN}
      GITEA_RUNNER_NAME: juwan-center
      GITEA_RUNNER_LABELS: ubuntu-latest:docker://docker.gitea.com/runner-images:ubuntu-latest-slim
      CONFIG_FILE: /data/config.yaml
    volumes:
      - runner-data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      gitea:
        condition: service_healthy
volumes:
  caddy-data:
    name: juwan-caddy-data
  caddy-config:
    name: juwan-caddy-config
  zot-data:
    name: juwan-zot-data
  garage-meta:
    name: juwan-garage-meta
  garage-data:
    name: juwan-garage-data
  gitea-data:
    name: juwan-gitea-data
  runner-data:
    name: juwan-runner-data
+31
@@ -0,0 +1,31 @@
#!/usr/bin/env bash
set -euo pipefail
GARAGE="docker compose exec -T garage /garage"
NODE_ID="$($GARAGE node id -q | cut -d@ -f1 | tr -d '\r')"
echo "node id: $NODE_ID"
$GARAGE layout assign -z dc1 -c 10G "$NODE_ID"
$GARAGE layout apply --version 1
$GARAGE bucket create juwan-objectstory
$GARAGE bucket create juwan-pg-backup
KEY_INFO="$($GARAGE key create juwan-app)"
echo "$KEY_INFO"
ACCESS_KEY="$(echo "$KEY_INFO" | awk '/Key ID:/ {print $3}')"
SECRET_KEY="$(echo "$KEY_INFO" | awk '/Secret key:/ {print $3}')"
$GARAGE bucket allow --read --write --owner juwan-objectstory --key juwan-app
$GARAGE bucket allow --read --write --owner juwan-pg-backup --key juwan-app
cat <<EOF
S3_ENDPOINT=https://s3.juwan.xhttp.zip
S3_REGION=garage
S3_ACCESS_KEY=$ACCESS_KEY
S3_SECRET_KEY=$SECRET_KEY
S3_BUCKET_NAME=juwan-objectstory
EOF
+26
@@ -0,0 +1,26 @@
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "sqlite"
replication_factor = 1
consistency_mode = "consistent"
rpc_bind_addr = "[::]:3901"
rpc_public_addr = "garage:3901"
rpc_secret = "@RPC_SECRET@"
[s3_api]
api_bind_addr = "[::]:3900"
s3_region = "garage"
root_domain = ".s3.juwan.xhttp.zip"
[s3_web]
bind_addr = "[::]:3902"
root_domain = ".web.juwan.xhttp.zip"
index = "index.html"
[admin]
api_bind_addr = "[::]:3903"
admin_token = "@ADMIN_TOKEN@"
metrics_token = "@METRICS_TOKEN@"
metrics_require_token = true
+50
@@ -0,0 +1,50 @@
#!/usr/bin/env bash
set -euo pipefail
CENTER_DIR="$(cd "$(dirname "$0")" && pwd)"
cd "$CENTER_DIR"
mkdir -p secrets
chmod 700 secrets
write_secret() {
  local name="$1" value="$2"
  printf '%s\n' "$value" > "secrets/$name"
  chmod 600 "secrets/$name"
}
RPC_SECRET="$(openssl rand -hex 32)"
ADMIN_TOKEN="$(openssl rand -base64 32 | tr -d '\n')"
METRICS_TOKEN="$(openssl rand -base64 32 | tr -d '\n')"
ZOT_PASSWORD="$(openssl rand -hex 16)"
GITEA_PASSWORD="$(openssl rand -hex 16)"
write_secret garage-rpc-secret "$RPC_SECRET"
write_secret garage-admin-token "$ADMIN_TOKEN"
write_secret garage-metrics-token "$METRICS_TOKEN"
write_secret zot-admin-password "$ZOT_PASSWORD"
write_secret gitea-admin-password "$GITEA_PASSWORD"
if [ ! -f .env ]; then
  cp .env.example .env
fi
python3 - "$RPC_SECRET" "$ADMIN_TOKEN" "$METRICS_TOKEN" <<'PY'
import sys, pathlib
rpc, admin, metrics = sys.argv[1:4]
src = pathlib.Path("garage/garage.toml.template").read_text()
out = (src
       .replace("@RPC_SECRET@", rpc)
       .replace("@ADMIN_TOKEN@", admin)
       .replace("@METRICS_TOKEN@", metrics))
pathlib.Path("garage/garage.toml").write_text(out)
PY
htpasswd -bBn admin "$ZOT_PASSWORD" > zot/htpasswd
chmod 600 zot/htpasswd
echo
echo "secrets/ 写入完成,garage/garage.toml、zot/htpasswd 已渲染"
echo
echo "Zot: admin / $ZOT_PASSWORD"
echo "Gitea: admin / $GITEA_PASSWORD"
+50
@@ -0,0 +1,50 @@
{
  "distSpecVersion": "1.1.1",
  "storage": {
    "rootDirectory": "/var/lib/registry",
    "dedupe": true,
    "gc": true,
    "gcDelay": "1h",
    "gcInterval": "24h"
  },
  "http": {
    "address": "0.0.0.0",
    "port": "5000",
    "realm": "zot",
    "compat": ["docker2s2"],
    "auth": {
      "htpasswd": {
        "path": "/etc/zot/htpasswd"
      },
      "failDelay": 5
    },
    "accessControl": {
      "repositories": {
        "**": {
          "anonymousPolicy": ["read"],
          "defaultPolicy": ["read", "create", "update", "delete"]
        }
      },
      "metrics": {
        "users": ["admin"]
      }
    }
  },
  "log": {
    "level": "info"
  },
  "extensions": {
    "search": {
      "enable": true
    },
    "ui": {
      "enable": true
    },
    "metrics": {
      "enable": true,
      "prometheus": {
        "path": "/metrics"
      }
    }
  }
}
+3
@@ -210,6 +210,9 @@ services:
    image: juwan/snowflake-rpc:dev
    container_name: juwan-snowflake
    restart: unless-stopped
    environment:
      SNOWFLAKE_DATACENTER_ID: 1
      SNOWFLAKE_WORKER_ID: 0
  authz-adapter:
    image: juwan/authz-adapter:dev
+24
@@ -0,0 +1,24 @@
REGISTRY_HOST=registry.juwan.xhttp.zip
REGISTRY_USERNAME=admin
REGISTRY_PASSWORD=
JWT_SECRET_KEY=
ADMIN_USERNAME=admin
ADMIN_PASSWORD=
ADMIN_EMAIL=admin@juwan.xhttp.zip
EMAIL_SMTP_HOST=smtp-relay.brevo.com
EMAIL_SMTP_PORT=587
EMAIL_SMTP_USERNAME=
EMAIL_SMTP_PASSWORD=
EMAIL_FROM_ADDRESS=dev@juwan.xhttp.zip
EMAIL_FROM_NAME="Juwan Team"
EMAIL_REPLY_TO=
S3_ENDPOINT=https://s3.juwan.xhttp.zip
S3_ACCESS_KEY=
S3_SECRET_KEY=
S3_BUCKET_NAME=juwan-objectstory
S3_REGION=garage
MONGO_PASSWORD=
+2
@@ -0,0 +1,2 @@
secrets/
.env
+72
@@ -0,0 +1,72 @@
# k01 business cluster
This directory holds the bootstrap configuration for every k3s node of juwan-backend. The public entrypoint is handled by Caddy on center: `/wt/*` goes over UDP straight to chat-api, while every other path is reverse-proxied to envoy-gateway NodePort 30080.
Initialize the first machine as the k3s server using the steps below; machines joining later (k02, k03) only run `install.sh agent`, and all other steps are executed once on the server.
## Prerequisites
- Ubuntu 26.04 LTS, root
- center already deployed; `registry.juwan.xhttp.zip` can both push and pull
- Repository cloned from Gitea: `git clone https://git.juwan.xhttp.zip/juwan/juwan-backend.git`
- `/root/registry-password` holding the zot admin password (`chmod 600`)
- `.env` filled in per `.env.example` (zot admin password, Brevo SMTP, Garage S3 credentials)
If there is no `.env` yet: `cp .env.example .env && nano .env` first, then run `secrets.sh`.
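A quick pre-flight check along these lines (a hypothetical helper, not part of the repo scripts) can catch empty keys before `secrets.sh` runs. The file is created inline here so the snippet is self-contained; in practice you would point it at the real `.env`:

```shell
# Sample .env for the demo; key names come from .env.example, values are placeholders.
cat > /tmp/k01.env <<'EOF'
REGISTRY_PASSWORD=example
EMAIL_SMTP_PASSWORD=example
S3_ACCESS_KEY=GKexample
S3_SECRET_KEY=example
EOF
missing=0
for key in REGISTRY_PASSWORD EMAIL_SMTP_PASSWORD S3_ACCESS_KEY S3_SECRET_KEY; do
  val="$(grep -E "^${key}=" /tmp/k01.env | head -n1 | cut -d= -f2-)"
  if [ -z "$val" ]; then
    echo "missing: $key"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "env ok"
```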
## k3s server initialization
```bash
cd /root/juwan-backend/deploy/k01
bash install.sh          # k3s + Helm + the four operators
bash secrets.sh          # generate all k8s Secrets
bash apply-infra.sh      # data layer + envoy + ratelimit, waiting for Ready in batches
bash apply-schema.sh     # load schema and fixtures into CNPG
bash apply-services.sh   # start the business Deployments
# `bash teardown.sh` tears the data and service layers back down
```
## What runs here
The control plane is a k3s server running four operators (CNPG / Strimzi / Redis / MongoDB) that manage the stateful services.
Data layer: 11 per-domain PostgreSQL Clusters + 12 RedisReplications + 1 MongoDBCommunity + a Strimzi KRaft Kafka.
Business layer: 27 Go services whose images point at `registry.juwan.xhttp.zip/juwan/<name>:latest`, one rpc + api pair per domain, plus snowflake, authz-adapter, email-mq and frontend. Every Deployment carries `imagePullSecrets: registry-creds`, and containerd's `registries.yaml` is configured with the zot admin credentials.
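For reference, a sketch of the shape such a containerd registry config typically takes under k3s (an assumption for illustration; the actual file is generated by the scripts and is not shown in this diff):

```yaml
# Hypothetical /etc/rancher/k3s/registries.yaml (k3s mirrors/configs format);
# the real password comes from /root/registry-password.
mirrors:
  registry.juwan.xhttp.zip:
    endpoint:
      - "https://registry.juwan.xhttp.zip"
configs:
  registry.juwan.xhttp.zip:
    auth:
      username: admin
      password: <zot-admin-password>
```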
email-api shares the user-redis instance with user-rpc, because the registration and password-reset verification-code keys are read and written across both services.
chat-api's WebTransport goes over UDP 8443 via hostPort; the PR 7669 fork of Caddy on center reverse-proxies WebTransport connections to chat-api after the handshake at center.
## Generated Secrets
`secrets.sh` writes random passwords into the `secrets/` directory and `kubectl create secret`s them into the `juwan` namespace. The values that must be filled in by hand are the zot admin password, the Brevo SMTP key and the Garage S3 access key in `.env`.
Once each CNPG Cluster is Ready it auto-generates a `<cluster>-app` Secret (username/password/dbname/host/port); the business pods get their env from these Secrets.
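A sketch of how a pod can consume one of those auto-generated Secrets (the cluster name `user-db` and the env var names are assumptions for illustration; the key names match what the README lists for the CNPG `<cluster>-app` Secret):

```yaml
# Hypothetical container env block referencing the user-db-app Secret.
env:
  - name: PG_HOST
    valueFrom:
      secretKeyRef: { name: user-db-app, key: host }
  - name: PG_USER
    valueFrom:
      secretKeyRef: { name: user-db-app, key: username }
  - name: PG_PASSWORD
    valueFrom:
      secretKeyRef: { name: user-db-app, key: password }
  - name: PG_DBNAME
    valueFrom:
      secretKeyRef: { name: user-db-app, key: dbname }
```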
## Adding nodes
Get the token on the server:
```bash
cat /var/lib/rancher/k3s/server/node-token
```
On the new machine:
```bash
cd /root/juwan-backend/deploy/k01
echo "<zot-admin-password>" > /root/registry-password && chmod 600 /root/registry-password
K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> bash install.sh agent
```
## Routine operations
```bash
kubectl -n juwan get pods -o wide
kubectl -n juwan rollout restart deploy/user-rpc
kubectl -n kafka get kafka,kafkatopic,kafkanodepool
```
+52
@@ -0,0 +1,52 @@
#!/usr/bin/env bash
set -euo pipefail
INFRA_DIR="$(cd "$(dirname "$0")/infra" && pwd)"
export KUBECONFIG="${KUBECONFIG:-/etc/rancher/k3s/k3s.yaml}"
apply_docs() {
  local file="$1" kind="$2" wait_expr="$3" buf i
  i=0
  buf=""
  while IFS= read -r line; do
    if [[ "$line" == "---" ]]; then
      printf '%s\n' "$buf" | kubectl apply -f -
      i=$((i+1))
      echo " ($i) applied"
      buf=""
    else
      [[ -n "$buf" ]] && buf+=$'\n'
      buf+="$line"
    fi
  done < "$file"
  printf '%s\n' "$buf" | kubectl apply -f -
  i=$((i+1))
  echo " ($i) applied"
  if [ -n "$kind" ]; then
    kubectl -n juwan wait --for="$wait_expr" --timeout=900s "$kind" --all || true
  fi
}
echo envoy + ratelimit
kubectl apply -f "${INFRA_DIR}/envoy.yaml"
kubectl apply -f "${INFRA_DIR}/ratelimit.yaml"
kubectl -n juwan wait --for=condition=Ready pod -l app=envoy-gateway --timeout=120s || true
kubectl -n juwan wait --for=condition=Ready pod -l "app in (ratelimit,rl-redis)" --timeout=120s || true
echo redis
apply_docs "${INFRA_DIR}/redis.yaml" "" ""
kubectl -n juwan wait --for=condition=Ready pod -l redis_setup_type=replication --timeout=600s || true
echo postgres
apply_docs "${INFRA_DIR}/postgres.yaml" cluster.postgresql.cnpg.io "condition=Ready"
echo mongo
kubectl apply -f "${INFRA_DIR}/mongo.yaml"
kubectl -n juwan wait --for=jsonpath='{.status.phase}'=Running mongodbcommunity/chat-mongodb --timeout=600s || true
echo kafka
kubectl apply -f "${INFRA_DIR}/kafka.yaml"
kubectl -n redpanda wait --for=condition=Ready redpanda/juwan-kafka --timeout=900s || true
kubectl get pods -A
+59
@@ -0,0 +1,59 @@
#!/usr/bin/env bash
set -euo pipefail
K01_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "$K01_DIR/../.." && pwd)"
SQL_DIR="$REPO_ROOT/desc/sql"
FIXTURE_DIR="$REPO_ROOT/deploy/dev/fixture"
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
domain_dir() {
  case "$1" in
    user) echo users ;;
    *) echo "$1" ;;
  esac
}
psql_exec() {
  local cluster="$1" sql="$2"
  local pw
  pw="$(kubectl -n juwan get secret "${cluster}-app" -o jsonpath='{.data.password}' | base64 -d)"
  kubectl -n juwan exec -i "${cluster}-1" -c postgres -- env PGPASSWORD="$pw" \
    psql -v ON_ERROR_STOP=1 -h 127.0.0.1 -U app -d app <<<"$sql"
}
psql_file() {
  local cluster="$1" file="$2"
  local pw
  pw="$(kubectl -n juwan get secret "${cluster}-app" -o jsonpath='{.data.password}' | base64 -d)"
  kubectl -n juwan exec -i "${cluster}-1" -c postgres -- env PGPASSWORD="$pw" \
    psql -v ON_ERROR_STOP=1 -h 127.0.0.1 -U app -d app < "$file"
}
kubectl -n juwan wait --for=condition=Ready cluster.postgresql.cnpg.io --all --timeout=600s
clusters=()
while IFS= read -r name; do
  [ -n "$name" ] && clusters+=("$name")
done < <(kubectl -n juwan get cluster -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
for cluster in "${clusters[@]}"; do
  domain="${cluster%-db}"
  dir="$(domain_dir "$domain")"
  echo "$cluster"
  psql_exec "$cluster" "DROP OWNED BY app CASCADE;"
  psql_file "$cluster" "$SQL_DIR/common/update_updated_at_column.sql"
  for f in "$SQL_DIR/$dir"/*.sql; do
    [ -f "$f" ] || continue
    echo " $(basename "$f")"
    psql_file "$cluster" "$f"
  done
  if [ -f "$FIXTURE_DIR/$dir.sql" ]; then
    echo " $dir.sql"
    psql_file "$cluster" "$FIXTURE_DIR/$dir.sql"
  fi
done
echo
echo "schema + fixture loaded, ${#clusters[@]} clusters"
+30
@@ -0,0 +1,30 @@
#!/usr/bin/env bash
set -euo pipefail
SVC_DIR="$(cd "$(dirname "$0")/services" && pwd)"
export KUBECONFIG="${KUBECONFIG:-/etc/rancher/k3s/k3s.yaml}"
apply_wait() {
for f in "$@"; do
echo "${f%.yaml}"
kubectl apply -f "${SVC_DIR}/${f}"
done
kubectl -n juwan wait --for=condition=Available deploy --all --timeout=600s || true
}
cd "$SVC_DIR"
apply_wait snowflake.yaml authz-adapter.yaml
domain_files=()
for f in *.yaml; do
case "$f" in
snowflake.yaml|authz-adapter.yaml|chat.yaml|email.yaml|frontend.yaml) ;;
*) domain_files+=("$f") ;;
esac
done
apply_wait "${domain_files[@]}"
apply_wait chat.yaml email.yaml frontend.yaml
kubectl get pods -n juwan
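The three-wave ordering above (infra first, domain services next, leaf services last) reduces to one `case` filter; a self-contained sketch, with the file list invented for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the deploy ordering filter: anything not explicitly listed as
# infra (snowflake, authz-adapter) or leaf (chat, email, frontend) is
# treated as a domain service and applied in the middle wave.
files=(authz-adapter.yaml chat.yaml email.yaml frontend.yaml game.yaml snowflake.yaml user.yaml)
domain_files=()
for f in "${files[@]}"; do
  case "$f" in
    snowflake.yaml|authz-adapter.yaml|chat.yaml|email.yaml|frontend.yaml) ;;
    *) domain_files+=("$f") ;;
  esac
done
printf '%s\n' "${domain_files[@]}"
# only game.yaml and user.yaml survive the filter
```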
@@ -0,0 +1,38 @@
apiVersion: v1
kind: Namespace
metadata:
name: juwan
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: juwan
name: find-endpoints
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: discov-endpoints
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch"]
- apiGroups: ["discovery.k8s.io"]
resources: ["endpointslices"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: find-endpoints-discov-endpoints
subjects:
- kind: ServiceAccount
namespace: juwan
name: find-endpoints
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: discov-endpoints
File diff suppressed because it is too large
@@ -0,0 +1,77 @@
apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
name: juwan-kafka
namespace: redpanda
spec:
clusterSpec:
image:
tag: v26.1.6
fullnameOverride: juwan-kafka
console:
enabled: false
external:
enabled: false
service:
enabled: false
auth:
sasl:
enabled: false
tls:
enabled: false
listeners:
kafka:
port: 9092
authenticationMethod: null
tls:
enabled: false
admin:
tls:
enabled: false
rpc:
tls:
enabled: false
http:
enabled: false
schemaRegistry:
enabled: false
storage:
persistentVolume:
enabled: true
size: 5Gi
storageClass: local-path
resources:
requests:
cpu: 50m
memory: 400Mi
limits:
cpu: 500m
memory: 500Mi
config:
node:
developer_mode: true
statefulset:
replicas: 1
podTemplate:
spec:
affinity:
podAntiAffinity: null
tuning:
tune_aio_events: false
logging:
logLevel: info
usageStats:
enabled: false
---
apiVersion: cluster.redpanda.com/v1alpha2
kind: Topic
metadata:
name: email-task
namespace: redpanda
spec:
partitions: 1
replicationFactor: 1
cluster:
clusterRef:
name: juwan-kafka
@@ -0,0 +1,58 @@
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: chat-mongodb
namespace: juwan
spec:
members: 1
type: ReplicaSet
version: "7.0.32"
security:
authentication:
modes:
- SCRAM
users:
- name: app-user
db: admin
passwordSecretRef:
name: chat-mongodb-app-user-password
roles:
- name: readWrite
db: juwan_chat
scramCredentialsSecretName: chat-mongodb-app-user-scram
additionalMongodConfig:
storage.wiredTiger.engineConfig.journalCompressor: zlib
statefulSet:
spec:
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 30m
memory: 80Mi
limits:
memory: 400Mi
- name: mongodb-agent
resources:
requests:
cpu: 20m
memory: 35Mi
limits:
memory: 100Mi
volumeClaimTemplates:
- metadata:
name: data-volume
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
- metadata:
name: logs-volume
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 256Mi
@@ -0,0 +1,251 @@
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
namespace: juwan
name: user-db
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:18.3-system-trixie
primaryUpdateStrategy: unsupervised
bootstrap:
initdb:
database: app
owner: app
storage:
size: 1Gi
resources:
requests:
cpu: 30m
memory: 50Mi
limits:
memory: 200Mi
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
namespace: juwan
name: player-db
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:18.3-system-trixie
primaryUpdateStrategy: unsupervised
bootstrap:
initdb:
database: app
owner: app
storage:
size: 1Gi
resources:
requests:
cpu: 30m
memory: 50Mi
limits:
memory: 200Mi
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
namespace: juwan
name: game-db
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:18.3-system-trixie
primaryUpdateStrategy: unsupervised
bootstrap:
initdb:
database: app
owner: app
storage:
size: 1Gi
resources:
requests:
cpu: 30m
memory: 50Mi
limits:
memory: 200Mi
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
namespace: juwan
name: shop-db
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:18.3-system-trixie
primaryUpdateStrategy: unsupervised
bootstrap:
initdb:
database: app
owner: app
storage:
size: 1Gi
resources:
requests:
cpu: 30m
memory: 50Mi
limits:
memory: 200Mi
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
namespace: juwan
name: order-db
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:18.3-system-trixie
primaryUpdateStrategy: unsupervised
bootstrap:
initdb:
database: app
owner: app
storage:
size: 1Gi
resources:
requests:
cpu: 30m
memory: 50Mi
limits:
memory: 200Mi
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
namespace: juwan
name: wallet-db
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:18.3-system-trixie
primaryUpdateStrategy: unsupervised
bootstrap:
initdb:
database: app
owner: app
storage:
size: 1Gi
resources:
requests:
cpu: 30m
memory: 50Mi
limits:
memory: 200Mi
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
namespace: juwan
name: community-db
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:18.3-system-trixie
primaryUpdateStrategy: unsupervised
bootstrap:
initdb:
database: app
owner: app
storage:
size: 1Gi
resources:
requests:
cpu: 30m
memory: 50Mi
limits:
memory: 200Mi
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
namespace: juwan
name: review-db
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:18.3-system-trixie
primaryUpdateStrategy: unsupervised
bootstrap:
initdb:
database: app
owner: app
storage:
size: 1Gi
resources:
requests:
cpu: 30m
memory: 50Mi
limits:
memory: 200Mi
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
namespace: juwan
name: dispute-db
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:18.3-system-trixie
primaryUpdateStrategy: unsupervised
bootstrap:
initdb:
database: app
owner: app
storage:
size: 1Gi
resources:
requests:
cpu: 30m
memory: 50Mi
limits:
memory: 200Mi
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
namespace: juwan
name: notification-db
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:18.3-system-trixie
primaryUpdateStrategy: unsupervised
bootstrap:
initdb:
database: app
owner: app
storage:
size: 1Gi
resources:
requests:
cpu: 30m
memory: 50Mi
limits:
memory: 200Mi
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
namespace: juwan
name: search-db
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:18.3-system-trixie
primaryUpdateStrategy: unsupervised
bootstrap:
initdb:
database: app
owner: app
storage:
size: 1Gi
resources:
requests:
cpu: 30m
memory: 50Mi
limits:
memory: 200Mi
@@ -0,0 +1,157 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: ratelimit-config
namespace: juwan
data:
ratelimit.yaml: |
domain: api
descriptors:
- key: generic_key
value: login
descriptors:
- key: remote_address
rate_limit:
unit: MINUTE
requests_per_unit: 10
- key: generic_key
value: register
descriptors:
- key: remote_address
rate_limit:
unit: MINUTE
requests_per_unit: 5
- key: generic_key
value: forgot_password_send
descriptors:
- key: remote_address
rate_limit:
unit: MINUTE
requests_per_unit: 3
- key: generic_key
value: verify_code_send
descriptors:
- key: remote_address
rate_limit:
unit: MINUTE
requests_per_unit: 3
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rl-redis
namespace: juwan
labels:
app: rl-redis
spec:
replicas: 1
selector:
matchLabels:
app: rl-redis
template:
metadata:
labels:
app: rl-redis
spec:
containers:
- name: redis
image: redis:8.6.3-alpine
ports:
- containerPort: 6379
resources:
requests:
cpu: 10m
memory: 10Mi
limits:
memory: 60Mi
---
apiVersion: v1
kind: Service
metadata:
name: rl-redis-svc
namespace: juwan
spec:
ports:
- port: 6379
targetPort: 6379
selector:
app: rl-redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ratelimit
namespace: juwan
labels:
app: ratelimit
spec:
replicas: 1
selector:
matchLabels:
app: ratelimit
template:
metadata:
labels:
app: ratelimit
spec:
initContainers:
- name: wait-rl-redis
image: busybox:1.37
command: ["sh", "-c", "until nc -z rl-redis-svc 6379; do sleep 1; done"]
containers:
- name: ratelimit
image: envoyproxy/ratelimit:fe26676d
command: ["/bin/ratelimit"]
env:
- name: REDIS_SOCKET_TYPE
value: tcp
- name: REDIS_URL
value: rl-redis-svc:6379
- name: USE_STATSD
value: "false"
- name: RUNTIME_ROOT
value: /data
- name: RUNTIME_SUBDIRECTORY
value: ratelimit
- name: RUNTIME_WATCH_ROOT
value: "true"
- name: LOG_LEVEL
value: info
ports:
- containerPort: 8081
name: grpc
- containerPort: 6070
name: debug
volumeMounts:
- name: config
mountPath: /data/ratelimit/config
resources:
requests:
cpu: 10m
memory: 10Mi
limits:
memory: 60Mi
volumes:
- name: config
configMap:
name: ratelimit-config
---
apiVersion: v1
kind: Service
metadata:
name: ratelimit-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8081
targetPort: 8081
- name: debug
port: 6070
targetPort: 6070
selector:
app: ratelimit
@@ -0,0 +1,370 @@
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: user-redis
namespace: juwan
spec:
clusterSize: 1
kubernetesConfig:
image: quay.io/opstree/redis:v8.6.2
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 5m
memory: 10Mi
limits:
memory: 80Mi
redisSecret:
name: user-redis
key: password
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: player-redis
namespace: juwan
spec:
clusterSize: 1
kubernetesConfig:
image: quay.io/opstree/redis:v8.6.2
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 5m
memory: 10Mi
limits:
memory: 80Mi
redisSecret:
name: player-redis
key: password
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: game-redis
namespace: juwan
spec:
clusterSize: 1
kubernetesConfig:
image: quay.io/opstree/redis:v8.6.2
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 5m
memory: 10Mi
limits:
memory: 80Mi
redisSecret:
name: game-redis
key: password
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: shop-redis
namespace: juwan
spec:
clusterSize: 1
kubernetesConfig:
image: quay.io/opstree/redis:v8.6.2
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 5m
memory: 10Mi
limits:
memory: 80Mi
redisSecret:
name: shop-redis
key: password
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: order-redis
namespace: juwan
spec:
clusterSize: 1
kubernetesConfig:
image: quay.io/opstree/redis:v8.6.2
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 5m
memory: 10Mi
limits:
memory: 80Mi
redisSecret:
name: order-redis
key: password
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: wallet-redis
namespace: juwan
spec:
clusterSize: 1
kubernetesConfig:
image: quay.io/opstree/redis:v8.6.2
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 5m
memory: 10Mi
limits:
memory: 80Mi
redisSecret:
name: wallet-redis
key: password
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: community-redis
namespace: juwan
spec:
clusterSize: 1
kubernetesConfig:
image: quay.io/opstree/redis:v8.6.2
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 5m
memory: 10Mi
limits:
memory: 80Mi
redisSecret:
name: community-redis
key: password
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: review-redis
namespace: juwan
spec:
clusterSize: 1
kubernetesConfig:
image: quay.io/opstree/redis:v8.6.2
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 5m
memory: 10Mi
limits:
memory: 80Mi
redisSecret:
name: review-redis
key: password
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: dispute-redis
namespace: juwan
spec:
clusterSize: 1
kubernetesConfig:
image: quay.io/opstree/redis:v8.6.2
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 5m
memory: 10Mi
limits:
memory: 80Mi
redisSecret:
name: dispute-redis
key: password
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: notification-redis
namespace: juwan
spec:
clusterSize: 1
kubernetesConfig:
image: quay.io/opstree/redis:v8.6.2
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 5m
memory: 10Mi
limits:
memory: 80Mi
redisSecret:
name: notification-redis
key: password
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: search-redis
namespace: juwan
spec:
clusterSize: 1
kubernetesConfig:
image: quay.io/opstree/redis:v8.6.2
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 5m
memory: 10Mi
limits:
memory: 80Mi
redisSecret:
name: search-redis
key: password
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: RedisReplication
metadata:
name: chat-redis
namespace: juwan
spec:
clusterSize: 1
kubernetesConfig:
image: quay.io/opstree/redis:v8.6.2
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 5m
memory: 10Mi
limits:
memory: 80Mi
redisSecret:
name: chat-redis
key: password
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
@@ -0,0 +1,127 @@
#!/usr/bin/env bash
set -euo pipefail
REGISTRY_HOST="registry.juwan.xhttp.zip"
CNPG_VERSION="1.29.0"
REDPANDA_OP_VERSION="v26.1.3"
REDIS_OP_VERSION="0.24.0"
MONGODB_OP_VERSION="1.8.0"
MODE="${1:-server}"
if [ "$MODE" != "server" ] && [ "$MODE" != "agent" ]; then
echo "usage: $0 [server|agent]" >&2
exit 1
fi
if [ ! -f /root/registry-password ]; then
echo "need /root/registry-password (zot admin password)" >&2
exit 1
fi
K01_DIR="$(cd "$(dirname "$0")" && pwd)"
write_registries() {
mkdir -p /etc/rancher/k3s
cat > /etc/rancher/k3s/registries.yaml <<EOF
mirrors:
${REGISTRY_HOST}:
endpoint:
- "https://${REGISTRY_HOST}"
configs:
${REGISTRY_HOST}:
auth:
username: admin
password: $(cat /root/registry-password)
EOF
}
if [ "$MODE" = "agent" ]; then
if [ -z "${K3S_URL:-}" ] || [ -z "${K3S_TOKEN:-}" ]; then
echo "agent mode requires K3S_URL and K3S_TOKEN env" >&2
echo " on the server: cat /var/lib/rancher/k3s/server/node-token" >&2
echo " then on agent: K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> $0 agent" >&2
exit 1
fi
write_registries
if ! command -v k3s-agent >/dev/null 2>&1 && ! systemctl is-active --quiet k3s-agent; then
curl -sfL https://get.k3s.io | K3S_URL="$K3S_URL" K3S_TOKEN="$K3S_TOKEN" sh -
else
systemctl restart k3s-agent
fi
echo
echo "k3s agent joined ${K3S_URL}"
exit 0
fi
if ! systemctl is-active --quiet k3s; then
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="--disable=traefik --write-kubeconfig-mode=644" \
sh -
fi
if ! command -v helm >/dev/null 2>&1; then
curl -fsSL https://packages.buildkite.com/helm-linux/helm-debian/gpgkey | \
gpg --dearmor -o /usr/share/keyrings/helm.gpg
echo "deb [signed-by=/usr/share/keyrings/helm.gpg] https://packages.buildkite.com/helm-linux/helm-debian/any/ any main" \
> /etc/apt/sources.list.d/helm-stable-debian.list
apt-get update
apt-get install -y helm
fi
write_registries
systemctl restart k3s
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
until kubectl get nodes >/dev/null 2>&1; do sleep 2; done
kubectl apply -f "${K01_DIR}/base/"
kubectl apply --server-side --force-conflicts -f \
"https://github.com/cloudnative-pg/cloudnative-pg/releases/download/v${CNPG_VERSION}/cnpg-${CNPG_VERSION}.yaml"
kubectl -n cnpg-system set resources deploy/cnpg-controller-manager \
--requests=cpu=30m,memory=40Mi --limits=cpu=200m,memory=200Mi
kubectl create namespace redpanda 2>/dev/null || true
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/ 2>/dev/null || true
helm repo add mongodb https://mongodb.github.io/helm-charts 2>/dev/null || true
helm repo add redpanda https://charts.redpanda.com 2>/dev/null || true
helm repo update
helm upgrade --install redpanda-controller redpanda/operator \
--version "${REDPANDA_OP_VERSION}" \
--namespace redpanda \
--set crds.enabled=true \
--set resources.requests.cpu=30m \
--set resources.requests.memory=100Mi \
--set resources.limits.cpu=500m \
--set resources.limits.memory=300Mi \
--set-json 'livenessProbe={"initialDelaySeconds":30,"periodSeconds":60,"timeoutSeconds":10,"failureThreshold":5}' \
--set-json 'readinessProbe={"initialDelaySeconds":15,"periodSeconds":30,"timeoutSeconds":10,"failureThreshold":5}'
helm upgrade --install redis-operator ot-helm/redis-operator \
--version "${REDIS_OP_VERSION}" \
--namespace redis-operator --create-namespace \
--set resources.requests.cpu=20m \
--set resources.requests.memory=30Mi \
--set resources.limits.cpu=500m \
--set resources.limits.memory=150Mi
helm upgrade --install mongodb-kubernetes mongodb/mongodb-kubernetes \
--version "${MONGODB_OP_VERSION}" \
--namespace mongodb-operator --create-namespace \
--set operator.watchNamespace=juwan \
--set operator.resources.requests.cpu=30m \
--set operator.resources.requests.memory=50Mi \
--set operator.resources.limits.cpu=500m \
--set operator.resources.limits.memory=200Mi
kubectl -n cnpg-system rollout status deploy/cnpg-controller-manager --timeout=300s
kubectl -n redpanda rollout status deploy/redpanda-controller-operator --timeout=300s
kubectl -n redis-operator rollout status deploy/redis-operator --timeout=300s
kubectl -n mongodb-operator rollout status deploy/mongodb-kubernetes-operator --timeout=300s
echo
echo "k3s server + 4 operators ready"
echo "node token: $(cat /var/lib/rancher/k3s/server/node-token)"
@@ -0,0 +1,93 @@
#!/usr/bin/env bash
set -euo pipefail
K01_DIR="$(cd "$(dirname "$0")" && pwd)"
cd "$K01_DIR"
if [ ! -f .env ]; then
echo ".env not found, copy from .env.example and fill in" >&2
exit 1
fi
set -a
. ./.env
set +a
mkdir -p secrets
chmod 700 secrets
write_secret() {
local name="$1" value="$2"
printf '%s\n' "$value" > "secrets/$name"
chmod 600 "secrets/$name"
}
JWT_SECRET_KEY="${JWT_SECRET_KEY:-$(openssl rand -hex 32)}"
ADMIN_PASSWORD="${ADMIN_PASSWORD:-$(openssl rand -hex 16)}"
write_secret jwt-secret "$JWT_SECRET_KEY"
write_secret admin-password "$ADMIN_PASSWORD"
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl -n juwan create secret docker-registry registry-creds \
--docker-server="${REGISTRY_HOST}" \
--docker-username="${REGISTRY_USERNAME}" \
--docker-password="${REGISTRY_PASSWORD}" \
--dry-run=client -o yaml | kubectl apply -f -
kubectl -n juwan create secret generic jwt-secret \
--from-literal=secret-key="$JWT_SECRET_KEY" \
--dry-run=client -o yaml | kubectl apply -f -
kubectl -n juwan create secret generic admin-bootstrap \
--from-literal=username="${ADMIN_USERNAME}" \
--from-literal=password="$ADMIN_PASSWORD" \
--from-literal=email="${ADMIN_EMAIL}" \
--dry-run=client -o yaml | kubectl apply -f -
kubectl -n juwan create secret generic email-smtp \
--from-literal=host="${EMAIL_SMTP_HOST}" \
--from-literal=port="${EMAIL_SMTP_PORT}" \
--from-literal=username="${EMAIL_SMTP_USERNAME}" \
--from-literal=password="${EMAIL_SMTP_PASSWORD}" \
--from-literal=from-address="${EMAIL_FROM_ADDRESS}" \
--from-literal=from-name="${EMAIL_FROM_NAME}" \
--from-literal=reply-to="${EMAIL_REPLY_TO:-}" \
--dry-run=client -o yaml | kubectl apply -f -
kubectl -n juwan create secret generic objectstory-s3 \
--from-literal=endpoint="${S3_ENDPOINT}" \
--from-literal=access-key="${S3_ACCESS_KEY}" \
--from-literal=secret-key="${S3_SECRET_KEY}" \
--from-literal=bucket="${S3_BUCKET_NAME}" \
--from-literal=region="${S3_REGION}" \
--dry-run=client -o yaml | kubectl apply -f -
DEV_CERTS="$(cd "$K01_DIR/../dev/certs" && pwd)"
kubectl -n juwan create secret tls chat-wt-tls \
--cert="${DEV_CERTS}/tls.crt" \
--key="${DEV_CERTS}/tls.key" \
--dry-run=client -o yaml | kubectl apply -f -
DOMAINS=()
while IFS= read -r name; do
DOMAINS+=("${name%-redis}")
done < <(grep -E '^ name: [a-z-]+-redis$' "$K01_DIR/infra/redis.yaml" | awk '{print $2}')
for d in "${DOMAINS[@]}"; do
pwd_val="$(openssl rand -hex 16)"
write_secret "redis-${d}-password" "$pwd_val"
kubectl -n juwan create secret generic "${d}-redis" \
--from-literal=password="$pwd_val" \
--dry-run=client -o yaml | kubectl apply -f -
done
MONGO_PASSWORD="${MONGO_PASSWORD:-$(openssl rand -hex 16)}"
write_secret mongo-password "$MONGO_PASSWORD"
kubectl -n juwan create secret generic chat-mongodb-app-user-password \
--from-literal=password="$MONGO_PASSWORD" \
--dry-run=client -o yaml | kubectl apply -f -
echo
echo "secrets/ written, k8s Secrets applied to namespace juwan"
echo "admin password: $ADMIN_PASSWORD"
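The redis-domain discovery above (grep the manifest for `name: <domain>-redis` entries, strip the suffix) can be exercised without a cluster; a minimal sketch whose two-entry sample manifest is invented for illustration:

```shell
#!/usr/bin/env bash
# Standalone sketch of how make-secrets discovers redis domains: grep the
# manifest for "  name: <domain>-redis" lines and strip the "-redis" suffix.
manifest="$(mktemp)"
cat > "$manifest" <<'EOF'
metadata:
  name: user-redis
---
metadata:
  name: chat-redis
EOF
domains=()
while IFS= read -r name; do
  domains+=("${name%-redis}")
done < <(grep -E '^  name: [a-z-]+-redis$' "$manifest" | awk '{print $2}')
printf '%s\n' "${domains[@]}"
rm -f "$manifest"
# yields the bare domain names: user, chat
```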
@@ -0,0 +1,50 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: authz-adapter
namespace: juwan
labels:
app: authz-adapter
spec:
replicas: 1
selector:
matchLabels:
app: authz-adapter
template:
metadata:
labels:
app: authz-adapter
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: authz-adapter
image: registry.juwan.xhttp.zip/juwan/authz-adapter:latest
ports:
- name: grpc
containerPort: 9002
env:
- name: LISTEN_ON
value: "0.0.0.0:9002"
- name: USER_RPC_TARGET
value: "user-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: authz-adapter-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 9002
targetPort: 9002
selector:
app: authz-adapter
@@ -0,0 +1,99 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: chat-api
namespace: juwan
labels:
app: chat-api
spec:
replicas: 1
selector:
matchLabels:
app: chat-api
template:
metadata:
labels:
app: chat-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: chat-api
image: registry.juwan.xhttp.zip/juwan/chat-api:latest
ports:
- name: http
containerPort: 8888
- name: ws
containerPort: 8889
- name: wt
containerPort: 8443
hostPort: 8443
protocol: UDP
- name: metrics
containerPort: 4001
env:
- name: MONGO_PASSWORD
valueFrom:
secretKeyRef:
name: chat-mongodb-app-user-password
key: password
- name: MONGO_URI
value: "mongodb://app-user:$(MONGO_PASSWORD)@chat-mongodb-0.chat-mongodb-svc.juwan.svc.cluster.local:27017/juwan_chat?replicaSet=chat-mongodb&authSource=admin"
- name: MONGO_DATABASE
value: juwan_chat
- name: REDIS_HOST
value: chat-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: chat-redis
key: password
- name: JWT_SECRET_KEY
valueFrom:
secretKeyRef:
name: jwt-secret
key: secret-key
- name: CHAT_WT_CERT_FILE
value: "/etc/certs/tls.crt"
- name: CHAT_WT_KEY_FILE
value: "/etc/certs/tls.key"
volumeMounts:
- name: certs
mountPath: /etc/certs
readOnly: true
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
volumes:
- name: certs
secret:
secretName: chat-wt-tls
---
apiVersion: v1
kind: Service
metadata:
name: chat-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: ws
port: 8889
targetPort: 8889
- name: wt
port: 8443
targetPort: 8443
protocol: UDP
- name: metrics
port: 4001
targetPort: 4001
selector:
app: chat-api
@@ -0,0 +1,140 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: community-rpc
namespace: juwan
labels:
app: community-rpc
spec:
replicas: 1
selector:
matchLabels:
app: community-rpc
template:
metadata:
labels:
app: community-rpc
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: community-rpc
image: registry.juwan.xhttp.zip/juwan/community-rpc:latest
ports:
- name: grpc
containerPort: 8080
- name: metrics
containerPort: 4001
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: community-db-app
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: community-db-app
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: community-db-app
key: dbname
- name: DB_HOST
value: community-db-rw.juwan
- name: DB_HOST_RO
value: community-db-ro.juwan
- name: DB_PORT
value: "5432"
- name: REDIS_HOST
value: community-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: community-redis
key: password
- name: SNOWFLAKE_RPC_TARGET
value: "snowflake-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: community-rpc-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
- name: metrics
port: 4001
targetPort: 4001
selector:
app: community-rpc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: community-api
namespace: juwan
labels:
app: community-api
spec:
replicas: 1
selector:
matchLabels:
app: community-api
template:
metadata:
labels:
app: community-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: community-api
image: registry.juwan.xhttp.zip/juwan/community-api:latest
ports:
- name: http
containerPort: 8888
- name: metrics
containerPort: 4001
env:
- name: COMMUNITY_RPC_TARGET
value: "community-rpc-svc.juwan:8080"
- name: USER_RPC_TARGET
value: "user-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: community-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: metrics
port: 4001
targetPort: 4001
selector:
app: community-api
@@ -0,0 +1,142 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: dispute-rpc
namespace: juwan
labels:
app: dispute-rpc
spec:
replicas: 1
selector:
matchLabels:
app: dispute-rpc
template:
metadata:
labels:
app: dispute-rpc
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: dispute-rpc
image: registry.juwan.xhttp.zip/juwan/dispute-rpc:latest
ports:
- name: grpc
containerPort: 8080
- name: metrics
containerPort: 4001
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: dispute-db-app
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: dispute-db-app
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: dispute-db-app
key: dbname
- name: DB_HOST
value: dispute-db-rw.juwan
- name: DB_HOST_RO
value: dispute-db-ro.juwan
- name: DB_PORT
value: "5432"
- name: REDIS_HOST
value: dispute-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: dispute-redis
key: password
- name: SNOWFLAKE_RPC_TARGET
value: "snowflake-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: dispute-rpc-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
- name: metrics
port: 4001
targetPort: 4001
selector:
app: dispute-rpc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dispute-api
namespace: juwan
labels:
app: dispute-api
spec:
replicas: 1
selector:
matchLabels:
app: dispute-api
template:
metadata:
labels:
app: dispute-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: dispute-api
image: registry.juwan.xhttp.zip/juwan/dispute-api:latest
ports:
- name: http
containerPort: 8888
- name: metrics
containerPort: 4001
env:
- name: DISPUTE_RPC_TARGET
value: "dispute-rpc-svc.juwan:8080"
- name: ORDER_RPC_TARGET
value: "order-rpc-svc.juwan:8080"
- name: PLAYER_RPC_TARGET
value: "player-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: dispute-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: metrics
port: 4001
targetPort: 4001
selector:
app: dispute-api
@@ -0,0 +1,147 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: email-api
namespace: juwan
labels:
app: email-api
spec:
replicas: 1
selector:
matchLabels:
app: email-api
template:
metadata:
labels:
app: email-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: email-api
image: registry.juwan.xhttp.zip/juwan/email-api:latest
ports:
- name: http
containerPort: 8888
- name: metrics
containerPort: 4001
env:
- name: REDIS_HOST
value: user-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: user-redis
key: password
- name: KAFKA_BROKER
value: "juwan-kafka-0.juwan-kafka.redpanda:9092"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: email-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: metrics
port: 4001
targetPort: 4001
selector:
app: email-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: email-mq
namespace: juwan
labels:
app: email-mq
spec:
replicas: 1
selector:
matchLabels:
app: email-mq
template:
metadata:
labels:
app: email-mq
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: email-mq
image: registry.juwan.xhttp.zip/juwan/email-mq:latest
ports:
- name: metrics
containerPort: 4001
env:
- name: KAFKA_BROKER
value: "juwan-kafka-0.juwan-kafka.redpanda:9092"
- name: EMAIL_SMTP_HOST
valueFrom:
secretKeyRef:
name: email-smtp
key: host
- name: EMAIL_SMTP_PORT
valueFrom:
secretKeyRef:
name: email-smtp
key: port
- name: EMAIL_SMTP_USERNAME
valueFrom:
secretKeyRef:
name: email-smtp
key: username
- name: EMAIL_SMTP_PASSWORD
valueFrom:
secretKeyRef:
name: email-smtp
key: password
- name: EMAIL_FROM_ADDRESS
valueFrom:
secretKeyRef:
name: email-smtp
key: from-address
- name: EMAIL_FROM_NAME
valueFrom:
secretKeyRef:
name: email-smtp
key: from-name
- name: EMAIL_REPLY_TO
valueFrom:
secretKeyRef:
name: email-smtp
key: reply-to
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: email-mq-svc
namespace: juwan
spec:
ports:
- name: metrics
port: 4001
targetPort: 4001
selector:
app: email-mq
@@ -0,0 +1,45 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
namespace: juwan
labels:
app: frontend
spec:
replicas: 1
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: frontend
image: registry.juwan.xhttp.zip/juwan/frontend:latest
ports:
- name: http
containerPort: 3000
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: frontend-svc
namespace: juwan
spec:
ports:
- name: http
port: 3000
targetPort: 3000
selector:
app: frontend
@@ -0,0 +1,138 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: game-rpc
namespace: juwan
labels:
app: game-rpc
spec:
replicas: 1
selector:
matchLabels:
app: game-rpc
template:
metadata:
labels:
app: game-rpc
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: game-rpc
image: registry.juwan.xhttp.zip/juwan/game-rpc:latest
ports:
- name: grpc
containerPort: 8080
- name: metrics
containerPort: 4001
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: game-db-app
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: game-db-app
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: game-db-app
key: dbname
- name: DB_HOST
value: game-db-rw.juwan
- name: DB_HOST_RO
value: game-db-ro.juwan
- name: DB_PORT
value: "5432"
- name: REDIS_HOST
value: game-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: game-redis
key: password
- name: SNOWFLAKE_RPC_TARGET
value: "snowflake-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: game-rpc-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
- name: metrics
port: 4001
targetPort: 4001
selector:
app: game-rpc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: game-api
namespace: juwan
labels:
app: game-api
spec:
replicas: 1
selector:
matchLabels:
app: game-api
template:
metadata:
labels:
app: game-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: game-api
image: registry.juwan.xhttp.zip/juwan/game-api:latest
ports:
- name: http
containerPort: 8888
- name: metrics
containerPort: 4001
env:
- name: GAME_RPC_TARGET
value: "game-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: game-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: metrics
port: 4001
targetPort: 4001
selector:
app: game-api
@@ -0,0 +1,138 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: notification-rpc
namespace: juwan
labels:
app: notification-rpc
spec:
replicas: 1
selector:
matchLabels:
app: notification-rpc
template:
metadata:
labels:
app: notification-rpc
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: notification-rpc
image: registry.juwan.xhttp.zip/juwan/notification-rpc:latest
ports:
- name: grpc
containerPort: 8080
- name: metrics
containerPort: 4001
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: notification-db-app
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: notification-db-app
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: notification-db-app
key: dbname
- name: DB_HOST
value: notification-db-rw.juwan
- name: DB_HOST_RO
value: notification-db-ro.juwan
- name: DB_PORT
value: "5432"
- name: REDIS_HOST
value: notification-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: notification-redis
key: password
- name: SNOWFLAKE_RPC_TARGET
value: "snowflake-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: notification-rpc-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
- name: metrics
port: 4001
targetPort: 4001
selector:
app: notification-rpc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: notification-api
namespace: juwan
labels:
app: notification-api
spec:
replicas: 1
selector:
matchLabels:
app: notification-api
template:
metadata:
labels:
app: notification-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: notification-api
image: registry.juwan.xhttp.zip/juwan/notification-api:latest
ports:
- name: http
containerPort: 8888
- name: metrics
containerPort: 4001
env:
- name: NOTIFICATION_RPC_TARGET
value: "notification-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: notification-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: metrics
port: 4001
targetPort: 4001
selector:
app: notification-api
@@ -0,0 +1,131 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: objectstory-rpc
namespace: juwan
labels:
app: objectstory-rpc
spec:
replicas: 1
selector:
matchLabels:
app: objectstory-rpc
template:
metadata:
labels:
app: objectstory-rpc
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: objectstory-rpc
image: registry.juwan.xhttp.zip/juwan/objectstory-rpc:latest
ports:
- name: grpc
containerPort: 8080
- name: metrics
containerPort: 4001
env:
- name: S3_ENDPOINT
valueFrom:
secretKeyRef:
name: objectstory-s3
key: endpoint
- name: S3_ACCESS_KEY
valueFrom:
secretKeyRef:
name: objectstory-s3
key: access-key
- name: S3_SECRET_KEY
valueFrom:
secretKeyRef:
name: objectstory-s3
key: secret-key
- name: S3_BUCKET_NAME
valueFrom:
secretKeyRef:
name: objectstory-s3
key: bucket
- name: S3_REGION
valueFrom:
secretKeyRef:
name: objectstory-s3
key: region
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: objectstory-rpc-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
- name: metrics
port: 4001
targetPort: 4001
selector:
app: objectstory-rpc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: objectstory-api
namespace: juwan
labels:
app: objectstory-api
spec:
replicas: 1
selector:
matchLabels:
app: objectstory-api
template:
metadata:
labels:
app: objectstory-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: objectstory-api
image: registry.juwan.xhttp.zip/juwan/objectstory-api:latest
ports:
- name: http
containerPort: 8888
- name: metrics
containerPort: 4001
env:
- name: OBJECTSTORY_RPC_TARGET
value: "objectstory-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: objectstory-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: metrics
port: 4001
targetPort: 4001
selector:
app: objectstory-api
@@ -0,0 +1,142 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: order-rpc
namespace: juwan
labels:
app: order-rpc
spec:
replicas: 1
selector:
matchLabels:
app: order-rpc
template:
metadata:
labels:
app: order-rpc
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: order-rpc
image: registry.juwan.xhttp.zip/juwan/order-rpc:latest
ports:
- name: grpc
containerPort: 8080
- name: metrics
containerPort: 4001
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: order-db-app
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: order-db-app
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: order-db-app
key: dbname
- name: DB_HOST
value: order-db-rw.juwan
- name: DB_HOST_RO
value: order-db-ro.juwan
- name: DB_PORT
value: "5432"
- name: REDIS_HOST
value: order-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: order-redis
key: password
- name: SNOWFLAKE_RPC_TARGET
value: "snowflake-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: order-rpc-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
- name: metrics
port: 4001
targetPort: 4001
selector:
app: order-rpc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: order-api
namespace: juwan
labels:
app: order-api
spec:
replicas: 1
selector:
matchLabels:
app: order-api
template:
metadata:
labels:
app: order-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: order-api
image: registry.juwan.xhttp.zip/juwan/order-api:latest
ports:
- name: http
containerPort: 8888
- name: metrics
containerPort: 4001
env:
- name: ORDER_RPC_TARGET
value: "order-rpc-svc.juwan:8080"
- name: PLAYER_RPC_TARGET
value: "player-rpc-svc.juwan:8080"
- name: SHOP_RPC_TARGET
value: "shop-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: order-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: metrics
port: 4001
targetPort: 4001
selector:
app: order-api
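The `*_RPC_TARGET` values in these manifests all follow one convention: `<service>-svc.<namespace>:<port>`, with the env var name derived from the service name. A small sketch of that convention (helper names are illustrative, not part of the repo):

```python
def rpc_target(service: str, namespace: str = "juwan", port: int = 8080) -> str:
    """Cluster-DNS target as used in the manifests,
    e.g. order-rpc -> order-rpc-svc.juwan:8080."""
    return f"{service}-svc.{namespace}:{port}"

def env_name(service: str) -> str:
    """Env var name derived from the service name,
    e.g. order-rpc -> ORDER_RPC_TARGET."""
    return service.replace("-", "_").upper() + "_TARGET"
```

Keeping the mapping mechanical like this is what lets every API deployment reference its RPC backends without any per-service lookup table.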
@@ -0,0 +1,140 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: player-rpc
namespace: juwan
labels:
app: player-rpc
spec:
replicas: 1
selector:
matchLabels:
app: player-rpc
template:
metadata:
labels:
app: player-rpc
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: player-rpc
image: registry.juwan.xhttp.zip/juwan/player-rpc:latest
ports:
- name: grpc
containerPort: 8080
- name: metrics
containerPort: 4001
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: player-db-app
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: player-db-app
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: player-db-app
key: dbname
- name: DB_HOST
value: player-db-rw.juwan
- name: DB_HOST_RO
value: player-db-ro.juwan
- name: DB_PORT
value: "5432"
- name: REDIS_HOST
value: player-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: player-redis
key: password
- name: SNOWFLAKE_RPC_TARGET
value: "snowflake-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: player-rpc-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
- name: metrics
port: 4001
targetPort: 4001
selector:
app: player-rpc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: player-api
namespace: juwan
labels:
app: player-api
spec:
replicas: 1
selector:
matchLabels:
app: player-api
template:
metadata:
labels:
app: player-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: player-api
image: registry.juwan.xhttp.zip/juwan/player-api:latest
ports:
- name: http
containerPort: 8888
- name: metrics
containerPort: 4001
env:
- name: PLAYER_RPC_TARGET
value: "player-rpc-svc.juwan:8080"
- name: USER_RPC_TARGET
value: "user-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: player-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: metrics
port: 4001
targetPort: 4001
selector:
app: player-api
@@ -0,0 +1,142 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: review-rpc
namespace: juwan
labels:
app: review-rpc
spec:
replicas: 1
selector:
matchLabels:
app: review-rpc
template:
metadata:
labels:
app: review-rpc
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: review-rpc
image: registry.juwan.xhttp.zip/juwan/review-rpc:latest
ports:
- name: grpc
containerPort: 8080
- name: metrics
containerPort: 4001
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: review-db-app
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: review-db-app
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: review-db-app
key: dbname
- name: DB_HOST
value: review-db-rw.juwan
- name: DB_HOST_RO
value: review-db-ro.juwan
- name: DB_PORT
value: "5432"
- name: REDIS_HOST
value: review-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: review-redis
key: password
- name: SNOWFLAKE_RPC_TARGET
value: "snowflake-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: review-rpc-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
- name: metrics
port: 4001
targetPort: 4001
selector:
app: review-rpc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: review-api
namespace: juwan
labels:
app: review-api
spec:
replicas: 1
selector:
matchLabels:
app: review-api
template:
metadata:
labels:
app: review-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: review-api
image: registry.juwan.xhttp.zip/juwan/review-api:latest
ports:
- name: http
containerPort: 8888
- name: metrics
containerPort: 4001
env:
- name: ORDER_RPC_TARGET
value: "order-rpc-svc.juwan:8080"
- name: PLAYER_RPC_TARGET
value: "player-rpc-svc.juwan:8080"
- name: REVIEW_RPC_TARGET
value: "review-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: review-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: metrics
port: 4001
targetPort: 4001
selector:
app: review-api
@@ -0,0 +1,138 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: search-rpc
namespace: juwan
labels:
app: search-rpc
spec:
replicas: 1
selector:
matchLabels:
app: search-rpc
template:
metadata:
labels:
app: search-rpc
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: search-rpc
image: registry.juwan.xhttp.zip/juwan/search-rpc:latest
ports:
- name: grpc
containerPort: 8080
- name: metrics
containerPort: 4001
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: search-db-app
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: search-db-app
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: search-db-app
key: dbname
- name: DB_HOST
value: search-db-rw.juwan
- name: DB_HOST_RO
value: search-db-ro.juwan
- name: DB_PORT
value: "5432"
- name: REDIS_HOST
value: search-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: search-redis
key: password
- name: SNOWFLAKE_RPC_TARGET
value: "snowflake-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: search-rpc-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
- name: metrics
port: 4001
targetPort: 4001
selector:
app: search-rpc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: search-api
namespace: juwan
labels:
app: search-api
spec:
replicas: 1
selector:
matchLabels:
app: search-api
template:
metadata:
labels:
app: search-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: search-api
image: registry.juwan.xhttp.zip/juwan/search-api:latest
ports:
- name: http
containerPort: 8888
- name: metrics
containerPort: 4001
env:
- name: SEARCH_RPC_TARGET
value: "search-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: search-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: metrics
port: 4001
targetPort: 4001
selector:
app: search-api
@@ -0,0 +1,142 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: shop-rpc
namespace: juwan
labels:
app: shop-rpc
spec:
replicas: 1
selector:
matchLabels:
app: shop-rpc
template:
metadata:
labels:
app: shop-rpc
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: shop-rpc
image: registry.juwan.xhttp.zip/juwan/shop-rpc:latest
ports:
- name: grpc
containerPort: 8080
- name: metrics
containerPort: 4001
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: shop-db-app
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: shop-db-app
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: shop-db-app
key: dbname
- name: DB_HOST
value: shop-db-rw.juwan
- name: DB_HOST_RO
value: shop-db-ro.juwan
- name: DB_PORT
value: "5432"
- name: REDIS_HOST
value: shop-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: shop-redis
key: password
- name: SNOWFLAKE_RPC_TARGET
value: "snowflake-svc.juwan:8080"
- name: USER_RPC_TARGET
value: "user-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: shop-rpc-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
- name: metrics
port: 4001
targetPort: 4001
selector:
app: shop-rpc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: shop-api
namespace: juwan
labels:
app: shop-api
spec:
replicas: 1
selector:
matchLabels:
app: shop-api
template:
metadata:
labels:
app: shop-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: shop-api
image: registry.juwan.xhttp.zip/juwan/shop-api:latest
ports:
- name: http
containerPort: 8888
- name: metrics
containerPort: 4001
env:
- name: PLAYER_RPC_TARGET
value: "player-rpc-svc.juwan:8080"
- name: SHOP_RPC_TARGET
value: "shop-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: shop-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: metrics
port: 4001
targetPort: 4001
selector:
app: shop-api
@@ -0,0 +1,73 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: snowflake
namespace: juwan
labels:
app: snowflake
spec:
serviceName: snowflake-headless
replicas: 1
selector:
matchLabels:
app: snowflake
template:
metadata:
labels:
app: snowflake
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: snowflake
image: registry.juwan.xhttp.zip/juwan/snowflake-rpc:latest
command: ["/bin/sh", "-c"]
args:
- |
export SNOWFLAKE_WORKER_ID="${POD_NAME##*-}"
exec /app/main -f /app/etc/snowflake.yaml
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: SNOWFLAKE_DATACENTER_ID
value: "1"
ports:
- name: grpc
containerPort: 8080
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: snowflake-headless
namespace: juwan
spec:
clusterIP: None
selector:
app: snowflake
ports:
- name: grpc
port: 8080
targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: snowflake-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
selector:
app: snowflake
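The snowflake StatefulSet derives a stable worker ID from the pod's ordinal using the shell expansion `${POD_NAME##*-}`, which strips everything up to and including the last `-`. The same extraction, sketched in Python for clarity (function name is illustrative):

```python
def worker_id_from_pod_name(pod_name: str) -> int:
    # Mirrors the shell expansion ${POD_NAME##*-} used in the StatefulSet:
    # keep only the text after the last '-', i.e. the StatefulSet ordinal.
    return int(pod_name.rsplit("-", 1)[1])
```

Because StatefulSet pod names end in a stable ordinal (`snowflake-0`, `snowflake-1`, ...), each replica gets a unique, restart-stable worker ID without any external coordination, which is exactly what a snowflake ID generator needs.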
@@ -0,0 +1,85 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-verifications-rpc
namespace: juwan
labels:
app: user-verifications-rpc
spec:
replicas: 1
selector:
matchLabels:
app: user-verifications-rpc
template:
metadata:
labels:
app: user-verifications-rpc
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: user-verifications-rpc
image: registry.juwan.xhttp.zip/juwan/user_verifications-rpc:latest
ports:
- name: grpc
containerPort: 8080
- name: metrics
containerPort: 4001
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: user-db-app
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: user-db-app
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: user-db-app
key: dbname
- name: DB_HOST
value: user-db-rw.juwan
- name: DB_HOST_RO
value: user-db-ro.juwan
- name: DB_PORT
value: "5432"
- name: REDIS_HOST
value: user-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: user-redis
key: password
- name: SNOWFLAKE_RPC_TARGET
value: "snowflake-svc.juwan:8080"
- name: USER_RPC_TARGET
value: "user-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: user-verifications-rpc-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
- name: metrics
port: 4001
targetPort: 4001
selector:
app: user-verifications-rpc
@@ -0,0 +1,160 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-rpc
namespace: juwan
labels:
app: user-rpc
spec:
replicas: 1
selector:
matchLabels:
app: user-rpc
template:
metadata:
labels:
app: user-rpc
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: user-rpc
image: registry.juwan.xhttp.zip/juwan/users-rpc:latest
ports:
- name: grpc
containerPort: 8080
- name: metrics
containerPort: 4001
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: user-db-app
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: user-db-app
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: user-db-app
key: dbname
- name: DB_HOST
value: user-db-rw.juwan
- name: DB_HOST_RO
value: user-db-ro.juwan
- name: DB_PORT
value: "5432"
- name: REDIS_HOST
value: user-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: user-redis
key: password
- name: SNOWFLAKE_RPC_TARGET
value: "snowflake-svc.juwan:8080"
- name: ADMIN_USERNAME
valueFrom:
secretKeyRef:
name: admin-bootstrap
key: username
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: admin-bootstrap
key: password
- name: ADMIN_EMAIL
valueFrom:
secretKeyRef:
name: admin-bootstrap
key: email
- name: JWT_SECRET_KEY
valueFrom:
secretKeyRef:
name: jwt-secret
key: secret-key
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: user-rpc-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
- name: metrics
port: 4001
targetPort: 4001
selector:
app: user-rpc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-api
namespace: juwan
labels:
app: user-api
spec:
replicas: 1
selector:
matchLabels:
app: user-api
template:
metadata:
labels:
app: user-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: user-api
image: registry.juwan.xhttp.zip/juwan/users-api:latest
ports:
- name: http
containerPort: 8888
- name: metrics
containerPort: 4001
env:
- name: USER_RPC_TARGET
value: "user-rpc-svc.juwan:8080"
- name: USER_VERIFICATIONS_RPC_TARGET
value: "user-verifications-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: user-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: metrics
port: 4001
targetPort: 4001
selector:
app: user-api
@@ -0,0 +1,138 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: wallet-rpc
namespace: juwan
labels:
app: wallet-rpc
spec:
replicas: 1
selector:
matchLabels:
app: wallet-rpc
template:
metadata:
labels:
app: wallet-rpc
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: wallet-rpc
image: registry.juwan.xhttp.zip/juwan/wallet-rpc:latest
ports:
- name: grpc
containerPort: 8080
- name: metrics
containerPort: 4001
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: wallet-db-app
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: wallet-db-app
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: wallet-db-app
key: dbname
- name: DB_HOST
value: wallet-db-rw.juwan
- name: DB_HOST_RO
value: wallet-db-ro.juwan
- name: DB_PORT
value: "5432"
- name: REDIS_HOST
value: wallet-redis-master.juwan
- name: REDIS_PORT
value: "6379"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: wallet-redis
key: password
- name: SNOWFLAKE_RPC_TARGET
value: "snowflake-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: wallet-rpc-svc
namespace: juwan
spec:
ports:
- name: grpc
port: 8080
targetPort: 8080
- name: metrics
port: 4001
targetPort: 4001
selector:
app: wallet-rpc
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wallet-api
namespace: juwan
labels:
app: wallet-api
spec:
replicas: 1
selector:
matchLabels:
app: wallet-api
template:
metadata:
labels:
app: wallet-api
spec:
imagePullSecrets:
- name: registry-creds
containers:
- name: wallet-api
image: registry.juwan.xhttp.zip/juwan/wallet-api:latest
ports:
- name: http
containerPort: 8888
- name: metrics
containerPort: 4001
env:
- name: WALLET_RPC_TARGET
value: "wallet-rpc-svc.juwan:8080"
resources:
requests:
cpu: 10m
memory: 32Mi
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: wallet-api-svc
namespace: juwan
spec:
ports:
- name: http
port: 8888
targetPort: 8888
- name: metrics
port: 4001
targetPort: 4001
selector:
app: wallet-api
@@ -0,0 +1,33 @@
#!/usr/bin/env bash
set -euo pipefail
K01_DIR="$(cd "$(dirname "$0")" && pwd)"
export KUBECONFIG="${KUBECONFIG:-/etc/rancher/k3s/k3s.yaml}"
echo services
for f in "${K01_DIR}/services/"*.yaml; do
kubectl delete -f "$f" --ignore-not-found --wait=false
done
echo data crs
kubectl -n juwan delete cluster.postgresql.cnpg.io --all --wait=false 2>/dev/null || true
kubectl -n juwan delete redisreplication --all --wait=false 2>/dev/null || true
kubectl -n juwan delete redissentinel --all --wait=false 2>/dev/null || true
kubectl -n juwan delete mongodbcommunity --all --wait=false 2>/dev/null || true
kubectl -n redpanda delete topic --all --wait=false 2>/dev/null || true
kubectl -n redpanda delete redpanda --all --wait=false 2>/dev/null || true
echo network
kubectl delete -f "${K01_DIR}/infra/envoy.yaml" --ignore-not-found --wait=false
kubectl delete -f "${K01_DIR}/infra/ratelimit.yaml" --ignore-not-found --wait=false
sleep 30
echo cleanup orphaned
kubectl -n juwan delete pod --all --force --grace-period=0 2>/dev/null || true
kubectl -n juwan delete pvc --all --wait=false 2>/dev/null || true
kubectl -n redpanda delete pvc -l app.kubernetes.io/instance=juwan-kafka --wait=false 2>/dev/null || true
kubectl get pods,pvc -n juwan
kubectl get pods,pvc -n redpanda
kubectl describe node | grep -A 6 Allocated
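The final `kubectl describe node | grep Allocated` check eyeballs the summed requests. For reference, a rough sketch of how the quantity strings used in these manifests ("10m", "32Mi", "1Gi") convert to comparable units, assuming only the simple forms that actually appear here:

```python
def parse_cpu_millicores(q: str) -> int:
    """Kubernetes CPU quantity -> millicores: "10m" -> 10, "1" -> 1000, "0.5" -> 500."""
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

def parse_mem_mib(q: str) -> int:
    """Memory quantity -> MiB; handles only the Mi/Gi suffixes these manifests use."""
    if q.endswith("Mi"):
        return int(q[:-2])
    if q.endswith("Gi"):
        return int(q[:-2]) * 1024
    raise ValueError(f"unsupported quantity: {q}")
```

With ~20 workloads each requesting 10m CPU / 32Mi memory, the baseline footprint stays around 200m CPU and 640Mi, which is why the commit history above trims the default 500m/512Mi requests.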
@@ -45,8 +45,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
volumeMounts:
- name: timezone
mountPath: /etc/localtime
@@ -68,8 +68,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -240,8 +240,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
redisSecret:
name: chat-redis
key: password
@@ -276,8 +276,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
@@ -36,8 +36,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -66,8 +66,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -195,8 +195,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
redisSecret:
name: community-redis
key: password
@@ -231,8 +231,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
@@ -36,8 +36,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -66,8 +66,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -195,8 +195,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
redisSecret:
name: dispute-redis
key: password
@@ -231,8 +231,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
@@ -49,8 +49,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -72,8 +72,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
volumeMounts:
- name: timezone
mountPath: /etc/localtime
@@ -36,8 +36,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -66,8 +66,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -166,8 +166,8 @@ spec:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 500m
# memory: 512Mi
# cpu: 50m
# memory: 128Mi
# redisSecret: # remember to create the password secret
# name: game-redis
# key: password
@@ -204,8 +204,8 @@ spec:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 500m
# memory: 512Mi
# cpu: 50m
# memory: 128Mi
# podSecurityContext:
# runAsUser: 1000
# fsGroup: 1000
@@ -36,8 +36,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -66,8 +66,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -195,8 +195,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
redisSecret:
name: notification-redis
key: password
@@ -231,8 +231,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
@@ -35,8 +35,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -35,8 +35,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -36,8 +36,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -66,8 +66,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -195,8 +195,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
redisSecret:
name: order-redis
key: password
@@ -231,8 +231,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
@@ -35,8 +35,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -66,8 +66,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -166,8 +166,8 @@ spec:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 500m
# memory: 512Mi
# cpu: 50m
# memory: 128Mi
# redisSecret: # remember to create the password secret
# name: player-rpc-redis
# key: password
@@ -204,8 +204,8 @@ spec:
# cpu: 100m
# memory: 128Mi
# limits:
# cpu: 500m
# memory: 512Mi
# cpu: 50m
# memory: 128Mi
# podSecurityContext:
# runAsUser: 1000
# fsGroup: 1000
@@ -36,8 +36,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -66,8 +66,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -195,8 +195,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
redisSecret:
name: review-redis
key: password
@@ -231,8 +231,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
@@ -36,8 +36,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -66,8 +66,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -195,8 +195,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
redisSecret:
name: search-redis
key: password
@@ -231,8 +231,8 @@ spec:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
podSecurityContext:
runAsUser: 1000
fsGroup: 1000
@@ -35,8 +35,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -65,8 +65,8 @@ spec:
periodSeconds: 20
resources:
requests:
cpu: 500m
memory: 512Mi
cpu: 50m
memory: 128Mi
limits:
cpu: 1000m
memory: 1024Mi
@@ -165,8 +165,8 @@ spec:
 #      cpu: 100m
 #      memory: 128Mi
 #    limits:
-#      cpu: 500m
-#      memory: 512Mi
+#      cpu: 50m
+#      memory: 128Mi
 #  redisSecret: # remember to create the password secret
 #    name: shop-rpc-redis
 #    key: password
@@ -203,8 +203,8 @@ spec:
 #      cpu: 100m
 #      memory: 128Mi
 #    limits:
-#      cpu: 500m
-#      memory: 512Mi
+#      cpu: 50m
+#      memory: 128Mi
 #  podSecurityContext:
 #    runAsUser: 1000
 #    fsGroup: 1000
+2 -2
@@ -35,8 +35,8 @@ spec:
   periodSeconds: 20
 resources:
   requests:
-    cpu: 500m
-    memory: 512Mi
+    cpu: 50m
+    memory: 128Mi
   limits:
     cpu: 1000m
     memory: 1024Mi
+2 -2
@@ -36,8 +36,8 @@ spec:
   periodSeconds: 20
 resources:
   requests:
-    cpu: 500m
-    memory: 512Mi
+    cpu: 50m
+    memory: 128Mi
   limits:
     cpu: 1000m
     memory: 1024Mi
+6 -6
@@ -82,8 +82,8 @@ spec:
   periodSeconds: 20
 resources:
   requests:
-    cpu: 500m
-    memory: 512Mi
+    cpu: 50m
+    memory: 128Mi
   limits:
     cpu: 1000m
     memory: 1024Mi
@@ -178,8 +178,8 @@ spec:
     cpu: 100m
     memory: 128Mi
   limits:
-    cpu: 500m
-    memory: 512Mi
+    cpu: 50m
+    memory: 128Mi
 redisSecret:
   name: user-redis
   key: password
@@ -216,8 +216,8 @@ spec:
     cpu: 100m
     memory: 128Mi
   limits:
-    cpu: 500m
-    memory: 512Mi
+    cpu: 50m
+    memory: 128Mi
 podSecurityContext:
   runAsUser: 1000
   fsGroup: 1000
@@ -66,8 +66,8 @@ spec:
   periodSeconds: 20
 resources:
   requests:
-    cpu: 500m
-    memory: 512Mi
+    cpu: 50m
+    memory: 128Mi
   limits:
     cpu: 1000m
     memory: 1024Mi
@@ -195,8 +195,8 @@ spec:
     cpu: 100m
     memory: 128Mi
   limits:
-    cpu: 500m
-    memory: 512Mi
+    cpu: 50m
+    memory: 128Mi
 redisSecret:
   name: user-verifications-redis
   key: password
@@ -231,8 +231,8 @@ spec:
     cpu: 100m
     memory: 128Mi
   limits:
-    cpu: 500m
-    memory: 512Mi
+    cpu: 50m
+    memory: 128Mi
 podSecurityContext:
   runAsUser: 1000
   fsGroup: 1000
+2 -2
@@ -35,8 +35,8 @@ spec:
   periodSeconds: 20
 resources:
   requests:
-    cpu: 500m
-    memory: 512Mi
+    cpu: 50m
+    memory: 128Mi
  limits:
     cpu: 1000m
     memory: 1024Mi
+6 -6
@@ -65,8 +65,8 @@ spec:
   periodSeconds: 20
 resources:
   requests:
-    cpu: 500m
-    memory: 512Mi
+    cpu: 50m
+    memory: 128Mi
   limits:
     cpu: 1000m
     memory: 1024Mi
@@ -165,8 +165,8 @@ spec:
 #      cpu: 100m
 #      memory: 128Mi
 #    limits:
-#      cpu: 500m
-#      memory: 512Mi
+#      cpu: 50m
+#      memory: 128Mi
 #  redisSecret: # remember to create the password secret
 #    name: wallet-rpc-redis
 #    key: password
@@ -203,8 +203,8 @@ spec:
 #      cpu: 100m
 #      memory: 128Mi
 #    limits:
-#      cpu: 500m
-#      memory: 512Mi
+#      cpu: 50m
+#      memory: 128Mi
 #  podSecurityContext:
 #    runAsUser: 1000
 #    fsGroup: 1000
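Every hunk in this range makes the same right-sizing change: the 500m CPU / 512Mi memory figures are dropped to 50m / 128Mi, matching the commit message "resource requests based on actual usage". As a sketch of the resulting shape (field placement reconstructed from the hunks above; only the values are taken verbatim from the added lines), a patched container's resources block would read:

```yaml
# Sketch only: layout reconstructed, values from the "+" lines above.
resources:
  requests:
    cpu: 50m        # was 500m
    memory: 128Mi   # was 512Mi
  limits:
    cpu: 1000m      # left unchanged in the request-side hunks
    memory: 1024Mi
```

Note that in the hunks where the lowered pair sits under `limits:` next to `cpu: 100m` requests, the new 50m CPU limit ends up below the 100m request, which Kubernetes rejects at admission; the values are reproduced here as rendered.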