# Email Task Deployment Troubleshooting and Fix Log
## 1. Symptoms

Scheduling failed while deploying `email-task`:

```text
Warning FailedScheduling 0/1 nodes are available: 1 Insufficient memory.
no new claims to deallocate, preemption: 0/1 nodes are available:
1 No preemption victims found for incoming pod.
```

Observed behavior:

- The `Deployment` could not get all desired replicas ready
- Pods stuck in `Pending` for a long time

---
## 2. Troubleshooting Approach

Investigated in the following order:

1. **Check whether the deployment requests too much** (`requests/limits` + `replicas`)
2. **Check the node's allocatable vs. already-allocated resources** (confirm it really is a memory shortage)
3. **Check whether the rolling-update strategy spins up extra Pods** (`maxSurge` can amplify memory pressure)
4. **Check whether the container health checks match the service type** (a task-style service does not necessarily listen on a port)

---
## 3. Key Diagnostic Commands
### 3.1 Check node allocatable resources

```powershell
kubectl get nodes -o custom-columns=NAME:.metadata.name,ALLOCATABLE_CPU:.status.allocatable.cpu,ALLOCATABLE_MEM:.status.allocatable.memory
```
### 3.2 Check Deployment and Pod status

```powershell
kubectl -n juwan get deploy email-task -o wide
kubectl -n juwan get pods -l app=email-task -o wide
kubectl -n juwan describe pod -l app=email-task
```
### 3.3 Check the node's resource allocation ratio

```powershell
kubectl describe node minikube
```

Look at `Allocated resources` in the output:

- `memory` requests were already close to the node's capacity (about 97% in this case)
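For reference, the relevant section of `kubectl describe node` output has this shape (the numbers below are illustrative, chosen to match the ~97% memory requests observed here):

```text
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests          Limits
  --------           --------          ------
  cpu                1750m (87%)       2500m (125%)
  memory             3781Mi (97%)      5120Mi (131%)
```

When the `Requests` percentage for memory is near 100%, any new Pod whose request does not fit the remainder will fail scheduling with `Insufficient memory`.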
### 3.4 Check the deployment strategy and probe configuration

```powershell
kubectl -n juwan get deploy email-task -o yaml
kubectl -n juwan logs deploy/email-task --tail=120
```

---
## 4. Root Cause Analysis

This was a **combination of problems**:

1. **Memory requests too high + too many replicas**
   - Original configuration: `replicas=3`
   - Each Pod requested `memory=512Mi`
   - On a single node, stacked on top of existing workloads, no further Pods could be scheduled

2. **Rolling update defaulted to `maxSurge=25%`**
   - Updates could spawn an extra Pod, further aggravating the memory shortage

3. **Probes did not match the service's behavior**
   - The original configuration used a `tcpSocket:8080` probe
   - `email-task` is actually a task-style service; the logs showed it never served on that port after startup
   - As a result, `Readiness/Liveness` checks kept failing

---
## 5. Fix

Only one file was changed:

- `deploy/k8s/service/email/email.yaml`
### 5.1 Lower the resource-request and replica baselines

- `replicas: 3 -> 1`
- `requests.cpu: 500m -> 100m`
- `requests.memory: 512Mi -> 128Mi`
- `limits.cpu: 1000m -> 500m`
- `limits.memory: 1024Mi -> 512Mi`
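The adjusted portion of `email.yaml` would look roughly like this (a sketch following the standard Deployment spec; surrounding fields and the container image are omitted):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: email-task
  namespace: juwan
spec:
  replicas: 1                 # was 3
  template:
    spec:
      containers:
        - name: email-task
          resources:
            requests:
              cpu: 100m       # was 500m
              memory: 128Mi   # was 512Mi
            limits:
              cpu: 500m       # was 1000m
              memory: 512Mi   # was 1024Mi
```

With these values, even a full three-replica scale-out requests 384Mi instead of the original 1536Mi.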
### 5.2 Adjust the HPA baseline and ceiling

- For both HPAs (CPU / Memory), uniformly:
  - `minReplicas: 3 -> 1`
  - `maxReplicas: 10 -> 3`
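A sketch of the corresponding change, shown for the CPU HPA (the HPA name and utilization threshold are assumptions; only the target deployment name comes from this record):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: email-task-cpu        # assumed name
  namespace: juwan
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: email-task
  minReplicas: 1               # was 3
  maxReplicas: 3               # was 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # assumed threshold
```

The Memory HPA would mirror this with `resource.name: memory`.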
### 5.3 Adjust the rolling-update strategy

- `strategy.rollingUpdate.maxSurge: 0`
- `strategy.rollingUpdate.maxUnavailable: 1`

Goal: avoid spawning an extra Pod during rollouts, which would cause a transient memory shortage.
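In the Deployment spec this is a small fragment:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0        # never create an extra Pod during a rollout
      maxUnavailable: 1  # terminate the old Pod first, then schedule the new one
```

The trade-off is a brief window with zero ready replicas during each rollout, which is acceptable for a task-style service on a constrained single node.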
### 5.4 Remove the mismatched 8080 TCP probes

Removed:

- `readinessProbe.tcpSocket:8080`
- `livenessProbe.tcpSocket:8080`

---
## 6. Commands Used to Apply the Fix

```powershell
kubectl apply -f deploy/k8s/service/email/email.yaml
kubectl -n juwan rollout restart deploy/email-task
kubectl -n juwan rollout status deploy/email-task --timeout=180s
kubectl -n juwan get pods -l app=email-task -o wide
kubectl -n juwan describe pods -l app=email-task | Select-String -Pattern 'FailedScheduling|Unhealthy|Warning|Events|Node:'
```

---
## 7. Result

- The `Deployment` rolled out successfully
- New Pods were scheduled and reached `Running`
- No new `FailedScheduling` or `Unhealthy` events

---
## 8. Follow-up Recommendations

1. To restore multiple replicas, scale up gradually based on node capacity (suggest trying 2 replicas first and observing).
2. Design a health check better suited to task-style services:
   - Consider an `exec` probe or an application self-check endpoint.
3. In single-node development environments, lower the default `requests` across services, so that multiple services stacked together do not exhaust the schedulable memory.
4. For high availability, add nodes rather than relying solely on squeezing resource requests.