Initial commit: k3s deployment configurations

Author: K3s Admin
Date: 2026-01-21 08:37:05 +00:00
commit 3616496b86
28 changed files with 1502 additions and 0 deletions

k3s/argocd-ingress.yaml (new file, 26 lines)

@@ -0,0 +1,26 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    # The nginx.ingress.* annotations below are only honored by ingress-nginx;
    # Traefik (the K3s default controller) ignores them.
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        - argocd.u9.net3w.com
      secretName: argocd-tls-secret
  rules:
    - host: argocd.u9.net3w.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  name: https

k3s/k3s-demo.yaml (new file, 131 lines)

@@ -0,0 +1,131 @@
# 1. Create a dedicated namespace to keep the environment tidy
apiVersion: v1
kind: Namespace
metadata:
  name: demo-space
---
# 2. Define the site content (ConfigMap) - the HTML is written straight into the config
apiVersion: v1
kind: ConfigMap
metadata:
  name: html-config
  namespace: demo-space
data:
  index-blue.html: |
    <html><body style="background-color:#e0f7fa; text-align:center; padding-top:50px;">
    <h1 style="color:#006064;">I am Site 1 (Blue)</h1>
    <p>Deployed on the K3s cluster</p>
    </body></html>
  index-green.html: |
    <html><body style="background-color:#e8f5e9; text-align:center; padding-top:50px;">
    <h1 style="color:#1b5e20;">I am Site 2 (Green)</h1>
    <p>I have 3 replicas serving you with high availability!</p>
    </body></html>
---
# 3. Deploy Site 1 (Blue)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-blue
  namespace: demo-space
spec:
  replicas: 1
  selector:
    matchLabels:
      app: site-blue
  template:
    metadata:
      labels:
        app: site-blue
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          volumeMounts:
            - name: html-volume
              mountPath: /usr/share/nginx/html/index.html
              subPath: index-blue.html
      volumes:
        - name: html-volume
          configMap:
            name: html-config
---
# 4. Deploy Site 2 (Green) - note that replicas is 3 here
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-green
  namespace: demo-space
spec:
  replicas: 3
  selector:
    matchLabels:
      app: site-green
  template:
    metadata:
      labels:
        app: site-green
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          volumeMounts:
            - name: html-volume
              mountPath: /usr/share/nginx/html/index.html
              subPath: index-green.html
      volumes:
        - name: html-volume
          configMap:
            name: html-config
---
# 5. Define the Services so the sites can be found inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: service-blue
  namespace: demo-space
spec:
  selector:
    app: site-blue
  ports:
    - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service-green
  namespace: demo-space
spec:
  selector:
    app: site-green
  ports:
    - port: 80
---
# 6. Define the routing (Ingress) - the "front door" of K3s
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: demo-space
spec:
  rules:
    - host: site1.u9.net3w.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-blue
                port:
                  number: 80
    - host: site2.u9.net3w.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-green
                port:
                  number: 80
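The Ingress above dispatches purely on the request's Host header: `site1` goes to `service-blue`, `site2` to `service-green`. A minimal sketch of that dispatch logic (the host-to-backend table simply mirrors the two rules in the manifest; a real controller does much more):

```python
# Host-header routing, as demo-ingress does: one table entry per Ingress rule.
ROUTES = {
    "site1.u9.net3w.com": ("service-blue", 80),
    "site2.u9.net3w.com": ("service-green", 80),
}

def route(host: str) -> tuple:
    """Return the (service, port) backend for a request's Host header."""
    try:
        return ROUTES[host]
    except KeyError:
        # A real ingress controller would serve its default backend (404) here.
        raise LookupError(f"no Ingress rule matches host {host!r}")

print(route("site1.u9.net3w.com"))  # ('service-blue', 80)
```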


@@ -0,0 +1,18 @@
# 1. Start from the Python base image
FROM python:3.9-slim
# 2. Set the working directory
WORKDIR /app
# 3. Copy the dependency file and install
COPY requirements.txt .
RUN pip install -r requirements.txt
# 4. Copy the application code
COPY main.py .
# 5. Expose the port
EXPOSE 5000
# 6. Startup command
CMD ["python", "main.py"]

k3s/kaifa/pj1test/main.py (new file, 14 lines)

@@ -0,0 +1,14 @@
from flask import Flask
import socket

app = Flask(__name__)

@app.route('/')
def hello():
    # Get the current container's hostname (the Pod name)
    pod_name = socket.gethostname()
    return f"<h1>Hello Fei! (v2)</h1><p>Running in Pod: <b>{pod_name}</b></p>"

if __name__ == '__main__':
    # Listen on all interfaces, port 5000
    app.run(host='0.0.0.0', port=5000)


@@ -0,0 +1,25 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-python-ingress
  namespace: demo-space
  # ▼▼▼ 1. New: tell Cert-Manager which issuer to use (same as for the Registry earlier) ▼▼▼
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  # ▼▼▼ 2. New: enable HTTPS and name the Secret that will hold the certificate ▼▼▼
  tls:
    - hosts:
        - dev.u9.net3w.com
      secretName: my-python-tls-secret
  rules:
    - host: dev.u9.net3w.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-python-service
                port:
                  number: 80

k3s/kaifa/pj1test/nano (new file, 5 lines)

@@ -0,0 +1,5 @@
@app.route('/')
def hello():
    pod_name = socket.gethostname()
    # Changed the content of this line:
    return f"<h1>Hello Fei! (v2 from Private Registry)</h1><p>Running in Pod: <b>{pod_name}</b></p>"

k3s/kaifa/pj1test/note.md (new file, 105 lines)

@@ -0,0 +1,105 @@
Two problems are blocking your progress here:
1. **Build error (DNS issue)**: `Temporary failure in name resolution` means Docker had **no network access** during the build and could not reach the official Python index to download Flask. This is common on a VPS, usually because the build container cannot read the host's DNS settings correctly.
2. **Code not modified**: your `cat main.py` output shows the code still says `Hello Fei!`. If you don't change the content (e.g. to v2), the page will look exactly the same even after a successful deploy, and you won't know whether the update actually worked.
Let's fix both problems in one pass.
---
### Step 1: edit the code (make v2 obvious)
Edit `main.py` again; you must change something, or we cannot verify the update.
```bash
vim main.py
```
Change the `return` line to the following (adding the **v2** marker):
```python
return f"<h1>Hello Fei! (v2)</h1><p>Running in Pod: <b>{pod_name}</b></p>"
```
*Save and quit (`:wq`)*
---
### Step 2: build with the host network (fixes the DNS error)
To work around the missing network during `pip install`, add **`--network=host`** to the build command. The build then uses the VPS's own network connection directly, bypassing Docker's bridge network.
Run this command:
```bash
# Added the --network=host flag
docker build --network=host -t registry.u9.net3w.com/my-python-app:v2 .
```
---
### Step 3: push to the private registry
Once the build succeeds (no more pip failures), push the image to your private registry:
```bash
docker push registry.u9.net3w.com/my-python-app:v2
```
---
### Step 4: update the K3s pull secret (required!)
We changed the registry password to `123456` while repairing the registry, but the pull secret in the `demo-space` namespace may still hold the old one. **If you skip this, K3s will fail to pull the image because of the wrong password.**
```bash
# 1. Delete the old secret
kubectl delete secret my-registry-key -n demo-space
# 2. Create the new secret (password is 123456)
kubectl create secret docker-registry my-registry-key \
  --docker-server=registry.u9.net3w.com \
  --docker-username=admin \
  --docker-password=123456 \
  --namespace=demo-space
```
---
### Step 5: deploy
1. **Check the YAML**: make sure the image in `my-app.yaml` is v2.
```bash
sed -i 's|image:.*|image: registry.u9.net3w.com/my-python-app:v2|' my-app.yaml
```
2. **Apply the update**:
```bash
kubectl apply -f my-app.yaml
```
3. **Restart the Pods (force the new image to be pulled)**:
```bash
kubectl rollout restart deployment my-python-app -n demo-space
```
4. **Watch the status**:
```bash
watch kubectl get pods -n demo-space
```
**Once the new Pods are running, refresh the page in your browser; if you see "(v2)", you're done!**
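The image reference used throughout these steps has three parts: registry host, repository, and tag. A quick stdlib-only sketch of how such a reference splits, which can be handy when scripting tag bumps like the `sed` step above (this simplified parser assumes the registry host is always present, as it is in these commands):

```python
def split_image_ref(ref: str) -> tuple:
    """Split 'registry/repo:tag' into its parts.

    Simplified: assumes a registry host prefix is present;
    defaults the tag to 'latest' when it is omitted.
    """
    registry, _, rest = ref.partition("/")
    repo, _, tag = rest.partition(":")
    return registry, repo, tag or "latest"

print(split_image_ref("registry.u9.net3w.com/my-python-app:v2"))
# ('registry.u9.net3w.com', 'my-python-app', 'v2')
```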


@@ -0,0 +1 @@
flask

k3s/kuma-stack.yaml (new file, 97 lines)

@@ -0,0 +1,97 @@
# 1. Create a dedicated namespace to keep things tidy
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
# 2. Request a 2Gi volume (using Longhorn)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kuma-pvc
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
---
# 3. Deploy the app (a StatefulSet would also work, but a Deployment is enough for a single instance)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uptime-kuma
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1
          ports:
            - containerPort: 3001
          volumeMounts:
            - name: data
              mountPath: /app/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: kuma-pvc
---
# 4. Create the internal Service
apiVersion: v1
kind: Service
metadata:
  name: kuma-service
  namespace: monitoring
spec:
  selector:
    app: uptime-kuma
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3001
---
# 5. Expose it to the internet (HTTPS + domain)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuma-ingress
  namespace: monitoring
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: status.u9.net3w.com # <--- your new domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kuma-service
                port:
                  number: 80
  tls:
    - hosts:
        - status.u9.net3w.com
      secretName: status-tls-secret

k3s/my-blog/01-mysql.yaml (new file, 72 lines)

@@ -0,0 +1,72 @@
# 01-mysql.yaml (new version)
# --- Part 1: request a storage claim (PVC) ---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc # remember this claim's name
  namespace: demo-space
spec:
  accessModes:
    - ReadWriteOnce # read-write by a single node only
  storageClassName: longhorn # the Longhorn driver installed in the cluster, backed by the VPS's local disk
  resources:
    requests:
      storage: 2Gi # request 2 GB
---
# --- Part 2: the database Service (unchanged) ---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: demo-space
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress-mysql
---
# --- Part 3: deploy the database (mount the volume) ---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  namespace: demo-space
spec:
  selector:
    matchLabels:
      app: wordpress-mysql
  strategy:
    type: Recreate # recommended for stateful apps (stop the old Pod before starting the new one)
  template:
    metadata:
      labels:
        app: wordpress-mysql
    spec:
      containers:
        - image: mariadb:10.6.4-focal
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "password123"
            - name: MYSQL_DATABASE
              value: "wordpress"
            - name: MYSQL_USER
              value: "wordpress"
            - name: MYSQL_PASSWORD
              value: "wordpress"
          ports:
            - containerPort: 3306
              name: mysql
          # ▼▼▼ the key change is here ▼▼▼
          volumeMounts:
            - name: mysql-store
              mountPath: /var/lib/mysql # where the database keeps its files inside the container
      volumes:
        - name: mysql-store
          persistentVolumeClaim:
            claimName: mysql-pvc # use the claim defined above


@@ -0,0 +1,57 @@
# 02-wordpress.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
  namespace: demo-space
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: demo-space
spec:
  replicas: 2 # run 2 WordPress front ends
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - image: wordpress:latest
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: "mysql-service" # the magic: just use the Service name, cluster DNS resolves it
            - name: WORDPRESS_DB_USER
              value: "wordpress"
            - name: WORDPRESS_DB_PASSWORD
              value: "wordpress"
            - name: WORDPRESS_DB_NAME
              value: "wordpress"
            - name: WORDPRESS_CONFIG_EXTRA
              value: |
                /* HTTPS behind reverse proxy - Complete configuration */
                if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
                  $_SERVER['HTTPS'] = 'on';
                }
                if (isset($_SERVER['HTTP_X_FORWARDED_HOST'])) {
                  $_SERVER['HTTP_HOST'] = $_SERVER['HTTP_X_FORWARDED_HOST'];
                }
                /* Force SSL for admin */
                define('FORCE_SSL_ADMIN', true);
                /* Fix cookie issues */
                @ini_set('session.cookie_httponly', true);
                @ini_set('session.cookie_secure', true);
                @ini_set('session.use_only_cookies', true);
          ports:
            - containerPort: 80
              name: wordpress


@@ -0,0 +1,28 @@
# 03-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress
  namespace: demo-space
  annotations:
    # ▼▼▼ the key annotation: request a certificate ▼▼▼
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: blog.u9.net3w.com # your domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress-service
                port:
                  number: 80
  # ▼▼▼ the key setting: the certificate is stored in this Secret ▼▼▼
  tls:
    - hosts:
        - blog.u9.net3w.com
      secretName: blog-tls-secret # the cluster creates this secret automatically and fills in the certificate


@@ -0,0 +1,30 @@
# 1. Define a "dummy" Service (no selector) as the in-cluster entry point
#
# external-app.yaml (fixed version)
apiVersion: v1
kind: Service
metadata:
  name: host-app-service
  namespace: demo-space
spec:
  ports:
    - name: http # <--- the Service calls this port http
      protocol: TCP
      port: 80
      targetPort: 3100
---
apiVersion: v1
kind: Endpoints
metadata:
  name: host-app-service
  namespace: demo-space
subsets:
  - addresses:
      - ip: 85.137.244.98
    ports:
      - port: 3100
        name: http # <--- [key fix] this must also be named http so the two pair up
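The fix in this file hinges on the Service port name and the Endpoints port name matching: Kubernetes pairs a named Service port with the same-named Endpoints port, and a mismatch leaves the Service with no usable backend. A small sketch that checks that pairing, with plain dicts standing in for the two objects above:

```python
def unmatched_ports(service_ports, endpoint_ports):
    """Return Service port names that have no same-named Endpoints port.

    Mirrors how Kubernetes pairs named ports between a Service and its
    manually created Endpoints object.
    """
    endpoint_names = {p["name"] for p in endpoint_ports}
    return [p["name"] for p in service_ports if p["name"] not in endpoint_names]

# The two objects from the manifest above, reduced to their port lists.
svc = [{"name": "http", "port": 80, "targetPort": 3100}]
eps = [{"name": "http", "port": 3100}]
print(unmatched_ports(svc, eps))  # [] -> every Service port is paired
```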


@@ -0,0 +1,25 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-app-ingress
  namespace: demo-space
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # ▼▼▼ the core fix: add this line ▼▼▼
    ingress.kubernetes.io/custom-response-headers: "Content-Security-Policy: upgrade-insecure-requests"
spec:
  rules:
    - host: wt.u9.net3w.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: host-app-service
                port:
                  number: 80
  tls:
    - hosts:
        - wt.u9.net3w.com
      secretName: wt-tls-secret

k3s/my-blog/issuer.yaml (new file, 16 lines)

@@ -0,0 +1,16 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    # Use your real email; you'll get expiry warnings (even though renewal is automatic)
    email: fszy2021@gmail.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: traefik


@@ -0,0 +1,27 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system # note: Longhorn is installed in this namespace
  annotations:
    # 1. Tell Cert-Manager which issuer should sign the certificate
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # (Optional) Force Traefik onto the HTTPS entrypoint; usually unnecessary, Traefik detects TLS automatically
    # traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  rules:
    - host: storage.u9.net3w.com # your domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: longhorn-frontend
                port:
                  number: 80
  # 2. Tell the cluster where to store the downloaded certificate
  tls:
    - hosts:
        - storage.u9.net3w.com
      secretName: longhorn-tls-secret # the certificate is saved in this Secret automatically


@@ -0,0 +1,37 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
  namespace: demo-space
spec:
  selector:
    matchLabels:
      run: php-apache
  replicas: 1
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
        - name: php-apache
          image: registry.k8s.io/hpa-example
          ports:
            - containerPort: 80
          resources:
            # CPU requests must be set so the HPA can compute a utilization percentage
            limits:
              cpu: 500m
            requests:
              cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  namespace: demo-space
spec:
  ports:
    - port: 80
  selector:
    run: php-apache
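The HPA computes CPU utilization against the container's `requests` (200m here), not its limits, and scales with desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization). A sketch of that arithmetic with the numbers from this manifest (the 300m usage and 50% target are illustrative values, not from the source):

```python
import math

def cpu_utilization_pct(usage_millicores: float, request_millicores: float) -> float:
    """Utilization the HPA reports: usage as a percentage of the CPU request."""
    return 100.0 * usage_millicores / request_millicores

def desired_replicas(current: int, utilization_pct: float, target_pct: float) -> int:
    """The HPA scaling rule: ceil(current * utilization / target)."""
    return math.ceil(current * utilization_pct / target_pct)

# One php-apache Pod using 300m against its 200m request -> 150% utilization.
u = cpu_utilization_pct(300, 200)
print(u)                           # 150.0
print(desired_replicas(1, u, 50))  # 3 replicas to get back under a 50% target
```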

k3s/n8n/n8n-stack.yaml (new file, 120 lines)

@@ -0,0 +1,120 @@
# 1. A dedicated namespace
apiVersion: v1
kind: Namespace
metadata:
  name: n8n-system
---
# 2. Persistent data (stores workflows and credentials)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: n8n-pvc
  namespace: n8n-system
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
---
# 3. The core application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
  namespace: n8n-system
  labels:
    app: n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      app: n8n
  template:
    metadata:
      labels:
        app: n8n
    spec:
      securityContext:
        fsGroup: 1000
      containers:
        - name: n8n
          image: n8nio/n8n:latest
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000
          ports:
            - containerPort: 5678
          env:
            # ▼▼▼ key settings ▼▼▼
            - name: N8N_HOST
              value: "n8n.u9.net3w.com"
            - name: N8N_PORT
              value: "5678"
            - name: N8N_PROTOCOL
              value: "https"
            - name: WEBHOOK_URL
              value: "https://n8n.u9.net3w.com/"
            # Timezone (convenient for scheduled workflows)
            - name: GENERIC_TIMEZONE
              value: "Asia/Shanghai"
            - name: TZ
              value: "Asia/Shanghai"
            # Disable n8n's usage statistics
            - name: N8N_DIAGNOSTICS_ENABLED
              value: "false"
          volumeMounts:
            - name: data
              mountPath: /home/node/.n8n
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: n8n-pvc
---
# 4. Expose the Service
apiVersion: v1
kind: Service
metadata:
  name: n8n-service
  namespace: n8n-system
spec:
  selector:
    app: n8n
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5678
---
# 5. Ingress (automatic HTTPS)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: n8n-ingress
  namespace: n8n-system
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - n8n.u9.net3w.com
      secretName: n8n-tls
  rules:
    - host: n8n.u9.net3w.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: n8n-service
                port:
                  number: 80

k3s/nav/nav-config.yaml (new file, 62 lines)

@@ -0,0 +1,62 @@
apiVersion: v1
kind: Namespace
metadata:
  name: navigation
---
# ▼▼▼ the core concept here: ConfigMap ▼▼▼
apiVersion: v1
kind: ConfigMap
metadata:
  name: homepage-config
  namespace: navigation
data:
  # Config file 1: the widgets (clock, search box, resource usage)
  widgets.yaml: |
    - search:
        provider: google
        target: _blank
    - resources:
        cpu: true
        memory: true
        disk: true
    - datetime:
        text_size: xl
        format:
          timeStyle: short
  # Config file 2: your service links (note the icon and href fields below)
  services.yaml: |
    - My Apps:
        - Personal Blog:
            icon: wordpress.png
            href: https://blog.u9.net3w.com
            description: My digital garden
        - Remote Desktop:
            icon: linux.png
            href: https://wt.u9.net3w.com
            description: K8s external reverse-proxy test
    - Infrastructure:
        - Status Monitoring:
            icon: uptime-kuma.png
            href: https://status.u9.net3w.com
            description: Uptime Kuma
            widget:
              type: uptimekuma
              url: http://kuma-service.monitoring.svc.cluster.local # key point: the in-cluster DNS name
              slug: my-wordpress-blog # (advanced: fill this in later)
        - Storage Management:
            icon: longhorn.png
            href: https://storage.u9.net3w.com
            description: Distributed storage dashboard
            widget:
              type: longhorn
              url: http://longhorn-frontend.longhorn-system.svc.cluster.local
  # Config file 3: general settings
  settings.yaml: |
    title: K3s Command Center
    background: https://images.unsplash.com/photo-1519681393784-d120267933ba?auto=format&fit=crop&w=1920&q=80
    theme: dark
    color: slate
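The widget URLs above use in-cluster DNS names of the form `<service>.<namespace>.svc.<cluster-zone>`, which any Pod can resolve without going through the public Ingress. A one-line sketch of that naming convention:

```python
def cluster_dns(service: str, namespace: str, zone: str = "cluster.local") -> str:
    """In-cluster DNS name for a Service: <svc>.<ns>.svc.<zone>."""
    return f"{service}.{namespace}.svc.{zone}"

print(cluster_dns("kuma-service", "monitoring"))
# kuma-service.monitoring.svc.cluster.local
print(cluster_dns("longhorn-frontend", "longhorn-system"))
# longhorn-frontend.longhorn-system.svc.cluster.local
```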

k3s/nav/nav-deploy.yaml (new file, 71 lines)

@@ -0,0 +1,71 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: homepage
  namespace: navigation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: homepage
  template:
    metadata:
      labels:
        app: homepage
    spec:
      containers:
        - name: homepage
          image: ghcr.io/gethomepage/homepage:latest
          ports:
            - containerPort: 3000
          # ▼▼▼ the key move: mount the ConfigMap as files ▼▼▼
          volumeMounts:
            - name: config-volume
              mountPath: /app/config # the config directory inside the container
      volumes:
        - name: config-volume
          configMap:
            name: homepage-config # reference the ConfigMap above
---
apiVersion: v1
kind: Service
metadata:
  name: homepage-service
  namespace: navigation
spec:
  selector:
    app: homepage
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: homepage-ingress
  namespace: navigation
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # Optional: allow cross-origin calls (only honored by ingress-nginx)
    nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
  rules:
    - host: nav.u9.net3w.com # <--- your new domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: homepage-service
                port:
                  number: 80
  tls:
    - hosts:
        - nav.u9.net3w.com
      secretName: nav-tls-secret


@@ -0,0 +1,10 @@
apiVersion: v1
kind: Secret
metadata:
  name: registry-auth-secret
  namespace: registry-system
type: Opaque
stringData:
  # ▼▼▼ Important: this is the {SHA} (SHA-1) htpasswd hash of 123456; copy it verbatim ▼▼▼
  htpasswd: |
    admin:{SHA}fEqNCco3Yq9h5ZUglD3SZJT4lBs=
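The `{SHA}` htpasswd scheme used above stores `base64(SHA-1(password))` after the username. A stdlib sketch that builds such an entry (note SHA-1 is a legacy scheme; the bcrypt entry generated in note.md with `htpasswd -B` is the stronger option):

```python
import base64
import hashlib

def htpasswd_sha_entry(user: str, password: str) -> str:
    """Build an htpasswd line in the legacy {SHA} format: base64 of the raw SHA-1 digest."""
    digest = hashlib.sha1(password.encode()).digest()
    return f"{user}:{{SHA}}{base64.b64encode(digest).decode()}"

entry = htpasswd_sha_entry("admin", "123456")
print(entry)  # admin:{SHA}... (28 base64 chars for the 20-byte digest)
```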

k3s/registry/note.md (new file, 27 lines)

@@ -0,0 +1,27 @@
root@98-hk:~/k3s/registry# docker run --rm --entrypoint htpasswd httpd:alpine -Bbn admin 123456
Unable to find image 'httpd:alpine' locally
alpine: Pulling from library/httpd
1074353eec0d: Pull complete
0bd765d2a2cb: Pull complete
0c4ffdba1e9e: Pull complete
4f4fb700ef54: Pull complete
0c51c0b07eae: Pull complete
e626d5c4ed2c: Pull complete
988cd7d09a31: Pull complete
Digest: sha256:6b7535d8a33c42b0f0f48ff0067765d518503e465b1bf6b1629230b62a466a87
Status: Downloaded newer image for httpd:alpine
admin:$2y$05$yYEah4y9O9F/5TumcJSHAuytQko2MAyFM1MuqgAafDED7Fmiyzzse
root@98-hk:~/k3s/registry# # Note: the single quotes around the value are required
kubectl create secret generic registry-auth-secret \
--from-literal=htpasswd='admin:$2y$05$yYEah4y9O9F/5TumcJSHAuytQko2MAyFM1MuqgAafDED7Fmiyzzse' \
--namespace registry-system
secret/registry-auth-secret created
root@98-hk:~/k3s/registry# # Redeploy the application
kubectl apply -f registry-stack.yaml
namespace/registry-system unchanged
persistentvolumeclaim/registry-pvc unchanged
deployment.apps/registry created
service/registry-service unchanged
ingress.networking.k8s.io/registry-ingress unchanged
root@98-hk:~/k3s/registry#


@@ -0,0 +1,120 @@
# 1. Create a dedicated namespace
apiVersion: v1
kind: Namespace
metadata:
  name: registry-system
---
# 2. The password file generated earlier is created as a K8s Secret (kept in a separate manifest)
---
# 3. Request disk space (stores the image layers)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pvc
  namespace: registry-system
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 20Gi # 20G for the registry; it can be expanded at any time
---
# 4. Deploy the Registry
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
  namespace: registry-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          ports:
            - containerPort: 5000
          env:
            # --- enable authentication ---
            - name: REGISTRY_AUTH
              value: "htpasswd"
            - name: REGISTRY_AUTH_HTPASSWD_REALM
              value: "Registry Realm"
            - name: REGISTRY_AUTH_HTPASSWD_PATH
              value: "/auth/htpasswd"
            # --- storage path ---
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: "/var/lib/registry"
          volumeMounts:
            - name: data-volume
              mountPath: /var/lib/registry
            - name: auth-volume
              mountPath: /auth
      volumes:
        - name: data-volume
          persistentVolumeClaim:
            claimName: registry-pvc
        - name: auth-volume
          secret:
            secretName: registry-auth-secret
---
# 5. Internal Service
apiVersion: v1
kind: Service
metadata:
  name: registry-service
  namespace: registry-system
spec:
  selector:
    app: registry
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
---
# 6. Expose the HTTPS domain
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: registry-ingress
  namespace: registry-system
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # Raise the upload size limit (Docker image layers can be large)
    ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  rules:
    - host: registry.u9.net3w.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: registry-service
                port:
                  number: 80
  tls:
    - hosts:
        - registry.u9.net3w.com
      secretName: registry-tls-secret


@@ -0,0 +1,23 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portainer-ingress
  namespace: portainer
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - portainer.u9.net3w.com # <--- pick a new subdomain
      secretName: portainer-tls
  rules:
    - host: portainer.u9.net3w.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: portainer
                port:
                  number: 9000

k3s/tool/portainer.md (new file, 148 lines)

@@ -0,0 +1,148 @@
This is a great idea! Staring at a plain terminal all day is genuinely tiring, and for streaming logs or comparing resource usage across multiple Pods, a graphical UI has advantages the command line cannot match.
For K3s I have **three strong recommendations**, each suited to a different scenario.
---
### Option 1: the best desktop tool - OpenLens (recommended for development)
If you want to manage the remote K3s cluster from **your own computer (Windows/Mac)** without consuming VPS resources, Lens is the undisputed king: a "remote desktop" purpose-built for Kubernetes.
**Pros:**
* **Zero server overhead**: it runs on your own machine and uses no VPS memory.
* **Extremely powerful**: logs, terminals, YAML editing, and monitoring are all one click away.
* **Smooth**: a native app, noticeably snappier than a web UI.
#### 🚀 How to connect (3 steps)
To let your local Lens reach the remote K3s, you need to fetch the cluster's "key" from the VPS.
1. **Get the kubeconfig on the VPS**
Print the config file:
```bash
cat /etc/rancher/k3s/k3s.yaml
```
*(copy the whole output)*
2. **Create the config file on your local machine**
Create a new file (for example `my-k3s-config`) and paste in the content you just copied.
**⚠️ Key edit**: find the line `server: https://127.0.0.1:6443` and change `127.0.0.1` to your **domain** (for example `dev.u9.net3w.com`, provided port 6443 is open) or the **public IP of the VPS**.
3. **Import into Lens**
Download and install **OpenLens** (the free open-source build) or **Lens Desktop**.
Open it -> File -> Add Cluster -> paste the edited config.
**Your entire K3s cluster will instantly appear before your eyes!**
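The kubeconfig edit in step 2 can also be scripted. A stdlib-only sketch that rewrites the `server:` line of a copied kubeconfig (the replacement address is illustrative; use your own domain or IP):

```python
import re

def point_kubeconfig_at(text: str, address: str) -> str:
    """Replace the loopback API endpoint in a k3s kubeconfig with a reachable one."""
    return re.sub(r"server: https://127\.0\.0\.1:6443",
                  f"server: https://{address}:6443", text)

# A trimmed-down stand-in for the k3s.yaml content copied from the VPS.
cfg = "clusters:\n- cluster:\n    server: https://127.0.0.1:6443\n"
print(point_kubeconfig_at(cfg, "dev.u9.net3w.com"))
```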
---
### Option 2: the best web UI - Portainer (recommended for ops/monitoring)
If you want browser-based management, and you are used to the Docker way of doing things, **Portainer** is the obvious pick. Its Kubernetes support is very good and the UI is clean and attractive.
**Pros:**
* **Access from anywhere**: any browser will do.
* **Great visualization**: nice dashboards with CPU/memory usage.
* **Trivial to deploy**: it is just another Pod inside K3s.
#### 🚀 How to install (30 seconds)
Run directly on your VPS:
1. **Install Portainer**
```bash
kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
```
*(this creates the namespace and all resources automatically)*
2. **Expose it (switch to NodePort)**
Portainer defaults to a LoadBalancer Service; change it so it is reachable on a node port.
```bash
# Edit the Service
kubectl edit service portainer -n portainer
```
Find `type: LoadBalancer` and change it to `type: NodePort`.
Or, since you are already comfortable with this, **just write it an Ingress**!
**Create `portainer-ingress.yaml`**:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portainer-ingress
  namespace: portainer
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - portainer.u9.net3w.com # <--- pick a new subdomain
      secretName: portainer-tls
  rules:
    - host: portainer.u9.net3w.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: portainer
                port:
                  number: 9000
```
3. **Apply and visit**
`kubectl apply -f portainer-ingress.yaml`
After a moment, open `https://portainer.u9.net3w.com` and set the admin password.
---
### Option 3: the official full stack - Rancher (enterprise-grade)
K3s was created by Rancher Labs; Rancher is their flagship management platform.
**Pros:** the most complete feature set, an enterprise standard, can manage hundreds of clusters.
**Cons:** **very memory-hungry**. Rancher itself needs 2 GB+ just to start. If your VPS has 4 GB or less, it is **strongly discouraged**: it will squeeze out your workloads.
---
### 💡 The hacker's choice: K9s (a GUI in your terminal)
If you can't leave the terminal but still want graphical convenience, **K9s** is a must-have: a full graphical interface that runs inside the terminal.
**Install:**
```bash
# Install via Linuxbrew on the VPS, or just download the binary
curl -sS https://webinstall.dev/k9s | bash
```
**Start:**
Type `k9s`.
It plays like a game:
* Press `0` to show all Pods.
* Press `l` to view logs.
* Press `s` to open a shell.
* Press `ctrl-d` to delete a Pod.
* Fully keyboard-driven, effortlessly cool.
---
### My recommendation
1. **First choice**: install **OpenLens** locally. Nothing to install on the VPS, zero overhead, best experience.
2. **Second choice**: if you often manage the cluster from different computers, deploy **Portainer** on the VPS.
**Which one do you want to try first? If it's OpenLens, I can show you how to open port 6443 safely.**

k3s/tool/rancher/note.md (new file, 98 lines)

@@ -0,0 +1,98 @@
**Good taste!** Since your VPS is well-provisioned (16 GB of RAM, as mentioned earlier), running Rancher is **by far the most comfortable** option. It is like installing a "desktop OS" on top of your Kubernetes cluster: everything becomes point-and-click.
You prefer hand-written YAML (which is great), but for a huge piece of software like Rancher, **Helm** (the Kubernetes package manager, analogous to Python's pip or Linux's apt) is the only standard and reliable path. Installing Rancher from hand-written YAML would involve hundreds of files and is extremely error-prone.
Follow the steps below to bring the "cow" (Rancher's logo) into your cluster.
---
### 🚀 Step 1: install Helm (if you don't have it yet)
Helm is required to install Rancher. On Node 1, run:
```bash
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```
### Step 2: add the Rancher repository
Tell Helm where to download Rancher from.
```bash
# 1. Add the repo
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
# 2. Create Rancher's dedicated namespace
kubectl create namespace cattle-system
```
### Step 3: install Rancher in one command (the key step)
We install Rancher and let it **automatically use** the `cert-manager` already in your cluster to request an HTTPS certificate for `rancher.u9.net3w.com`.
**Copy the whole command below** (the parameters are already filled in):
* **Hostname**: `rancher.u9.net3w.com`
* **Bootstrap password**: `admin123456` (you will be forced to change it after login)
* **Replicas**: `1` (to save resources; production normally runs 3, but 1 is enough here)
* **Certificate**: issued automatically via Let's Encrypt
```bash
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.u9.net3w.com \
  --set bootstrapPassword=admin123456 \
  --set replicas=1 \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=admin@net3w.com \
  --set letsEncrypt.ingress.class=traefik
```
---
### Step 4: wait for the "beast" to wake up
Rancher is a large application and takes a while to start (usually 2-5 minutes).
You can watch the startup like a movie:
```bash
watch kubectl get pods -n cattle-system
```
* **Phase 1**: `rancher-webhook` starts first.
* **Phase 2**: then the main `rancher-xxxx` Pod enters `ContainerCreating`.
* **Phase 3**: when `rancher-xxxx` shows **`Running`** and **`READY 1/1`**, startup has succeeded.
*(If READY stays 0/1 for a long time, don't panic: it is initializing its database. As long as it is not in CrashLoopBackOff, just wait.)*
---
### Step 5: log in and take over
Once the Pod is running and the certificate is issued (Ingress active):
1. Open **https://rancher.u9.net3w.com** in your browser.
2. **Password**: enter the bootstrap password `admin123456`.
3. **Take over**:
* After login, it will prompt you to set a new, stronger password.
* You will then see a cluster named **"local"** - **that is your K3s cluster!**
---
### 🎉 What can you now do in the GUI?
Inside the "local" cluster, try the following to see why I call it a power tool:
1. **Workloads**: open **Workload** in the left menu; `my-python-app`, `registry`, and `wordpress` are all listed, with CPU/memory usage at a glance.
2. **Logs**: click into `my-python-app` -> click a Pod name -> top-right menu -> **View Logs**. A very comfortable live-scrolling view.
3. **Shell**: same place, click **Execute Shell** to get a root terminal right in the browser; no more hopping between SSH sessions.
4. **Scaling**: find a Deployment and click `+` to go from 1 Pod to 5 instantly, and feel Kubernetes' elasticity.
**Go try it! If the install gets stuck on Pulling Image or Pending, send me the output of `kubectl get pods -n cattle-system`.**


@@ -0,0 +1,109 @@
# 1. Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: gitea-system
---
# 2. Persistent data (stores the repositories and the database)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-data-pvc
  namespace: gitea-system
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn # reuse your Longhorn
  resources:
    requests:
      storage: 10Gi
---
# 3. Deploy Gitea
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea
  namespace: gitea-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
        - name: gitea
          image: gitea/gitea:latest
          ports:
            - containerPort: 3000
              name: http
            - containerPort: 22
              name: ssh
          volumeMounts:
            - name: gitea-data
              mountPath: /data
          env:
            # Initial settings, so the config file never has to be edited by hand
            - name: GITEA__server__DOMAIN
              value: "git.u9.net3w.com"
            - name: GITEA__server__ROOT_URL
              value: "https://git.u9.net3w.com/"
            - name: GITEA__server__SSH_PORT
              value: "22" # note: access through the Ingress is HTTPS; SSH needs an extra NodePort, so keep the standard port for now
      volumes:
        - name: gitea-data
          persistentVolumeClaim:
            claimName: gitea-data-pvc
---
# 4. Service (internal network)
apiVersion: v1
kind: Service
metadata:
  name: gitea-service
  namespace: gitea-system
spec:
  selector:
    app: gitea
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
      name: http
    - protocol: TCP
      port: 2222 # map this port if SSH is needed later
      targetPort: 22
      name: ssh
---
# 5. Ingress (expose the HTTPS domain)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitea-ingress
  namespace: gitea-system
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # Allow large uploads (git pushes can be big)
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  rules:
    - host: git.u9.net3w.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitea-service
                port:
                  number: 80
  tls:
    - hosts:
        - git.u9.net3w.com
      secretName: gitea-tls-secret
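Gitea maps environment variables named `GITEA__<section>__<key>` onto sections and keys of its ini config file, which is why the manifest above can preconfigure the server section without touching `app.ini`. A simplified sketch of that naming convention (Gitea's real mapping also handles special characters in section names, which this ignores):

```python
def gitea_env_to_ini(name: str) -> tuple:
    """Map a GITEA__section__KEY environment variable to (section, key)."""
    prefix, section, key = name.split("__", 2)
    if prefix != "GITEA":
        raise ValueError(f"not a Gitea config variable: {name!r}")
    return section, key

print(gitea_env_to_ini("GITEA__server__ROOT_URL"))  # ('server', 'ROOT_URL')
```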