Deploying Laf on a Self-Built Kubernetes Cluster
1. Prerequisites
This deployment was done in an intranet environment, so image and network issues are not covered in detail here; feel free to leave a comment if you run into them.
The k8s cluster in this document was not created with the sealos one-click installer but built manually from binaries. The cluster must have a default StorageClass so that PVCs can be created.
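To confirm that a default StorageClass exists (it shows up with "(default)" in the output), and to mark one as default if needed, a quick check looks like this; the StorageClass name local-path is only an example:

kubectl get storageclass
# mark an existing StorageClass (assumed here to be named "local-path") as the default
kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'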
k8s version: 1.26
master ip: 192.168.2.150
node1 ip: 192.168.2.151
node2 ip: 192.168.2.152
Since the deployment uses helm, download and install a helm release that matches your k8s version in advance.
References:
(1) Installing Helm
(2) Helm Version Support Policy
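For reference, a typical binary installation on Linux looks like the sketch below; the helm version here is only an example, pick one whose support matrix covers k8s 1.26:

# copy the release tarball into the intranet first if direct download is blocked
wget https://get.helm.sh/helm-v3.12.3-linux-amd64.tar.gz
tar -zxvf helm-v3.12.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version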
Download the laf project in advance and extract it.
Address: labring/laf
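For example, either clone the repository or fetch an archive of the main branch and extract it:

git clone https://github.com/labring/laf.git
# or, without git:
wget https://github.com/labring/laf/archive/refs/heads/main.zip -O laf-main.zip
unzip laf-main.zip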
The cluster needs an nginx-ingress-controller; below is a deployment manifest for nginx-ingress-controller version 1.9.6.
(Note: it is deployed as a DaemonSet and uses the host network directly, i.e., hostNetwork is set to true.)
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- coordination.k8s.io
resourceNames:
- ingress-nginx-leader
resources:
- leases
verbs:
- get
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-admission
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
- namespaces
verbs:
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-admission
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-admission
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-admission
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: v1
data:
allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-controller
namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- appProtocol: http
name: http
port: 80
protocol: TCP
targetPort: http
- appProtocol: https
name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
ports:
- appProtocol: https
name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: ClusterIP
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
spec:
containers:
- args:
- /nginx-ingress-controller
- --election-id=ingress-nginx-leader
- --controller-class=k8s.io/ingress-nginx
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
image: registry.k8s.io/ingress-nginx/controller:v1.9.6
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: controller
ports:
- containerPort: 80
hostPort: 80
name: http
protocol: TCP
- containerPort: 443
hostPort: 443
name: https
protocol: TCP
- containerPort: 8443
hostPort: 8443
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 90Mi
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/local/certificates/
name: webhook-cert
readOnly: true
dnsPolicy: ClusterFirst
hostNetwork: true
nodeSelector:
kubernetes.io/os: linux
tolerations:
- operator: Exists
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: ingress-nginx
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
defaultMode: 420
secretName: ingress-nginx-admission
updateStrategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-admission-create
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-admission-create
spec:
containers:
- args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06@sha256:25d6a5f11211cc5c3f9f2bf552b585374af287b4debf693cacbe2da47daa5084
imagePullPolicy: IfNotPresent
name: create
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65532
seccompProfile:
type: RuntimeDefault
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-admission-patch
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-admission-patch
spec:
containers:
- args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06@sha256:25d6a5f11211cc5c3f9f2bf552b585374af287b4debf693cacbe2da47daa5084
imagePullPolicy: IfNotPresent
name: patch
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65532
seccompProfile:
type: RuntimeDefault
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: nginx
spec:
controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.9.6
name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: ingress-nginx-controller-admission
namespace: ingress-nginx
path: /networking/v1/ingresses
failurePolicy: Fail
matchPolicy: Equivalent
name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
sideEffects: None
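Assuming the manifest above is saved as ingress-nginx.yaml, apply it and confirm a controller pod is running on every node and that the nginx IngressClass exists:

kubectl apply -f ingress-nginx.yaml
kubectl -n ingress-nginx get pods -o wide
kubectl get ingressclass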
If you want metrics enabled (this document enables them), deploy prometheus in advance (see the kube-prometheus-stack documentation for details);
if not, set metrics.enabled, metrics.serviceMonitor.enabled, and the other metrics-related options to false in the deployment steps below.
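If you prefer not to edit each values file, the same switches can also be turned off on the helm command line; e.g. for the mongodb chart deployed below:

helm install mongodb -n laf-system --create-namespace \
  --values values-final.yaml \
  --set metrics.enabled=false \
  --set metrics.serviceMonitor.enabled=false \
  ./mongodb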
To keep the deployment simple, this document uses http only;
https is optional. Enabling it requires issuing certificates, modifying the ingresses, and so on; leave a comment if you need this.
A domain name is required. If you don't have one, a nip.io-style domain is recommended, e.g. 192.168.2.150.nip.io;
if you use a custom domain instead, both in-cluster workloads and any clients outside the cluster that access laf must be able to resolve it. This document uses the following names:
192.168.2.150.nip.io
*.oss.192.168.2.150.nip.io
*.192.168.2.150.nip.io
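nip.io simply resolves any name of the form x.<ip>.nip.io to <ip>, so a quick sanity check (expected answer: 192.168.2.150) is:

nslookup test.192.168.2.150.nip.io

Note that nip.io relies on public DNS; if the intranet blocks public DNS, the wildcard records above have to be served by an internal DNS server instead.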
2. Deploy MongoDB
This document does not use the kubeblocks approach from the laf docs to run the mongodb instance: deploying an extra kubeblocks component felt cumbersome, and the bitnami mongodb chart is more direct. It runs in replicaset mode but starts only one replica; more nodes can be added later.
Pull the image in advance: bitnami/mongodb:5.0.10-debian-11-r3
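On a machine with internet access the image can be saved and then imported on every cluster node, for example (assuming containerd as the runtime):

docker pull bitnami/mongodb:5.0.10-debian-11-r3
docker save bitnami/mongodb:5.0.10-debian-11-r3 -o mongodb.tar
# on each node:
ctr -n k8s.io images import mongodb.tar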
Download the mongodb chart:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/mongodb --version 12.1.31 --untar
Create a values-final.yaml file:
global:
imageRegistry: ""
architecture: "replicaset"
auth:
enabled: true
rootUser: "root"
rootPassword: "Passw0rd"
tls:
enabled: false
replicaSetName: "rs0"
replicaCount: 1
persistence:
enabled: true
size: 50Gi
accessModes:
- "ReadWriteOnce"
arbiter:
enabled: false
metrics:
enabled: true
username: "root"
password: "Passw0rd"
serviceMonitor:
enabled: true
namespace: "laf-system"
Deploy (dry-run first to validate the rendered manifests, then install):
helm install --debug --dry-run mongodb -n laf-system --create-namespace --values values-final.yaml ./mongodb
helm install mongodb -n laf-system --create-namespace --values values-final.yaml ./mongodb
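To verify, check the pod and try logging in; the pod name mongodb-0 assumes the release name mongodb used above:

kubectl -n laf-system get pods -l app.kubernetes.io/name=mongodb
kubectl -n laf-system exec -it mongodb-0 -- mongosh "mongodb://root:Passw0rd@localhost:27017/admin"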
3. Deploy MinIO
Pull the images in advance: quay.io/minio/minio:RELEASE.2023-03-22T06-36-24Z and quay.io/minio/mc:RELEASE.2022-11-07T23-47-39Z
Go to build/charts/minio under the laf-main project directory:
[root@example minio]# ls -l
total 44
-rw-r--r-- 1 sysadm sysadm 359 Jul 23 10:20 Chart.yaml
-rw-r--r-- 1 sysadm sysadm 9370 Jul 23 10:20 README.md
drwxr-xr-x 2 sysadm sysadm 4096 Aug 5 11:01 templates
-rw-r--r-- 1 sysadm sysadm 16757 Jul 23 10:20 values.yaml
Create a values-final.yaml file:
image:
repository: "quay.io/minio/minio"
tag: "RELEASE.2023-03-22T06-36-24Z"
pullPolicy: "IfNotPresent"
mcImage:
repository: "quay.io/minio/mc"
tag: "RELEASE.2022-11-07T23-47-39Z"
pullPolicy: "IfNotPresent"
rootUser: "minio-root"
rootPassword: "Passw0rd"
persistence:
enabled: true
size: "100Gi"
accessMode: "ReadWriteOnce"
domain: "oss.192.168.2.150.nip.io"
consoleHost: "minio.192.168.2.150.nip.io"
tls:
enabled: false
ingress:
enabled: true
consoleIngress:
enabled: true
metrics:
serviceMonitor:
enabled: true
additionalLabels:
release: "prometheus"
namespace: "laf-system"
Deploy (dry-run first to validate, then install):
helm install --debug --dry-run minio -n laf-system --values values-final.yaml ./
helm install minio -n laf-system --values values-final.yaml ./
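To verify, check the ingresses and hit MinIO's liveness endpoint through the oss ingress (expect HTTP 200):

kubectl -n laf-system get pods,ingress
curl -i http://oss.192.168.2.150.nip.io/minio/health/live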
4. Deploy Laf Server
Pull the images in advance:
docker.io/lafyun/runtime-node-init:latest
docker.io/lafyun/runtime-node:latest
docker.io/lafyun/laf-server:latest
Create a values-final.yaml file (in build/charts/laf-server of the laf project):
apiServerHost: "api.192.168.2.150.nip.io"
apiServerUrl: "http://api.192.168.2.150.nip.io"
databaseUrl: "mongodb://root:Passw0rd@mongodb-headless.laf-system.svc.cluster.local:27017/sys_db?authSource=admin&replicaSet=rs0&w=majority"
default_region:
database_url: "mongodb://root:Passw0rd@mongodb-headless.laf-system.svc.cluster.local:27017/sys_db?authSource=admin&replicaSet=rs0&w=majority"
fixed_namespace: "laf-system"
minio_domain: "oss.192.168.2.150.nip.io"
minio_external_endpoint: "http://oss.192.168.2.150.nip.io"
minio_internal_endpoint: "http://minio.laf-system.svc.cluster.local:9000"
minio_root_access_key: "minio-root"
minio_root_secret_key: "Passw0rd"
runtime_domain: "192.168.2.150.nip.io"
website_domain: "192.168.2.150.nip.io"
runtime_exporter_secret: "Passw0rd"
prometheus_url: "http://prometheus-operated.prometheus.svc.cluster.local:9090"
tls:
enabled: false
wildcard_certificate_secret_name: ""
default_runtime:
init_image: "docker.io/lafyun/runtime-node-init:latest"
image: "docker.io/lafyun/runtime-node:latest"
issuer:
enabled: false
jwt:
secret: "Passw0rd"
expires_in: "7d"
siteName: "192.168.2.150.nip.io"
image:
repository: "docker.io/lafyun/laf-server"
pullPolicy: "IfNotPresent"
tag: "latest"
ingress:
enabled: true
Add the following to the env section in templates/deployment.yaml:
- name: DEFAULT_RUNTIME_INIT_IMAGE
value: {{.Values.default_runtime.init_image | quote}}
- name: DEFAULT_RUNTIME_IMAGE
value: {{.Values.default_runtime.image | quote}}
Add the following to the first and last lines of templates/cert-issuer.yaml, respectively:
{{- if .Values.issuer.enabled }}
{{- end }}
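so that the whole file is rendered only when the issuer is enabled; the result has this shape (original content elided):

{{- if .Values.issuer.enabled }}
...(original cert-issuer.yaml content, unchanged)...
{{- end }}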
If needed, change the image of the deployment in templates/runtime-exporter.yaml, and comment out its mongodb serviceMonitor section:
...
containers:
- image: docker.io/lafyun/runtime-exporter:latest
imagePullPolicy: Always
name: runtime-exporter
...
Deploy (dry-run first to validate, then install):
helm install --debug --dry-run server -n laf-system --values values-final.yaml ./
helm install server -n laf-system --values values-final.yaml ./
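To verify, check that the server pod is ready and skim its logs; the deployment name laf-server below is an assumption, confirm it with kubectl -n laf-system get deploy first:

kubectl -n laf-system get deploy,pods,ingress
kubectl -n laf-system logs deploy/laf-server --tail=50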
5. Deploy Web
Create a values-final.yaml file (in build/charts/laf-web of the laf project):
domain: "192.168.2.150.nip.io"
image:
repository: "docker.io/lafyun/laf-web"
pullPolicy: "IfNotPresent"
tag: "latest"
ingress:
enabled: true
Deploy (dry-run first to validate, then install):
helm install --debug --dry-run web -n laf-system --values values-final.yaml ./
helm install web -n laf-system --values values-final.yaml ./
6. Access via Browser
http://192.168.2.150.nip.io
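If the page does not load, a quick curl against the ingress helps separate DNS problems from ingress problems:

curl -I http://192.168.2.150.nip.io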
7. Final Notes
One outstanding issue: laf depends on mongodb, and once laf-server is deployed its default data is written there. Changing parameters afterwards (e.g. switching from http to https, changing init_image, and so on) and running helm upgrade on the server release will therefore not actually take effect. One possible approach is to modify the data in mongodb directly, but unless it is really necessary, redeploying the whole stack from scratch is simpler.
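If you do decide to edit the data in place, connect to sys_db first and look around; the collection name below is not verified against the laf source, treat it as an assumption:

kubectl -n laf-system exec -it mongodb-0 -- mongosh "mongodb://root:Passw0rd@localhost:27017/sys_db?authSource=admin"
# inside mongosh: list collections, then inspect the region configuration
show collections
db.Region.find().pretty()   // collection name is an assumption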
Questions and discussion welcome!