192.168.11.138:22 INFO [2023-12-13 07:22:12] >> Health check image-cri-shim!
192.168.11.138:22 INFO [2023-12-13 07:22:12] >> image-cri-shim is running
192.168.11.138:22 INFO [2023-12-13 07:22:12] >> init shim success
192.168.11.136:22 Firewall stopped and disabled on system startup
192.168.11.138:22 127.0.0.1 localhost
192.168.11.138:22 ::1 ip6-localhost ip6-loopback
192.168.11.136:22 * Applying /etc/sysctl.d/10-console-messages.conf …
192.168.11.136:22 kernel.printk = 4 4 1 7
192.168.11.136:22 * Applying /etc/sysctl.d/10-ipv6-privacy.conf …
192.168.11.136:22 net.ipv6.conf.all.use_tempaddr = 2
192.168.11.136:22 net.ipv6.conf.default.use_tempaddr = 2
192.168.11.136:22 * Applying /etc/sysctl.d/10-kernel-hardening.conf …
192.168.11.136:22 kernel.kptr_restrict = 1
192.168.11.136:22 * Applying /etc/sysctl.d/10-magic-sysrq.conf …
192.168.11.136:22 kernel.sysrq = 176
192.168.11.136:22 * Applying /etc/sysctl.d/10-network-security.conf …
192.168.11.136:22 net.ipv4.conf.default.rp_filter = 2
192.168.11.136:22 net.ipv4.conf.all.rp_filter = 2
192.168.11.136:22 * Applying /etc/sysctl.d/10-ptrace.conf …
192.168.11.136:22 kernel.yama.ptrace_scope = 1
192.168.11.136:22 * Applying /etc/sysctl.d/10-zeropage.conf …
192.168.11.136:22 vm.mmap_min_addr = 65536
192.168.11.136:22 * Applying /usr/lib/sysctl.d/50-default.conf …
192.168.11.136:22 kernel.core_uses_pid = 1
192.168.11.136:22 net.ipv4.conf.default.rp_filter = 2
192.168.11.136:22 net.ipv4.conf.default.accept_source_route = 0
192.168.11.136:22 sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
192.168.11.136:22 net.ipv4.conf.default.promote_secondaries = 1
192.168.11.136:22 sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
192.168.11.136:22 net.ipv4.ping_group_range = 0 2147483647
192.168.11.136:22 net.core.default_qdisc = fq_codel
192.168.11.136:22 fs.protected_hardlinks = 1
192.168.11.136:22 fs.protected_symlinks = 1
192.168.11.136:22 fs.protected_regular = 1
192.168.11.136:22 fs.protected_fifos = 1
192.168.11.136:22 * Applying /usr/lib/sysctl.d/50-pid-max.conf …
192.168.11.136:22 kernel.pid_max = 4194304
192.168.11.136:22 * Applying /usr/lib/sysctl.d/99-protect-links.conf …
192.168.11.136:22 fs.protected_fifos = 1
192.168.11.136:22 fs.protected_hardlinks = 1
192.168.11.136:22 fs.protected_regular = 2
192.168.11.136:22 fs.protected_symlinks = 1
192.168.11.136:22 * Applying /etc/sysctl.d/99-sysctl.conf …
192.168.11.136:22 fs.file-max = 1048576 # sealos
192.168.11.136:22 net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.11.136:22 net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.11.136:22 net.core.somaxconn = 65535 # sealos
192.168.11.136:22 net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.11.136:22 net.ipv4.ip_forward = 1 # sealos
192.168.11.136:22 net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.11.136:22 net.ipv6.conf.all.forwarding = 1 # sealos
192.168.11.136:22 * Applying /etc/sysctl.conf …
192.168.11.136:22 fs.file-max = 1048576 # sealos
192.168.11.136:22 net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.11.136:22 net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.11.136:22 net.core.somaxconn = 65535 # sealos
192.168.11.136:22 net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.11.136:22 net.ipv4.ip_forward = 1 # sealos
192.168.11.136:22 net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.11.136:22 net.ipv6.conf.all.forwarding = 1 # sealos
INFO [2023-12-13 07:22:12] >> pull pause image sealos.hub:5000/pause:3.8
192.168.11.136:22 INFO [2023-12-13 07:22:12] >> pull pause image sealos.hub:5000/pause:3.8
192.168.11.138:22 Firewall stopped and disabled on system startup
192.168.11.138:22 * Applying /etc/sysctl.d/10-console-messages.conf …
192.168.11.138:22 kernel.printk = 4 4 1 7
192.168.11.138:22 * Applying /etc/sysctl.d/10-ipv6-privacy.conf …
192.168.11.138:22 net.ipv6.conf.all.use_tempaddr = 2
192.168.11.138:22 net.ipv6.conf.default.use_tempaddr = 2
192.168.11.138:22 * Applying /etc/sysctl.d/10-kernel-hardening.conf …
192.168.11.138:22 kernel.kptr_restrict = 1
192.168.11.138:22 * Applying /etc/sysctl.d/10-magic-sysrq.conf …
192.168.11.138:22 kernel.sysrq = 176
192.168.11.138:22 * Applying /etc/sysctl.d/10-network-security.conf …
192.168.11.138:22 net.ipv4.conf.default.rp_filter = 2
192.168.11.138:22 net.ipv4.conf.all.rp_filter = 2
192.168.11.138:22 * Applying /etc/sysctl.d/10-ptrace.conf …
192.168.11.138:22 kernel.yama.ptrace_scope = 1
192.168.11.138:22 * Applying /etc/sysctl.d/10-zeropage.conf …
192.168.11.138:22 vm.mmap_min_addr = 65536
192.168.11.138:22 * Applying /usr/lib/sysctl.d/50-default.conf …
192.168.11.138:22 kernel.core_uses_pid = 1
192.168.11.138:22 net.ipv4.conf.default.rp_filter = 2
192.168.11.138:22 net.ipv4.conf.default.accept_source_route = 0
192.168.11.138:22 sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
192.168.11.138:22 net.ipv4.conf.default.promote_secondaries = 1
192.168.11.138:22 sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
192.168.11.138:22 net.ipv4.ping_group_range = 0 2147483647
192.168.11.138:22 net.core.default_qdisc = fq_codel
192.168.11.138:22 fs.protected_hardlinks = 1
192.168.11.138:22 fs.protected_symlinks = 1
192.168.11.138:22 fs.protected_regular = 1
192.168.11.138:22 fs.protected_fifos = 1
192.168.11.138:22 * Applying /usr/lib/sysctl.d/50-pid-max.conf …
192.168.11.138:22 kernel.pid_max = 4194304
192.168.11.138:22 * Applying /usr/lib/sysctl.d/99-protect-links.conf …
192.168.11.138:22 fs.protected_fifos = 1
192.168.11.138:22 fs.protected_hardlinks = 1
192.168.11.138:22 fs.protected_regular = 2
192.168.11.138:22 fs.protected_symlinks = 1
192.168.11.138:22 * Applying /etc/sysctl.d/99-sysctl.conf …
192.168.11.138:22 fs.file-max = 1048576 # sealos
192.168.11.138:22 net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.11.138:22 net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.11.138:22 net.core.somaxconn = 65535 # sealos
192.168.11.138:22 net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.11.138:22 net.ipv4.ip_forward = 1 # sealos
192.168.11.138:22 net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.11.138:22 net.ipv6.conf.all.forwarding = 1 # sealos
192.168.11.138:22 * Applying /etc/sysctl.conf …
192.168.11.138:22 fs.file-max = 1048576 # sealos
192.168.11.138:22 net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.11.138:22 net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.11.138:22 net.core.somaxconn = 65535 # sealos
192.168.11.138:22 net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.11.138:22 net.ipv4.ip_forward = 1 # sealos
192.168.11.138:22 net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.11.138:22 net.ipv6.conf.all.forwarding = 1 # sealos
192.168.11.138:22 INFO [2023-12-13 07:22:13] >> pull pause image sealos.hub:5000/pause:3.8
Image is up to date for sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
192.168.11.136:22 Image is up to date for sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
192.168.11.138:22 Image is up to date for sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
INFO [2023-12-13 07:22:14] >> init kubelet success
INFO [2023-12-13 07:22:14] >> init rootfs success
192.168.11.136:22 INFO [2023-12-13 07:22:14] >> init kubelet success
192.168.11.136:22 INFO [2023-12-13 07:22:14] >> init rootfs success
192.168.11.138:22 INFO [2023-12-13 07:22:14] >> init kubelet success
192.168.11.138:22 INFO [2023-12-13 07:22:14] >> init rootfs success
2023-12-13T07:22:14 info Executing pipeline Init in CreateProcessor.
2023-12-13T07:22:14 info start to copy kubeadm config to master0
2023-12-13T07:22:14 info start to generate cert and kubeConfig…
2023-12-13T07:22:14 info start to generator cert and copy to masters…
2023-12-13T07:22:14 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost sealos-master:sealos-master] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.168.11.135:192.168.11.135]}
2023-12-13T07:22:14 info Etcd altnames : {map[localhost:localhost sealos-master:sealos-master] map[127.0.0.1:127.0.0.1 192.168.11.135:192.168.11.135 ::1:::1]}, commonName : sealos-master
2023-12-13T07:22:17 info start to copy etc pki files to masters
2023-12-13T07:22:17 info start to copy etc pki files to masters
2023-12-13T07:22:17 info start to create kubeconfig…
2023-12-13T07:22:18 info start to copy kubeconfig files to masters
2023-12-13T07:22:18 info start to copy static files to masters
2023-12-13T07:22:18 info start to init master0…
2023-12-13T07:22:18 info domain apiserver.cluster.local:192.168.11.135 append success
W1213 07:22:18.760889 6699 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
W1213 07:22:18.760956 6699 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
[init] Using Kubernetes version: v1.25.6
[preflight] Running pre-flight checks
[WARNING HTTPProxy]: Connection to "https://192.168.11.135" uses proxy "http://192.168.3.169:7890". If that is not intended, adjust your proxy settings
[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/22" uses proxy "http://192.168.3.169:7890". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[WARNING HTTPProxyCIDR]: connection to "100.64.0.0/10" uses proxy "http://192.168.3.169:7890". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[WARNING FileExisting-socat]: socat not found in system path
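The HTTPProxyCIDR warnings above are a common cause of this exact timeout: control-plane traffic to the service/pod CIDRs gets routed through the HTTP proxy. A sketch of excluding those ranges before retrying, with the CIDRs and addresses taken from the warnings in this log (adjust if your cluster uses different ranges); the socat warning can be cleared separately with the distribution's package manager:

```shell
# Sketch: exempt cluster-internal traffic from the HTTP proxy before re-running
# the installer. Values below come from the preflight warnings in this log.
export NO_PROXY="127.0.0.1,localhost,apiserver.cluster.local,192.168.11.0/24,10.96.0.0/22,100.64.0.0/10"
export no_proxy="$NO_PROXY"   # some tools read only the lowercase variable

# On Ubuntu, the missing-socat warning can be cleared with:
#   apt-get install -y socat

echo "$NO_PROXY"
```

Whether this resolves the failure depends on whether the proxy was actually intercepting apiserver/etcd traffic; checking the kubelet and container logs (as kubeadm suggests further down) would confirm it.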
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W1213 07:22:33.973084 6699 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.11.135:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W1213 07:22:34.033612 6699 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.11.135:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
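The diagnostics kubeadm lists above, collected into one best-effort sequence for the failing master (note the flag is `--runtime-endpoint`, two hyphens; `|| true` lets the sequence continue if a tool or service is unavailable):

```shell
# Best-effort diagnostics after a wait-control-plane timeout.
# Kubelet state and recent logs:
systemctl status kubelet --no-pager || true
journalctl -xeu kubelet --no-pager | tail -n 50 || true
# Control-plane containers (crashed ones show up with `ps -a`):
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause || true
```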
2023-12-13T07:26:55 error Applied to cluster error: failed to init init master0 failed, error: exit status 1. Please clean and reinstall
Error: failed to init init master0 failed, error: exit status 1. Please clean and reinstall
========================================
Version info:
root@sealos-master:~# uname -a
Linux sealos-master 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 13:30:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
root@sealos-master:~# sealos version
CriVersionInfo:
RuntimeApiVersion: v1
RuntimeName: containerd
RuntimeVersion: v1.6.23
Version: 0.1.0
SealosVersion:
buildDate: "2023-10-09T10:07:15Z"
compiler: gc
gitCommit: 881c10cb
gitVersion: 4.3.5
goVersion: go1.20.8
platform: linux/amd64
WARNING: Failed to get kubernetes version.
Check kubernetes status or use command "sealos run" to launch kubernetes
root@sealos-master:~# sealos run
Error: cluster status is not ClusterSuccess
===========================================
Install command:
curl -sfL https://mirror.ghproxy.com/https://raw.githubusercontent.com/labring/sealos/main/scripts/cloud/install.sh -o /tmp/install.sh && bash /tmp/install.sh --zh \
--cloud-version=v5.0.0-beta3 \
--image-registry=registry.cn-shanghai.aliyuncs.com \
--proxy-prefix=https://mirror.ghproxy.com \
--master-ips=192.168.11.135 \
--node-ips=192.168.11.136,192.168.11.138 \
--ssh-password=1q12w2..