Software installation: no config file after installing kubeadm/kubelet/kubectl via apt or snap on Ubuntu 22.04 LTS
I just installed Kubernetes on my Ubuntu 22.04 LTS desktop, following these instructions: kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
When I run, for example, sudo kubectl cluster-info
I get this result:
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
So I searched online and found the command kubectl config view
which gives this result:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
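As far as I understand, that empty config explains the first error: with no clusters or users configured, kubectl falls back to localhost:8080, where nothing is listening. Once kubeadm init succeeds, the documented step is to copy the generated admin kubeconfig into place. A sketch of that step (paths are the kubeadm defaults; the copy is guarded so it does nothing until admin.conf actually exists):

```shell
# Standard post-`kubeadm init` step: point kubectl at the new cluster so it
# stops defaulting to localhost:8080. Guarded: this is a no-op until the
# control plane has been initialized and /etc/kubernetes/admin.conf exists.
if [ -f /etc/kubernetes/admin.conf ]; then
  mkdir -p "$HOME/.kube"
  sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
  sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
fi
```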
So I tried sudo kubeadm init
but I got:
[init] Using Kubernetes version: v1.25.4
[preflight] Running pre-flight checks
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING SystemVerification]: missing optional cgroups: blkio
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: E1207 13:27:13.819798 50381 remote_runtime.go:948] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-12-07T13:27:13-08:00" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
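From what I have read, the "unknown service runtime.v1alpha2.RuntimeService" message usually means containerd is running but its CRI plugin is disabled: the stock config.toml shipped by the Ubuntu/Docker containerd packages lists "cri" under disabled_plugins. A quick check I found suggested (default config path assumed; harmless if the file is missing):

```shell
# Look for the CRI plugin being disabled in containerd's config.
# `|| true` keeps this harmless if the file does not exist.
grep -n 'disabled_plugins' /etc/containerd/config.toml 2>/dev/null || true

# Commonly suggested fix if "cri" shows up there: regenerate the default
# config (which leaves the CRI plugin enabled) and restart containerd:
#   containerd config default | sudo tee /etc/containerd/config.toml
#   sudo systemctl restart containerd
```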
I also tried sudo kubeadm config images pull --v=5
and got the following error:
I1207 17:14:06.585276 62896 initconfiguration.go:116] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I1207 17:14:06.585610 62896 interface.go:432] Looking for default routes with IPv4 addresses
I1207 17:14:06.585617 62896 interface.go:437] Default route transits interface "wlp2s0"
I1207 17:14:06.585904 62896 interface.go:209] Interface wlp2s0 is up
I1207 17:14:06.585979 62896 interface.go:257] Interface "wlp2s0" has 2 addresses :[192.168.1.2/24 fe80::7a46:dd7e:145:d5e2/64].
I1207 17:14:06.586002 62896 interface.go:224] Checking addr 192.168.1.2/24.
I1207 17:14:06.586008 62896 interface.go:231] IP found 192.168.1.2
I1207 17:14:06.586032 62896 interface.go:263] Found valid IPv4 address 192.168.1.2 for interface "wlp2s0".
I1207 17:14:06.586037 62896 interface.go:443] Found active IP 192.168.1.2
I1207 17:14:06.586088 62896 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I1207 17:14:06.589640 62896 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
exit status 1
output: E1207 17:14:07.075957 62948 remote_image.go:222] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService" image="registry.k8s.io/kube-apiserver:v1.25.4"
time="2022-12-07T17:14:07-08:00" level=fatal msg="pulling image: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService"
, error
k8s.io/kubernetes/cmd/kubeadm/app/util/runtime.(*CRIRuntime).PullImage
cmd/kubeadm/app/util/runtime/runtime.go:138
k8s.io/kubernetes/cmd/kubeadm/app/cmd.PullControlPlaneImages
cmd/kubeadm/app/cmd/config.go:326
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdConfigImagesPull.func1
cmd/kubeadm/app/cmd/config.go:312
github.com/spf13/cobra.(*Command).execute
vendor/github.com/spf13/cobra/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
vendor/github.com/spf13/cobra/command.go:974
github.com/spf13/cobra.(*Command).Execute
vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
cmd/kubeadm/app/kubeadm.go:50
main.main
cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:250
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1594
failed to pull image "registry.k8s.io/kube-apiserver:v1.25.4"
k8s.io/kubernetes/cmd/kubeadm/app/cmd.PullControlPlaneImages
cmd/kubeadm/app/cmd/config.go:327
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdConfigImagesPull.func1
cmd/kubeadm/app/cmd/config.go:312
github.com/spf13/cobra.(*Command).execute
vendor/github.com/spf13/cobra/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
vendor/github.com/spf13/cobra/command.go:974
github.com/spf13/cobra.(*Command).Execute
vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
cmd/kubeadm/app/kubeadm.go:50
main.main
cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:250
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1594
Below are some other commands I have tried:
service kubelet status:
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor prese>
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Thu 2022-12-08>
Docs: https://kubernetes.io/docs/home/
Process: 9088 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_>
Main PID: 9088 (code=exited, status=1/FAILURE)
CPU: 88ms
journalctl -u kubelet:
Dec 06 22:14:42 a systemd[1]: Started kubelet: The Kubernetes Node Agent.
Dec 06 22:14:43 a kubelet[85576]: I1206 22:14:43.254559 85576 server.go:413] >
Dec 06 22:14:43 a kubelet[85576]: I1206 22:14:43.254699 85576 server.go:415] >
Dec 06 22:14:43 a kubelet[85576]: I1206 22:14:43.255385 85576 server.go:576] >
Dec 06 22:14:43 a kubelet[85576]: I1206 22:14:43.295733 85576 server.go:464] >
Dec 06 22:14:43 a kubelet[85576]: I1206 22:14:43.295813 85576 server.go:660] >
Dec 06 22:14:43 a kubelet[85576]: E1206 22:14:43.296299 85576 run.go:74] "com>
Dec 06 22:14:43 a systemd[1]: kubelet.service: Current command vanished from th>
Dec 06 22:14:43 a systemd[1]: kubelet.service: Main process exited, code=exited>
Dec 06 22:14:43 a systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 06 22:14:43 a systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Dec 06 22:14:43 a systemd[1]: Started kubelet: The Kubernetes Node Agent.
Dec 06 22:14:43 a kubelet[85623]: E1206 22:14:43.688813 85623 run.go:74] "com>
Dec 06 22:14:43 a systemd[1]: kubelet.service: Main process exited, code=exited>
Dec 06 22:14:43 a systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 06 22:14:53 a systemd[1]: kubelet.service: Scheduled restart job, restart c>
Dec 06 22:14:53 a systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Dec 06 22:14:53 a systemd[1]: Started kubelet: The Kubernetes Node Agent.
Dec 06 22:14:53 a kubelet[85707]: E1206 22:14:53.884675 85707 run.go:74] "com>
Dec 06 22:14:53 a systemd[1]: kubelet.service: Main process exited, code=exited>
Dec 06 22:14:53 a systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 06 22:15:04 a systemd[1]: kubelet.service: Scheduled restart job, restart c>
Dec 06 22:15:04 a systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
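The journal lines above are truncated at the terminal width (the trailing >), which hides the actual failure reason in the run.go:74 lines. Disabling the pager shows them in full:

```shell
# Show the kubelet's recent log lines untruncated (no pager, full lines).
journalctl -u kubelet --no-pager -l 2>/dev/null | tail -n 30
```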
which kubelet:
/usr/bin/kubelet
Inside the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
file (opened with sudo vim) there was a bare ExecStart= line,
which I updated to ExecStart=/usr/bin/kubelet
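(For reference — I later realized the bare ExecStart= may be intentional: in a systemd drop-in, an empty ExecStart= clears the base unit's command so the next line can set the real one. The stock file shipped by the kubeadm packages looks approximately like this; exact contents vary by version:)

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# (kubeadm's default, approximate -- depends on the kubeadm version)
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```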
Moreover, I ran sudo swapoff -a at the beginning (nothing changed) and again at the end of this process. After all of the above, I now get a slightly different error when I run sudo kubeadm init:
[init] Using Kubernetes version: v1.25.4
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: blkio
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: E1208 09:13:01.458175 10384 remote_runtime.go:948] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-12-08T09:13:01-08:00" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
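One note on the swap step: sudo swapoff -a only lasts until the next reboot, so the Swap warning disappearing from this second run suggests it worked but will come back. Assuming swap is configured in /etc/fstab, the usual permanent fix is to comment out its entry; a read-only preview of that change (no -i, so nothing is modified):

```shell
# Preview commenting out swap entries in /etc/fstab (read-only; prints the
# would-be result). Apply for real with:
#   sudo sed -i.bak '/\bswap\b/ s/^/#/' /etc/fstab
sed '/\bswap\b/ s/^/#/' /etc/fstab 2>/dev/null || true
```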