openEuler 22.03: installing Kubernetes 1.29.4 with kubeadm (CRI: containerd, CNI: flannel)
- OS: openEuler 22.03 (LTS-SP3)
- Container runtime: containerd://1.7.16
# Cluster information after the deployment described below
[root@openeuler-1 ~]# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
openeuler-1 Ready control-plane 11m v1.29.4 172.8.8.11 <none> openEuler 22.03 (LTS-SP3) 5.10.0-182.0.0.95.oe2203sp3.x86_64 containerd://1.7.16
Install the container runtime
Install containerd and enable it to start at boot:
https://github.com/containerd/containerd/blob/main/docs/getting-started.md
wget https://github.com/containerd/containerd/releases/download/v1.7.16/containerd-1.7.16-linux-amd64.tar.gz
tar Cxzvf /usr/local containerd-1.7.16-linux-amd64.tar.gz
[root@cloud ~]# cat /usr/lib/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this property.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
[root@cloud ~]# systemctl daemon-reload
[root@cloud ~]# systemctl enable --now containerd
Install runc
wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc
Install the CNI plugins
wget https://github.com/containernetworking/plugins/releases/download/v1.4.1/cni-plugins-linux-amd64-v1.4.1.tgz
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.4.1.tgz
# Create the /etc/crictl.yaml file
[root@openeuler-1 ~]# cat /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: false
pull-image-on-create: false
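The file above can be written in one step with a heredoc (a config fragment; the socket path is containerd's default):

```shell
# Point crictl at containerd's default socket; run on the node as a sudo-capable user
cat <<'EOF' | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: false
pull-image-on-create: false
EOF
```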
Generate the containerd configuration file
[root@cloud ~]# mkdir /etc/containerd
[root@cloud ~]# containerd config default > /etc/containerd/config.toml
Configure the systemd cgroup driver
To use the systemd cgroup driver with runc, set the following in /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
Change the sandbox_image registry address:
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
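Both edits to the generated config.toml can be scripted with sed; this is a sketch against the default config produced above, so double-check the resulting file before restarting containerd:

```shell
# Flip the runc cgroup driver from the default "false" to systemd
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Replace whatever sandbox image the default config ships with
sudo sed -i 's|sandbox_image = "[^"]*"|sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"|' /etc/containerd/config.toml
```

A quick `grep -n 'SystemdCgroup\|sandbox_image' /etc/containerd/config.toml` confirms both values took effect.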
Restart containerd
sudo systemctl restart containerd
Pre-installation configuration
Edit the hosts file and add this host's address:
[root@cloud ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.2 cloud
Load br_netfilter and enable IPv4 packet forwarding
modprobe br_netfilter
# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
vim /etc/sysctl.conf # make sure the value below is 1 here as well, otherwise it overrides the setting above back to 0
net.ipv4.ip_forward=1
# Apply the sysctl parameters without rebooting
sudo sysctl --system
# Verify
sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables
Set SELinux to permissive mode
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Disable swap
You must disable swap unless the kubelet is explicitly configured to use it. For example, sudo swapoff -a disables swap temporarily; to make the change survive reboots, disable swap in /etc/fstab, systemd.swap, or wherever your system configures it.
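On a typical install where swap comes from /etc/fstab, both steps can be sketched as follows (the sed expression is an assumption about your fstab layout; check it against your own entries first):

```shell
# Turn swap off immediately (lasts until the next reboot)
sudo swapoff -a
# Comment out swap entries in /etc/fstab so swap stays off after reboot
sudo sed -ri 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' /etc/fstab
```

`swapon --show` printing nothing confirms swap is off.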
Install Kubernetes
Configure the yum repository and install kubelet, kubeadm, and kubectl
If downloads are slow, you can switch to the Aliyun mirror.
# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/repodata/repomd.xml.key
EOF
# Install the latest kubelet, kubeadm, and kubectl from the v1.29 repo
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Start now and enable at boot
sudo systemctl enable --now kubelet
# Check the version of each component
kubectl version --client
kubelet --version
kubeadm version
Initialize the cluster with kubeadm
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address 172.8.8.11 --image-repository registry.aliyuncs.com/google_containers
- pod-network-cidr: the planned pod CIDR range
- apiserver-advertise-address: this server's IP address
- image-repository: use the Aliyun mirror for faster image pulls
The output looks like this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.8.8.11:6443 --token wzz40f.q6mvvg00u4th4wsm \
--discovery-token-ca-cert-hash sha256:73d5a3305e67ec46d67ff8e4f4e47ba14c5a948f94dca05fcb024c8361c88b61
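The bootstrap token printed above expires after 24 hours (kubeadm's default TTL). If you add worker nodes later, generate a fresh join command on the control-plane node:

```shell
# Creates a new bootstrap token and prints the complete "kubeadm join ..." command
kubeadm token create --print-join-command
```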
Install the flannel CNI plugin
At this point the coredns pods are stuck in a non-Ready state; a CNI plugin must be installed.
Upstream docs: https://github.com/flannel-io/flannel#deploying-flannel-manually
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
If you used a different pod CIDR at init time, change the Network field to match (with the default 10.244.0.0/16 no change is needed):
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
Apply it:
kubectl apply -f kube-flannel.yml
Result:
[root@openeuler-1 ~]# kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-vpdts 1/1 Running 0 6m3s
kube-system coredns-857d9ff4c9-45dj5 1/1 Running 0 7m22s
kube-system coredns-857d9ff4c9-dph2s 1/1 Running 0 7m22s
kube-system etcd-openeuler-1 1/1 Running 0 7m38s
kube-system kube-apiserver-openeuler-1 1/1 Running 0 7m38s
kube-system kube-controller-manager-openeuler-1 1/1 Running 0 7m38s
kube-system kube-proxy-f6qxk 1/1 Running 0 7m23s
kube-system kube-scheduler-openeuler-1 1/1 Running 0 7m38s
Pod and Service test
Remove the control-plane taint so pods can be scheduled onto this node:
kubectl taint nodes openeuler-1 node-role.kubernetes.io/control-plane:NoSchedule-
Pod YAML:
[root@cloud yaml]# cat simple-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
app.kubernetes.io/name: MyApp
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
Service YAML:
[root@cloud yaml]# cat my-service.yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app.kubernetes.io/name: MyApp
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
Apply both manifests and check the results:
kubectl apply -f simple-pod.yaml
kubectl apply -f my-service.yaml
[root@cloud yaml]# kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 43m 10.10.0.4 cloud <none> <none>
[root@cloud yaml]# kubectl get svc -owide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 82m <none>
my-service ClusterIP 10.108.55.244 <none> 80/TCP 35m app.kubernetes.io/name=MyApp
# curl the pod IP:
[root@cloud yaml]# curl 10.10.0.4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</body>
</html>
# curl the service IP:
[root@cloud yaml]# curl 10.108.55.244
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</body>
</html>
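Curling from the node exercises kube-proxy; to also check in-cluster DNS resolution of the Service, a throwaway pod works (busybox:1.36 is just an assumed small image that ships nslookup; any equivalent image will do):

```shell
# Run a one-off pod, resolve the Service name, then clean up automatically
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup my-service
```

A successful lookup returns the Service's ClusterIP (10.108.55.244 in the output above).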
Appendix
Troubleshooting
systemctl status kubelet
journalctl -xeu kubelet
Check the service CIDR
kubectl cluster-info dump | grep service-cluster-ip-range