Installing Kubernetes from Binaries on Kylin Linux Server (Government Cloud)

The Kylin software repositories do not ship Kubernetes packages, so the only option is to install Kubernetes from binaries.

| Hostname | IP | OS | Kernel | Kubernetes | Container engine |
| --- | --- | --- | --- | --- | --- |
| master | 10.76.62.207 | Kylin Linux Advanced Server V10 (Tercel) | 4.19.90-23.8.v2101.ky10.aarch64 | 1.23.17 | 27.2.1 |
| node1 | 10.76.62.208 | Kylin Linux Advanced Server V10 (Tercel) | 4.19.90-23.8.v2101.ky10.aarch64 | 1.23.17 | 27.2.1 |
| node2 | 10.76.62.209 | Kylin Linux Advanced Server V10 (Tercel) | 4.19.90-23.8.v2101.ky10.aarch64 | 1.23.17 | 27.2.1 |

I. Preparing the Environment for the Binary Deployment

1. Install common packages

dnf -y install bind-utils expect rsync wget jq psmisc vim net-tools telnet device-mapper-persistent-data lvm2 git ntpdate


2. Passwordless SSH login

2.1 Set the hostname on each node and add hosts entries for name resolution

cat >> /etc/hosts <<'EOF'
10.76.62.207  master
10.76.62.208  node1
10.76.62.209  node2
EOF

2.2 Configure passwordless login from the "master" node to the other nodes

All nodes share the same key pair, so every node can log in to every other node.

cat > free_login.sh <<'EOF'
#!/bin/bash

ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa -q

export mypasswd=f0a03e06eb348

# Host list
k8s_host=(master node1 node2)

# Configure passwordless login, using expect to answer the prompts non-interactively
for i in ${k8s_host[@]}; do
  expect -c "
  spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
    expect {
      \"*yes/no*\" {send \"yes\r\"; exp_continue}
      \"*password*\" {send \"$mypasswd\r\"; exp_continue}
    }"
done

# Copy the .ssh directory to the other nodes (skipping master itself)
for i in ${k8s_host[@]}; do
  if [[ $i != "master" ]]; then
    scp -rp ~/.ssh $i:~
  fi
done
EOF
[root@master sda]# sh free_login.sh

3. Base environment tuning

3.1 Disable firewalld and SELinux on all nodes

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
systemctl disable --now firewalld

3.2 Disable the swap partition on all nodes

swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

# Confirm swap is disabled
free -h
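To see what the sed above actually does, here is the same substitution run on a throwaway sample line (a hypothetical device name, not your real /etc/fstab):

```shell
# An uncommented fstab line containing "swap" gets a '#' prepended;
# lines that are already commented are left untouched.
line='/dev/mapper/klas-swap none swap defaults 0 0'
echo "$line" | sed -r '/^[^#]*swap/s@^@#@'
# prints: #/dev/mapper/klas-swap none swap defaults 0 0
```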

3.3 Configure limits on all nodes

cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

3.4 Linux kernel tuning

cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv6.conf.all.disable_ipv6 = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

# Apply the settings
sysctl --system

3.5 Synchronize time on all nodes

The time server used here is only reachable from inside the government-cloud intranet; adjust it to your actual environment.

# Set the time zone and sync the clock
ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
ntpdate 10.134.40.6

# Sync periodically via cron (append the entry to root's crontab)
( crontab -l 2>/dev/null; echo '*/5 * * * * /usr/sbin/ntpdate 10.134.40.6' ) | crontab -

4. Implement kube-proxy load balancing with ipvsadm (all nodes)

4.1 Install ipvsadm and related tools

dnf -y install ipvsadm ipset sysstat conntrack libseccomp

4.2 Load the required kernel modules

# Load the modules automatically at boot
cat > /etc/modules-load.d/ipvs.conf << 'EOF'
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

# Load the modules immediately
while read -r module; do modprobe $module; done < /etc/modules-load.d/ipvs.conf

# Enable the "systemd-modules-load" service at boot
systemctl enable --now systemd-modules-load && systemctl status systemd-modules-load

# Verify the modules loaded successfully
lsmod | grep --color=auto -e nf_conntrack -e ip_vs


II. Installing the Base Components

1. Deploy Docker from binaries on all nodes

# Download and install docker
cd /mnt/sda
wget https://download.docker.com/linux/static/stable/aarch64/docker-27.2.1.tgz
tar xf docker-27.2.1.tgz --strip-components=1 -C /usr/bin --wildcards 'docker/*'
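For reference, --strip-components=1 drops the leading docker/ directory from each archive member so the binaries land directly in /usr/bin. A throwaway demonstration of the flag (using /tmp paths, not the real archive):

```shell
# Build a tiny archive with the same docker/<file> layout, then extract it flat
mkdir -p /tmp/tar-demo/docker /tmp/tar-demo/out
echo demo > /tmp/tar-demo/docker/dockerd
tar -C /tmp/tar-demo -cf /tmp/tar-demo/a.tar docker
tar -xf /tmp/tar-demo/a.tar --strip-components=1 -C /tmp/tar-demo/out
ls /tmp/tar-demo/out    # dockerd appears directly, without the docker/ prefix
```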

# Manage docker with systemd
cat > /usr/lib/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com,https://blog.ayou.ink
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

# Set the CgroupDriver to systemd
mkdir -p /etc/docker/
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-opts": {"max-size": "200m"}
}
EOF

# Start docker and enable it at boot
systemctl enable --now docker

# Confirm the CgroupDriver is systemd
[root@node1 sda]# docker info | grep "Cgroup Driver"
 Cgroup Driver: systemd
[root@node1 sda]#

2. Download and install etcd and the Kubernetes components

# Download the components
wget https://cdn.dl.k8s.io/release/v1.23.17/kubernetes-server-linux-arm64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-arm64.tar.gz

# Extract the etcd binaries
tar -xf etcd-v3.5.6-linux-arm64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.6-linux-arm64/etcd{,ctl}

# Extract the Kubernetes binaries
tar -xf kubernetes-server-linux-arm64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# Copy the relevant components to the node machines
Nodes='node1 node2'
for NODE in $Nodes; do  scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
for NODE in $Nodes; do  scp /usr/local/bin/etcd{,ctl} $NODE:/usr/local/bin/ ; done

# Create the working directory on all nodes
mkdir -p /opt/cni/bin

git clone https://gh.vpnproxy.cn/https://github.com/dotbalo/k8s-ha-install.git
cd k8s-ha-install/
git checkout manual-installation-v1.23.x

III. Generating the Kubernetes Certificates

1. Download the certificate tooling on the master node

wget https://gh.vpnproxy.cn/https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssl_1.6.5_linux_arm64 -O /usr/local/bin/cfssl
wget https://gh.vpnproxy.cn/https://github.com/cloudflare/cfssl/releases/download/v1.6.5/cfssljson_1.6.5_linux_arm64 -O /usr/local/bin/cfssljson

chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

# Create the etcd certificate directory on all nodes
mkdir /etc/etcd/ssl -p

# Create the kubernetes directories on all nodes
mkdir -p /etc/kubernetes/pki

2. Generate the etcd certificates on the master node

The machines allocated to me on the government cloud are limited, so only etcd is deployed in high availability here.

When generating the etcd certificates, adjust the hostname field to your actual environment.

# Generate the etcd CA certificate and its key
cd /root/k8s-ha-install/pki
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca


# Issue the etcd certificate
cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,master,node1,node2,10.76.62.207,10.76.62.208,10.76.62.209 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd


# Copy the certificates to the other nodes
Nodes='node1 node2'
cd /etc/etcd/ssl/
for NODE in $Nodes; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done

3. Generate the apiserver certificate on the master node

Note: 10.96.0.1 in the hostname list below is the first address of the Kubernetes service CIDR (10.96.0.0/12). If you change the service CIDR, change 10.96.0.1 accordingly.
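The 10.96.0.1 entry is simply the first usable address of the service CIDR. A small shell sketch (assuming the CIDR base ends in .0, as here) shows how it is derived:

```shell
# First service IP = network base address + 1 on the last octet
CIDR=10.96.0.0/12
BASE=${CIDR%/*}                             # strip the prefix length -> 10.96.0.0
FIRST="${BASE%.*}.$(( ${BASE##*.} + 1 ))"   # bump the last octet     -> 10.96.0.1
echo "$FIRST"
```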

# Generate the Kubernetes CA certificate
cd /root/k8s-ha-install/pki
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca


# Generate the apiserver certificate
cfssl gencert   \
   -ca=/etc/kubernetes/pki/ca.pem   \
   -ca-key=/etc/kubernetes/pki/ca-key.pem   \
   -config=ca-config.json   \
   -hostname=10.96.0.1,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,10.76.62.207,10.76.62.208,10.76.62.209  \
   -profile=kubernetes  \
   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver


# Generate the apiserver aggregation-layer (front-proxy) certificates
cfssl gencert   -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca 
cfssl gencert   -ca=/etc/kubernetes/pki/front-proxy-ca.pem   -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   -config=ca-config.json   -profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

4. Generate the controller-manager certificate on the master node

Adjust the --server parameter to your actual environment when creating the cluster entry.

# Generate the controller-manager certificate
cd /root/k8s-ha-install/pki
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager


# set-cluster: create the cluster entry
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://10.76.62.207:6443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig


# set-credentials: create the user entry
kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig


# set-context: create the context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig


# use-context: make this context the default
kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

5. Generate the scheduler certificate on the master node

Adjust the --server parameter to your actual environment when creating the cluster entry.

# Generate the scheduler certificate
cd /root/k8s-ha-install/pki
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler


# set-cluster: create the cluster entry
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://10.76.62.207:6443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig


# set-credentials: create the user entry
kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig


# set-context: create the context entry
kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig


# use-context: make this context the default
kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

6. Generate the admin certificate

# Generate the admin certificate
cd /root/k8s-ha-install/pki
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin


# set-cluster: create the cluster entry
kubectl config set-cluster kubernetes \
   --certificate-authority=/etc/kubernetes/pki/ca.pem \
   --embed-certs=true \
   --server=https://10.76.62.207:6443 \
   --kubeconfig=/etc/kubernetes/admin.kubeconfig


# set-credentials: create the user entry
kubectl config set-credentials kubernetes-admin \
   --client-certificate=/etc/kubernetes/pki/admin.pem \
   --client-key=/etc/kubernetes/pki/admin-key.pem \
   --embed-certs=true \
   --kubeconfig=/etc/kubernetes/admin.kubeconfig


# set-context: create the context entry
kubectl config set-context kubernetes-admin@kubernetes \
   --cluster=kubernetes \
   --user=kubernetes-admin \
   --kubeconfig=/etc/kubernetes/admin.kubeconfig


# use-context: make this context the default
kubectl config use-context kubernetes-admin@kubernetes \
   --kubeconfig=/etc/kubernetes/admin.kubeconfig

7. Create the ServiceAccount key

# ServiceAccounts are one of Kubernetes' authentication mechanisms; creating a ServiceAccount also creates a bound secret, which carries a randomly generated token
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
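As an optional sanity check, the public key re-derived from sa.key should be byte-identical to the saved sa.pub:

```shell
# diff exits 0 (printing nothing with -q) when the keypair matches
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout 2>/dev/null \
  | diff -q - /etc/kubernetes/pki/sa.pub && echo "service-account keypair OK"
```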

IV. Configuring the Highly Available etcd Cluster

1. Create the configuration file

The configuration file below must be created on master, node1 and node2.

Adjust the name, listen-peer-urls, listen-client-urls, initial-advertise-peer-urls, advertise-client-urls and initial-cluster fields to match each node (the example shows the master node).

cat > /etc/etcd/etcd.config.yml <<'EOF'
name: 'master'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.76.62.207:2380'
listen-client-urls: 'https://10.76.62.207:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.76.62.207:2380'
advertise-client-urls: 'https://10.76.62.207:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'master=https://10.76.62.207:2380,node1=https://10.76.62.208:2380,node2=https://10.76.62.209:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

2. Start the service on all nodes

# Create the systemd unit
cat > /usr/lib/systemd/system/etcd.service <<'EOF'
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF


# Start the service
mkdir -p /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
systemctl status etcd

# Check the etcd status
etcdctl --endpoints="10.76.62.207:2379,10.76.62.208:2379,10.76.62.209:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table

Check the etcd cluster status: as the output shows, node 207 is currently the leader.

V. Configuring the Kubernetes Components

1. Start the apiserver service on the master node

Adjust the --advertise-address and --etcd-servers fields to your actual environment.

# Create the working directories on the master node
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes


# Create the unit file on the master node
cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=10.76.62.207 \
      --service-cluster-ip-range=10.96.0.0/12  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://10.76.62.207:2379,https://10.76.62.208:2379,https://10.76.62.209:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF


# Start the service
systemctl daemon-reload && systemctl enable --now kube-apiserver && sleep 2 && systemctl status kube-apiserver

The apiserver service started successfully.

2. Start the controller-manager service on the master node

# Create the unit file on the master node
cat > /usr/lib/systemd/system/kube-controller-manager.service << 'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
      
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

# Start the service and check its status
systemctl daemon-reload && systemctl enable --now kube-controller-manager && sleep 2 && systemctl status kube-controller-manager

The controller-manager service started successfully.

3. Start the scheduler service on the master node

# Create the unit file on the master node
cat > /usr/lib/systemd/system/kube-scheduler.service <<'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF


# Start the service and check its status
systemctl daemon-reload && systemctl enable --now kube-scheduler && sleep 2 && systemctl status kube-scheduler

The scheduler service started successfully.

VI. Bootstrapping: Automatic Certificate Issuance

1. Create the bootstrap-kubelet.kubeconfig file on the master node

Adjust the --server field to your actual environment. The --token value is the concatenation of the token-id and token-secret fields defined in bootstrap.secret.yaml; you may change it, but the two places must stay identical. Make absolutely sure they match!
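A bootstrap token always has the shape <token-id>.<token-secret>: six lowercase alphanumeric characters, a dot, then sixteen more. A quick format check before using a self-chosen token:

```shell
# Validate the bootstrap token format: [a-z0-9]{6}.[a-z0-9]{16}
TOKEN='c8ad9c.2e4d610cf3e7426e'
echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' \
  && echo "token format OK" || echo "invalid token format"
```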

cd /root/k8s-ha-install/bootstrap

# set-cluster: create the cluster entry
kubectl config set-cluster kubernetes \
   --certificate-authority=/etc/kubernetes/pki/ca.pem \
   --embed-certs=true \
   --server=https://10.76.62.207:6443 \
   --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig


# set-credentials: create the user entry
kubectl config set-credentials tls-bootstrap-token-user \
   --token=c8ad9c.2e4d610cf3e7426e \
   --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig


# set-context: create the context entry
kubectl config set-context tls-bootstrap-token-user@kubernetes \
   --cluster=kubernetes \
   --user=tls-bootstrap-token-user \
   --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig


# use-context: make this context the default
kubectl config use-context tls-bootstrap-token-user@kubernetes \
   --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

2. Copy the admin kubeconfig on the master node

mkdir -p $HOME/.kube ; cp /etc/kubernetes/admin.kubeconfig $HOME/.kube/config

3. Create the bootstrap resources

kubectl create -f bootstrap.secret.yaml

VII. Deploying the Node Components

1. Copy the certificates

The node machines obtain their kubelet certificates through the automatic issuance mechanism.

cd /etc/kubernetes/

for NODE in node1 node2; do ssh $NODE "mkdir -p /etc/kubernetes/pki /etc/etcd/ssl"; for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/; done; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done

2. Configure the kubelet

# Create the working directories on all nodes
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/


# Configure the kubelet service on all nodes
cat >  /usr/lib/systemd/system/kubelet.service <<'EOF'
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF


# Configure the kubelet service drop-in on all nodes
cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf <<'EOF'
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=swr.cn-north-4.myhuaweicloud.com/iabsdocker_containers/pause:3.6"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
EOF


# Create the kubelet configuration file on all nodes
cat > /etc/kubernetes/kubelet-conf.yml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

# Start the kubelet on all nodes
systemctl daemon-reload
systemctl enable --now kubelet
systemctl status kubelet


# View node information from the master node
kubectl get nodes

The node information is now visible.

3. Configure kube-proxy

# Generate the "/etc/kubernetes/kube-proxy.kubeconfig" file on the master node
cd /root/k8s-ha-install

kubectl -n kube-system create serviceaccount kube-proxy

kubectl create clusterrolebinding system:kube-proxy \
   --clusterrole system:node-proxier \
   --serviceaccount kube-system:kube-proxy

SECRET=$(kubectl -n kube-system get sa/kube-proxy \
    --output=jsonpath='{.secrets[0].name}')

JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)

PKI_DIR=/etc/kubernetes/pki

K8S_DIR=/etc/kubernetes


# set-cluster: create the cluster entry
kubectl config set-cluster kubernetes \
   --certificate-authority=/etc/kubernetes/pki/ca.pem \
   --embed-certs=true \
   --server=https://10.76.62.207:6443 \
   --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig


# set-credentials: create the user entry
kubectl config set-credentials kubernetes \
   --token=${JWT_TOKEN} \
   --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig


# set-context: create the context entry
kubectl config set-context kubernetes \
   --cluster=kubernetes \
   --user=kubernetes \
   --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig


# use-context: make this context the default
kubectl config use-context kubernetes \
   --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig


# From the master node, send the kube-proxy.kubeconfig file to the other nodes
for NODE in master node1 node2; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done


# Create the kube-proxy.conf file on all nodes (quote EOF so the heredoc is written verbatim)
cat > /etc/kubernetes/kube-proxy.conf << 'EOF'
KUBE_PROXY_OPTS="--logtostderr=false \
	--v=2 \
	--log-dir=/var/log/kubernetes/ \
	--config=/etc/kubernetes/kube-proxy-config.yml"
EOF


# Remember to change the hostnameOverride field on each node
cat > /etc/kubernetes/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
 kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
hostnameOverride: master
clusterCIDR: 172.30.110.0/24
EOF
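Rather than editing hostnameOverride by hand on every machine, the per-node files can be rendered with sed and pushed out; a sketch (file paths assumed as above):

```shell
# Render a per-node copy of kube-proxy-config.yml and copy it over
for NODE in node1 node2; do
  sed "s/^hostnameOverride: .*/hostnameOverride: ${NODE}/" \
      /etc/kubernetes/kube-proxy-config.yml > /tmp/kube-proxy-config-${NODE}.yml
  scp /tmp/kube-proxy-config-${NODE}.yml ${NODE}:/etc/kubernetes/kube-proxy-config.yml
done
```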


# Manage kube-proxy with systemd on all nodes (quote EOF so $KUBE_PROXY_OPTS is not expanded by the shell)
cat > /usr/lib/systemd/system/kube-proxy.service << 'EOF'
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF


# Start kube-proxy on all nodes
systemctl daemon-reload
systemctl enable --now kube-proxy
systemctl status kube-proxy


Important: if you change the cluster's Pod CIDR, update the clusterCIDR parameter in kube-proxy-config.yml to match; in this setup the Pod CIDR is "172.30.110.0/24".

kube-proxy started successfully.

VIII. Deploying the Network Plugin

1. Deploy the calico network plugin

Note: none of the steps below may be skipped, or the deployment will fail. The etcd-key, etcd-cert and etcd-ca values must be regenerated from your own cluster's certificate files; values copied from someone else's cluster will not work.

- etcd certificate path: /etc/kubernetes/pki/etcd/
- base64-encode a file's content: cat <file> | base64 -w 0
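The -w 0 flag disables line wrapping, so the output is a single line suitable for pasting into a YAML field. For example:

```shell
printf 'demo' | base64 -w 0               # prints: ZGVtbw==
printf 'demo' | base64 -w 0 | base64 -d   # round-trips back to: demo
```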

cd /root/k8s-ha-install/calico/

# Edit the following places in calico-etcd.yaml (the endpoints must list all three etcd members)
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://10.76.62.207:2379,https://10.76.62.208:2379,https://10.76.62.209:2379"#g' calico-etcd.yaml

ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`

sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

# Change this to your own pod CIDR
POD_SUBNET="172.30.110.0/24"

# The step below uncomments CALICO_IPV4POOL_CIDR in calico-etcd.yaml and replaces the 192.168.x.x/16 placeholder with your own Pod CIDR. Write a literal network address here: substituting anything that is not a plain CIDR caused errors in my testing.
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "POD_CIDR"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

Finally, apply the manifest to create the network:
kubectl apply -f calico-etcd.yaml

2. Verify that calico deployed successfully

IX. Deploying Add-on Components

1. Deploy CoreDNS
