Enterprise Kubernetes (k8s) Cluster Deployment



Binary packages

Note: deploying a Kubernetes cluster from binary packages is recommended. Manual deployment is more work, but you learn a lot about how the components fit together, which helps with later maintenance.

Environment

VMware virtual machines are fine; the host machine needs at least 8 GB of RAM.

• The servers need outbound internet access in order to pull images.

Single-master server plan (note: adjust the IP addresses to your own environment):

Role         IP               Components
k8s-master   192.168.3.110    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1    192.168.3.112    kubelet, kube-proxy, docker, etcd
k8s-node2    192.168.3.113    kubelet, kube-proxy, docker, etcd

1.3 Operating system initialization


# For better security, production environments should not simply disable the firewall.
# Instead, whitelist the relevant subnets (10.0.0.0/24, 10.244.0.0/16, 192.168.3.0/24):
# firewall-cmd --add-source=10.0.0.0/24 --zone=trusted --permanent
# firewall-cmd --add-source=192.168.3.0/24 --zone=trusted --permanent
# firewall-cmd --add-source=10.244.0.0/16 --zone=trusted --permanent
# firewall-cmd --reload

For easier debugging you can disable the firewall for now:

systemctl stop firewalld
systemctl disable firewalld


# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent

# Set the hostname according to the plan

hostnamectl set-hostname <hostname>
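For example, following the server plan above (run the matching command on each machine; the hostnames are the ones used in the hosts file below):

hostnamectl set-hostname k8s-master1   # on 192.168.3.110
hostnamectl set-hostname k8s-node1     # on 192.168.3.112
hostnamectl set-hostname k8s-node2     # on 192.168.3.113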

# Add hosts entries on the master

cat >> /etc/hosts << EOF 

192.168.3.110 k8s-master1 

192.168.3.112 k8s-node1 

192.168.3.113 k8s-node2 

EOF 



# Pass bridged IPv4 traffic to the iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF 

net.bridge.bridge-nf-call-ip6tables = 1 

net.bridge.bridge-nf-call-iptables = 1 

EOF 

sysctl --system  # apply
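If the two net.bridge.* keys report "No such file or directory", the br_netfilter kernel module is probably not loaded yet (an assumption about your distribution; many minimal installs do not load it by default). A small sketch to load it now and on every boot:

modprobe br_netfilter
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
sysctl --system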



# Time synchronization (keeping the clocks in sync is important)
yum install ntpdate -y
ntpdate time.windows.com


2. Etcd cluster (the Kubernetes datastore)

Here we use 3 machines to form the cluster, which tolerates the failure of 1 member. You can also use 5 machines.

etcd1: 192.168.3.110  etcd2: 192.168.3.112  etcd3: 192.168.3.113

2.1 Generate the required certificates with cfssl

cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to use than openssl.

Run the following on any one server; here we use the master node.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl

mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

2.2 Generate the etcd certificates

1. Self-signed certificate authority (CA)

Create the working directories:

mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd

Self-sign the CA:

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF

{

    "CN": "etcd CA",

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

        {

            "C": "CN",

            "L": "XiAn",

            "ST": "XiAn"

        }

    ]

}

EOF

Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

This produces ca.pem and ca-key.pem.

2. Sign the etcd server (HTTPS) certificate with the self-signed CA

Create the certificate signing request file:

cat > server-csr.json << EOF

{

    "CN": "etcd",

    "hosts": [

    "192.168.3.110",

    "192.168.3.112",

    "192.168.3.113"

    ],

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

        {

            "C": "CN",

            "L": "XiAn",

            "ST": "XiAn"

        }

    ]

}

EOF

Note: the hosts field above must contain the internal communication IP of every etcd node, without exception. To make future expansion easier you can add a few spare IPs.

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare etcd

This produces etcd.pem and etcd-key.pem.

2.3 Download the etcd v3.5 binaries from GitHub

https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz

2.4 Deploy the etcd cluster

Perform the following on node 1, then copy the files to the other cluster machines.

1. Create the working directory and unpack the binaries

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.5.1-linux-amd64.tar.gz
mv etcd-v3.5.1-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

2. Create the etcd configuration file

cat > /opt/etcd/cfg/etcd.conf << EOF

#[Member]

ETCD_NAME="etcd-1"

# The data directory can be any suitable location
ETCD_DATA_DIR="/etcd-data/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.3.110:2380"

ETCD_LISTEN_CLIENT_URLS="https://192.168.3.110:2379"



#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.3.110:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.3.110:2379"

ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.3.110:2380,etcd-2=https://192.168.3.112:2380,etcd-3=https://192.168.3.113:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

EOF

3. Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target



[Service]

Type=notify

EnvironmentFile=/opt/etcd/cfg/etcd.conf

ExecStart=/opt/etcd/bin/etcd \\
--cert-file=/opt/etcd/ssl/etcd.pem \\
--key-file=/opt/etcd/ssl/etcd-key.pem \\
--peer-cert-file=/opt/etcd/ssl/etcd.pem \\
--peer-key-file=/opt/etcd/ssl/etcd-key.pem \\
--trusted-ca-file=/opt/etcd/ssl/ca.pem \\
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \\
--logger=zap

Restart=on-failure

LimitNOFILE=65536



[Install]

WantedBy=multi-user.target

EOF

4. Copy the certificates generated earlier

Copy the certificates to the paths referenced in the configuration:

# cp ~/TLS/etcd/*pem /opt/etcd/ssl/

5. Start etcd and enable it at boot (note: on the first node the start command may appear to hang until the other members come up; this is expected)

systemctl daemon-reload

systemctl start etcd

systemctl enable etcd

6. Copy all the files generated on node 1 to node 2 and node 3

scp -r /opt/etcd/ root@192.168.3.112:/opt/

scp /usr/lib/systemd/system/etcd.service root@192.168.3.112:/usr/lib/systemd/system/

scp -r /opt/etcd/ root@192.168.3.113:/opt/

scp /usr/lib/systemd/system/etcd.service root@192.168.3.113:/usr/lib/systemd/system/

Then edit etcd.conf on node 2 and node 3 as indicated below:

vi /opt/etcd/cfg/etcd.conf


ETCD_NAME="etcd-1"   # change this: etcd-2 on node 2, etcd-3 on node 3
ETCD_LISTEN_PEER_URLS="https://192.168.3.110:2380"   # change to the current server's IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.3.110:2379" # change to the current server's IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.3.110:2380" # change to the current server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.3.110:2379" # change to the current server's IP

Then start the etcd service on each node (daemon-reload, start, enable, as in step 5).

7. Check the cluster status

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://192.168.3.110:2379,https://192.168.3.112:2379,https://192.168.3.113:2379" endpoint health --write-out=table



+----------------------------+--------+-------------+-------+

|          ENDPOINT    | HEALTH |    TOOK     | ERROR |

+----------------------------+--------+-------------+-------+

| https://192.168.3.110:2379 |   true | 10.301506ms |    |

| https://192.168.3.113:2379 |   true | 12.87467ms |     |

| https://192.168.3.112:2379 |   true | 13.225954ms |    |

+----------------------------+--------+-------------+-------+

If you see output like the above, the cluster has been deployed successfully.

If there is a problem, check the logs: /var/log/messages or journalctl -u etcd.
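Optionally, you can also list the members to confirm all three nodes joined (same etcdctl binary and certificates as above):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://192.168.3.110:2379" member list --write-out=table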

3. Install Docker

Docker binary download:

https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz

Note: installing via yum also works.

Install Docker on every machine in the cluster.

3.1 Unpack the binary package

tar zxvf docker-19.03.9.tgz

mv docker/* /usr/bin

3.2 Manage Docker with systemd

cat > /usr/lib/systemd/system/docker.service << EOF

[Unit]

Description=Docker Application Container Engine

Documentation=https://docs.docker.com

After=network-online.target firewalld.service

Wants=network-online.target



[Service]

Type=notify

ExecStart=/usr/bin/dockerd

ExecReload=/bin/kill -s HUP \$MAINPID

LimitNOFILE=infinity

LimitNPROC=infinity

LimitCORE=infinity

TimeoutStartSec=0

Delegate=yes

KillMode=process

Restart=on-failure

StartLimitBurst=3

StartLimitInterval=60s



[Install]

WantedBy=multi-user.target

EOF

3.3 Create the Docker config file and configure an Aliyun registry mirror

mkdir /etc/docker

cat > /etc/docker/daemon.json << EOF

{

  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]

}

EOF

3.4 Start Docker and enable it at boot

systemctl daemon-reload

systemctl start docker

systemctl enable docker
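A quick sanity check that the daemon is up and the mirror was picked up (a simple sketch; the grep just pulls the "Registry Mirrors" block out of docker info output):

docker info | grep -A1 "Registry Mirrors"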

4. Deploy the Kubernetes master

Note: the kube-apiserver certificates generated here are a separate set from the etcd certificates.

4.1 Generate the kube-apiserver certificates

1. Self-signed certificate authority (CA)

cd ~/TLS/k8s


cat > ca-config.json << EOF

{

  "signing": {

    "default": {

      "expiry": "87600h"

    },

    "profiles": {

      "kubernetes": {

         "expiry": "87600h",

         "usages": [

            "signing",

            "key encipherment",

            "server auth",

            "client auth"

        ]

      }

    }

  }

}

EOF

cat > ca-csr.json << EOF

{

    "CN": "kubernetes",

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

        {

            "C": "CN",

            "L": "XiAn",

            "ST": "XiAn",

            "O": "k8s",

            "OU": "System"

        }

    ]

}

EOF

Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

This produces ca.pem and ca-key.pem.

2. Sign the kube-apiserver HTTPS certificate with the self-signed CA

Create the certificate signing request file:

cat > server-csr.json << EOF

{

    "CN": "kubernetes",

    "hosts": [

      "10.0.0.1",

      "127.0.0.1",

      "192.168.3.110",

      "192.168.3.112",

      "192.168.3.113",
"192.168.31.74",

      "192.168.31.88",

      "kubernetes",

      "kubernetes.default",

      "kubernetes.default.svc",

      "kubernetes.default.svc.cluster",

      "kubernetes.default.svc.cluster.local"

    ],

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

        {

            "C": "CN",

            "L": "XiAn",

            "ST": "XiAn",

            "O": "k8s",

            "OU": "System"

        }

    ]

}

EOF

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare k8s

This produces k8s.pem and k8s-key.pem.

4.2 Download the Kubernetes binaries from GitHub

Download references:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#downloads-for-v12013

wget https://dl.k8s.io/v1.20.13/kubernetes-server-linux-amd64.tar.gz


4.3 Unpack the Kubernetes package

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 

tar zxvf kubernetes-server-linux-amd64.tar.gz

cd kubernetes/server/bin

cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin

cp kubectl /usr/bin/

Deploy kube-apiserver

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF

KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.3.110:2379,https://192.168.3.112:2379,https://192.168.3.113:2379 \\
--bind-address=192.168.3.110 \\
--secure-port=6443 \\
--advertise-address=192.168.3.110 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/k8s.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/k8s-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/k8s.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/k8s-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/k8s-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/etcd.pem \\
--etcd-keyfile=/opt/etcd/ssl/etcd-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/k8s.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/k8s-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

EOF

Copy the certificates generated earlier to the paths referenced in the configuration:

# cp ~/TLS/k8s/*.pem /opt/kubernetes/ssl/

TLS Bootstrapping: this mechanism is used to automatically issue certificates for worker nodes when they join the cluster.

Create the token file referenced in the configuration above:

cat > /opt/kubernetes/cfg/token.csv << EOF
7905c320e61075fce2d1c0b07eb630f3,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

The token can be any random string you generate yourself.
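For example, one common way to generate a random 32-character hex token (a sketch using standard coreutils):

head -c 16 /dev/urandom | od -An -t x | tr -d ' '

If you generate a new token here, remember to use the same value later when creating the kubelet bootstrap kubeconfig.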

2. Create the kube-apiserver systemd unit

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/kubernetes/kubernetes



[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf

ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS

Restart=on-failure



[Install]

WantedBy=multi-user.target

EOF

Start kube-apiserver and enable it at boot

systemctl daemon-reload

systemctl start kube-apiserver 

systemctl enable kube-apiserver
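A quick check that the API server is listening on the secure port (a simple sketch; ss is part of the iproute package):

ss -lntp | grep 6443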

Deploy kube-controller-manager

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"

EOF


2. Generate the kubeconfig file

Generate the kube-controller-manager certificate:

# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file

cat > kube-controller-manager-csr.json << EOF

{

  "CN": "system:kube-controller-manager",

  "hosts": [],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "L": "XiAn", 

      "ST": "XiAn",

      "O": "system:masters",

      "OU": "System"

    }

  ]

}

EOF



# Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Generate the kubeconfig file (the following are shell commands, run directly in the terminal):

KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.3.110:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
  --client-certificate=./kube-controller-manager.pem \
  --client-key=./kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

3. Manage kube-controller-manager with systemd

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF

[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/kubernetes/kubernetes



[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf

ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS

Restart=on-failure



[Install]

WantedBy=multi-user.target

EOF

Start the service and enable it at boot

systemctl daemon-reload

systemctl start kube-controller-manager

systemctl enable kube-controller-manager

Deploy kube-scheduler

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF

KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"

EOF

2. Generate the kubeconfig file

Generate the kube-scheduler certificate:

# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file

cat > kube-scheduler-csr.json << EOF

{

  "CN": "system:kube-scheduler",

  "hosts": [],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "L": "XiAn",

      "ST": "XiAn",

      "O": "system:masters",

      "OU": "System"

    }

  ]

}

EOF



# Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Generate the kubeconfig file:

KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.3.110:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
  --client-certificate=./kube-scheduler.pem \
  --client-key=./kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

3. Manage kube-scheduler with systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF

[Unit]

Description=Kubernetes Scheduler

Documentation=https://github.com/kubernetes/kubernetes



[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf

ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS

Restart=on-failure



[Install]

WantedBy=multi-user.target

EOF

4. Start the service and enable it at boot

systemctl daemon-reload

systemctl start kube-scheduler

systemctl enable kube-scheduler

5. Check the cluster status (this requires generating a client certificate for connecting to the cluster)

First generate an admin (cluster-admin) client certificate in ~/TLS/k8s, as sketched below, then build the kubeconfig.
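The admin-csr.json request here is a sketch that follows the same pattern as the other CSR files in this document (CN "admin" in the "system:masters" group is the conventional choice for a cluster-admin client certificate); adjust the names block if yours differs:

cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "XiAn",
      "ST": "XiAn",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin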

Generate the kubeconfig file:

mkdir /root/.kube


KUBE_CONFIG="/root/.kube/config"

KUBE_APISERVER="https://192.168.3.110:6443"



kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

Check the current status of the cluster components with kubectl:

kubectl get cs

NAME                STATUS    MESSAGE             ERROR

scheduler             Healthy   ok                  

controller-manager       Healthy   ok                  

etcd-2               Healthy   {"health":"true"}   

etcd-1               Healthy   {"health":"true"}   

etcd-0               Healthy   {"health":"true"}  


6. Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

5. Deploy the Worker Nodes

5.1 Create the working directory and copy the binaries

Create the working directory on all worker nodes:

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

Copy from the package unpacked on the master node:

cd kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin   # local copy on the master

5.2 Deploy kubelet

1. Create the configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF

KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=pause-amd64:3.0"

EOF

2. Create the parameters file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF

kind: KubeletConfiguration

apiVersion: kubelet.config.k8s.io/v1beta1

address: 0.0.0.0

port: 10250

readOnlyPort: 10255

cgroupDriver: cgroupfs

clusterDNS:

- 10.0.0.2

clusterDomain: cluster.local 

failSwapOn: false

authentication:

  anonymous:

    enabled: false

  webhook:

    cacheTTL: 2m0s

    enabled: true

  x509:

    clientCAFile: /opt/kubernetes/ssl/ca.pem 

authorization:

  mode: Webhook

  webhook:

    cacheAuthorizedTTL: 5m0s

    cacheUnauthorizedTTL: 30s

evictionHard:

  imagefs.available: 15%

  memory.available: 100Mi

  nodefs.available: 10%

  nodefs.inodesFree: 5%

maxOpenFiles: 1000000

maxPods: 110

EOF

3. Generate the bootstrap kubeconfig used when the kubelet first joins the cluster

KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.3.110:6443" # apiserver IP:PORT
TOKEN="7905c320e61075fce2d1c0b07eb630f3" # must match token.csv



# Generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

4. Manage kubelet with systemd

cat > /usr/lib/systemd/system/kubelet.service << EOF

[Unit]

Description=Kubernetes Kubelet

After=docker.service



[Service]

EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf

ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS

Restart=on-failure

LimitNOFILE=65536



[Install]

WantedBy=multi-user.target

EOF

5. Start the service and enable it at boot

systemctl daemon-reload

systemctl start kubelet

systemctl enable kubelet

5.3 Approve the kubelet certificate request so the node joins the cluster

# View pending kubelet certificate requests

kubectl get csr

NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION

node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A   6m3s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending



# Approve the request

kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A



# View nodes

kubectl get node

NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   7s    v1.20.13

Note: the node shows NotReady because the network plugin has not been deployed yet.

5.4 Deploy kube-proxy

1. Create the configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF

KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"

EOF

2. Create the parameters file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF

kind: KubeProxyConfiguration

apiVersion: kubeproxy.config.k8s.io/v1alpha1

bindAddress: 0.0.0.0

metricsBindAddress: 0.0.0.0:10249

clientConnection:

  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig

hostnameOverride: k8s-master1

clusterCIDR: 10.244.0.0/16

EOF

3. Generate the kube-proxy.kubeconfig file

# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file

cat > kube-proxy-csr.json << EOF

{

  "CN": "system:kube-proxy",

  "hosts": [],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "L": "XiAn",

      "ST": "XiAn",

      "O": "k8s",

      "OU": "System"

    }

  ]

}

EOF



# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Generate the kubeconfig file:

KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.3.110:6443"



kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

4. Manage kube-proxy with systemd

cat > /usr/lib/systemd/system/kube-proxy.service << EOF

[Unit]

Description=Kubernetes Proxy

After=network.target



[Service]

EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf

ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS

Restart=on-failure

LimitNOFILE=65536



[Install]

WantedBy=multi-user.target

EOF

5. Start the service and enable it at boot

systemctl daemon-reload

systemctl start kube-proxy

systemctl enable kube-proxy

5.5 Deploy the CNI network (the latest plugin bundle at the time of writing is cni-plugins-linux-amd64-v1.0.1.tgz)

Binary package download: https://github.com/containernetworking/plugins/releases

# mkdir -p /opt/cni/bin /etc/cni/net.d
# tar zxvf cni-plugins-linux-amd64-v1.0.1.tgz -C /opt/cni/bin


Make sure the kubelet has CNI enabled:


# cat /opt/kubernetes/cfg/kubelet.conf 
--network-plugin=cni
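The kube-flannel.yaml manifest applied below is not included in this document; it is normally obtained from the flannel project. A sketch of how it is typically fetched (the exact URL/branch is an assumption and may have moved, so check the flannel releases page; also make sure the Network field inside the manifest matches the --cluster-cidr=10.244.0.0/16 configured in kube-controller-manager):

# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml -O kube-flannel.yaml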



Run on the master:

kubectl apply -f kube-flannel.yaml
# kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-5xmhh   1/1     Running   6          171m
kube-flannel-ds-amd64-ps5fx   1/1     Running   0          150m


5.6 Authorize the apiserver to access the kubelet

Use case: commands such as kubectl logs

cat > apiserver-to-kubelet-rbac.yaml << EOF

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRole

metadata:

  annotations:

    rbac.authorization.kubernetes.io/autoupdate: "true"

  labels:

    kubernetes.io/bootstrapping: rbac-defaults

  name: system:kube-apiserver-to-kubelet

rules:

  - apiGroups:

      - ""

    resources:

      - nodes/proxy

      - nodes/stats

      - nodes/log

      - nodes/spec

      - nodes/metrics

      - pods/log

    verbs:

      - "*"

---

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

  name: system:kube-apiserver

  namespace: ""

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: system:kube-apiserver-to-kubelet

subjects:

  - apiGroup: rbac.authorization.k8s.io

    kind: User

    name: kubernetes

EOF



kubectl apply -f apiserver-to-kubelet-rbac.yaml

5.7 Add more Worker Nodes

1. Copy the already-deployed node files to the new node

On the master, copy the worker-node files to the new nodes 192.168.3.112/113:

scp -r /opt/kubernetes root@192.168.3.112:/opt/



scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.3.112:/usr/lib/systemd/system



scp /opt/kubernetes/ssl/ca.pem root@192.168.3.112:/opt/kubernetes/ssl

2. Delete the kubelet certificate and kubeconfig on the new node

rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

Note: these files are generated automatically after the certificate request is approved and are unique to each node, so they must be deleted.

3. Change the hostname overrides

vi /opt/kubernetes/cfg/kubelet.conf

--hostname-override=k8s-node1



vi /opt/kubernetes/cfg/kube-proxy-config.yml

hostnameOverride: k8s-node1

4. Start the services and enable them at boot

systemctl daemon-reload

systemctl start kubelet kube-proxy

systemctl enable kubelet kube-proxy

5. Approve the new node's kubelet certificate request on the master

# View certificate requests

kubectl get csr

NAME           AGE   SIGNERNAME                    REQUESTOR           CONDITION

node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro   89s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending



# Approve the request

kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro

6. Check node status

kubectl get node

NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    <none>   47m     v1.20.13
k8s-node1     Ready    <none>   6m49s   v1.20.13

Repeat the same steps for node2 (192.168.3.113), remembering to change the hostname overrides!

6. Deploy Dashboard and CoreDNS

6.1 Deploy Dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
# The Service in recommended.yaml is ClusterIP by default; edit it to type NodePort (nodePort 30001)
# before applying, or patch it afterwards as shown below.
kubectl apply -f recommended.yaml
# Check the deployment
kubectl get pods,svc -n kubernetes-dashboard

Access URL: https://NodeIP:30001
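One way to switch the Service to NodePort after applying (a sketch; it assumes the Service keeps the default name kubernetes-dashboard in the kubernetes-dashboard namespace, which is what recommended.yaml v2.4.0 creates):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30001}]}}'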

Create a service account and bind it to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token printed by the last command.

6.2 Deploy CoreDNS

CoreDNS provides name resolution for Services inside the cluster. The coredns.yaml manifest is not included here; it is typically adapted from the CoreDNS deployment template in the Kubernetes project, with the Service clusterIP set to 10.0.0.2 to match the clusterDNS value configured in kubelet-config.yml above.

kubectl apply -f coredns.yaml



kubectl get pods -n kube-system  

NAME                          READY   STATUS    RESTARTS   AGE 

coredns-5ffbfd976d-j6shb      1/1     Running   0          32s

DNS resolution test:

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh 

If you don't see a command prompt, try pressing enter. 



/ # nslookup kubernetes 

Server:    10.0.0.2 

Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local 



Name:      kubernetes 

Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
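As a final optional smoke test you can run a small workload end to end (a sketch; nginx is pulled from Docker Hub and the NodePort is assigned from the 30000-32767 range configured earlier):

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc
# then open http://NodeIP:<assigned NodePort> in a browser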

This completes the single-master cluster.


