2.1 Install the kube-proxy binary
### 1. Check that the install directories exist.
for dir in $K8S_BIN_PATH $K8S_LOG_DIR/$KUBE_NAME $K8S_CONF_PATH $KUBE_CONFIG_PATH; do
    if [ ! -d $dir ]; then
        mkdir -p $dir
    fi
done
### 2. Install the kube-proxy binary from the Kubernetes server tarball.
if [ ! -f $SOFTWARE/kubernetes-server-${VERSION}-linux-amd64.tar.gz ]; then
    wget $DOWNLOAD_URL -P $SOFTWARE >>/tmp/install.log 2>&1
fi
cd $SOFTWARE && tar -xzf kubernetes-server-${VERSION}-linux-amd64.tar.gz
cp -fp kubernetes/server/bin/$KUBE_NAME $K8S_BIN_PATH
ln -sf $K8S_BIN_PATH/${KUBE_NAME} /usr/local/bin
chmod -R 755 $K8S_INSTALL_PATH
2.2 Distribute the kubeconfig and certificate files
Distribute the CA root certificate:
cd $CA_DIR
ansible worker_k8s_vgs -m copy -a "src=ca.pem dest=$CA_DIR" -b
Distribute the kubeconfig authentication file:
kube-proxy connects to the apiserver through a kubeconfig file, which supplies the apiserver address, the embedded CA certificate, and the kube-proxy client certificate and private key:
cd $KUBE_CONFIG_PATH
ansible worker_k8s_vgs -m copy -a "src=kube-proxy.kubeconfig dest=$KUBE_CONFIG_PATH" -b
Note: if the kubeconfig and certificate files for each component were already synchronized in an earlier section, this step can be skipped.
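To confirm the files actually landed on every worker, a checksum comparison can be run through Ansible's ad-hoc `shell` module. This is a sketch using the same `worker_k8s_vgs` group and path variables as above; it is guarded so it is a no-op on hosts without ansible:

```shell
# Compare checksums of the distributed files across all workers; every
# node should report the same hashes as the local copies.
if command -v ansible >/dev/null 2>&1; then
    ansible worker_k8s_vgs -b -m shell \
        -a "md5sum $KUBE_CONFIG_PATH/kube-proxy.kubeconfig $CA_DIR/ca.pem"
else
    echo "ansible not available on this host"
fi
```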
2.3 Create the kube-proxy configuration file
cat >${K8S_CONF_PATH}/kube-proxy-config.yaml <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: ${KUBE_CONFIG_PATH}/kube-proxy.kubeconfig
  qps: 100
bindAddress: ${LISTEN_IP}
healthzBindAddress: ${LISTEN_IP}:10256
metricsBindAddress: ${LISTEN_IP}:10249
clusterCIDR: ${CLUSTER_PODS_CIDR}
hostnameOverride: ${HOSTNAME}
mode: ipvs
portRange: ""
iptables:
  masqueradeAll: false
ipvs:
  scheduler: rr
  excludeCIDRs: []
EOF
bindAddress: the address kube-proxy listens on;
clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver;
clusterCIDR: kube-proxy uses this to distinguish traffic originating inside the cluster from external traffic; only when --cluster-cidr or --masquerade-all is specified does kube-proxy SNAT requests to Service IPs;
hostnameOverride: must match the value used by the kubelet, otherwise kube-proxy cannot find its Node after startup and will not create any ipvs rules;
mode: use the ipvs mode;
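Because the heredoc above is unquoted, the shell expands the ${...} variables when the file is generated. A minimal dry run with placeholder values (the real script takes them from the environment) illustrates the result:

```shell
# Placeholder values standing in for the environment variables used above.
K8S_CONF_PATH=/tmp/k8s-conf-demo
KUBE_CONFIG_PATH=/etc/kubernetes
LISTEN_IP=10.10.10.40
CLUSTER_PODS_CIDR=172.16.0.0/16
mkdir -p $K8S_CONF_PATH

# Render a trimmed-down version of the config through the same heredoc idiom.
cat >${K8S_CONF_PATH}/kube-proxy-config.yaml <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: ${KUBE_CONFIG_PATH}/kube-proxy.kubeconfig
bindAddress: ${LISTEN_IP}
clusterCIDR: ${CLUSTER_PODS_CIDR}
mode: ipvs
EOF

# Variables are expanded at generation time, so the file holds literal values.
grep bindAddress ${K8S_CONF_PATH}/kube-proxy-config.yaml
# → bindAddress: 10.10.10.40
```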
2.4 Create the kube-proxy systemd service
cat >/usr/lib/systemd/system/${KUBE_NAME}.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=${K8S_INSTALL_PATH}
ExecStart=${K8S_BIN_PATH}/${KUBE_NAME} \\
  --config=${K8S_CONF_PATH}/kube-proxy-config.yaml \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=${K8S_LOG_DIR}/${KUBE_NAME} \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
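One subtlety of writing the unit file through an unquoted heredoc: each line-continuation backslash must be escaped as `\\` so that a single `\` ends up in the file. A quick render into a scratch path (with placeholder variable values) confirms the continuations come out right:

```shell
# Placeholder values for the demo render.
KUBE_NAME=kube-proxy
K8S_BIN_PATH=/opt/k8s/bin
K8S_CONF_PATH=/etc/kubernetes

cat >/tmp/${KUBE_NAME}.service.demo <<EOF
[Service]
ExecStart=${K8S_BIN_PATH}/${KUBE_NAME} \\
  --config=${K8S_CONF_PATH}/kube-proxy-config.yaml \\
  --v=2
EOF

# Each continuation line now ends in exactly one backslash.
grep -c '\\$' /tmp/kube-proxy.service.demo
# → 2
```

Once the real unit file is in place, systemd must reload its configuration before the service can be started, e.g. with `systemctl daemon-reload` followed by `systemctl enable kube-proxy` and `systemctl restart kube-proxy`.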
2.5 Check the service status
systemctl status kube-proxy | grep Active
Make sure the status is active (running); otherwise inspect the logs to find the cause:
sudo journalctl -u kube-proxy
2.6 View the exported metrics
Note: run the following commands on the node where kube-proxy is running. kube-proxy listens on two ports: 10249 serves /metrics and 10256 serves /healthz.
sudo netstat -ntlp | grep kube-proxy
tcp 0 0 10.10.10.40:10249 0.0.0.0:* LISTEN 22604/kube-proxy
tcp 0 0 10.10.10.40:10256 0.0.0.0:* LISTEN 22604/kube-proxy
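Both endpoints can be probed directly with curl from the node itself. This is a sketch: 10.10.10.40 is the example LISTEN_IP above, a running kube-proxy is required, and the calls are time-limited so they fail fast elsewhere:

```shell
# Health endpoint: returns a small body when kube-proxy is healthy.
curl -s --max-time 2 http://10.10.10.40:10256/healthz || echo "healthz not reachable from this host"

# Prometheus metrics: print the first few lines.
curl -s --max-time 2 http://10.10.10.40:10249/metrics | head -n 3
```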
2.7 View the ipvs routing rules
sudo ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.20.40:8400 rr
-> 172.16.3.2:8080 Masq 1 0 0
-> 172.16.3.3:8080 Masq 1 0 0
-> 172.16.3.4:8080 Masq 1 0 0
TCP 192.168.20.40:8497 rr
-> 172.16.3.2:8500 Masq 1 0 0
-> 172.16.3.3:8500 Masq 1 0 0
-> 172.16.3.4:8500 Masq 1 0 0
TCP 10.10.10.40:8400 rr
-> 172.16.3.2:8080 Masq 1 0 0
-> 172.16.3.3:8080 Masq 1 0 0
-> 172.16.3.4:8080 Masq 1 0 0
At this point the cluster deployment is basically complete. For Kubernetes cluster monitoring, see: Kubernetes cluster installation guide: deploying cluster add-ons. The kube-proxy script can be obtained here.