k8s学习笔记

一、概念

1. 定义

k8s是Google开源的一个容器集群管理系统。

k8s用于容器化应用程序部署、扩展和管理。

k8s提供了容器编排、资源调度、自动修复、弹性伸缩(根据服务的负载情况自动扩增或缩减容器数量)、自动部署、回滚处理、服务发现、负载均衡等一系列功能。

k8s目标是让容器化应用简单高效。

官网:https://kubernetes.io

官方中文文档:https://kubernetes.io/zh/docs/home

2. 核心对象

Kubernetes 对象

  • Pod

    • 最小可部署单元(k8s不能直接启动容器,而是需要通过Pod间接启动容器)
    • 一组容器(docker)集合
    • 一个Pod中的所有容器共享网络命名空间
    • Pod的生命周期是短暂的,被重新调度或重建后不会保留原有状态和IP(见本节末尾的Pod清单示例)
  • Service 服务

    • 将一组Pod(比如mysql Pod)关联起来,提供统一的入口
    • 防止pod失联,Pod地址发生改变,入口不受影响
  • Volume 数据卷

  • Namespace 命名空间

    • 用于隔离Pod的运行环境(默认情况Pod可以互相访问)
    • 使用场景:
      • ①为开发环境、测试环境、生产环境准备不同的命名空间
      • ②为不同的用户提供相互隔离的运行环境
  • Controller 控制器

    Controller控制器基于Pod等基本对象构建,提供额外的功能和更易用的特性,用于控制Pod的创建、启动、停止、删除等。
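
下面给出一个最小的Pod清单示例(仅作示意:文件名pod-demo.yaml、Pod名myapp等均为假设的名称,镜像沿用本文后面用到的nginx:1.8),用来说明"Pod是一组容器的集合、是最小可部署单元"这一概念:

$ cat <<EOF > pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: nginx:1.8
    ports:
    - containerPort: 80
EOF
#集群搭建完成后,可用kubectl apply -f pod-demo.yaml创建该Pod,用kubectl get pods查看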

3. 节点分类

  • master node:主控节点

  • worker node:工作节点

(1)master node相关的组件(程序)

apiserver:接收客户端操作k8s的指令,此外也是其他组件互调的桥梁。

scheduler:根据调度算法从多个worker node中选择一个合适的节点来运行Pod。

controller manager:管理各类控制器的组件,用于向worker node的kubelet发送指令。

etcd:分布式键值对数据库。用于保存集群状态数据,比如pod、service等对象信息。

(2)worker node相关的组件(程序)

kubelet:向docker发送指令从而管理容器并汇报节点状态给apiserver。

kube-proxy:负责Service的网络代理与负载均衡,维护节点上的网络转发规则。

(图:Kubernetes 组件架构图,Components of Kubernetes)

二、部署单master集群

1. 集群规划

主机名 ip 组件
k8s-master1 192.168.222.101 kube-apiserver、kube-controller-manager、kube-scheduler、etcd
k8s-worker1 192.168.222.201 kubelet、kube-proxy、docker、etcd
k8s-worker2 192.168.222.202 kubelet、kube-proxy、docker、etcd

2. 系统环境

操作系统:centos7

k8s版本:1.16.2

docker版本:18.09.9-3

安装方式:离线二进制安装

3. 初始化服务器

(1)关闭防火墙

$ systemctl stop firewalld
$ systemctl disable firewalld

(2)关闭selinux

# 临时生效
$ setenforce 0
# 永久生效
$ vim /etc/selinux/config
SELINUX=disabled #修改SELINUX的值为disabled

(3)关闭交换分区

# 临时生效
$ swapoff -a
# 永久生效
$ vim /etc/fstab
# 如下注释掉swap的配置
#/dev/mapper/centos-swap swap swap defaults 0 0
#检测是否关闭成功
$ free -m

(4)配置主机名

#对三台机器分别修改主机名
$ hostnamectl set-hostname 主机名
  • k8s-master1
$ hostnamectl set-hostname k8s-master1
  • k8s-worker1
$ hostnamectl set-hostname k8s-worker1
  • k8s-worker2
$ hostnamectl set-hostname k8s-worker2

验证是否修改成功

$ hostname

(5)配置名称解析

三台主机都做如下配置

$ cat << EOF >> /etc/hosts
192.168.222.101 k8s-master1
192.168.222.201 k8s-worker1
192.168.222.202 k8s-worker2
EOF

然后互相ping查看是否都能通过主机名连接

$ ping 主机名
#如:ping k8s-master1

(6)配置时间同步

选择一个节点作为时间上游服务端,其他的节点作为时间下游客户端。比如这里选择k8s-master1为服务端。那么需要做如下配置:

① k8s-master1

$ yum install chrony -y
$ vim /etc/chrony.conf
#注释掉上游server相关的记录,并新加一条server记录
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 127.127.1.0 iburst
#允许客户端访问的网段
allow 192.168.222.0/24
local stratum 10
# 启动
$ systemctl start chronyd
# 开机启动
$ systemctl enable chronyd
# 查看启动情况
$ ss -unl | grep 123

② k8s-worker1和k8s-worker2

$ yum install chrony -y
$ vim /etc/chrony.conf
#指定上游服务器为k8s-master1的ip
server 192.168.222.101 iburst
# 启动
$ systemctl start chronyd
# 开机启动
$ systemctl enable chronyd
# 查看时间是否与上游服务器同步
$ chronyc sources

210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
#出现^*说明同步
^* k8s-master1 10 6 177 35 +48us[+4654us] +/- 169us

4. 部署etcd

为了节省服务器资源,这里把etcd分别部署到k8s-master1、k8s-worker1、k8s-worker2里。

4.1 生成证书

给etcd颁发证书,可以通过cfssl或者openssl工具生成自签证书。这里使用k8s推荐的cfssl。

(1)下载并安装cfssl

可以在k8s-master1上执行

$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
$ chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
$ mv cfssl_linux-amd64 /usr/local/bin/cfssl
$ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
$ mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

(2)在/root/k8s目录下编写三个json文件

$ vim ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}


$ vim ca-csr.json
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}

#下面的hosts为三台主机的ip(如果需要给更多服务器颁发证书,把对应的ip追加到hosts中即可)
$ vim server-csr.json
{
"CN": "etcd",
"hosts": [
"192.168.222.101",
"192.168.222.201",
"192.168.222.202"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}

(3)生成证书命令

#创建CA机构(得到CA私钥ca-key.pem和CA证书ca.pem,CA自身持有,后续用它来为申请者签发证书)
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#签发etcd服务端证书(得到服务私钥server-key.pem和服务证书server.pem,供etcd各节点使用)
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

#检查生成结果,应有4个以pem结尾的文件
$ ls *pem
ca-key.pem ca.pem server-key.pem server.pem
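
(可选)如果想确认刚生成的证书内容是否符合预期,可以用cfssl-certinfo或openssl查看,下面是两种常见的检查方式(仅作参考):

$ cfssl-certinfo -cert server.pem
#或者用openssl确认证书的SAN中包含了三台主机的ip
$ openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"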

4.2 安装etcd

二进制包下载地址:https://github.com/coreos/etcd/releases/

所有运行etcd的节点都要执行以下部署步骤。注意目录名和路径要与下文配置保持一致,不能随意命名和放置。

(1)在三台服务器上都执行如下命令进行安装

#安装etcd
$ mkdir /opt/etcd/{bin,cfg,ssl} -p
$ wget https://github.com/etcd-io/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
$ tar -zxvf etcd-v3.2.12-linux-amd64.tar.gz -C ./
$ mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

(2)三台服务器上分别创建etcd配置文件

  • k8s-master1
$ touch /opt/etcd/cfg/etcd.conf && chmod 777 /opt/etcd/cfg/etcd.conf
$ vim /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.222.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.222.101:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.222.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.222.101:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.222.101:2380,etcd-2=https://192.168.222.201:2380,etcd-3=https://192.168.222.202:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#-------------------------------------解释-------------------------------------------
#ETCD_NAME 节点名称
#ETCD_DATA_DIR 数据目录
#ETCD_LISTEN_PEER_URLS 集群通信监听地址
#ETCD_LISTEN_CLIENT_URLS 客户端访问监听地址
#ETCD_INITIAL_ADVERTISE_PEER_URLS 集群通告地址
#ETCD_ADVERTISE_CLIENT_URLS 客户端通告地址
#ETCD_INITIAL_CLUSTER 集群节点地址
#ETCD_INITIAL_CLUSTER_TOKEN 集群Token
#ETCD_INITIAL_CLUSTER_STATE 加入集群的当前状态,new是新集群,existing表示加入已有集群
  • k8s-worker1
$ touch /opt/etcd/cfg/etcd.conf && chmod 777 /opt/etcd/cfg/etcd.conf
$ vim /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.222.201:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.222.201:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.222.201:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.222.201:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.222.101:2380,etcd-2=https://192.168.222.201:2380,etcd-3=https://192.168.222.202:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#-------------------------------------解释-------------------------------------------
#ETCD_NAME 节点名称
#ETCD_DATA_DIR 数据目录
#ETCD_LISTEN_PEER_URLS 集群通信监听地址
#ETCD_LISTEN_CLIENT_URLS 客户端访问监听地址
#ETCD_INITIAL_ADVERTISE_PEER_URLS 集群通告地址
#ETCD_ADVERTISE_CLIENT_URLS 客户端通告地址
#ETCD_INITIAL_CLUSTER 集群节点地址
#ETCD_INITIAL_CLUSTER_TOKEN 集群Token
#ETCD_INITIAL_CLUSTER_STATE 加入集群的当前状态,new是新集群,existing表示加入已有集群
  • k8s-worker2
$ touch /opt/etcd/cfg/etcd.conf && chmod 777 /opt/etcd/cfg/etcd.conf
$ vim /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.222.202:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.222.202:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.222.202:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.222.202:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.222.101:2380,etcd-2=https://192.168.222.201:2380,etcd-3=https://192.168.222.202:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#-------------------------------------解释-------------------------------------------
#ETCD_NAME 节点名称
#ETCD_DATA_DIR 数据目录
#ETCD_LISTEN_PEER_URLS 集群通信监听地址
#ETCD_LISTEN_CLIENT_URLS 客户端访问监听地址
#ETCD_INITIAL_ADVERTISE_PEER_URLS 集群通告地址
#ETCD_ADVERTISE_CLIENT_URLS 客户端通告地址
#ETCD_INITIAL_CLUSTER 集群节点地址
#ETCD_INITIAL_CLUSTER_TOKEN 集群Token
#ETCD_INITIAL_CLUSTER_STATE 加入集群的当前状态,new是新集群,existing表示加入已有集群

(3)三台服务器上分别创建systemd管理etcd的配置文件

$ vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

(4)在k8s-master1主机里将之前生成的证书分别拷贝到各个节点的/opt/etcd/ssl目录里。

$ cd ~/k8s/
$ cp ca*pem server*pem /opt/etcd/ssl
$ scp ca*pem server*pem root@k8s-worker1:/opt/etcd/ssl
$ scp ca*pem server*pem root@k8s-worker2:/opt/etcd/ssl

(5)三台服务器上分别都启动etcd并设置开机启动

$ systemctl start etcd
$ systemctl enable etcd

(6)在都部署完成后,在任意节点执行如下命令检查一下集群的状态(若是出现cluster is healthy,说明部署成功!)

$ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.222.101:2379,https://192.168.222.201:2379,https://192.168.222.202:2379" cluster-health

#出现如下信息证明部署成功
member 239636ea954e7897 is healthy: got healthy result from https://192.168.222.101:2379
member 301bdd98b166b85c is healthy: got healthy result from https://192.168.222.202:2379
member b94d930437eda878 is healthy: got healthy result from https://192.168.222.201:2379
cluster is healthy
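
补充:上面使用的是etcdctl的v2 API。如果想改用v3 API检查各节点的健康状态,可以参考下面的写法(flag名称以etcd 3.x的etcdctl为准,仅作示意):

$ ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.222.101:2379,https://192.168.222.201:2379,https://192.168.222.202:2379" \
endpoint health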

5. 部署master组件

需要部署的组件有:kube-apiserver、kube-controller-manager、kube-scheduler。

5.1 生成证书

在k8s-master1上执行命令:

#初始化安装目录
$ mkdir /opt/kubernetes/{bin,cfg,ssl,logs} -p
#备份之前为etcd生成的证书
$ mv ~/k8s/ ~/k8s-etcd-backup
$ mkdir ~/k8s
$ cd ~/k8s
#创建ca机构并生成ca证书
$ vim ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}

$ vim ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# 生成apiserver证书
$ vim server-csr.json
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"192.168.222.101",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# 生成kube-proxy证书
$ vim kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

最终应得到ca-key.pem、ca.pem、kube-proxy-key.pem、kube-proxy.pem、server-key.pem、server.pem共6个以pem结尾的文件,出现这6个文件即表示证书生成成功。

$ cp ca*pem server*pem /opt/kubernetes/ssl

5.2 下载并安装kubernetes

$ cd ~/k8s
# 下载k8s二进制包(请通过科学上网或者其他方式下载)
$ wget https://storage.googleapis.com/kubernetes-release/release/v1.16.2/kubernetes-server-linux-amd64.tar.gz
$ tar zxvf kubernetes-server-linux-amd64.tar.gz
$ cd kubernetes/server/bin
$ cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin

5.3 部署kube-apiserver

(1)创建token文件

$ vim /opt/kubernetes/cfg/token.csv
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"

第一列:随机字符串,自己可生成
第二列:用户名
第三列:UID
第四列:用户组
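
其中第一列的随机字符串可以自己生成,下面是一种常见写法(仅作参考;如果重新生成了token,记得同步修改后文worker节点bootstrap.kubeconfig里的token):

$ head -c 16 /dev/urandom | od -An -t x | tr -d ' '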

(2)创建组件的配置文件

$ vim /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.222.101:2379,https://192.168.222.201:2379,https://192.168.222.202:2379 \
--bind-address=192.168.222.101 \
--secure-port=6443 \
--advertise-address=192.168.222.101 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

参数说明:
logtostderr 启用日志
v 日志等级
etcd-servers etcd集群地址
bind-address 监听地址
secure-port https安全端口
advertise-address 集群通告地址
allow-privileged 启用授权
service-cluster-ip-range Service虚拟IP地址段
enable-admission-plugins 准入控制模块
authorization-mode 认证授权,启用RBAC授权和节点自管理
enable-bootstrap-token-auth 启用TLS bootstrap功能,后面会讲到
token-auth-file token文件
service-node-port-range Service Node类型默认分配端口范围

(3)创建systemd管理组件的配置文件

$ vim /usr/lib/systemd/system/kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

(4)启动

$ systemctl daemon-reload
$ systemctl enable kube-apiserver
$ systemctl restart kube-apiserver
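
启动后可以先简单确认kube-apiserver是否正常(以下只是常用的检查思路,日志文件名以实际生成的为准):

$ systemctl status kube-apiserver
#确认6443安全端口已监听
$ ss -tnlp | grep 6443
#出错时可到日志目录排查
$ ls /opt/kubernetes/logs/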

5.4 部署kube-scheduler

(1)创建组件的配置文件

$ vim /opt/kubernetes/cfg/kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--address=127.0.0.1"

参数说明:
master 连接本地apiserver
leader-elect 当该组件启动多个时,自动选举(HA)
address 服务监听的地址

(2)systemd管理组件的配置文件

$ vim /usr/lib/systemd/system/kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

(3)启动

$ systemctl daemon-reload
$ systemctl enable kube-scheduler
$ systemctl restart kube-scheduler

5.5 部署kube-controller-manager

(1)创建组件的配置文件

$ vim /opt/kubernetes/cfg/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"

参数说明:
master 连接本地apiserver的ip地址
address kube-controller-manager监听的地址
allocate-node-cidrs 是否为节点自动分配CIDR(配合CNI网络插件使用)
cluster-cidr CNI网段
service-cluster-ip-range 给客户端分配的ip网络范围
leader-elect 当该组件启动多个时,自动选举(HA)

(2)systemd管理组件的配置文件

$ vim /usr/lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

(3)启动

$ systemctl daemon-reload
$ systemctl enable kube-controller-manager
$ systemctl restart kube-controller-manager

5.6 查看集群组件状态

可以通过kubectl工具查看当前集群组件状态。

$ cp /opt/kubernetes/bin/kubectl /bin/
$ kubectl get cs
#会出现如下信息
NAME AGE
scheduler <unknown>
controller-manager <unknown>
etcd-2 <unknown>
etcd-0 <unknown>
etcd-1 <unknown>
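
注:kubectl 1.16的get cs默认输出只有AGE一列且显示为<unknown>,这是该版本已知的展示问题,并不代表组件异常。如果想看到具体的健康信息,可以换一种输出方式(仅作示意):

$ kubectl get cs -o yaml | grep -E "name:|type:|status:|message:"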

5.7 启用tls基于bootstrap自动颁发证书

$ kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

6. 部署worker组件

需要部署的组件有:docker、kubelet、kube-proxy。

6.1 安装docker

由于官方下载速度比较慢,所以需要更改 Docker 安装的 yum 源,这里推荐用阿里镜像源:

$ yum install yum-utils -y
$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

显示 docker 所有可安装版本:

$ yum list docker-ce --showduplicates | sort -r

安装指定版本 docker

注意:安装前一定要提前查询将要安装的 Kubernetes 版本是否和 Docker 版本对应。

$ yum install -y docker-ce-18.09.9-3.el7

启动并加入开机启动

$ sudo systemctl start docker
$ sudo systemctl enable docker

镜像下载加速

$ vim /etc/docker/daemon.json

添加以下内容:

{
"registry-mirrors": [
"https://dockerhub.azk8s.cn",
"https://hub-mirror.c.163.com"
]
}

由于单个加速镜像服务可能出现宕机,建议同时配置多个加速镜像。

#重新加载配置
$ systemctl daemon-reload
#重启docker
$ service docker restart
#检查加速器是否生效
$ docker info
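
docker info的输出比较长,如果只想确认加速镜像是否生效,可以过滤一下(仅是一种示例写法):

$ docker info 2>/dev/null | grep -A 3 -i "registry mirrors"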

6.2 部署kubelet和kube-proxy组件

(1)安装kubelet和kube-proxy组件

$ mkdir ~/k8s && cd ~/k8s
# 下载k8s二进制包(请通过科学上网或者其他方式下载)
$ wget https://storage.googleapis.com/kubernetes-release/release/v1.16.2/kubernetes-server-linux-amd64.tar.gz
$ tar zxvf kubernetes-server-linux-amd64.tar.gz

#初始化安装目录
$ mkdir /opt/kubernetes/{bin,cfg,ssl,logs} -p

#找到kubelet和kube-proxy两个文件,将它们拷贝到worker节点的/opt/kubernetes/bin目录下
$ cp ~/k8s/kubernetes/server/bin/kubelet /opt/kubernetes/bin/
$ cp ~/k8s/kubernetes/server/bin/kube-proxy /opt/kubernetes/bin/

(2)从master节点复制证书到k8s-worker1和k8s-worker2节点

$ cd ~/k8s
$ scp ca.pem kube-proxy.pem kube-proxy-key.pem root@k8s-worker1:/opt/kubernetes/ssl/
$ scp ca.pem kube-proxy.pem kube-proxy-key.pem root@k8s-worker2:/opt/kubernetes/ssl/

(3)在k8s-worker1和k8s-worker2上都创建如下kube-proxy.kubeconfig文件

$ vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.222.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /opt/kubernetes/ssl/kube-proxy.pem
    client-key: /opt/kubernetes/ssl/kube-proxy-key.pem

server 指定master节点的ip

(4)在k8s-worker1和k8s-worker2上都创建如下kube-proxy-config.yml文件,注意hostnameOverride要指定为当前主机的主机名。

$ vim /opt/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-worker1
clusterCIDR: 10.0.0.0/24
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true

hostnameOverride 指定当前主机的主机名

(5)在k8s-worker1和k8s-worker2上都创建如下kube-proxy.conf文件。

$ vim /opt/kubernetes/cfg/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"

(6)在k8s-worker1和k8s-worker2上都创建如下kubelet-config.yml文件。

$ vim /opt/kubernetes/cfg/kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110

(7)在k8s-worker1和k8s-worker2上都创建如下kubelet.conf文件,注意hostname-override要指定为当前主机的主机名。

$ vim /opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-worker1 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"

hostname-override 指定当前主机的主机名
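
由于kubelet.conf和前面kube-proxy-config.yml里写的是k8s-worker1的主机名,如果是把配置文件从k8s-worker1拷贝到其他worker节点,可以用类似下面的方式批量替换成当前主机名(假设主机名已按前文设置好,命令仅作参考):

$ sed -i "s/k8s-worker1/$(hostname)/g" /opt/kubernetes/cfg/kubelet.conf
$ sed -i "s/k8s-worker1/$(hostname)/g" /opt/kubernetes/cfg/kube-proxy-config.yml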

(8)在k8s-worker1和k8s-worker2上都创建如下bootstrap.kubeconfig文件。

$ vim /opt/kubernetes/cfg/bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.222.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: c47ffb939f5ca36231d9e3121a252940

server 指定master节点的ip
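
除了手工编写,bootstrap.kubeconfig也可以用kubectl config子命令生成,效果等价(下面是一个参考写法,前提是该节点上有kubectl二进制,或者在master上生成后再scp过来;token需与master上token.csv中的一致):

$ kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--server=https://192.168.222.101:6443 \
--kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
$ kubectl config set-credentials kubelet-bootstrap \
--token=c47ffb939f5ca36231d9e3121a252940 \
--kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
$ kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
$ kubectl config use-context default --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig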

(9)在k8s-worker1和k8s-worker2上都配置如下系统启动服务配置文件。

$ vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
$ vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

(10)在k8s-worker1和k8s-worker2上都启动kubelet和kube-proxy服务

$ systemctl start kube-proxy
$ systemctl enable kube-proxy
$ systemctl start kubelet
$ systemctl enable kubelet

(11)在master节点为worker节点颁发证书

#查看worker的证书请求
$ kubectl get csr
#会显示如下信息
NAME AGE REQUESTOR CONDITION
node-csr-7Pxr6JrmsSHXx-P5sfhCwFVDZkXxC6iLx7Y3xHD9xlc 12m kubelet-bootstrap Pending
node-csr-KvP7Ebs95r4ROlf1jXmVxq-mb2xcS_DrKbAMbXUWu-k 12m kubelet-bootstrap Pending
node-csr-sygVc7ahN8Y-8Td_2ipEEmKgWgyJdrIKDTOMtOagRaU 26m kubelet-bootstrap Pending

#对worker请求token进行证书颁发操作
$ kubectl certificate approve node-csr-7Pxr6JrmsSHXx-P5sfhCwFVDZkXxC6iLx7Y3xHD9xlc
$ kubectl certificate approve node-csr-KvP7Ebs95r4ROlf1jXmVxq-mb2xcS_DrKbAMbXUWu-k
$ kubectl certificate approve node-csr-sygVc7ahN8Y-8Td_2ipEEmKgWgyJdrIKDTOMtOagRaU

#查看已经认证的worker节点信息
$ kubectl get node
#会显示如下信息
NAME STATUS ROLES AGE VERSION
k8s-worker1 NotReady <none> 31s v1.16.2
k8s-worker2 NotReady <none> 22s v1.16.2
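
如果待审批的请求比较多,也可以参考下面的写法一次性批准所有Pending状态的csr(仅作示意):

$ kubectl get csr --no-headers | awk '/Pending/{print $1}' | xargs kubectl certificate approve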

6.3 安装网络插件

(1)在所有worker节点上安装cni网络插件

$ mkdir -pv /opt/cni/bin /etc/cni/net.d
$ wget https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz
$ tar xf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin

(2)在master上执行yaml脚本,实现在worker节点安装启动网络插件功能

$ vi ~/k8s/kube-flannel.yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"cniVersion": "0.2.0",
"name": "cbr0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds-amd64
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/os
operator: In
values:
- linux
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
hostNetwork: true
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: lizhenliang/flannel:v0.11.0-amd64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: lizhenliang/flannel:v0.11.0-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg

执行下面的命令,k8s会在各worker节点上拉取flannel镜像、启动容器并完成集群网络的打通。

$ kubectl apply -f ~/k8s/kube-flannel.yaml

注意:这个操作受限于网络,可能需要5~10分钟才能完成;如果网速太慢,镜像拉取可能会超时失败。

查看worker节点的状态

#查看flannel Pod的状态(-n 指定命名空间)
$ kubectl get pods -n kube-system
#出现如下信息,待STATUS变成Running就表示启动网络插件成功了
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-amd64-kcqvj 0/1 Init:ImagePullBackOff 0 7m23s
kube-flannel-ds-amd64-xmmp9 0/1 Init:ImagePullBackOff 0 7m22s

$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-worker1 Ready <none> 97m v1.16.2
k8s-worker2 Ready <none> 97m v1.16.2

#查看worker节点信息(可以用于查看故障等信息)
$ kubectl describe node k8s-worker1

(3)授权apiserver可以访问kubelet

$ vi ~/k8s/apiserver-to-kubelet-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  - pods/log
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
$ kubectl apply -f ~/k8s/apiserver-to-kubelet-rbac.yaml

7. 通过k8s启动nginx容器

在master节点上执行操作:

#创建deployment,通过deployment来创建和管理nginx容器
#k8s会自动调度到某个worker节点进行镜像下载和启动,其中下面的myweb是deployment的名称,可以自定义,而--image则指定要使用的docker镜像及版本
$ kubectl create deployment myweb --image=nginx:1.8

#查看一下deployment的状态
$ kubectl get deployment

#查看pode的状态
$ kubectl get pods

#暴露myweb容器的端口给宿主机
$ kubectl expose deployment myweb --port=80 --type=NodePort

#查看当前将80映射到了哪个端口
$ kubectl get svc

#比如通过上面的命令查到的NodePort端口是31320,那么访问集群任意worker节点的31320端口都能访问到nginx
$ curl http://192.168.222.201:31320
$ curl http://192.168.222.202:31320
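
deployment还支持方便的扩缩容和滚动更新/回滚,正好对应概念部分提到的"弹性伸缩"和"回滚处理",下面是几个常用命令(仅作示意;用kubectl create deployment创建时容器名默认与镜像名相同,这里为nginx):

#把myweb扩容到3个副本
$ kubectl scale deployment myweb --replicas=3
#查看副本分布在哪些worker节点上
$ kubectl get pods -o wide
#升级镜像并观察滚动更新进度(nginx:1.9仅为示例版本)
$ kubectl set image deployment/myweb nginx=nginx:1.9
$ kubectl rollout status deployment/myweb
#回滚到上一个版本
$ kubectl rollout undo deployment/myweb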

8. 配置web界面

8.1 官方dashboard

在master上执行下面操作安装dashboard。

$ vi ~/k8s/dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
nodePort: 30001
selector:
k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.0-beta4
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.1
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
$ kubectl apply -f ~/k8s/dashboard.yaml
$ kubectl get pods -n kubernetes-dashboard

$ kubectl get svc -n kubernetes-dashboard
#比如通过上面的命令查到的NodePort端口是30001,那么浏览器访问https://192.168.222.202:30001即可打开dashboard
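
官方dashboard登录时同样需要token,一种常见做法是创建一个绑定cluster-admin角色的ServiceAccount并读取其token(以下命令仅作参考,cluster-admin权限较大,生产环境请按需收紧):

$ kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
$ kubectl create clusterrolebinding dashboard-admin \
--clusterrole=cluster-admin \
--serviceaccount=kubernetes-dashboard:dashboard-admin
#查出该ServiceAccount对应的secret并读取其中的token
$ kubectl describe secret -n kubernetes-dashboard \
$(kubectl get secret -n kubernetes-dashboard | grep dashboard-admin | awk '{print $1}')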

8.2 第三方dashboard

在master上执行下面操作。

$ vi ~/k8s/start_kuboard.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kuboard
namespace: kube-system
annotations:
k8s.eip.work/displayName: kuboard
k8s.eip.work/ingress: "false"
k8s.eip.work/service: NodePort
k8s.eip.work/workload: kuboard
labels:
k8s.eip.work/layer: monitor
k8s.eip.work/name: kuboard
spec:
replicas: 1
selector:
matchLabels:
k8s.eip.work/layer: monitor
k8s.eip.work/name: kuboard
template:
metadata:
labels:
k8s.eip.work/layer: monitor
k8s.eip.work/name: kuboard
spec:
nodeName: k8s-worker1
containers:
- name: kuboard
image: eipwork/kuboard:latest
imagePullPolicy: IfNotPresent
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule

---
apiVersion: v1
kind: Service
metadata:
name: kuboard
namespace: kube-system
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
nodePort: 32567
selector:
k8s.eip.work/layer: monitor
k8s.eip.work/name: kuboard

---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kuboard-user
namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kuboard-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kuboard-user
namespace: kube-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kuboard-viewer
namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kuboard-viewer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
- kind: ServiceAccount
name: kuboard-viewer
namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kuboard-viewer:kuboard-minimum-role
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kuboard-minimum-role
subjects:
- kind: ServiceAccount
name: kuboard-viewer
namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kuboard-minimum-role
rules:
- apiGroups:
- ''
resources:
- 'namespaces'
- 'nodes'
verbs:
- 'list'

修改nodeName的值为worker节点的主机名称,这里写为k8s-worker1

$ kubectl apply -f ~/k8s/start_kuboard.yaml
$ kubectl get pods -n kube-system

$ kubectl get svc -n kube-system
# 如查到kuboard的NodePort端口为32567,则可以访问http://192.168.222.201:32567

访问http://192.168.222.201:32567后出现如下页面,需要输入token。

(图:Kuboard登录页面截图)

需要到master执行如下命令获取token。

$ kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d

输入token并登录之后得到如下页面。

(图:Kuboard管理界面截图)

9. CoreDNS

CoreDNS 本质上是一个 DNS 服务,而 DNS 是一种常见的服务发现手段,因此很多开源项目和工程师都使用 CoreDNS 为集群提供服务发现功能。在k8s集群环境下,Pod和Service的 IP 地址都可能发生变动,如果在应用里直接写死IP就需要频繁修改配置;为了解决这个痛点,k8s利用CoreDNS实现了基于Service名称的服务注册与发现。

在master节点如下操作:

$ vi ~/k8s/coredns.yaml
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base

apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
proxy . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
# replicas: not specified here:
# 1. In order to make Addon Manager do not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
serviceAccountName: coredns
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
- key: "CriticalAddonsOnly"
operator: "Exists"
containers:
- name: coredns
image: lizhenliang/coredns:1.2.2
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.0.0.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
$ kubectl apply -f ~/k8s/coredns.yaml
$ kubectl get pods -n kube-system | grep coredns
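
CoreDNS部署好后,可以起一个临时Pod验证集群内的DNS解析是否正常(busybox:1.28的nslookup行为比较稳定,命令仅作示意):

$ kubectl run -it dns-test --rm --restart=Never --image=busybox:1.28 -- nslookup kubernetes
#正常情况下应能把kubernetes这个Service名解析到Service网段的地址(本文规划为10.0.0.1)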


----------- 本文结束 -----------



