1. Preparation: three Linux virtual machines.
master (192.168.159.160), node1 (192.168.159.161), node2 (192.168.159.162).
Versions: CentOS 7.3, Docker 1.13, Kubernetes 1.11.0.
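The hostname warning that shows up during kubeadm init (step 8 below) can be avoided by giving every node the cluster name/IP mapping up front. A minimal sketch using the IPs above; it writes to a scratch file by default so it is safe to try, and on the real machines you would set HOSTS=/etc/hosts:

```shell
# Sketch: host entries for the three VMs (IPs from the preparation step).
# HOSTS defaults to a scratch file for safe experimentation; set
# HOSTS=/etc/hosts (as root) on the real nodes.
HOSTS=${HOSTS:-./hosts.demo}
cat <<EOF >> "$HOSTS"
192.168.159.160 master
192.168.159.161 node1
192.168.159.162 node2
EOF
cat "$HOSTS"
```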
2. Install Docker on all nodes.
Install Docker 1.13 by adding the Aliyun CentOS 7 repository (run on all nodes):
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y docker
systemctl enable docker && systemctl start docker
Add a domestic registry mirror, then restart Docker so the new setting takes effect:
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://uf3mgws6.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker
To install Docker CE instead, see https://yq.aliyun.com/articles/110806 for adding the repository.
Note: Kubernetes 1.11.0 is only validated with docker-ce 17.03 and below. To install docker-ce-17.03 with yum, first install its SELinux package manually:
yum install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.0.ce-1.el7.centos.noarch.rpm
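Since kubeadm's preflight check only warns about unsupported Docker versions, the "17.03 and below" rule above can also be scripted. A sketch; on a real node the version string would come from `docker version --format '{{.Server.Version}}'`, and the function is only an illustration, not part of the original steps:

```shell
# Sketch: check whether a Docker version is within the 17.03-and-below
# range validated for Kubernetes 1.11.0 (see the note above).
docker_ok_for_k8s() {
  ver=$1   # e.g. from: docker version --format '{{.Server.Version}}'
  major=${ver%%.*}
  rest=${ver#*.}
  minor=${rest%%.*}
  if [ "$major" -gt 17 ] || { [ "$major" -eq 17 ] && [ "$minor" -gt 3 ]; }; then
    echo "Docker $ver is newer than 17.03; not validated with Kubernetes 1.11.0"
    return 1
  fi
  echo "Docker $ver is OK for Kubernetes 1.11.0"
}
docker_ok_for_k8s 1.13.1
docker_ok_for_k8s 17.03.2-ce
```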
3. Disable SELinux, the firewall, and swap (run on all nodes):
setenforce 0
Edit /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled.
systemctl stop firewalld
systemctl disable firewalld
swapoff -a # turns swap off now, but does not survive a reboot
To disable swap permanently, comment out the swap entry in /etc/fstab.
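The permanent half of the swap step (commenting out the swap entry in /etc/fstab) can be done with sed instead of an editor. A sketch demonstrated on a scratch copy so it is safe to run as-is; set FSTAB=/etc/fstab (as root) to modify the real file:

```shell
# Sketch: comment out every active swap entry so swap stays off after
# reboot. FSTAB defaults to a demo copy created below; an existing file
# (e.g. /etc/fstab) is never overwritten, only edited in place.
FSTAB=${FSTAB:-./fstab.demo}
[ -f "$FSTAB" ] || cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# comment out every non-comment line that has a "swap" field
sed -ri 's|^([^#].*[[:space:]]swap[[:space:]].*)$|#\1|' "$FSTAB"
cat "$FSTAB"
```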
4. Some RHEL/CentOS 7 users have reported traffic being routed incorrectly because iptables was bypassed. Make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration (run on all nodes):
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
5. Add a domestic Kubernetes yum repository (run on all nodes):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
6. Install kubelet, kubeadm, and kubectl (run on all nodes).
kubeadm: the command that bootstraps the cluster.
kubelet: the component that runs on every machine in the cluster and does things like starting pods and containers.
kubectl: the command-line utility for talking to your cluster.
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
7. Make sure the kubelet on the master uses the same cgroup driver as Docker:
docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
If the Docker cgroup driver and the kubelet configuration do not match, change the kubelet configuration to match Docker (cgroup-driver=systemd or cgroupfs).
With Kubernetes 1.11.0 and Docker 1.13 installed from yum, both default to systemd, so no change is needed.
8. Initialize the master with kubeadm.
Error: [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
Fix: echo 1 > /proc/sys/net/ipv4/ip_forward
Error: [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
Fix: stop the firewall, or open ports 6443 and 10250.
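If you would rather keep firewalld running than disable it (as step 3 did), the two ports can be opened explicitly. A sketch; FW defaults to echo so the commands are only printed for preview, and setting FW= on a real master would execute them:

```shell
# Sketch: open only the ports kubeadm warned about instead of stopping
# firewalld. FW=echo makes this a dry run; set FW= to apply for real.
FW=${FW:-echo}
for port in 6443 10250; do
  $FW firewall-cmd --permanent --add-port=${port}/tcp
done
$FW firewall-cmd --reload
```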
Error: unable to get URL "https://dl.k8s.io/release/stable-1.11.txt": Get https://storage.useso.com/kubernetes-release/release/stable-1.11.txt: read tcp 192.168.159.148:60194->216.58.199.16:443: read: connection reset by peer
Fix: pass --kubernetes-version=1.11.0 so kubeadm does not try to look up the latest version online.
Error: [WARNING Hostname]: hostname "master" could not be reached
Fix: echo "127.0.0.1 master" >> /etc/hosts
Error: [WARNING Swap]: running with swap on is not supported. Please disable swap
Fix:
swapoff -a # turns swap off until the next reboot
Comment out the swap entry in /etc/fstab to disable it permanently.
If an attempt fails, reset the kubeadm state before retrying: kubeadm reset
kubeadm init --kubernetes-version=1.11.0 --apiserver-advertise-address=192.168.159.160 --pod-network-cidr=10.244.0.0/16
Note: pull the required images before running init (a script helps) and rename them with docker tag. (k8s.gcr.io/pause-amd64 must be retagged as k8s.gcr.io/pause; the others keep their default names. If you do not know the image names, just run kubeadm init: it will try to download them from Google, and once it fails the names can be read from the output.)
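The pre-pull step above can be sketched as a loop. The mirror repository below reuses registry.cn-shenzhen.aliyuncs.com/cookcodeblog from the node troubleshooting later in this post, and the image list is what kubeadm 1.11.0 expects by default; treat both as assumptions to verify against your own kubeadm output. DRY_RUN defaults to echo so the docker commands are only printed; set DRY_RUN= to execute them:

```shell
# Sketch: pre-pull the control-plane images from a domestic mirror and
# retag them to the names kubeadm expects. Mirror path and image list
# are assumptions (kubeadm 1.11.0 defaults plus the mirror used later
# in this post) -- adjust to match your environment.
MIRROR=registry.cn-shenzhen.aliyuncs.com/cookcodeblog
IMAGES="kube-apiserver-amd64:v1.11.0 kube-controller-manager-amd64:v1.11.0 \
kube-scheduler-amd64:v1.11.0 kube-proxy-amd64:v1.11.0 \
pause:3.1 etcd-amd64:3.2.18 coredns:1.1.3"
DRY_RUN=${DRY_RUN:-echo}   # default: only print the docker commands
for img in $IMAGES; do
  $DRY_RUN docker pull "$MIRROR/$img"
  $DRY_RUN docker tag "$MIRROR/$img" "k8s.gcr.io/$img"
  $DRY_RUN docker rmi "$MIRROR/$img"
done
```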
9. Create a regular user:
useradd k8s && echo "password" | passwd --stdin k8s
su - k8s
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Enable shell auto-completion:
echo "source <(kubectl completion bash)" >> /home/k8s/.bashrc
10. Install the pod network:
mkdir -p /etc/cni/net.d/
docker pull quay.io/coreos/flannel:v0.10.0-amd64
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml (run as the regular user on the master)
11. Join node1 and node2 to the cluster.
On node1 and node2, run:
kubeadm join 192.168.159.160:6443 --token 1t0syj.oih1kgks8llwfatu --discovery-token-ca-cert-hash sha256:80ae046b8b3952e40510b2a84256795eb00cfc3e4e11c6597ff9e40fb4c5d63e
Note: if the token and CA hash from the kubeadm init output were not saved, run
kubeadm token create --print-join-command to generate a fresh join command, or kubeadm token list to see existing tokens (this does not show the CA hash).
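If only the CA hash is missing, it can be recomputed from the master's CA certificate with a standard openssl pipeline. A sketch: on a real master you would set CA_CRT=/etc/kubernetes/pki/ca.crt, while here a throwaway self-signed certificate is generated so the commands can be tried anywhere:

```shell
# Sketch: recompute the --discovery-token-ca-cert-hash value.
# CA_CRT defaults to a throwaway demo certificate created below; on the
# master use CA_CRT=/etc/kubernetes/pki/ca.crt instead.
CA_CRT=${CA_CRT:-./demo-ca.crt}
[ -f "$CA_CRT" ] || openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ./demo-ca.key -out "$CA_CRT" -days 1 -subj "/CN=demo-ca" 2>/dev/null
# hash of the DER-encoded public key, as kubeadm expects
HASH=$(openssl x509 -pubkey -in "$CA_CRT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```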
Check node status with kubectl get nodes (run as the regular user on the master).
The nodes show NotReady at first because each node still has to start several components, all of which run in pods.
Run kubectl get pod --all-namespaces to check pod status.
For any pod that is not Running, find the cause with kubectl describe pod <pod name>.
[k8s@master ~]$ kubectl describe pod coredns-78fcdf6894-ccjs8 --namespace=kube-system
Warning FailedCreatePodSandBox 1h kubelet, master Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox js8": NetworkPlugin cni failed to set up pod "coredns-78fcdf6894-ccjs8_kube-system" network: open /run/flannel/subnet.env: no such file or directory
This error means /run/flannel/subnet.env is missing on the node; create it:
cat <<EOF > /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
[k8s@master ~]$ kubectl describe pod kube-proxy-9np7v --namespace=kube-system
Warning FailedCreatePodSandBox 1h (x18 over 1h) kubelet, node1 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: getsockopt: connection refused
This error means node1 failed to pull the image from the Google registry; pull it manually through a mirror instead:
[root@node1 ~]# docker pull registry.cn-shenzhen.aliyuncs.com/cookcodeblog/pause:3.1
[root@node1 ~]# docker tag registry.cn-shenzhen.aliyuncs.com/cookcodeblog/pause:3.1 k8s.gcr.io/pause:3.1
[root@node1 ~]# docker rmi registry.cn-shenzhen.aliyuncs.com/cookcodeblog/pause:3.1
[root@node1 ~]# docker pull registry.cn-shenzhen.aliyuncs.com/cookcodeblog/kube-proxy-amd64:v1.11.0
[root@node1 ~]# docker tag registry.cn-shenzhen.aliyuncs.com/cookcodeblog/kube-proxy-amd64:v1.11.0 k8s.gcr.io/kube-proxy-amd64:v1.11.0
[root@node1 ~]# docker rmi registry.cn-shenzhen.aliyuncs.com/cookcodeblog/kube-proxy-amd64:v1.11.0
Run the same commands on node2.
[k8s@master ~]$ kubectl describe pod kube-proxy-fz4c4 --namespace=kube-system
Warning FailedCreatePodSandBox 15m (x26 over 1h) kubelet, node2 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: getsockopt: connection refused
The kube-flannel-ds-* pods above are not running because node1 and node2 have not pulled quay.io/coreos/flannel:v0.10.0-amd64 yet; either wait for Docker to pull it or pull it manually.
Once everything settles, node1 and node2 look like this:
And the master node:
At this point, the Kubernetes cluster is up.