Kubernetes Install
Installing Docker
Install Docker by following the official site, linked below.
Docker Ref : https://docs.docker.com/install/linux/docker-ce/centos/
The author installed Docker version 17.03.2.ce for these tests.
```
# yum update
# yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2
# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# yum install -y docker
# yum install --setopt=obsoletes=0 \
    docker-ce-17.03.2.ce-1.el7.centos \
    docker-ce-selinux-17.03.2.ce-1.el7.centos
# systemctl enable docker && systemctl start docker
```
Installing kubeadm, kubelet and kubectl
kubeadm : the command to bootstrap the cluster.
kubelet : the component that runs on every machine in the cluster and does things like starting containers.
kubectl : the CLI for running commands against the cluster.
Kubernetes Ref : https://kubernetes.io/docs/setup/independent/install-kubeadm/
※ Kubernetes is updated and evolves rapidly, so always consult the official site before installing.
```
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# setenforce 0
# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# systemctl enable kubelet && systemctl start kubelet
```
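Once the packages above are installed, a quick sanity check can confirm the binaries are on the PATH and the kubelet is registered with systemd. A sketch; the reported versions will differ on your system:

```shell
# Verify the three components installed above; version output varies.
kubeadm version -o short
kubectl version --client
systemctl is-enabled kubelet   # should print "enabled" after the command above
```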
(Optional) Configure cgroup driver used by kubelet on Master Node
The cgroup driver used by the kubelet must match the cgroup driver used by Docker.
This applies only to Kubernetes versions 1.10 and earlier.
```
# docker info | grep -i cgroup
# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# systemctl daemon-reload
# systemctl restart kubelet
```
When Docker is used, kubeadm automatically detects the kubelet's cgroup driver and writes it to the /var/lib/kubelet/kubeadm-flags.env file at runtime.
If you use a different CRI, you must edit the /etc/default/kubelet file and set the cgroup-driver value as follows.
```
KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver=<value>
```
You only need to do this if your CRI's cgroup driver is not cgroupfs, because cgroupfs is already the kubelet's default.
Kubeadm init (K8s Version 1.10)
```
# kubeadm init
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join ...
```
Kubeadm init (K8s Version 1.11)
```
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
I0810 15:52:27.254403   14590 kernel_validator.go:81] Validating kernel version
I0810 15:52:27.254663   14590 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [sbion-kubernetes69 kubernetes kubernetes.default ...
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [sbion-kubernetes69 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names ...
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 41.502496 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node ...
[markmaster] Marking the node ...
[patchnode] Uploading the CRI Socket information ...
[bootstraptoken] using token: ...
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join ...
```
Master Setup
```
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

# kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

# kubectl get nodes
NAME                    STATUS     ROLES     AGE       VERSION
localhost.localdomain   NotReady   master    4m        v1.10.3

# kubectl cluster-info
Kubernetes master is running at https://10.0.2.15:6443
KubeDNS is running at https://10.0.2.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY     STATUS    RESTARTS   AGE
kube-system   etcd-localhost.localdomain                      1/1       Running   0          4m
kube-system   kube-apiserver-localhost.localdomain            1/1       Running   0          4m
kube-system   kube-controller-manager-localhost.localdomain   1/1       Running   0          4m
kube-system   kube-dns-86f4d74b45-275td                       0/3       Pending   0          5m
kube-system   kube-proxy-rp79j                                1/1       Running   0          5m
kube-system   kube-scheduler-localhost.localdomain            1/1       Running   0          4m
```
The kube-dns pod above stays in the Pending state.
After repeated trial and error, the most likely conclusion is that this message appears because only the master has been initialized and no worker (minion) nodes have joined or been configured yet.
The manual says that after initializing the master, at least three nodes should be set up.
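To see why the DNS pod is stuck, the scheduler's reasoning can be read from the pod's events. A sketch; the `k8s-app=kube-dns` label matches the v1.10-era kube-dns deployment (newer clusters use CoreDNS instead):

```shell
# List the DNS pods and show the tail of their event log.
kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system describe pods -l k8s-app=kube-dns | grep -A 5 Events
```

Typically the events show the pod cannot be scheduled until a pod network add-on is installed or more nodes join.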
Installing a pod network add-on
Install a pod network add-on so that pods can communicate with each other.
(K8s Version 1.10)
```
# export kubever=$(kubectl version | base64 | tr -d '\n')
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount "weave-net" created
clusterrole.rbac.authorization.k8s.io "weave-net" created
clusterrolebinding.rbac.authorization.k8s.io "weave-net" created
role.rbac.authorization.k8s.io "weave-net" created
rolebinding.rbac.authorization.k8s.io "weave-net" created
daemonset.extensions "weave-net" created
```
(K8s Version 1.11)
```
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
```
After installing the Weave Net add-on, the node status reported by get nodes changes from NotReady to Ready.
```
# kubectl get nodes
NAME                    STATUS    ROLES     AGE       VERSION
localhost.localdomain   Ready     master    33m       v1.10.3
```
Master Isolation
By default, the cluster does not schedule pods on the master for security reasons. If you want to schedule pods on the master (for example, in a single-machine Kubernetes cluster for development), run:
```
# kubectl taint nodes --all node-role.kubernetes.io/master-
node "test-01" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found
```
Joining your nodes
```
# kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the follo... vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_connt...
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
I0810 16:30:39.529114   17494 kernel_validator.go:81] Validating kernel version
I0810 16:30:39.529419   17494 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "180.70.98.69:6443"
[discovery] Created cluster-info discovery client....
[discovery] Requesting info from....
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roo...
[discovery] Successfully established connection with API Server....
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-s...
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "sbion-...

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
If you do not have the token, you can get it by running the following command on the master node.
```
# kubeadm token list
TOKEN                     TTL       EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi   23h       2018-06-12T02:51:28Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
```
By default, tokens expire after 24 hours. If you are joining a node after the current token has expired, generate a new token by running the following command on the master node.
```
# kubeadm token create
5didvk.d09sbcov8ph2amjw
```
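If the --discovery-token-ca-cert-hash value is also lost, it can be recomputed on the master from the cluster CA certificate. This pipeline is the one given in the kubeadm setup documentation:

```shell
# Recompute the sha256 hash of the CA public key for --discovery-token-ca-cert-hash.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

Prefix the printed hex string with `sha256:` when passing it to kubeadm join.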
■ Minikube
A tool that makes it easy to build and run Kubernetes locally.
Supports Kubernetes features such as DNS, Dashboards, CNI, NodePorts, ConfigMaps, and Secrets.
Uses the Docker daemon built into Minikube.
Allows you to set up a compact Kubernetes environment.
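A minimal Minikube session might look like the following sketch; it assumes minikube and kubectl are already installed and on the PATH:

```shell
minikube start       # boot a single-node cluster in a local VM
kubectl get nodes    # the minikube node should eventually report Ready
minikube dashboard   # open the dashboard add-on in a browser
minikube stop        # shut the VM down, preserving cluster state
```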
■ Minikube for Windows
Enable VT-x or AMD-v virtualization in the BIOS.
Install VirtualBox on the host.
- https://www.virtualbox.org/wiki/Downloads
- Disable Hyper-V under "Turn Windows features on or off" (VirtualBox requires it to be off).
Install Docker Toolbox for Windows (deselect the VirtualBox component; if VirtualBox is not yet installed, installing them together is also fine).
- https://docs.docker.com/toolbox/toolbox_install_windows/
Install kubectl
- Install Chocolatey: https://chocolatey.org/ (from an administrator console)
- choco install kubernetes-cli
- Confirm the installation with kubectl version.
Download minikube-windows-amd64.exe
- https://github.com/kubernetes/minikube/releases
- Move the minikube.exe file to a directory on the Windows system PATH (for example, place it under C:\install and add that location to PATH).
Download and run the minikube-installer.exe file.
Troubleshooting
How to fix the "crictl not found in system path" error
The crictl not found in system path message that appears when running the Kubernetes kubeadm init command is a warning, not a fatal error.
You can ignore it and continue, but if you want to get rid of the warning entirely, install crictl.
The crictl source code can be found here.
Because it is written in Go, you must install the Go language first; crictl can then be installed with the command below.
```
go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
```
Installing crictl resolves the warning; it is simply the command shown in the Suggestion (again, the command only works once Go is installed).
Since this warning is not actually a risk to running Kubernetes, it is generally safe to skip.
```
crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
```
How to fix the "The connection to the server localhost:8080 was refused" error
If the following error message appears when you run a kubectl command:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

resolve it with the following commands.
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
The admin.conf file above is created when the kubeadm init command runs.
This means that, by default, kubectl commands only work on the master node; to use kubectl on another node, copy the admin.conf file generated on the master node to that node, and kubectl commands will work there as well.
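Copying the kubeconfig to a worker might look like this sketch, run on the worker node; `master` is a hypothetical hostname for the master node:

```shell
# On a worker node: fetch the master's admin kubeconfig (hypothetical host "master").
mkdir -p $HOME/.kube
scp root@master:/etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes   # kubectl now talks to the master's API server
```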
[ERROR docker] docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
In other words, the message says only Docker versions up to 17.03 are validated. Surprising as it is, Kubernetes does not support the latest Docker release by default.
Remove the installed Docker 17.12 and force-install 17.03.
```
# yum list docker-ce --showduplicates | sort -r | grep 17.03
# yum install docker-ce-<VERSION STRING>
# yum install --setopt=obsoletes=0 \
    docker-ce-17.03.2.ce-1.el7.centos \
    docker-ce-selinux-17.03.2.ce-1.el7.centos
# systemctl start docker && systemctl enable docker
```
[ERROR Swap]: running with swap on is not supported. Please disable swap
Swap must be disabled.
```
# swapoff -a
# cat /etc/fstab

# Created by anaconda on Thu Feb  8 11:02:14 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=4978c64f-38be-4e76-9061-132b6dc77015   /        xfs    defaults   0 0
UUID=152043c4-03ea-4755-acc0-9a13982cb755   /boot    xfs    defaults   0 0
UUID=8fefa47f-cd08-44aa-9f5c-c57e10eefb0c   swap     swap   defaults   0 0
### weblog backup ###
SYSBACKUP1:/vol/vol1/Permanent/APPLICATION_LOG   /LOG_BACKUP           nfs   defaults   1 2
### system backup ###
SYSBACKUP2:/vol/vol2/SYSTEM_UNIX_BAK             /system_backup_UNIX   nfs   defaults   1 2

# cp fstab fstab_bacup
# rm fstab
# systemctl daemon-reload
# systemctl restart kubelet
```
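Note that swapoff -a only lasts until the next reboot, and deleting /etc/fstab outright also removes the other mounts. A gentler approach is to comment out only the swap entry. A sketch, demonstrated on a sample file; run the same sed against /etc/fstab as root:

```shell
# swapoff -a   # disable swap immediately (run as root on a real host)

# Comment out only the swap line; shown here on a sample copy of fstab.
printf '%s\n' \
  'UUID=4978c64f-38be-4e76-9061-132b6dc77015 /    xfs  defaults 0 0' \
  'UUID=8fefa47f-cd08-44aa-9f5c-c57e10eefb0c swap swap defaults 0 0' \
  > /tmp/fstab.sample
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /tmp/fstab.sample
cat /tmp/fstab.sample
```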
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
If you see the messages above, remove the existing configuration files and run the initialization again.
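One way to clear the leftover state is kubeadm's own reset subcommand, a sketch:

```shell
# Wipe the manifests and state left by a previous kubeadm run, then initialize again.
kubeadm reset
kubeadm init
```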
Misconfigured permissions or user group.
Kubernetes must basically be granted permissions under the same user group as Docker.
```
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd"
```
Resolve the error as follows.
```
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Change:
#   Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
# to:
#   Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

systemctl daemon-reload
kubeadm reset
kubeadm init
```
Settings required for the bridge.
```
In order to set /proc/sys/net/bridge/bridge-nf-call-iptables by editing /etc/sysctl.conf. There you can add [1]
```
The error can be resolved with the following settings.
```
net.bridge.bridge-nf-call-iptables = 1

echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
```
Alternatively, the error can be resolved by applying the setting with the sudo sysctl -p command.
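To make the setting survive reboots, it can also be placed in a sysctl drop-in file; the `/etc/sysctl.d/k8s.conf` path here is a conventional choice, not mandated:

```shell
# Persist the bridge netfilter setting across reboots (run as root).
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # reload all sysctl configuration files
```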