KubeSphere Deployment


1. Basic environment dependencies

  • It can be deployed on a bare Linux operating system
  • It can be deployed on top of an existing K8S cluster (the approach used here)

The Docker, harbor (image registry), and K8S base environments have already been prepared in advance.

2. Pre-installation preparation

  • (1). Install socat on all nodes

Its functionality is similar to Netcat, which is known as the Swiss Army knife of networking: it can establish a channel between two streams. helm depends on socat, so copy socat-1.7.3.2-2.el7.x86_64 to all nodes and install it.

rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm

socat -V
# [root@master01 ~]# socat -V
# socat by Gerhard Rieger and contributors - see www.dest-unreach.org
# socat version 1.7.3.2 on Aug 4 2017 04:57:10
# running on Linux version #1 SMP Thu Nov 8 23:39:32 UTC 2018, release 3.10.0-957.el7.x86_64, machine x86_64
# features:
#define WITH_STDIO 1
#define WITH_FDNUM 1
#define WITH_FILE 1
#define WITH_CREAT 1
#define WITH_GOPEN 1
#define WITH_TERMIOS 1
#define WITH_PIPE 1
#define WITH_UNIX 1
  • (2). Install helm

(helm consists of a client (helm) and a server (tiller).) (helm can be thought of as the yum of k8s: it installs deployments online from charts.)
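
As a quick illustration of the "yum for k8s" analogy, installing an application with helm v2 looks like the commands below. This is purely illustrative: the chart and release names are hypothetical, it assumes an online chart repository is reachable, and the offline installation in this guide does not rely on it.

# search for a chart, install it as a named release, then list releases (helm v2 syntax)
helm search nginx
helm install stable/nginx-ingress --name my-ingress --namespace default
helm ls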

  1. Copy helm-v2.16.2-linux-amd64.tar.gz to all nodes. Extract it and copy the helm binary from linux-amd64/ to /usr/local/bin/ so that the helm command is available
    tar -zxvf helm-v2.16.2-linux-amd64.tar.gz
    cp linux-amd64/helm /usr/local/bin/
  2. Copy tiller_v2.16.2 to one master node and load the image it contains into the local image store with docker load
    docker load < tiller_v2.16.2
  3. Re-tag the tiller:v2.16.2 image and push it to the image registry
    docker tag 192.168.11.6:8083/kubesphere/tiller:v2.16.2 192.168.201.6:81/kubesphere/tiller:v2.16.2
    docker push 192.168.201.6:81/kubesphere/tiller:v2.16.2
  4. Copy rbac-config.yaml to one master node and use kubectl create to create the service account tiller that tiller runs as, binding an appropriate role to it (without this step the tiller container will not start). A typical rbac-config.yaml is sketched after this list
    kubectl create -f rbac-config.yaml
  5. Copy tiller-deploy.yaml to one master node and edit it so that it uses the tiller image pushed above
    Run vim tiller-deploy.yaml and change the address in 2 places
    spec:
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: 192.168.201.6:81/kubesphere/tiller:v2.16.2
        imagePullPolicy: IfNotPresent
    ---
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: 192.168.201.6
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        seLinuxOptions: {}
      serviceAccount: tiller
      serviceAccountName: tiller
      terminationGracePeriodSeconds: 30
  6. Create tiller's deployment and pod with kubectl
    kubectl apply -f tiller-deploy.yaml -n kube-system
    Check that the pod is running and that helm can reach the server
    kubectl get pod -n kube-system
    helm version
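
For reference, the rbac-config.yaml used in step 4 of the list above normally follows the standard Helm v2 pattern shown below. This is a sketch based on the upstream Helm v2 documentation, not necessarily the exact file shipped with this offline package:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
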
  • (3). Install the NFS file system

Note: the NFS file system is only responsible for sharing storage resources and has no other relationship with k8s; the persistent storage deployed later simply points at the NFS service address.

  1. Copy the nfs-utils-1.3.0-0.66.el7.x86_64 and rpcbind-0.2.0-49.el7.x86_64 rpm packages to the NFS server
  2. Install them
    # install on the server
    rpm -ivh nfs-utils-1.3.0-0.66.el7.x86_64.rpm
    # install on both the server and the clients
    rpm -ivh rpcbind-0.2.0-49.el7.x86_64.rpm
  3. On the NFS server, edit the exports configuration file and fill in the path to share and its permissions (* means every host may access it)
    vim /etc/exports
    # [root@master01 ~]# vi /etc/exports
    # /nfs/data *(rw,no_root_squash,sync)
  4. Start the nfs and rpcbind services and enable them at boot
    systemctl enable nfs
    systemctl enable rpcbind
    systemctl start nfs
    systemctl start rpcbind
  5. On any host other than the NFS server, run showmount -e <NFS server IP>; seeing the shared export means it is working
    showmount -e 192.168.201.6
    # [root@master02 ~]# showmount -e 192.168.201.6
    # Export list for 192.168.201.6:
    # /nfs/data *
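
To confirm that the share works independently of k8s, it can also be mounted directly from any client. This is an optional check: the mount point /mnt/nfs-test is hypothetical, and nfs-utils must be installed on the client.

mkdir -p /mnt/nfs-test
mount -t nfs 192.168.201.6:/nfs/data /mnt/nfs-test
echo hello > /mnt/nfs-test/test.txt   # the file appears under /nfs/data on the NFS server
umount /mnt/nfs-test
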
  • (4). Deploy persistent storage (dynamic PVs backed by the NFS file system)
  1. Copy the nfs-client-provisioner image to any node and load it

    docker load < nfs-client-provisioner_v3.1.0-k8s1.11
  2. Tag it and push it to harbor

    docker tag 192.168.11.6:8083/kubesphere/nfs-client-provisioner:v3.1.0-k8s1.11 192.168.201.6:81/kubesphere/nfs-client-provisioner:v3.1.0-k8s1.11
    docker push 192.168.201.6:81/kubesphere/nfs-client-provisioner:v3.1.0-k8s1.11
  3. Now that helm is available, nfs-client-provisioner can be installed offline with helm. Upload nfs-client-provisioner-1.2.8.tgz, extract it, and edit values.yaml to change the repository address and the nfs address

    tar -zxvf nfs-client-provisioner-1.2.8.tgz
    cd nfs-client-provisioner
    vim values.yaml
    21
    # Default values for nfs-client-provisioner.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
    replicaCount: 1
    strategyType: Recreate
    image:
      repository: 192.168.201.6:81/kubesphere/nfs-client-provisioner
      tag: v3.1.0-k8s1.11
      pullPolicy: IfNotPresent
    nfs:
      server: 192.168.201.6
      path: /nfs/data
      mountOptions:
    # For creating the StorageClass automatically:
    storageClass:
      create: true
      # Set a provisioner name. If unset, a name will be generated.
      # provisionerName:
      # Set StorageClass as the default StorageClass
      # Ignored if storageClass.create is false
      defaultClass: false
    cd nfs-client-provisioner
    helm install ./
  4. In the folder containing values.yaml, run helm install ./ . After a successful install, check that the pod is Running and that the sc resources now include nfs-client

    kubectl get sc
    # [root@master01 ~]# kubectl get sc
    # NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
    # nfs-client (default) cluster.local/fancy-gnat-nfs-client-provisioner Delete Immediate true 2d5h

    kubectl get pod -n default
    # [root@master01 nfs-client-provisioner]# kubectl get pod -n default
    # NAME READY STATUS RESTARTS AGE
    # fancy-gnat-nfs-client-provisioner-598546c7d9-bcsft 1/1 Running 0 2d5h
  5. Set the default storage class as required by the kubesphere documentation (important); an optional PVC test is sketched below

    kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
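
    To verify that dynamic provisioning works, a throwaway PVC can be created. The claim name test-claim and the file name test-claim.yaml are hypothetical; the nfs-client provisioner should bind the claim automatically.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-claim
    spec:
      storageClassName: nfs-client
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Mi

    kubectl apply -f test-claim.yaml
    kubectl get pvc test-claim   # STATUS should become Bound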

    At this point all the preparation work is complete and the offline installation of kubesphere can begin

3. Deploy on the k8s cluster

  • Online deployment

Run the following commands to start the installation:

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml

Check the installation logs:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
  • Offline deployment
  1. Upload the 7.2 GB full package to the server; extracting it yields the following tar packages

    tar -zxvf kubesphere-images-v2.1.1.tar.gz
  2. Load all of the images from the tar packages into the local image store on this server
    Individual loads may fail; retry them a few times (a retry loop is sketched after this step)

    docker load < ks_minimal_images.tar
    docker load < openpitrix_images.tar
    docker load < ks_logging_images.tar
    docker load < ks_devops_images.tar
    docker load < istio_images.tar
    docker load < ks_notification_images.tar
    docker load < example_images.tar

    After the load completes there will be a large number of local images; don't panic, kubesphere provides a tool that tags them automatically and pushes them to harbor (the image registry)
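
    Since individual docker load runs can fail with a package this large, a simple retry loop like the following can be used (a sketch, assuming all the *_images.tar files are in the current directory):

    for f in *_images.tar; do
      # try each archive up to 3 times before giving up
      for i in 1 2 3; do
        docker load < "$f" && break
        echo "load of $f failed, retrying ($i)..."
      done
    done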

  3. Use the kubesphere tool to tag and upload the images

  • (1). Copy ks-installer.tar.gz to the server and extract it. Enter the ks-installer/scripts directory and edit create_project_harbor.sh, filling in the harbor address (the user must not be admin)
vim create_project_harbor.sh
# url="http://192.168.201.6:81"
# user="admin"
# passwd="Harbor12345"

Newer harbor releases may have changed the repository API; if the script does not work, the projects can be created manually in the web UI (roughly 30 to 40 projects).
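
If manual creation is too tedious, the projects can also be created through harbor's REST API along these lines. This is only a sketch: the /api/v2.0/projects endpoint applies to Harbor 2.x (older releases use /api/projects instead), and the project names in the loop are illustrative rather than the complete list.

for project in kubesphere library istio jaegertracing; do
  curl -k -u "<user>:<password>" -X POST "http://192.168.201.6:81/api/v2.0/projects" \
       -H "Content-Type: application/json" \
       -d "{\"project_name\": \"${project}\", \"metadata\": {\"public\": \"true\"}}"
done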

  • (2). Run create_project_harbor.sh; this step creates a series of projects in harbor

  • (3). Run ./push-image-list.sh 192.168.201.6:81 (the argument is the harbor address) to push the images to harbor. Because there are many images this takes roughly half an hour (depending on network speed)

Note: the kubesphere/jenkins-uc image is not tagged and uploaded automatically by the tool, but it is needed when installing kubesphere, so tag and push it yourself

docker tag kubesphere/jenkins-uc:v2.1.1 192.168.201.6:81/kubesphere/jenkins-uc:v2.1.1
docker push 192.168.201.6:81/kubesphere/jenkins-uc:v2.1.1

After the upload finishes, check the images in harbor

  • (4). Edit kubesphere-minimal.yaml in the ks-installer directory: change the harbor address and the ks-installer image address

This is a minimal installation; after it finishes, other features can be enabled as needed

Run vi kubesphere-minimal.yaml and change the corresponding repository addresses

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

---
apiVersion: v1
data:
  ks-config.yaml: |
    ---

    persistence:
      storageClass: ""

    etcd:
      monitoring: False
      endpointIps: 192.168.201.6,192.168.201.7,192.168.201.8
      port: 2379
      tlsEnable: True

    common:
      mysqlVolumeSize: 20Gi
      minioVolumeSize: 20Gi
      etcdVolumeSize: 20Gi
      openldapVolumeSize: 2Gi
      redisVolumSize: 2Gi

    metrics_server:
      enabled: False

    console:
      enableMultiLogin: True
      port: 30880

    monitoring:
      prometheusReplicas: 1
      prometheusMemoryRequest: 400Mi
      prometheusVolumeSize: 20Gi
      grafana:
        enabled: False

    logging:
      enabled: True
      elasticsearchMasterReplicas: 1
      elasticsearchDataReplicas: 1
      logsidecarReplicas: 2
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      containersLogMountedPath: ""
      kibana:
        enabled: False

    openpitrix:
      enabled: False

    devops:
      enabled: True
      jenkinsMemoryLim: 2Gi
      jenkinsMemoryReq: 1500Mi
      jenkinsVolumeSize: 8Gi
      jenkinsJavaOpts_Xms: 512m
      jenkinsJavaOpts_Xmx: 512m
      jenkinsJavaOpts_MaxRAM: 2g
      sonarqube:
        enabled: False
        postgresqlVolumeSize: 8Gi

    servicemesh:
      enabled: False

    notification:
      enabled: False

    alerting:
      enabled: False

kind: ConfigMap
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: 192.168.201.6:81/kubesphere/ks-installer:v2.1.1
        imagePullPolicy: "Always"
  • (5). Confirm that helm, the persistent storage, and kubelet are working correctly, that all nodes are Ready, and that all containers in kube-system are running normally
helm version
kubectl get nodes
kubectl get sc
kubectl get pod -n kube-system
  • (6). Run the install command
kubectl apply -f kubesphere-minimal.yaml

When the command has been applied and the pod is Running, watch the logs:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

The log may report some errors. Check which module the errors come from, then go into the corresponding container to inspect and fix them. A node or master may fail to pull some images locally, in which case pull them manually.
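
A quick way to locate the failing pod and the image it cannot pull (generic kubectl usage, not specific to the installer):

# list pods that are not Running or Completed, across all namespaces
kubectl get pods --all-namespaces | grep -vE "Running|Completed"
# the Events section at the bottom shows image pull errors and other failures
kubectl describe pod <pod-name> -n <namespace>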

4. Verification

The installation is complete when the screen below appears. Wait for all the pods to come up, then log in to verify:
http://localhost:30880

Account: admin
Password: P@88w0rd

#####################################################
### Welcome to KubeSphere! ###
#####################################################

Console: http://localhost:30880
Account: admin
Password: P@88w0rd

NOTES:
1. After logging into the console, please check the
monitoring status of service components in
the "Cluster Status". If the service is not
ready, please wait patiently. You can start
to use when all components are ready.
2. Please modify the default password after login.

#####################################################

5. Problems encountered

  • (1). An image pull fails: pull it manually and re-tag it
    docker pull 192.168.201.6:81/kubesphere/node-exporter:ks-v0.16.0
    docker tag 192.168.201.6:81/kubesphere/node-exporter:ks-v0.16.0 kubesphere/node-exporter:ks-v0.16.0