KubeSphere Deployment

1. Base Environment Dependencies

  • Can be deployed on a bare Linux operating system
  • Can be deployed on top of an existing K8s cluster (the approach used in this deployment)

Docker, Harbor (image registry), and the base K8s environment have been prepared in advance.

2. Pre-installation Preparation

  • (1). Install socat on all nodes

Its functionality is similar to Netcat, the tool often called the Swiss Army knife of networking: it can establish a channel between two data streams (a quick illustration follows the version check below). Helm depends on socat, so copy socat-1.7.3.2-2.el7.x86_64 to all nodes and install it.

rpm -ivh socat-1.7.3.2-2.el7.x86_64

socat -V
# [root@master01 ~]# socat -V
# socat by Gerhard Rieger and contributors - see www.dest-unreach.org
# socat version 1.7.3.2 on Aug  4 2017 04:57:10
#    running on Linux version #1 SMP Thu Nov 8 23:39:32 UTC 2018, release 3.10.0-957.el7.x86_64, machine x86_64
# features:
#   #define WITH_STDIO 1
#   #define WITH_FDNUM 1
#   #define WITH_FILE 1
#   #define WITH_CREAT 1
#   #define WITH_GOPEN 1
#   #define WITH_TERMIOS 1
#   #define WITH_PIPE 1
#   #define WITH_UNIX 1
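
As a quick illustration of the "channel between two streams" idea (not needed for the installation itself; the addresses are only examples), socat can forward a local port to another host:

# forward local TCP port 8080 to port 80 on 192.168.201.6
socat TCP-LISTEN:8080,fork TCP:192.168.201.6:80
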
  • (2). Install Helm

Helm consists of a client (helm) and a server (Tiller). Helm can be thought of as the yum of K8s: it installs applications such as Deployments online in the form of charts.
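
As an illustration of the chart workflow (not part of this offline installation; it assumes a reachable chart repository and Helm v2, and the chart/release names are only examples):

# search for a chart and install it as a named release
helm search mysql
helm install stable/mysql --name my-db
# list the releases managed by Tiller
helm ls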

  1. Copy helm-v2.16.2-linux-amd64.tar.gz to all nodes, extract it, and copy the helm binary from linux-amd64/ into /usr/local/bin/ so that the helm command is available
tar -zxvf helm-v2.16.2-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/
  2. Copy tiller_v2.16.2 to one master node and load the image into the local Docker image store with docker load
docker load < tiller_v2.16.2
  3. Re-tag the tiller:v2.16.2 image and push it to the image registry
docker tag 192.168.11.6:8083/kubesphere/tiller:v2.16.2  192.168.201.6:81/kubesphere/tiller:v2.16.2
docker push 192.168.201.6:81/kubesphere/tiller:v2.16.2
  4. Copy rbac-config.yaml to one master node and use kubectl create to create the service account tiller used by Tiller and bind a suitable role to it; without this step the Tiller container will not start (a sketch of a typical rbac-config.yaml appears at the end of this Helm subsection)
kubectl create -f rbac-config.yaml
  5. Copy tiller-deploy.yaml to one master node and edit it, filling in the tiller image address pushed above
    vim tiller-deploy.yaml   # modify the address in two places
spec:
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: 192.168.201.6:81/kubesphere/tiller:v2.16.2
        imagePullPolicy: IfNotPresent
 ---
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: 192.168.201.6
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        seLinuxOptions: {}
      serviceAccount: tiller
      serviceAccountName: tiller
      terminationGracePeriodSeconds: 30
  6. Create the Tiller Deployment and Pod with kubectl apply
kubectl apply -f  tiller-deploy.yaml  -n kube-system

Verify that the Pod is running normally and that helm has detected the server:

kubectl get pod -n kube-system
helm version
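
For reference (step 4 mentioned rbac-config.yaml), a typical version of that file, a sketch based on the standard Helm v2 RBAC example rather than the exact file used here, creates the tiller service account and binds it to the cluster-admin role:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
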
  • (3). Install the NFS file system

Note: the NFS file system is only responsible for shared storage and has no other relationship with K8s; the persistent storage deployed later simply points at the NFS service address.

  1. Copy the nfs-utils-1.3.0-0.66.el7.x86_64 and rpcbind-0.2.0-49.el7.x86_64 RPM packages to the NFS server
  2. Install them
# install on the server
rpm -ivh nfs-utils-1.3.0-0.66.el7.x86_64 
# install on both the server and the clients
rpm -ivh rpcbind-0.2.0-49.el7.x86_64
  3. On the NFS server, edit the exports configuration file, filling in the path to share and its permissions (* means any host may access it)
vim /etc/exports
# [root@master01 ~]#vi /etc/exports
# /nfs/data *(rw,no_root_squash,sync)
  4. Start the nfs and rpcbind services and enable them at boot
systemctl  enable  nfs
systemctl  enable  rpcbind
systemctl  start  nfs
systemctl  start  rpcbind
  5. On any system other than the NFS server, run showmount -e <NFS server IP>; seeing the export information means the setup works
showmount -e 192.168.201.6
# [root@master02 ~]# showmount -e 192.168.201.6
# Export list for 192.168.201.6:
# /nfs/data     *
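
Optionally, mount the share from a client to confirm read/write access (a sketch; assumes nfs-utils is installed on the client, and the mount point /mnt/nfs-test is only an example):

mkdir -p /mnt/nfs-test
mount -t nfs 192.168.201.6:/nfs/data /mnt/nfs-test
touch /mnt/nfs-test/hello && ls /mnt/nfs-test
umount /mnt/nfs-test
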
  • (4). Deploy persistent storage (dynamic PVs backed by the NFS file system)
  1. Copy the nfs-client-provisioner image to any node and load it
docker load < nfs-client-provisioner_v3.1.0-k8s1.11
  2. Tag it and push it to Harbor
docker tag  192.168.11.6:8083/kubesphere/nfs-client-provisioner:v3.1.0-k8s1.11 192.168.201.6:81/kubesphere/nfs-client-provisioner:v3.1.0-k8s1.11
docker push 192.168.201.6:81/kubesphere/nfs-client-provisioner:v3.1.0-k8s1.11
  3. With Helm in place, nfs-client-provisioner can be installed offline through Helm. Upload nfs-client-provisioner-1.2.8.tgz, extract it, and edit values.yaml, changing the registry address and the NFS address
tar  -zxvf  nfs-client-provisioner-1.2.8.tgz
cd  nfs-client-provisioner
vim  values.yaml
# Default values for nfs-client-provisioner.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
strategyType: Recreate
image:
    repository: 192.168.201.6:81/kubesphere/nfs-client-provisioner
    tag: v3.1.0-k8s1.11
    pullPolicy: IfNotPresent
nfs:
    server: 192.168.201.6
    path: /nfs/data
    mountOptions:
# For creating the StorageClass automatically:
storageClass:
    create: true
    # Set a provisioner name. If unset, a name will be generated.
    # provisionerName:
    # Set StorageClass as the default StorageClass
    # Ignored if storageClass.create is false
    defaultClass: false
  4. In the directory containing values.yaml, run helm install ./ as sketched below; after a successful install, verify with the commands that follow that the Pod is Running and that an nfs-client StorageClass exists
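A minimal invocation (Helm v2; when --name is omitted a release name such as fancy-gnat in the output below is generated automatically):
helm install ./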
kubectl get sc
# [root@master01 ~]# kubectl get sc
# NAME                   PROVISIONER                                       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
# nfs-client (default)   cluster.local/fancy-gnat-nfs-client-provisioner   Delete          Immediate           true                   2d5h
kubectl get pod -n default
# [root@master01 nfs-client-provisioner]# kubectl get pod -n default
# NAME                                                 READY   STATUS    RESTARTS   AGE
# fancy-gnat-nfs-client-provisioner-598546c7d9-bcsft   1/1     Running   0          2d5h
  5. Set the default StorageClass as required by the KubeSphere guide (important)
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
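
To confirm that dynamic provisioning works end to end, a throwaway PVC can be created and should reach the Bound state within a few seconds (a sketch; the file and PVC names are only examples):

cat > test-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
kubectl apply -f test-pvc.yaml
kubectl get pvc test-pvc      # STATUS should become Bound
kubectl delete -f test-pvc.yaml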

At this point all preparation is complete, and the offline installation of KubeSphere can begin.

3. Deploying on the K8s Cluster

  • Online deployment

Run the following commands to start the installation:

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml

Check the installation logs:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
  • Offline deployment
  1. Upload the 7.2 GB full package to the server; extracting it produces the tar archives below
tar  -zxvf  kubesphere-images-v2.1.1.tar.gz
  2. Load all of the images in the tar archives onto the current server
    This may fail; retry a few times if necessary (a loop sketch follows the commands below)
docker load < ks_minimal_images.tar
docker load < openpitrix_images.tar
docker load < ks_logging_images.tar
docker load < ks_devops_images.tar
docker load < istio_images.tar
docker load < ks_notification_images.tar
docker load < example_images.tar
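
Alternatively, all of the archives can be loaded in one loop (a sketch; assumes the tar files are in the current directory):

for f in *_images.tar; do
  docker load < "$f" || echo "docker load failed for $f -- re-run it manually"
done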

After loading, a large number of images will appear locally. Don't panic: KubeSphere provides tooling that automatically tags them and pushes them to Harbor (the image registry).

  3. Tag and push the images using the KubeSphere tooling
  • (1). Copy ks-installer.tar.gz to the server and extract it, go into the ks-installer/scripts directory, and edit create_project_harbor.sh, filling in the Harbor address (do not use admin as the user)
vim create_project_harbor.sh
# url="http://192.168.201.6:81"
# user="admin"
# passwd="Harbor12345"

Newer Harbor releases may have changed the registry API; if the script does not work, the projects (roughly 30-40 of them) can be created manually in the web UI.
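
If scripting against a newer Harbor instead, a project can be created through its REST API; a sketch, assuming a Harbor 2.x instance exposing the /api/v2.0/projects endpoint and the credentials above:

curl -u admin:Harbor12345 -X POST "http://192.168.201.6:81/api/v2.0/projects" \
  -H "Content-Type: application/json" \
  -d '{"project_name": "kubesphere", "metadata": {"public": "true"}}'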

  • (2). Run create_project_harbor.sh; this step creates the series of projects in Harbor

  • (3). Run ./push-image-list.sh 192.168.201.6:81 (the argument is the Harbor address) to push the images to Harbor; because there are many images, this takes roughly half an hour (depending on network speed)

Note: the kubesphere/jenkins-uc image is not tagged and uploaded automatically by the tooling, but it is needed during the KubeSphere installation, so tag and push it yourself:

docker tag kubesphere/jenkins-uc:v2.1.1  192.168.201.6:81/kubesphere/jenkins-uc:v2.1.1
docker push  192.168.201.6:81/kubesphere/jenkins-uc:v2.1.1 

After the push completes, check the images in Harbor.
  • (4). Edit kubesphere-minimal.yaml in the ks-installer directory, changing the Harbor address and the ks-installer image address

A minimal installation is performed here; other features can be enabled after installation as needed.

vi kubesphere-minimal.yaml   # change the corresponding registry addresses
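
The image line can also be patched non-interactively; a sketch, assuming the upstream file references the installer image as kubesphere/ks-installer:v2.1.1:

# point the ks-installer image at the local Harbor registry
sed -i 's#image: kubesphere/ks-installer:v2.1.1#image: 192.168.201.6:81/kubesphere/ks-installer:v2.1.1#' kubesphere-minimal.yaml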

---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system

---
apiVersion: v1
data:
  ks-config.yaml: |
    ---

    persistence:
      storageClass: ""

    etcd:
      monitoring: False
      endpointIps: 192.168.201.6,192.168.201.7,192.168.201.8
      port: 2379
      tlsEnable: True

    common:
      mysqlVolumeSize: 20Gi
      minioVolumeSize: 20Gi
      etcdVolumeSize: 20Gi
      openldapVolumeSize: 2Gi
      redisVolumSize: 2Gi

    metrics_server:
      enabled: False

    console:
      enableMultiLogin: True
      port: 30880

    monitoring:
      prometheusReplicas: 1
      prometheusMemoryRequest: 400Mi
      prometheusVolumeSize: 20Gi
      grafana:
        enabled: False

    logging:
      enabled: True
      elasticsearchMasterReplicas: 1
      elasticsearchDataReplicas: 1
      logsidecarReplicas: 2
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      containersLogMountedPath: ""
      kibana:
        enabled: False

    openpitrix:
      enabled: False

    devops:
      enabled: True
      jenkinsMemoryLim: 2Gi
      jenkinsMemoryReq: 1500Mi
      jenkinsVolumeSize: 8Gi
      jenkinsJavaOpts_Xms: 512m
      jenkinsJavaOpts_Xmx: 512m
      jenkinsJavaOpts_MaxRAM: 2g
      sonarqube:
        enabled: False
        postgresqlVolumeSize: 8Gi

    servicemesh:
      enabled: False

    notification:
      enabled: False

    alerting:
      enabled: False

kind: ConfigMap
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: 192.168.201.6:81/kubesphere/ks-installer:v2.1.1
        imagePullPolicy: "Always"
  • (5). Confirm that Helm, persistent storage, and the kubelet are working correctly, that all nodes are Ready, and that all containers in kube-system are running normally
helm version
kubectl get nodes
kubectl get sc
kubectl get pod -n kube-system
  • (6). Run the installation command
kubectl apply -f kubesphere-minimal.yaml

Once the command succeeds and the Pod is Running, watch the logs:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

The logs may report some errors. Identify which module is failing and inspect the corresponding container to resolve it. A node or the master may fail to pull an image locally; in that case pull the image manually (see section 5 below).

4. Verification

Seeing the banner below means the installation has finished. Wait for all Pods to become ready, then log in to verify:
http://localhost:30880

Account: admin
Password: P@88w0rd

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://localhost:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Status". If the service is not
     ready, please wait patiently. You can start
     to use when all components are ready.
  2. Please modify the default password after login.

#####################################################

5. Issues Encountered

  • (1). Image pull failure: pull the image manually and re-tag it
docker pull 192.168.201.6:81/kubesphere/node-exporter:ks-v0.16.0
docker tag 192.168.201.6:81/kubesphere/node-exporter:ks-v0.16.0 kubesphere/node-exporter:ks-v0.16.0
