CDH6 Deployment Guide
LiuSw


1. Virtual Machine Preparation

Allocate memory for hadoop101, hadoop102 and hadoop103: 8 GB each (adjust to what your hardware allows).

Set the hostname on each machine and add all three hosts to /etc/hosts:

hostnamectl set-hostname hadoop101
hostnamectl set-hostname hadoop102
hostnamectl set-hostname hadoop103
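The matching /etc/hosts entries on every node might look like this (the IP addresses are placeholders for your own network):

```
192.168.1.101 hadoop101
192.168.1.102 hadoop102
192.168.1.103 hadoop103
```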

2. Passwordless SSH Login

  • 1. Generate the public and private key pair
ssh-keygen -t rsa

Then press Enter three times; two files are generated: id_rsa (the private key) and id_rsa.pub (the public key).

  • 2. Copy the public key to each machine you want to reach without a password
ssh-copy-id hadoop101
ssh-copy-id hadoop102
ssh-copy-id hadoop103
  • 3. Repeat steps 1 and 2 so that hadoop101, hadoop102 and hadoop103 can all log in to each other without a password
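As a sketch, the three ssh-copy-id invocations can be collapsed into one loop, run on each host in turn after generating its key pair. The echo makes this a dry run that only prints the commands; drop it to actually execute them.

```shell
# Dry run: print the ssh-copy-id command for every node.
for host in hadoop101 hadoop102 hadoop103; do
  echo ssh-copy-id "$host"
done
```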

3. Disable the Firewall

  • 1. Check the firewall status
systemctl status firewalld
  • 2. Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld

4. Install the JDK (Important)

  • 1. Install from the rpm package

Install on all servers.

# Install the JDK
rpm -ivh oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm
# Add the environment variables
echo '# jdk' >> /etc/profile
echo 'export JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera' >> /etc/profile
echo 'export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
# Reload the environment
source /etc/profile
java -version

5. Install MySQL

  • 1. Check whether MySQL is already installed
rpm -qa|grep mysql
# mysql-libs-5.1.73-7.el6.x86_64
  • 2. If it is installed, remove it
rpm -e --nodeps mysql-libs-5.1.73-7.el6.x86_64
  • 3. Remove the old MySQL dependencies (also useful for cleaning up after a failed install)
yum remove mysql-libs
  • 4. Install MySQL and initialize the data directory

Upload the rpm packages and install them (8.0 or 5.7):

ls
# mysql-community-client-8.0.22-1.el7.x86_64.rpm
# mysql-community-client-plugins-8.0.22-1.el7.x86_64.rpm
# mysql-community-common-8.0.22-1.el7.x86_64.rpm
# mysql-community-libs-8.0.22-1.el7.x86_64.rpm
# mysql-community-libs-compat-8.0.22-1.el7.x86_64.rpm
# mysql-community-server-8.0.22-1.el7.x86_64.rpm

yum localinstall *.rpm -y

Sample /etc/my.cnf:

[mysqld]
datadir=/var/lib/mysql/data/
socket=/var/lib/mysql/mysql.sock
transaction-isolation = READ-COMMITTED
#lower_case_table_names=1
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links = 0
key_buffer_size = 32M
max_allowed_packet = 32M
thread_stack = 256K
thread_cache_size = 64
#query_cache_limit = 8M
#query_cache_size = 64M
#query_cache_type = 1
max_connections = 550
#expire_logs_days = 10
#max_binlog_size = 100M
#log_bin should be on a disk with enough free space.
#Replace '/var/lib/mysql/mysql_binary_log' with an appropriate path for your
#system and chown the specified folder to the mysql user.
log_bin=/var/lib/mysql/binlog/mysql-bin
#In later versions of MySQL, if you enable the binary log and do not set
#a server_id, MySQL will not start. The server_id must be unique within
#the replicating group.
server_id=11
binlog_format = mixed
read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M
# InnoDB settings
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 64M
innodb_buffer_pool_size = 4G
innodb_thread_concurrency = 8
innodb_flush_method = O_DIRECT
innodb_log_file_size = 512M
[mysqld_safe]
sql_mode=STRICT_ALL_TABLES
socket=/var/lib/mysql/mysql.sock

Initialize MySQL. When initialization finishes, note the temporary root password printed in the log:

mysqld --defaults-file=/etc/my.cnf --initialize

# To ignore case in table names, MySQL 8.0 must set this at initialization time (not needed for a CDH install)
mysqld --defaults-file=/etc/my.cnf --initialize --lower-case-table-names=1

Change the password after initialization (use one of the following):

SET PASSWORD = 'Root@123';
FLUSH PRIVILEGES;

-- or ('YourPassword' is a placeholder)
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'YourPassword';

-- MySQL 5.7 only: the PASSWORD() function was removed in 8.0
UPDATE mysql.user SET authentication_string=PASSWORD('root@123') WHERE User='root';

-- MySQL 8.0: connect directly with mysql -uroot -p, then:
ALTER USER 'root'@'localhost' IDENTIFIED BY 'Root@123';
FLUSH PRIVILEGES;

Enable remote access (use one of the following):

-- MySQL does not allow remote root login by default, so open it up
UPDATE mysql.user SET host='%' WHERE user='root';
FLUSH PRIVILEGES;

-- MySQL 5.7 (GRANT ... IDENTIFIED BY was removed in 8.0)
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'Root@123' WITH GRANT OPTION;
-- MySQL 8.0
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Master/slave replication (optional)

Skipped.
  • 5. Create the required databases

Database creation statements:

MySQL 5.7

-- mysql5.7
-- scm
CREATE DATABASE scm DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON scm.* TO 'scm'@'%' IDENTIFIED BY 'Scm@147258';

-- amon
CREATE DATABASE amon DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON amon.* TO 'amon'@'%' IDENTIFIED BY 'Amon@147258';

-- rman
CREATE DATABASE rman DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON rman.* TO 'rman'@'%' IDENTIFIED BY 'Rman@147258';

-- hue
CREATE DATABASE hue DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON hue.* TO 'hue'@'%' IDENTIFIED BY 'Hue@147258';

-- hive
CREATE DATABASE metastore DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON metastore.* TO 'hive'@'%' IDENTIFIED BY 'Hive@147258';

-- sentry
CREATE DATABASE sentry DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON sentry.* TO 'sentry'@'%' IDENTIFIED BY 'Sentry@147258';

-- nav
CREATE DATABASE nav DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON nav.* TO 'nav'@'%' IDENTIFIED BY 'Nav@147258';

-- navms
CREATE DATABASE navms DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON navms.* TO 'navms'@'%' IDENTIFIED BY 'Navms@147258';

-- oozie
CREATE DATABASE oozie DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON oozie.* TO 'oozie'@'%' IDENTIFIED BY 'Oozie@147258';

-- flush
FLUSH PRIVILEGES;

MySQL 8.0

-- mysql8.0
-- scm
CREATE DATABASE scm DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE USER 'scm'@'%' IDENTIFIED BY 'Scm@147258';
grant all privileges on scm.* to 'scm'@'%' ;

-- amon
CREATE DATABASE amon DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE USER 'amon'@'%' IDENTIFIED BY 'Amon@147258';
grant all privileges on amon.* to 'amon'@'%' ;

-- rman
CREATE DATABASE rman DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE USER 'rman'@'%' IDENTIFIED BY 'Rman@147258';
grant all privileges on rman.* to 'rman'@'%' ;


-- hue
CREATE DATABASE hue DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE USER 'hue'@'%' IDENTIFIED BY 'Hue@147258';
grant all privileges on hue.* to 'hue'@'%' ;


-- hive
CREATE DATABASE metastore DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE USER 'hive'@'%' IDENTIFIED BY 'Hive@147258';
grant all privileges on metastore.* to 'hive'@'%' ;


-- sentry
CREATE DATABASE sentry DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE USER 'sentry'@'%' IDENTIFIED BY 'Sentry@147258';
grant all privileges on sentry.* to 'sentry'@'%' ;


-- nav
CREATE DATABASE nav DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE USER 'nav'@'%' IDENTIFIED BY 'Nav@147258';
grant all privileges on nav.* to 'nav'@'%' ;


-- navms
CREATE DATABASE navms DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE USER 'navms'@'%' IDENTIFIED BY 'Navms@147258';
grant all privileges on navms.* to 'navms'@'%' ;


-- oozie
CREATE DATABASE oozie DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE USER 'oozie'@'%' IDENTIFIED BY 'Oozie@147258';
grant all privileges on oozie.* to 'oozie'@'%' ;


-- flush
FLUSH PRIVILEGES;

6. Upload the MySQL JDBC Driver

Upload the jar to the server and rename it:

ls
# mysql-connector-java-8.0.16.jar

cp mysql-connector-java-8.0.16.jar /usr/share/java/mysql-connector-java.jar

7. Install Cloudera Manager

Create a cloudera-manager directory to hold the CDH installation files, then extract the archive:

mkdir /opt/cloudera-manager
tar -zxvf cm6.3.1-redhat7.tar.gz
cd cm6.3.1/RPMS/x86_64/
mv cloudera-manager-agent-6.3.1-1466458.el7.x86_64.rpm /opt/cloudera-manager/
mv cloudera-manager-server-6.3.1-1466458.el7.x86_64.rpm /opt/cloudera-manager/
mv cloudera-manager-daemons-6.3.1-1466458.el7.x86_64.rpm /opt/cloudera-manager/

Install with rpm; cloudera-manager-daemons and cloudera-manager-agent must be installed on every node:

rpm -ivh cloudera-manager-daemons-6.3.1-1466458.el7.x86_64.rpm 

# Install the agent dependencies
yum install -y perl bind-utils psmisc cyrus-sasl-plain cyrus-sasl-gssapi fuse portmap fuse-libs /lib/lsb/init-functions httpd mod_ssl openssl-devel python-psycopg2 MySQL-python libxslt

rpm -ivh cloudera-manager-agent-6.3.1-1466458.el7.x86_64.rpm

Edit the agent configuration file:

vim /etc/cloudera-scm-agent/config.ini

server_host=hadoop101
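A sketch of scripting the same change with sed instead of editing by hand, handy when many nodes need it. It is demonstrated here against a scratch copy; on a real node the file is /etc/cloudera-scm-agent/config.ini.

```shell
# Point the agent at the CM server host, non-interactively.
cfg=$(mktemp)
printf '[General]\nserver_host=localhost\nserver_port=7182\n' > "$cfg"
sed -i 's/^server_host=.*/server_host=hadoop101/' "$cfg"
grep '^server_host=' "$cfg"
```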

Install cloudera-manager-server on the master node:

rpm -ivh cloudera-manager-server-6.3.1-1466458.el7.x86_64.rpm 

Edit the server's db.properties:

vim /etc/cloudera-scm-server/db.properties

com.cloudera.cmf.db.type=mysql
com.cloudera.cmf.db.host=hadoop101:3306
com.cloudera.cmf.db.name=scm
com.cloudera.cmf.db.user=scm
com.cloudera.cmf.db.password=Scm@147258
com.cloudera.cmf.db.setupType=EXTERNAL

Upload the parcel files:

ls /opt/cloudera/parcel-repo

# CDH-6.3.2-1.cdh6.3.2.p0.1605554-el7.parcel
# CDH-6.3.2-1.cdh6.3.2.p0.1605554-el7.parcel.sha
# manifest.json

8. Initialize the CM Database

/opt/cloudera/cm/schema/scm_prepare_database.sh mysql -h hadoop101 scm scm Scm@147258

9. Start the Server and Agents

Start the Server on the master node and the Agent on all nodes.

Start the Server:

systemctl start cloudera-scm-server

Start the Agent:

systemctl start cloudera-scm-agent

After starting the Server, watch its log; if everything looks fine after a few minutes, log in:

tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log

10. Log In to the Web UI

Default port: 7180
Default username: admin
Default password: admin

Installation complete.

11. Problems Encountered During and After Installation

  • Problem 1

Description

Failed to add storage directory [DISK]file:/data1/dfs/dn
java.io.IOException: Incompatible clusterIDs in /data1/dfs/dn: namenode clusterID = cluster8; datanode clusterID = cluster7
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:722)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:286)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:399)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:379)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:544)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1740)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1676)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:282)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
at java.lang.Thread.run(Thread.java:748)

Solution:

vi  /data/dfs/dn/current/VERSION

# Change the corresponding clusterID
#Wed Feb 03 16:14:04 CST 2021
storageID=DS-2f4324ff-a104-46ef-a966-c908e19dda2c
clusterID=cluster7
cTime=0
datanodeUuid=20e21b1d-02ee-4f13-bc3a-630eec78580a
storageType=DATA_NODE
layoutVersion=-57
  • Problem 2

Description

The installer UI shows a warning:

Transparent Huge Page Compaction is enabled and can cause significant performance problems. Run "echo never > /sys/kernel/mm/transparent_hugepage/defrag" and "echo never > /sys/kernel/mm/transparent_hugepage/enabled" to disable it, then add the same commands to an init script such as /etc/rc.local
so they are set on system reboot. The following hosts are affected:

Solution

echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
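To make the setting survive reboots, append the same two commands to an init script such as /etc/rc.local, as the warning itself suggests (a sketch; adapt to how your distribution handles rc.local):

```shell
# Persist the THP settings across reboots.
cat >> /etc/rc.local <<'EOF'
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
EOF
chmod +x /etc/rc.local
```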
  • Problem 3

Description

Memory on host hadoop4 is overcommitted. The total memory allocation is 210.0 GiB, but the host only has 251.7 GiB of RAM (of which 50.3 GiB is reserved for the system). Visit the Resources tab on the Hosts page for allocation details. Reconfigure the roles on the host to lower the overall memory allocation. Note: Java maximum heap sizes are multiplied by 1.3 to approximate JVM overhead.

Solution

Change the Memory Overcommit Validation Threshold to 0.9.
  • Problem 4

Description

Fatal error during KafkaServer startup. Prepare to shutdown
kafka.common.InconsistentBrokerIdException: Configured broker.id 56 doesn't match stored broker.id 102 in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
at kafka.server.KafkaServer.getBrokerIdAndOfflineDirs(KafkaServer.scala:707)
at kafka.server.KafkaServer.startup(KafkaServer.scala:212)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:42)
at kafka.Kafka$.main(Kafka.scala:75)
at com.cloudera.kafka.wrap.Kafka$$anonfun$1.apply(Kafka.scala:92)
at com.cloudera.kafka.wrap.Kafka$$anonfun$1.apply(Kafka.scala:92)
at com.cloudera.kafka.wrap.Kafka$.runMain(Kafka.scala:103)
at com.cloudera.kafka.wrap.Kafka$.main(Kafka.scala:95)
at com.cloudera.kafka.wrap.Kafka.main(Kafka.scala)

Solution

# Change the corresponding broker.id in the meta.properties file
find / -name meta.properties
# /var/local/kafka/data/meta.properties

vi /var/local/kafka/data/meta.properties
#
# Sat Jan 30 13:34:53 CST 2021
# version=0
# broker.id=65
  • Problem 5

Description

The hostname archive.cloudera.com cannot be resolved (this error can be ignored):

2021-01-22 14:42:14,126 ERROR ParcelUpdateService:com.cloudera.parcel.components.ParcelDownloaderImpl: (11 skipped) Unable to retrieve remote parcel repository manifest
java.util.concurrent.ExecutionException: java.net.UnknownHostException: archive.cloudera.com: Name or service not known
at com.ning.http.client.providers.netty.future.NettyResponseFuture.abort(NettyResponseFuture.java:231)
at com.ning.http.client.providers.netty.request.NettyRequestSender.abort(NettyRequestSender.java:422)
at com.ning.http.client.providers.netty.request.NettyRequestSender.sendRequestWithNewChannel(NettyRequestSender.java:290)
at com.ning.http.client.providers.netty.request.NettyRequestSender.sendRequestWithCertainForceConnect(NettyRequestSender.java:142)
at com.ning.http.client.providers.netty.request.NettyRequestSender.sendRequest(NettyRequestSender.java:117)
at com.ning.http.client.providers.netty.NettyAsyncHttpProvider.execute(NettyAsyncHttpProvider.java:87)
at com.ning.http.client.AsyncHttpClient.executeRequest(AsyncHttpClient.java:506)
at com.ning.http.client.AsyncHttpClient$BoundRequestBuilder.execute(AsyncHttpClient.java:229)
at com.cloudera.parcel.components.ParcelDownloaderImpl.getRepositoryInfoFuture(ParcelDownloaderImpl.java:592)
at com.cloudera.parcel.components.ParcelDownloaderImpl.getRepositoryInfo(ParcelDownloaderImpl.java:544)
at com.cloudera.parcel.components.ParcelDownloaderImpl.syncRemoteRepos(ParcelDownloaderImpl.java:357)
at com.cloudera.parcel.components.ParcelDownloaderImpl$1.run(ParcelDownloaderImpl.java:464)
at com.cloudera.parcel.components.ParcelDownloaderImpl$1.run(ParcelDownloaderImpl.java:459)
at com.cloudera.cmf.persist.ReadWriteDatabaseTaskCallable.call(ReadWriteDatabaseTaskCallable.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: archive.cloudera.com: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at java.net.InetAddress.getByName(InetAddress.java:1076)
at com.ning.http.client.NameResolver$JdkNameResolver.resolve(NameResolver.java:28)
at com.ning.http.client.providers.netty.request.NettyRequestSender.remoteAddress(NettyRequestSender.java:358)
at com.ning.http.client.providers.netty.request.NettyRequestSender.connect(NettyRequestSender.java:369)
at com.ning.http.client.providers.netty.request.NettyRequestSender.sendRequestWithNewChannel(NettyRequestSender.java:283)
... 15 more

Solution

Append 127.0.0.1 archive.cloudera.com to the hosts file, then restart cloudera-scm-server:

cat >> /etc/hosts <<EOF
127.0.0.1 archive.cloudera.com
EOF
systemctl restart cloudera-scm-server
tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log
  • Problem 6

Description

Analysis: the error says the datanode's clusterID does not match the namenode's clusterID.

Cause: most likely, while installing CDH, hadoop was started after the first dfs format, and some steps were then re-run so that the format command (hdfs namenode -format) executed a second time. That regenerates the namenode's clusterID while the datanode's clusterID stays the same.

Failed to add storage directory [DISK]file:/data1/dfs/dn
java.io.IOException: Incompatible clusterIDs in /data1/dfs/dn: namenode clusterID = cluster8; datanode clusterID = cluster7
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:722)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:286)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:399)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:379)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:544)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1740)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1676)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:282)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
at java.lang.Thread.run(Thread.java:748)

Solution

Open the VERSION file under the current directory at the path in the error, and change the clusterID:

cat /data1/dfs/dn/current/VERSION
#Sat Jan 30 11:15:36 CST 2021
storageID=DS-f9072fbc-3fb9-4600-a3c3-35e465a5cd45
clusterID=cluster8
cTime=0
datanodeUuid=b34f6c62-9f50-488e-a5b4-85046595bcb7
storageType=DATA_NODE
layoutVersion=-57
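When a datanode has several data directories (/data1, /data2, ...), every VERSION file needs the namenode's clusterID (cluster8 in the error above). A sketch of doing that with one sed command, demonstrated here on a scratch directory tree rather than the real /dataN paths:

```shell
# Build a scratch tree mimicking two datanode data directories.
root=$(mktemp -d)
for d in data1 data2; do
  mkdir -p "$root/$d/dfs/dn/current"
  printf 'clusterID=cluster7\nstorageType=DATA_NODE\n' > "$root/$d/dfs/dn/current/VERSION"
done
# On a real node the glob would be /data*/dfs/dn/current/VERSION.
sed -i 's/^clusterID=.*/clusterID=cluster8/' "$root"/data*/dfs/dn/current/VERSION
grep -h '^clusterID=' "$root"/data*/dfs/dn/current/VERSION
```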