HBase 2.x Installation
Prerequisite: a ZooKeeper cluster is already installed and running (HBase will use this external cluster rather than its bundled one, see below).
Download the installation package
Download URL:
Upload the package to the server and extract it:
tar -zxvf hbase-2.3.4-bin.tar.gz -C /usr/local/
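After extraction you should see the usual HBase layout (bin/, conf/, lib/); a quick check, assuming the path above:
ls /usr/local/hbase-2.3.4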
Modify the configuration files
The configuration files live in the conf directory.
(1) Edit hbase-env.sh
export JAVA_HOME=/usr/local/jdk1.8
export HBASE_MANAGES_ZK=false
HBASE_MANAGES_ZK=false tells HBase to use the ZooKeeper cluster you set up yourself instead of the ZooKeeper instance bundled with HBase.
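Since HBase will not manage ZooKeeper itself, make sure the external ensemble is already up; a quick check on each ZooKeeper node, assuming zkServer.sh is on PATH:
zkServer.sh status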
(2) Edit hbase-site.xml
<configuration>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>bigdata02,bigdata03,bigdata04</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://bigdata02:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
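hbase.rootdir must point at a reachable HDFS; a quick sanity check against the NameNode address configured above:
hdfs dfs -ls hdfs://bigdata02:9000/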
Edit the regionservers file (one HRegionServer host per line):
vi regionservers
bigdata02
bigdata03
bigdata04
Edit backup-masters
This file does not exist by default; create it yourself. It lists the hosts that will run a standby HMaster.
vi backup-masters
bigdata04
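Equivalently, a one-liner (assuming you are still in the conf directory, as with the vi command above):
echo bigdata04 > backup-masters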
Distribute the HBase installation to the other nodes
cd /usr/local
scp -r hbase-2.3.4/ bigdata03:/usr/local/
scp -r hbase-2.3.4/ bigdata04:/usr/local/
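A quick check that the copy landed on each node (assuming passwordless SSH is set up between the nodes):
ssh bigdata03 "ls /usr/local/hbase-2.3.4/bin/hbase"
ssh bigdata04 "ls /usr/local/hbase-2.3.4/bin/hbase"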
Configure environment variables
vi /etc/profile
export HBASE_HOME=/usr/local/hbase-2.3.4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin
source /etc/profile
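The same environment variables need to be set on the other nodes as well. Once the profile is sourced, a quick check that the hbase command resolves:
hbase version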
Start HBase
Start order: Hadoop cluster --> ZooKeeper cluster --> HBase (the prerequisite start commands are sketched below). HBase itself is started with start-hbase.sh.
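If HDFS and ZooKeeper are not already running, a minimal sketch assuming the standard Hadoop/ZooKeeper scripts are on PATH (adjust hosts to your deployment):
start-dfs.sh             # run on the NameNode host, bigdata02 in this setup
zkServer.sh start        # run on each ZooKeeper node
With both up, start HBase: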
start-hbase.sh
[root@bigdata02 hbase-2.3.4]# start-hbase.sh
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/soft/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hbase-2.3.4/lib/client-facing-thirdparty/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
running master, logging to /usr/local/hbase-2.3.4/logs/hbase-root-master-bigdata02.out
bigdata03: running regionserver, logging to /usr/local/hbase-2.3.4/bin/../logs/hbase-root-regionserver-bigdata03.out
bigdata04: running regionserver, logging to /usr/local/hbase-2.3.4/bin/../logs/hbase-root-regionserver-bigdata04.out
bigdata02: running regionserver, logging to /usr/local/hbase-2.3.4/bin/../logs/hbase-root-regionserver-bigdata02.out
bigdata04: running master, logging to /usr/local/hbase-2.3.4/bin/../logs/hbase-root-master-bigdata04.out
Verify the startup through a browser (the HBase Master web UI listens on port 16010 by default, e.g. http://bigdata02:16010).
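You can also confirm the Java processes with jps on each node; with this layout you would expect HMaster on bigdata02 and bigdata04 (the backup master) and HRegionServer on bigdata02, bigdata03 and bigdata04:
jps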
Troubleshooting
Q: The HMaster process did not start. A: Check the detailed error message in the logs directory under the HBase installation directory:
cat hbase-root-master-bigdata02.log
If the error is "failed on connection exception: java.net.ConnectException: Connection refused", make sure the port configured for fs.defaultFS in core-site.xml matches the port used in hbase.rootdir in hbase-site.xml.
core-site.xml:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://bigdata02:9000</value>
</property>
fs.defaultFS is the RPC address and port that accepts client connections and is used to obtain filesystem metadata.
hbase-site.xml:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://bigdata02:9000/hbase</value>
</property>
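To confirm the NameNode RPC address that Hadoop is actually using (and therefore the port hbase.rootdir must match), you can query the live configuration:
hdfs getconf -confKey fs.defaultFS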