Hadoop: HBase can't create its directory in HDFS

I am following this tutorial to install HBase and Hadoop, but I have run into a problem.

Everything works fine until the last step:

HBase creates its directory in HDFS. To see the created directory,
browse to Hadoop bin and type the following command.

$ ./bin/hadoop fs -ls /hbase

If everything goes well, it will give you the following output.

Found 7 items
drwxr-xr-x   - hbase users          0 2014-06-25 18:58 /hbase/.tmp

...

But when I run this command, I get /hbase: No such file or directory.
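A quick sanity check, assuming the Hadoop binaries are on the PATH, is to confirm that HDFS itself is up and reachable before looking at HBase:

jps                            # should list NameNode and DataNode processes
hdfs dfs -ls /                 # lists the HDFS root; fails if the NameNode is down
hdfs dfsadmin -safemode get    # HBase cannot initialize while HDFS is in safe mode

If these fail, the problem is on the Hadoop side rather than the HBase side.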

Here is my configuration.

Hadoop configuration

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
   <property>
      <name>dfs.replication</name>
      <value>1</value>
   </property>

   <property>
      <name>dfs.name.dir</name>
      <value>file:///home/marc/hadoopinfra/hdfs/namenode</value>
   </property>

   <property>
      <name>dfs.data.dir</name>
      <value>file:///home/marc/hadoopinfra/hdfs/datanode</value>
   </property>
</configuration>
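A side note: dfs.name.dir and dfs.data.dir are legacy property names; on Hadoop 2.x they still work but are deprecated in favor of dfs.namenode.name.dir and dfs.datanode.data.dir, so the equivalent current form would be:

<property>
   <name>dfs.namenode.name.dir</name>
   <value>file:///home/marc/hadoopinfra/hdfs/namenode</value>
</property>
<property>
   <name>dfs.datanode.data.dir</name>
   <value>file:///home/marc/hadoopinfra/hdfs/datanode</value>
</property>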

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>

HBase configuration

hbase-site.xml

<configuration>
   <property>
      <name>hbase.rootdir</name>
      <value>hdfs://localhost:8030/hbase</value>
   </property>
   <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>/home/marc/zookeeper</value>
   </property>
   <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
   </property>
</configuration>
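Note that hbase.rootdir above points at port 8030, while fs.defaultFS in core-site.xml is hdfs://localhost:9000. hbase.rootdir has to use the same URI as the HDFS NameNode, so with the core-site.xml shown above it would presumably need to be:

<property>
   <name>hbase.rootdir</name>
   <value>hdfs://localhost:9000/hbase</value>
</property>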

I can browse http://localhost:50070 and http://localhost:8088/cluster.

How can I fix this?

Edit

Following Saurabh Suman's answer, I created the hbase folder, but it stays empty.

In hbase-marc-master-marc-pc.log I see the following exception. Is it related?

2017-07-01 20:31:59,349 FATAL [marc-pc:16000.activeMasterManager] master.HMaster: Failed to become active master
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled.  Available:[TOKEN]
    at org.apache.hadoop.ipc.Client.call(Client.java:1411)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
    at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525)
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693)
    at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189)
    at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803)
    at java.lang.Thread.run(Thread.java:748)
2017-07-01 20:31:59,351 FATAL [marc-pc:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled.  Available:[TOKEN]
    at org.apache.hadoop.ipc.Client.call(Client.java:1411)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
    at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525)
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693)
    at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189)
    at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803)
    at java.lang.Thread.run(Thread.java:748)


The log shows that HBase failed to become the active master, so it started shutting down.

My assumption is that HBase never started up properly, so it never created the /hbase directory itself. That is also why the /hbase directory stayed empty.
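One possible cause, offered here only as an assumption: the "SIMPLE authentication is not enabled. Available:[TOKEN]" error is often reported when the Hadoop client jars bundled with HBase do not match the version of the running HDFS cluster. A quick way to compare the two (HBASE_HOME is a placeholder for the actual HBase install path):

hadoop version                      # version of the running Hadoop cluster
ls $HBASE_HOME/lib/hadoop-*.jar     # Hadoop client jars that HBase actually loads

If the versions differ, replacing the bundled jars with the cluster's versions is a commonly reported fix.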

I reproduced your error on a virtual machine and fixed it with the modified setup below.

OS: CentOS Linux release 7.2.1511

Virtualization software: Vagrant and VirtualBox

Java

java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)

core-site.xml (HDFS)

<configuration>
   <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:8020</value>
   </property>
</configuration>
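Note that fs.default.name is the legacy key; it still works on Hadoop 2.x but is deprecated in favor of fs.defaultFS, so the equivalent current form would be:

<property>
   <name>fs.defaultFS</name>
   <value>hdfs://localhost:8020</value>
</property>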

hbase-site.xml (HBase)

<configuration>
   <!-- hbase.rootdir must match the NameNode URI in core-site.xml -->
   <property>
      <name>hbase.rootdir</name>
      <value>hdfs://localhost:8020/hbase</value>
   </property>

   <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>/home/hadoop/zookeeper</value>
   </property>

   <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
   </property>
</configuration>
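After editing the configuration, both stacks need a restart to pick up the changes. Assuming the standard start/stop scripts from the Hadoop sbin and HBase bin directories are on the PATH:

stop-hbase.sh     # stop HBase first, since it depends on HDFS
stop-dfs.sh
start-dfs.sh
start-hbase.sh    # start HBase last, once HDFS is back up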

Directory ownership and permission adjustments

sudo su                      # become the root user
cd /usr/local/

chown -R hadoop:root hadoop  # hand the Hadoop install over to the hadoop user
chmod -R 755 hadoop

chown -R hadoop:root Hbase   # same for the HBase install (the directory is named Hbase here)
chmod -R 755 Hbase
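To verify that the ownership change took effect:

ls -ld /usr/local/hadoop /usr/local/Hbase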

Result

After starting HBase with this setup, it automatically created the /hbase directory and populated it.

[hadoop@localhost conf]$ hdfs dfs -ls /hbase
Found 7 items
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/.tmp
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/MasterProcWALs
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/WALs
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/data
-rw-r--r--   1 hadoop supergroup         42 2017-07-03 14:26 /hbase/hbase.id
-rw-r--r--   1 hadoop supergroup          7 2017-07-03 14:26 /hbase/hbase.version
drwxr-xr-x   - hadoop supergroup          0 2017-07-03 14:26 /hbase/oldWALs


Editing the configuration files alone will not create the directory, so you need to create it manually in HDFS:
hdfs dfs -mkdir /hbase
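If the directory is created by a different user than the one running HBase, HBase may still fail with a permission error. In that case, assuming HBase runs as the hadoop user, ownership can be handed over with:

hdfs dfs -chown hadoop /hbase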