Hadoop: Call From / to :9000 failed on connection exception: java.net.ConnectException: Connection refused

Call From / to :9000 failed on connection exception: java.net.ConnectException: Connection refused

I am trying to deploy a test Hadoop cluster. When I start it, all the logs look correct, but no Hadoop command works, and I found that nothing is listening on port 9000.

Running a Hadoop command (every command fails with the same error):

hadoop-2.5.0/bin$ ./hdfs dfs -ls /
14/08/15 10:19:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: Call From master-hadoop/172.17.65.225 to master-hadoop:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

The NameNode is not listening on port 9000:

hadoop-2.5.0/bin$ sudo netstat -ntap | grep 9000
The command prints nothing: no process is listening on port 9000.
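Besides netstat, the refusal can be reproduced from any machine with a plain TCP probe. A minimal sketch using bash's built-in /dev/tcp redirection (the helper name `probe` is made up; host and port are the ones from the question):

```shell
# Return "open" if a TCP connection to host:port succeeds, "refused" otherwise.
# Uses bash's built-in /dev/tcp pseudo-device, so no extra tools are needed.
probe() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo open
  else
    echo refused
  fi
}

probe master-hadoop 9000   # reproduces the question's symptom if nothing listens there
probe master-hadoop 9001   # the port the NameNode RPC server actually bound (see the log below)
```

A "refused" on 9000 together with "open" on 9001 would confirm that the NameNode is up but bound to a different port than clients expect.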

Hadoop configuration:

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master-hadoop:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/hadoop/tmp</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.rpc-address</name>
        <value>master-hadoop:9001</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>secondary-hadoop:50090</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.namenode.data.dir</name>
        <value>file:/home/hadoop/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
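Note that the NameNode log further down reports "RPC server is binding to master-hadoop:9001": when `dfs.namenode.rpc-address` is set, it takes precedence over the port given in `fs.defaultFS`, so the server listens on 9001 while clients still dial 9000. A sketch of one possible fix (verify against your own configuration) is to align the two, either by removing the property or by pointing it at the same port:

```xml
<!-- hdfs-site.xml: make the RPC address match fs.defaultFS (hdfs://master-hadoop:9000) -->
<property>
    <name>dfs.namenode.rpc-address</name>
    <value>master-hadoop:9000</value>
</property>
```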

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master-hadoop:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master-hadoop:19888</value>
    </property>
</configuration>

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce-shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master-hadoop:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master-hadoop:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master-hadoop:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master-hadoop:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master-hadoop:8088</value>
    </property>
</configuration>

NameNode /etc/hosts:

172.17.65.225  master-hadoop
127.0.0.1       master-hadoop

::1      master-hadoop localhost

172.17.65.151  slave1-hadoop
172.17.65.14   slave2-hadoop
172.17.65.117  secondary-hadoop
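One thing worth checking in this file, and a common cause of exactly this symptom: master-hadoop is mapped to 127.0.0.1 and ::1 as well as to 172.17.65.225, so the hostname may resolve to loopback and the NameNode then binds only the loopback interface. A small sketch that flags such lines in any hosts-format file (the helper name `check_hosts` and the sample file are made up for illustration):

```shell
# Flag hosts-file lines that map a non-"localhost" name onto a loopback address.
check_hosts() {
  awk '$1 ~ /^127\./ || $1 == "::1" {
    for (i = 2; i <= NF; i++)
      if ($i != "localhost") { print "suspicious: " $0; break }
  }' "$1"
}

# Sample file reproducing the loopback mappings from the question.
cat > hosts.sample <<'EOF'
172.17.65.225  master-hadoop
127.0.0.1       master-hadoop
::1      master-hadoop localhost
EOF

check_hosts hosts.sample
```

Run against the question's file, both loopback lines would be flagged; on the real master you would run it against /etc/hosts.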

NameNode format log:

hadoop-2.5.0/bin$ ./hdfs namenode -format

14/08/15 10:16:16 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master-hadoop/172.17.65.225
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.5.0
STARTUP_MSG:   classpath = /home/hadoop/hadoop/hadoop-2.5.0/etc/hadoop:/home/hadoop/hadoop/hadoop-2.5.0/share/hadoop/common/lib/jersey-json-1.9.jar:
...
:/home/hadoop/hadoop/hadoop-2.5.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1616291; compiled by 'jenkins' on 2014-08-06T17:31Z
STARTUP_MSG:   java = 1.7.0_21
************************************************************/

14/08/15 10:16:16 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/08/15 10:16:16 INFO namenode.NameNode: createNameNode [-format]
14/08/15 10:16:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-4d27991c-4852-407c-9c6b-70df76994d13
14/08/15 10:16:16 INFO namenode.FSNamesystem: fsLock is fair:true
14/08/15 10:16:16 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/08/15 10:16:16 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/08/15 10:16:16 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
14/08/15 10:16:16 INFO blockmanagement.BlockManager: The block deletion will start around 2014 Aug 15 10:16:16
14/08/15 10:16:16 INFO util.GSet: Computing capacity for map BlocksMap
14/08/15 10:16:16 INFO util.GSet: VM type       = 32-bit
14/08/15 10:16:16 INFO util.GSet: 2.0% max memory 888.9 MB = 17.8 MB
14/08/15 10:16:16 INFO util.GSet: capacity      = 2^22 = 4194304 entries
14/08/15 10:16:16 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/08/15 10:16:16 INFO blockmanagement.BlockManager: defaultReplication         = 2
14/08/15 10:16:16 INFO blockmanagement.BlockManager: maxReplication             = 512
14/08/15 10:16:16 INFO blockmanagement.BlockManager: minReplication             = 1
14/08/15 10:16:16 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
14/08/15 10:16:16 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
14/08/15 10:16:16 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/08/15 10:16:16 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
14/08/15 10:16:16 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
14/08/15 10:16:16 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
14/08/15 10:16:16 INFO namenode.FSNamesystem: supergroup          = supergroup
14/08/15 10:16:16 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/08/15 10:16:16 INFO namenode.FSNamesystem: HA Enabled: false
14/08/15 10:16:16 INFO namenode.FSNamesystem: Append Enabled: true
14/08/15 10:16:17 INFO util.GSet: Computing capacity for map INodeMap
14/08/15 10:16:17 INFO util.GSet: VM type       = 32-bit
14/08/15 10:16:17 INFO util.GSet: 1.0% max memory 888.9 MB = 8.9 MB
14/08/15 10:16:17 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/08/15 10:16:17 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/08/15 10:16:17 INFO util.GSet: Computing capacity for map cachedBlocks
14/08/15 10:16:17 INFO util.GSet: VM type       = 32-bit
14/08/15 10:16:17 INFO util.GSet: 0.25% max memory 888.9 MB = 2.2 MB
14/08/15 10:16:17 INFO util.GSet: capacity      = 2^19 = 524288 entries
14/08/15 10:16:17 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/08/15 10:16:17 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/08/15 10:16:17 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
14/08/15 10:16:17 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/08/15 10:16:17 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/08/15 10:16:17 INFO util.GSet: Computing capacity for map NameNodeRetryCache
14/08/15 10:16:17 INFO util.GSet: VM type       = 32-bit
14/08/15 10:16:17 INFO util.GSet: 0.029999999329447746% max memory 888.9 MB = 273.1 KB
14/08/15 10:16:17 INFO util.GSet: capacity      = 2^16 = 65536 entries
14/08/15 10:16:17 INFO namenode.NNConf: ACLs enabled? false
14/08/15 10:16:17 INFO namenode.NNConf: XAttrs enabled? true
14/08/15 10:16:17 INFO namenode.NNConf: Maximum size of an xattr: 16384
14/08/15 10:16:17 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1935486596-172.17.65.225-1408068977173
14/08/15 10:16:17 INFO common.Storage: Storage directory /home/hadoop/hadoop/dfs/name has been successfully formatted.
14/08/15 10:16:17 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/08/15 10:16:17 INFO util.ExitUtil: Exiting with status 0
14/08/15 10:16:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master-hadoop/172.17.65.225
************************************************************/

hadoop-hadoop-namenode-master-hadoop.log

2014-08-15 10:17:48,855 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master-hadoop/172.17.65.225
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.5.0
STARTUP_MSG:   classpath = /home/hadoop/hadoop/hadoop-2.5.0/etc/hadoop:/home/hadoop/hadoop/hadoop-2.5.0/share/hadoop/common/lib/jersey-json-1.9.jar:
...
...
:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1616291; compiled by 'jenkins' on 2014-08-06T17:31Z
STARTUP_MSG:   java = 1.7.0_21
************************************************************/

2014-08-15 10:17:48,870 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2014-08-15 10:17:48,880 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2014-08-15 10:17:49,117 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2014-08-15 10:17:49,209 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2014-08-15 10:17:49,209 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2014-08-15 10:17:49,211 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://master-hadoop:9000
2014-08-15 10:17:49,211 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use master-hadoop:9000 to access this namenode/service.
2014-08-15 10:17:49,389 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-08-15 10:17:54,555 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web server as: ${dfs.web.authentication.kerberos.principal}
2014-08-15 10:17:54,556 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2014-08-15 10:17:54,605 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-08-15 10:17:54,609 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2014-08-15 10:17:54,620 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2014-08-15 10:17:54,622 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2014-08-15 10:17:54,622 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2014-08-15 10:17:54,623 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-08-15 10:17:54,653 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2014-08-15 10:17:54,655 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2014-08-15 10:17:54,676 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2014-08-15 10:17:54,676 INFO org.mortbay.log: jetty-6.1.26
2014-08-15 10:17:54,883 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
2014-08-15 10:17:54,948 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2014-08-15 10:17:59,984 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2014-08-15 10:17:59,984 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2014-08-15 10:18:00,023 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2014-08-15 10:18:00,062 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2014-08-15 10:18:00,062 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2014-08-15 10:18:00,065 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2014-08-15 10:18:00,066 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2014 Aug 15 10:18:00
2014-08-15 10:18:00,068 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2014-08-15 10:18:00,068 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2014-08-15 10:18:00,069 INFO org.apache.hadoop.util.GSet: 2.0% max memory 888.9 MB = 17.8 MB
2014-08-15 10:18:00,069 INFO org.apache.hadoop.util.GSet: capacity      = 2^22 = 4194304 entries
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 2
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2014-08-15 10:18:00,087 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2014-08-15 10:18:00,092 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
2014-08-15 10:18:00,092 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2014-08-15 10:18:00,092 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2014-08-15 10:18:00,092 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2014-08-15 10:18:00,094 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2014-08-15 10:18:00,279 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2014-08-15 10:18:00,279 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2014-08-15 10:18:00,280 INFO org.apache.hadoop.util.GSet: 1.0% max memory 888.9 MB = 8.9 MB
2014-08-15 10:18:00,280 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2014-08-15 10:18:00,297 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2014-08-15 10:18:00,305 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2014-08-15 10:18:00,305 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2014-08-15 10:18:00,306 INFO org.apache.hadoop.util.GSet: 0.25% max memory 888.9 MB = 2.2 MB
2014-08-15 10:18:00,306 INFO org.apache.hadoop.util.GSet: capacity      = 2^19 = 524288 entries
2014-08-15 10:18:00,308 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2014-08-15 10:18:00,308 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2014-08-15 10:18:00,308 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2014-08-15 10:18:00,310 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2014-08-15 10:18:00,310 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2014-08-15 10:18:00,312 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2014-08-15 10:18:00,312 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2014-08-15 10:18:00,312 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 888.9 MB = 273.1 KB
2014-08-15 10:18:00,312 INFO org.apache.hadoop.util.GSet: capacity      = 2^16 = 65536 entries
2014-08-15 10:18:00,316 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
2014-08-15 10:18:00,316 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
2014-08-15 10:18:00,317 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr: 16384
2014-08-15 10:18:00,355 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hadoop/hadoop/dfs/name/in_use.lock acquired by nodename 21145@master-hadoop
2014-08-15 10:18:00,433 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /home/hadoop/hadoop/dfs/name/current
2014-08-15 10:18:00,433 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2014-08-15 10:18:00,488 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2014-08-15 10:18:00,534 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2014-08-15 10:18:00,534 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /home/hadoop/hadoop/dfs/name/current/fsimage_0000000000000000000
2014-08-15 10:18:00,542 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2014-08-15 10:18:00,543 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
2014-08-15 10:18:00,689 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2014-08-15 10:18:00,689 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 372 msecs
2014-08-15 10:18:00,902 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to master-hadoop:9001
2014-08-15 10:18:00,909 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2014-08-15 10:18:00,923 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9001
2014-08-15 10:18:00,954 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2014-08-15 10:18:00,963 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2014-08-15 10:18:00,964 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2014-08-15 10:18:00,982 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks            = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks          = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of under-replicated blocks = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of  over-replicated blocks = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of blocks being written    = 0
2014-08-15 10:18:00,994 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 31 msec
2014-08-15 10:18:01,010 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-08-15 10:18:01,011 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9001: starting
2014-08-15 10:18:01,131 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: master-hadoop/172.17.65.225:9001
2014-08-15 10:18:01,132 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2014-08-15 10:18:01,142 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2014-08-15 10:18:01,142 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning because of pending operations
2014-08-15 10:18:01,147 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 5 millisecond(s).
2014-08-15 10:18:02,566 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.17.65.14, datanodeUuid=e3b6ade5-3534-4f5f-99fa-959bbbd9dce9, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4d27991c-4852-407c-9c6b-70df76994d13;nsid=995055688;c=0) storage e3b6ade5-3534-4f5f-99fa-959bbbd9dce9
2014-08-15 10:18:02,571 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/172.17.65.14:50010
2014-08-15 10:18:02,648 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-8dee85fa-82b5-40a6-98e4-db44cca23371 for DN 172.17.65.14:50010
2014-08-15 10:18:02,698 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* processReport: Received first block report from DatanodeStorage[DS-8dee85fa-82b5-40a6-98e4-db44cca23371,DISK,NORMAL] after starting up or becoming active. Its block contents are no longer considered stale
2014-08-15 10:18:02,698 INFO BlockStateChange: BLOCK* processReport: from storage DS-8dee85fa-82b5-40a6-98e4-db44cca23371 node DatanodeRegistration(172.17.65.14, datanodeUuid=e3b6ade5-3534-4f5f-99fa-959bbbd9dce9, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4d27991c-4852-407c-9c6b-70df76994d13;nsid=995055688;c=0), blocks: 0, hasStaleStorages: false, processing time: 3 msecs
2014-08-15 10:18:05,783 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2014-08-15 10:18:09,235 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2014-08-15 10:18:14,099 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2014-08-15 10:18:14,578 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.17.65.151, datanodeUuid=43bc6f34-b8ad-4355-9fe4-9951f40e982a, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4d27991c-4852-407c-9c6b-70df76994d13;nsid=995055688;c=0) storage 43bc6f34-b8ad-4355-9fe4-9951f40e982a
2014-08-15 10:18:14,578 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/172.17.65.151:50010
2014-08-15 10:18:14,628 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new storage ID DS-1e42ed67-0da7-476a-9e67-d778cd56b2b1 for DN 172.17.65.151:50010
2014-08-15 10:18:14,660 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* processReport: Received first block report from DatanodeStorage[DS-1e42ed67-0da7-476a-9e67-d778cd56b2b1,DISK,NORMAL] after starting up or becoming active. Its block contents are no longer considered stale
2014-08-15 10:18:14,660 INFO BlockStateChange: BLOCK* processReport: from storage DS-1e42ed67-0da7-476a-9e67-d778cd56b2b1 node DatanodeRegistration(172.17.65.151, datanodeUuid=43bc6f34-b8ad-4355-9fe4-9951f40e982a, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4d27991c-4852-407c-9c6b-70df76994d13;nsid=995055688;c=0), blocks: 0, hasStaleStorages: false, processing time: 1 msecs
2014-08-15 10:18:19,074 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2014-08-15 10:18:31,143 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2014-08-15 10:18:31,144 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2014-08-15 10:19:01,143 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2014-08-15 10:19:01,144 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2014-08-15 10:19:01,521 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 172.17.65.117
2014-08-15 10:19:01,521 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2014-08-15 10:19:01,521 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 1
2014-08-15 10:19:01,522 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 55
2014-08-15 10:19:01,536 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 69
2014-08-15 10:19:01,538 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /home/hadoop/hadoop/dfs/name/current/edits_inprogress_0000000000000000001 -> /home/hadoop/hadoop/dfs/name/current/edits_0000000000000000001-0000000000000000002
2014-08-15 10:19:01,542 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3
2014-08-15 10:19:02,094 WARN org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory: The property 'ssl.client.truststore.location' has not been set, no TrustStore will be loaded
2014-08-15 10:19:02,969 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Transfer took 0.05s at 0.00 KB/s
2014-08-15 10:19:02,969 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000002 size 353 bytes.
2014-08-15 10:19:03,014 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 0
2014-08-15 10:19:31,143 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2014-08-15 10:19:31,145 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2014-08-15 10:20:01,144 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2014-08-15 10:20:01,145 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).

Namenode jps:

/hadoop-2.5.0/logs$ jps
21145 NameNode
21409 ResourceManager

Secondary jps:

Datanode1 jps:

/hadoop-2.5.0$ jps
7350 DataNode

Datanode2 jps:

/hadoop-2.5.0$ jps
11784 DataNode

I ran into the same problem.

When you run hdfs namenode -format, check the NameNode information printed in the SHUTDOWN_MSG (for example, the host it reports) against the settings in core-site.xml.

But I don't think this is a good solution; there may be a better fix.


Perhaps you can try the following:

  • Edit /etc/hosts: change 127.0.0.1 master-hadoop to 127.0.0.1 localhost

  • Stop all Hadoop services:

    ./sbin/stop-dfs.sh

    ./sbin/stop-yarn.sh

  • Restart all Hadoop services:

    ./sbin/start-dfs.sh

    ./sbin/start-yarn.sh
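The /etc/hosts edit from the first bullet can be scripted. A sketch that works on a copy first (the file name hosts.sample is made up; on the real master you would back up /etc/hosts and then apply the same substitution to it, assuming GNU sed):

```shell
# Work on a copy of the hosts file; never edit /etc/hosts without a backup.
cat > hosts.sample <<'EOF'
172.17.65.225  master-hadoop
127.0.0.1       master-hadoop
EOF

# Rewrite the loopback line so 127.0.0.1 maps to localhost, not to master-hadoop.
sed -i 's/^127\.0\.0\.1[[:space:]]\+master-hadoop[[:space:]]*$/127.0.0.1   localhost/' hosts.sample

cat hosts.sample
```

After applying the same change to the real /etc/hosts on the master, restart the services as listed above so the NameNode re-resolves its hostname to the LAN address.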