We are getting warnings in our MapReduce job while reading data from and writing data to the datanodes; the job is not aborted, though. The error shows up at several points in the job and looks like a timeout issue governed by properties in the hdfs-site.xml and hbase-site.xml files.

Which timeout properties should I change in these files, and why?

Below is an extract from our log file. Any help would be appreciated.

Read error:

filename: trace_log-2015_02_13.gz
extracted name: extractedLogtrace_log-2015_02_13.log
15/02/13 12:51:39 INFO input.FileInputFormat: Total input paths to process : 1
15/02/13 12:51:39 INFO util.NativeCodeLoader: Loaded the native-hadoop library
15/02/13 12:51:39 WARN snappy.LoadSnappy: Snappy native library not loaded
15/02/13 12:51:39 INFO mapred.JobClient: Running job: job_201410072206_7921
15/02/13 12:51:40 INFO mapred.JobClient:  map 0% reduce 0%
15/02/13 12:51:42 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
15/02/13 12:52:00 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_-2121649173137352050_631454 java.net.SocketTimeoutException: 69000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/D1:2011 remote=/D1:2010]
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
at java.io.DataInputStream.readFully(Unknown Source)
at java.io.DataInputStream.readLong(Unknown Source)
at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:124)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:3161)
15/02/13 12:52:00 WARN hdfs.DFSClient: Error Recovery for blk_-2121649173137352050_631454 bad datanode[0] D1:2010
15/02/13 12:52:00 WARN hdfs.DFSClient: Error Recovery for block blk_-2121649173137352050_631454 in pipeline D1:2010, D2:2010, D0:2010: bad datanode D1:2010

Write error:

java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/D1:2010 remote=/D1:2011]
at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
at java.lang.Thread.run(Thread.java:662)
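For what it's worth, the two timeout values in the traces above look like the standard HDFS socket timeouts: the read-side 69000 ms resembles dfs.socket.timeout (default 60000 ms, plus a per-datanode pipeline extension), and the write-side 480000 ms matches the default of dfs.datanode.socket.write.timeout. A sketch of what raising them in hdfs-site.xml might look like is below; the 300000/600000 values are purely illustrative, not recommendations, and the same overrides would typically also go into hbase-site.xml so the HBase DFS client picks them up:

```xml
<!-- hdfs-site.xml (and mirrored in hbase-site.xml) - illustrative values only -->
<configuration>
  <property>
    <!-- Client/datanode read timeout; the 69000 ms in the read error is
         derived from this (default 60000 ms) plus a pipeline extension. -->
    <name>dfs.socket.timeout</name>
    <value>300000</value>
  </property>
  <property>
    <!-- Datanode write timeout; matches the 480000 ms in the write error
         (480000 ms is the default). -->
    <name>dfs.datanode.socket.write.timeout</name>
    <value>600000</value>
  </property>
</configuration>
```

Note that raising timeouts only hides slow datanodes; if these warnings are frequent it may be worth checking network and disk health on D1 as well.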