Error:
10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block
blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node:
java.io.IOException: No live nodes contain current block. Will get new
block locations from namenode and retry...
Solution:
Make sure you have configured Hadoop's conf/hdfs-site.xml, setting the xceivers value to at least the following:

  dfs.datanode.max.xcievers = 4096

Be sure to restart your HDFS after making the above configuration.
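For reference, in conf/hdfs-site.xml the setting takes the standard Hadoop XML property form (the misspelling "xcievers" is in the property name itself):

  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>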
Error:
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception increateBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
Solution:
Apache HBase is a database. It uses a lot of files all at the same time.
The default ulimit -n (i.e. the user file limit) of 1024 on most *nix systems
is insufficient (on Mac OS X it is 256). Any significant amount of loading will
lead you to the error above, along with the createBlockOutputStream
EOFException shown earlier.

Do yourself a favor and change the upper bound on the number of file descriptors. Set it to north of 10k. The math runs roughly as follows: per ColumnFamily there is at least one StoreFile, and possibly up to 5 or 6 if the region is under load. Multiply the average number of StoreFiles per ColumnFamily by the number of regions per RegionServer. For example, assuming a schema with 3 ColumnFamilies per region, an average of 3 StoreFiles per ColumnFamily, and 100 regions per RegionServer, the JVM will open 3 * 3 * 100 = 900 file descriptors (not counting open jar files, config files, etc.).
You should also up the HBase user's nproc setting; under load, a low nproc setting could manifest as an OutOfMemoryError.

To be clear, upping the file descriptors and nproc for the user who is running the HBase process is an operating system configuration, not an HBase configuration. Also, a common mistake is that administrators will up the file descriptors for a particular user but, for whatever reason, HBase will be running as someone else. HBase prints the ulimit it is seeing as the first line in its logs; ensure it is correct. You can also check the effective limits directly, as shown below.
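A quick check is to run the following as the same user that actually starts HBase (the user name here is only an example):

  # switch to the user that runs the HBase process
  su - hbase
  ulimit -n   # open-file limit; should be well north of 10000
  ulimit -u   # max user processes (the nproc limit)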
Other errors you may see:

UserPriviledgedAction: give chown rights (ownership of the Hadoop/HBase directories) to the hduser user.

Session 0x0 for server null: a network issue; the client cannot reach ZooKeeper.

Clock sync error: set the same time on the master and all slaves to synchronize the clocks (a sketch using NTP follows this list).

Unable to read additional data from client sessionid: this error comes when slaves are removed and data is not replicated properly. Add the slaves back to recover the data.
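A minimal clock-sync sketch, assuming Debian/Ubuntu hosts and the public pool.ntp.org servers (substitute your own NTP server if you have one); run this on the master and every slave:

  # one-time manual sync against a public NTP pool (assumed server)
  sudo ntpdate pool.ntp.org
  # install the NTP daemon so clocks stay in sync from then on
  sudo apt-get install ntp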
If you are on Ubuntu you will need to make the following changes:

In the file /etc/security/limits.conf add a line like:

  hadoop  -  nofile  32768

Replace hadoop with whatever user is running Hadoop and HBase. If you have separate users, you will need 2 entries, one for each user. In the same file set nproc hard and soft limits. For example:

  hadoop  soft  nproc  32000
  hadoop  hard  nproc  32000
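Putting it together, assuming separate hadoop and hbase users (the user names are examples), the full /etc/security/limits.conf fragment would look like:

  hadoop  -     nofile  32768
  hadoop  soft  nproc   32000
  hadoop  hard  nproc   32000
  hbase   -     nofile  32768
  hbase   soft  nproc   32000
  hbase   hard  nproc   32000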
In the file /etc/pam.d/common-session add as the last line in the file:

  session required pam_limits.so

Otherwise the changes in /etc/security/limits.conf won't be applied. Don't forget to log out and back in again for the changes to take effect!
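One way to apply and verify this from the shell (a sketch; adjust the path if your distribution differs):

  # append the pam_limits line to the end of the file
  echo 'session required pam_limits.so' | sudo tee -a /etc/pam.d/common-session
  # log out, log back in, then confirm the new limit is active
  ulimit -n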
HBase will lose data unless it is running on an HDFS that has a durable sync implementation. DO NOT use Hadoop 0.20.2, Hadoop 0.20.203.0, or Hadoop 0.20.204.0, which DO NOT have this attribute. Currently only Hadoop versions 0.20.205.x, or any release in excess of this version (this includes hadoop-1.0.0), have a working, durable sync. Sync has to be explicitly enabled by setting dfs.support.append to true on both the client side (in hbase-site.xml) and on the server side (in hdfs-site.xml). (The sync facility HBase needs is a subset of the append code path.)
Add dfs.support.append = true to hbase-site.xml, and the same property on the server side in hdfs-site.xml.
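In the standard Hadoop XML property form, the entry in both files is:

  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>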
To be able to create secondary indexes in HBase, use org.apache.hadoop.hbase.regionserver.tableindexed.IndexedRegionServer, as sketched below.
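A minimal sketch of wiring this in, assuming the old tableindexed contrib (ITHBase); the property name below is an assumption drawn from that contrib's setup and should be verified against its README:

  <!-- hbase-site.xml: swap in the indexed region server (assumed contrib setting) -->
  <property>
    <name>hbase.regionserver.impl</name>
    <value>org.apache.hadoop.hbase.regionserver.tableindexed.IndexedRegionServer</value>
  </property>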
For general cluster setup, see: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html