Tuesday 5 March 2013

SecondaryNameNode in Hadoop

You might think that the SecondaryNameNode is a hot backup daemon for the NameNode. You’d be wrong. The SecondaryNameNode is a poorly understood component of the HDFS architecture, but one which provides the important function of lowering NameNode restart time. This blog post describes how to configure this daemon in a large-scale environment. The default Hadoop configuration places an instance of the SecondaryNameNode on the same node as the NameNode. A more scalable configuration involves configuring the SecondaryNameNode on a different machine.

About the SecondaryNameNode

The NameNode is responsible for the reliable storage and interactive lookup and modification of the metadata for HDFS. To maintain interactive speed, the filesystem metadata is stored in the NameNode’s RAM. Storing the data reliably necessitates writing it to disk as well. To ensure that these writes do not become a speed bottleneck, instead of storing the current snapshot of the filesystem every time, a list of modifications is continually appended to a log file called the EditLog. Restarting the NameNode involves replaying the EditLog to reconstruct the final system state.
The SecondaryNameNode periodically compacts the EditLog into a “checkpoint;” the EditLog is then cleared. A restart of the NameNode then involves loading the most recent checkpoint and a shorter EditLog containing only events since the checkpoint. Without this compaction process, restarting the NameNode can take a very long time. Compaction ensures that restarts do not incur unnecessary downtime.
The duties of the SecondaryNameNode end there; it cannot take over the job of serving interactive requests from the NameNode. However, in the event of the loss of the primary NameNode, an instance of the NameNode daemon could be started manually on a copy of the NameNode metadata retrieved from the SecondaryNameNode.
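How often the SecondaryNameNode produces a checkpoint is tunable. As a minimal sketch, assuming the 0.20/1.x-era property names fs.checkpoint.period and fs.checkpoint.size still apply to your release (the values below are the commonly cited defaults, not recommendations), something like the following in the SecondaryNameNode's configuration controls when compaction happens:

  <property>
    <name>fs.checkpoint.period</name>
    <value>3600</value>
    <description>Seconds between two consecutive checkpoints (assumed default).</description>
  </property>
  <property>
    <name>fs.checkpoint.size</name>
    <value>67108864</value>
    <description>EditLog size, in bytes, that forces a checkpoint even if the
    checkpoint period has not yet elapsed (assumed default, 64 MB).</description>
  </property>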

Why should this run on a separate machine?

  1. Scalability. Creating the system snapshot requires about as much memory as the NameNode itself occupies. Since the memory available to the NameNode process is a primary limit on the size of the distributed filesystem, a large-scale cluster will require most or all of the available memory for the NameNode.
  2. Durability. When the SecondaryNameNode creates a checkpoint, it does so in a separate copy of the filesystem metadata. Moving this process to another machine also creates a copy of the metadata file on an independent machine, increasing its durability.

Configuring the SecondaryNameNode on a remote host

An HDFS instance is started on a cluster by logging in to the NameNode machine and running $HADOOP_HOME/bin/start-dfs.sh (or start-all.sh). This script starts a local instance of the NameNode process, logs into every machine listed in the conf/slaves file and starts an instance of the DataNode process, and logs into every machine listed in the conf/masters file and starts an instance of the SecondaryNameNode process. The masters file does not govern which nodes become NameNodes or JobTrackers; those are started on the machine(s) where bin/start-dfs.sh and bin/start-mapred.sh are executed. A more accurate filename might be “secondaries,” but that’s not currently the case.
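As an illustrative sketch of that layout (the hostnames below are placeholders, not part of any real setup):

  # conf/slaves -- one DataNode host per line
  datanode01
  datanode02

  # conf/masters -- despite the name, this file lists SecondaryNameNode hosts
  snn01

  # run from the NameNode machine:
  $HADOOP_HOME/bin/start-dfs.sh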
  1. Put each machine where you intend to run a SecondaryNameNode in the conf/masters file, one per line. (Note: currently, only one SecondaryNameNode may be configured in this manner.)
  2. Modify the conf/hadoop-site.xml file on each of these machines to include the following property:

    <property>
      <name>dfs.http.address</name>
      <value>namenode.host.address:50070</value>
      <description>
        The address and the base port where the dfs namenode web ui will listen on.
        If the port is 0 then the server will start on a free port.
      </description>
    </property>

This second step is less obvious than the first and works around a subtlety in Hadoop’s data transfer architecture. Traffic between the DataNodes and the NameNode occurs over a custom RPC protocol; the port for this protocol is specified in the URI supplied to the fs.default.name property. The NameNode also runs a Jetty web servlet engine on port 50070. This servlet engine generates status pages detailing the NameNode’s operation. It also communicates with the SecondaryNameNode. The SecondaryNameNode actually performs an HTTP GET request to retrieve the current FSImage (checkpoint) and EditLog from the NameNode; it uses HTTP POST to upload the new checkpoint back to the NameNode. The conf/hadoop-default.xml file sets dfs.http.address to 0.0.0.0:50070; the NameNode listens on this host mask and port (by default, all inbound interfaces on port 50070), and the SecondaryNameNode attempts to use the same value as an address to connect to. It special-cases 0.0.0.0 as “localhost.” Running the SecondaryNameNode on a different machine requires telling that machine where to reach the NameNode.
Usually this setting could be placed in the hadoop-site.xml file used by all daemons on all nodes. In an environment such as Amazon EC2, though, where a node is known by multiple addresses (one public IP and one private IP), it is preferable to have the SecondaryNameNode connect to the NameNode over the private (unmetered bandwidth) IP address, while you connect to the public IP address for status pages. Specifying dfs.http.address as anything other than 0.0.0.0 on the NameNode will cause it to bind to only one address instead of all available ones.
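As a hedged illustration of the EC2 case, the SecondaryNameNode's hadoop-site.xml could point at the NameNode's private address (ip-10-0-0-1.ec2.internal below is a made-up hostname standing in for that private address):

  <property>
    <name>dfs.http.address</name>
    <value>ip-10-0-0-1.ec2.internal:50070</value>
    <description>Private (unmetered) address of the NameNode's web port,
    as seen from the SecondaryNameNode.</description>
  </property>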
In conclusion, larger deployments of HDFS will require a remote SecondaryNameNode, and running it remotely requires a subtle configuration tweak to ensure that the SecondaryNameNode can communicate back to the remote NameNode.

Common errors and solutions

Error:
10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry... [8]
Solution:
 Make sure you have configured Hadoop's conf/hdfs-site.xml, setting the xceivers value to at least the following:

      <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
      </property>

Be sure to restart HDFS after making the above configuration change.
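Assuming the standard control scripts and that you run them from the NameNode machine, a restart is roughly:

      $HADOOP_HOME/bin/stop-dfs.sh
      $HADOOP_HOME/bin/start-dfs.sh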


Error:
 2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception increateBlockOutputStream java.io.EOFException
 2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
 
Solution:
Apache HBase is a database. It uses a lot of files, all at the same time. The default ulimit -n -- i.e. the per-user open file limit -- of 1024 on most *nix systems is insufficient (on Mac OS X it is 256). Any significant amount of loading will lead you to errors like the one shown above.
Do yourself a favor and change the upper bound on the number of file descriptors. Set it to north of 10k. The math runs roughly as follows: per ColumnFamily there is at least one StoreFile and possibly up to 5 or 6 if the region is under load. Multiply the average number of StoreFiles per ColumnFamily times the number of regions per RegionServer. For example, assuming that a schema had 3 ColumnFamilies per region with an average of 3 StoreFiles per ColumnFamily, and there are 100 regions per RegionServer, the JVM will open 3 * 3 * 100 = 900 file descriptors (not counting open jar files, config files, etc.)
You should also raise the HBase user's nproc setting; under load, a low nproc setting can manifest as an OutOfMemoryError.
To be clear, raising the file descriptor and nproc limits for the user who is running the HBase process is an operating system configuration, not an HBase configuration. Also, a common mistake is that administrators raise the file descriptor limit for a particular user, but for whatever reason HBase ends up running as someone else. HBase prints the ulimit it is seeing as the first line of its logs; ensure that it is correct.
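A rough way to gauge how many descriptors the process actually has open (assuming lsof is installed and hbase is the user running the RegionServer):

      lsof -u hbase | wc -l    # approximate count of files currently open by the hbase user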

UserPriviledgedAction: give chown rights to the hduser account.
session 0x0 for server null: a network issue; check connectivity between the nodes.
Clock sync error: set the same time on the master and all slaves to synchronise the clocks (see the sketch below).
Unable to read additional data from client sessionid: this error appears when slaves are removed and data has not been replicated properly. Add the slaves back to recover the data.
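For the clock-sync case mentioned above, the usual fix is to point every node at the same time source; a rough sketch (the NTP pool host is just an example, and running ntpd or chrony continuously is preferable to a one-off ntpdate):

      # run on the master and on every slave
      sudo ntpdate pool.ntp.org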


If you are on Ubuntu you will need to make the following changes:
In the file /etc/security/limits.conf add a line like:
hadoop  -       nofile  32768
Replace hadoop with whatever user is running Hadoop and HBase. If you have separate users, you will need 2 entries, one for each user. In the same file set nproc hard and soft limits. For example:
hadoop soft nproc 32000
hadoop hard nproc 32000
In the file /etc/pam.d/common-session add as the last line in the file:
session required  pam_limits.so
Otherwise the changes in /etc/security/limits.conf won't be applied.
Don't forget to log out and back in again for the changes to take effect!
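To verify the new limits took effect for the right user after logging back in, something like the following should report the values set above (hadoop is a placeholder for your daemon user):

      su - hadoop -c 'ulimit -n'   # expect 32768 with the example values above
      su - hadoop -c 'ulimit -u'   # expect 32000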



HBase will lose data unless it is running on an HDFS that has a durable sync implementation. DO NOT use Hadoop 0.20.2, Hadoop 0.20.203.0, or Hadoop 0.20.204.0, which DO NOT have this attribute. Currently only Hadoop versions 0.20.205.x or any release in excess of this version -- this includes hadoop-1.0.0 -- have a working, durable sync [7]. Sync has to be explicitly enabled by setting dfs.support.append equal to true on both the client side -- in hbase-site.xml -- and on the server side in hdfs-site.xml (the sync facility HBase needs is a subset of the append code path).

  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>

 Add the following to hbase-site.xml:


 
  <property>
    <name>hbase.regionserver.class</name>
    <value>org.apache.hadoop.hbase.ipc.IndexedRegionInterface</value>
    <description>
      This configuration is required to enable indexing on
      HBase and to be able to create secondary indexes.
    </description>
  </property>

  <property>
    <name>hbase.regionserver.impl</name>
    <value>org.apache.hadoop.hbase.regionserver.tableindexed.IndexedRegionServer</value>
    <description>
      This configuration is required to enable indexing on
      HBase and to be able to create secondary indexes.
    </description>
  </property>