Friday 27 September 2013

DynamoDB : Pros and Cons



Dynamo DB
Pros
·         Scalable, simple, distributed, flexible, and offers tunable consistency.
·         Hosted and fully managed by Amazon.
·         Plans are cheap – a free tier allows more than 40 million database operations/month, and pricing is based on provisioned throughput.
·         Automatic data replication across multiple AWS Availability Zones to protect data and provide high uptime.
·         Average service-side latencies in the single-digit milliseconds.
·         A single API call allows you to atomically increment or decrement numerical attributes.
·         Uses secure algorithms to keep your data safe.
·         The AWS Management Console lets you monitor your table's operational metrics, and the service is backed by Amazon's infrastructure support.
·         Tightly integrated with Amazon Elastic MapReduce (Amazon EMR).
·         Composite key support.
·         Offers conditional updates.
·         Supports Hadoop integration (MapReduce, Hive).
·         Built on distributed hash table principles, which gives it a performance edge over MySQL for simple key-based access.
·         Consistent performance.
·         Low learning curve.
·         Comes with a decent object mapper for Java.


Cons

·         Poor documentation
·         Limited data types: it does not accept binary data, so you cannot store images, byte arrays, etc. You can work around this by base64-encoding everything as strings, but base64 encoding produces bigger data, and you end up counting your bytes because you cannot exceed the 64KB item limit.
·         Poor query comparison operators
·         Unable to do complex queries
·         64KB limit on row size
·         1MB limit on querying
·         Deployable Only on AWS
·         Indexes on column values are not supported, and secondary indexes are not supported either.
·         Integrated caching is not well explained in the documentation.
·         When you create a table programmatically (or even using the AWS Console), the table does not become available instantly.
·         An RDBMS gives you ACID guarantees; DynamoDB offers no such guarantee.
·         DynamoDB is an expensive (though extremely low-latency) solution if you are trying to store more than 64KB per item.
·         Indexing - Changing or adding keys on-the-fly is impossible without creating a new table.
·         Queries - Querying data is extremely limited, especially if you want to query non-indexed data. Joins are of course impossible, so you have to manage complex data relations in your code/cache layer.
·         Backup - tedious backup procedure compared to the slick backup of RDS.
·         GUI - bad UX, limited search, no fun.
·         Speed - Response time is problematic compared to RDS. You find yourself building elaborate caching mechanisms to compensate for it in places where you would have settled for RDS's internal caching.
·         A big limitation of DynamoDB and other non-relational databases is the lack of multiple indices.
·         DynamoDB is great for lookups by key, not so good for queries, and abysmal for queries with multiple predicates. (Esp. for Eventlog tables)
·         DynamoDB does not support transactions in the traditional SQL sense. Each write operation is atomic to an item. A write operation either successfully updates all of the item's attributes or none of its attributes.
·         Once you hit the provisioned read or write limit, your requests are denied (throttled) until enough time has elapsed.
·         No Server-side scripts
·         No triggers
·         No Foreign Keys

Monday 16 September 2013

Difference between varchar and nvarchar

Differences :

1. Character Data Type

Varchar - Non-Unicode data
NVarchar - Unicode data

2. Character Size

Varchar - 1 byte per character
NVarchar - 2 bytes per character

3. Maximum Length

Varchar - 8,000 characters
NVarchar - 4,000 characters

4. Storage Size

Varchar - actual length (in bytes)
NVarchar - 2 times the actual length (in bytes)

5. Example



DECLARE @FirstName AS VARCHAR(50) = 'Techie'
SELECT @FirstName AS FirstName,
       DATALENGTH(@FirstName) AS Length

Result:
FirstName Length
Techie    6


DECLARE @FirstName AS NVARCHAR(50) = N'Techie'
SELECT @FirstName AS FirstName,
       DATALENGTH(@FirstName) AS Length

Result:
FirstName Length
Techie    12




* The abbreviation Varchar stands for Variable-length Character data.

* The abbreviation NVarchar stands for National (Unicode) Variable-length Character data.
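
To see the Unicode difference itself (not just the byte count), here is a small additional sketch; the Hindi string is just an illustrative value and not part of the original example:

DECLARE @v AS VARCHAR(50)  = N'नमस्ते'   -- non-Unicode: characters outside the code page are stored as '?'
DECLARE @n AS NVARCHAR(50) = N'नमस्ते'   -- Unicode: the text is preserved
SELECT @v AS VarcharValue, @n AS NvarcharValue,
       DATALENGTH(@v) AS VarcharBytes, DATALENGTH(@n) AS NvarcharBytes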

Monday 9 September 2013

Difference between Table Variable and temporary table


Feature comparison (Table Variables vs. Temporary Tables):

Scope. Table variables: current batch. Temporary tables: current session and nested stored procedures (global temp tables: all sessions).
Usage. Table variables: UDFs, stored procedures, triggers, batches. Temporary tables: stored procedures, triggers, batches.
Creation. Table variables: DECLARE statement only. Temporary tables: CREATE TABLE or SELECT INTO statement.
Table name. Table variables: maximum 128 characters. Temporary tables: maximum 116 characters.
Column data types. Table variables: can use user-defined data types and XML collections. Temporary tables: user-defined data types and XML collections must be in tempdb to use.
Collation. Table variables: string columns inherit the collation of the current database. Temporary tables: string columns inherit the collation of tempdb.
Indexes. Table variables: can only have the indexes that are automatically created with PRIMARY KEY and UNIQUE constraints as part of the DECLARE statement. Temporary tables: indexes can be added after the table has been created.
Constraints. Table variables: PRIMARY KEY, UNIQUE, NULL, CHECK, but they must be part of the DECLARE statement; FOREIGN KEY not allowed. Temporary tables: PRIMARY KEY, UNIQUE, NULL, CHECK, either as part of the CREATE TABLE statement or added afterwards; FOREIGN KEY not allowed.
Post-creation DDL (indexes, columns). Table variables: not allowed. Temporary tables: allowed.
Data insertion. Table variables: INSERT statement (SQL 2000: cannot use INSERT ... EXEC). Temporary tables: INSERT statement, including INSERT ... EXEC, and SELECT INTO.
Explicit values in identity columns (SET IDENTITY_INSERT). Table variables: not supported. Temporary tables: supported.
TRUNCATE TABLE. Table variables: not allowed. Temporary tables: allowed.
Destruction. Table variables: automatic at the end of the batch. Temporary tables: explicit with DROP TABLE, or automatic when the session ends (global temp tables: also when no other session has a statement using the table).
Transactions. Table variables: log activity lasts only for the length of the update against the table variable; uses less than temporary tables. Temporary tables: lasts for the length of the transaction; uses more than table variables.
Stored procedure recompilations. Table variables: not applicable. Temporary tables: creating the temp table and inserting data can cause procedure recompilations.
Rollbacks. Table variables: not affected (data is not rolled back). Temporary tables: affected (data is rolled back).
Statistics. Table variables: the optimizer cannot create statistics on columns, so it treats a table variable as having 1 row when building execution plans. Temporary tables: the optimizer can create column statistics and uses the actual row count when generating execution plans.
Pass to stored procedures. Table variables: SQL 2008 only, via a predefined user-defined table type. Temporary tables: cannot be passed, but remain in scope for nested procedures.
Explicitly named objects (indexes, constraints). Table variables: not allowed. Temporary tables: allowed, but be aware of multi-user issues.
Dynamic SQL. Table variables: must be declared inside the dynamic SQL. Temporary tables: temp tables created prior to calling the dynamic SQL can be used inside it.
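
As a quick illustration of the creation, indexing and destruction rows above, here is a minimal T-SQL sketch (the Orders structure is invented for the example):

-- Table variable: DECLARE only; indexes only via PRIMARY KEY/UNIQUE in the declaration; gone at the end of the batch.
DECLARE @Orders TABLE
(
    OrderID INT PRIMARY KEY,
    Amount  DECIMAL(10,2)
);
INSERT INTO @Orders (OrderID, Amount) VALUES (1, 99.50);

-- Temporary table: CREATE TABLE (or SELECT INTO); indexes can be added later; lives until dropped or the session ends.
CREATE TABLE #Orders
(
    OrderID INT PRIMARY KEY,
    Amount  DECIMAL(10,2)
);
CREATE NONCLUSTERED INDEX IX_Orders_Amount ON #Orders (Amount);
INSERT INTO #Orders (OrderID, Amount) VALUES (1, 99.50);
DROP TABLE #Orders;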

Wednesday 4 September 2013

Introduction to Table-Valued Parameters with Example

Table-Valued Parameters 
To use a table-valued parameter we need to follow the steps shown below:
  1. Create a table type and define the table structure.
  2. Declare a stored procedure that has a parameter of the table type.
  3. Declare a table type variable and reference the table type.
  4. Use the INSERT statement to populate the variable.
  5. Pass the variable to the procedure.
For example,
let's create a Department table and pass a table variable to insert data using a procedure. In our example we will create the Department table, and afterwards we will query it and see that all the content of the table-valued parameter has been inserted into it.
Department:
CREATE TABLE Department
(
DepartmentID INT PRIMARY KEY,
DepartmentName VARCHAR(30)
)
GO
1. Create a TABLE TYPE and define the table structure:
CREATE TYPE DeptType AS TABLE
(
DeptId INT, DeptName VARCHAR(30)
);
GO

2. Declare a STORED PROCEDURE that has a parameter of table type:
CREATE PROCEDURE InsertDepartment
    @InsertDept_TVP DeptType READONLY
AS
INSERT INTO Department(DepartmentID, DepartmentName)
SELECT * FROM @InsertDept_TVP;
GO
Important points  to remember :
-  Table-valued parameters must be passed as READONLY parameters to SQL routines. You cannot perform DML operations like UPDATE, DELETE, or INSERT on a table-valued parameter in the body of a routine.
-  You cannot use a table-valued parameter as the target of a SELECT INTO or INSERT EXEC statement. A table-valued parameter can be in the FROM clause of SELECT INTO or in the INSERT EXEC string or stored procedure.
3. Declare a table type variable and reference the table type.
DECLARE @DepartmentTVP AS DeptType;
4. Use the INSERT statement to populate the variable.
INSERT INTO @DepartmentTVP (DeptId, DeptName)
VALUES (1, 'Accounts'),
       (2, 'Purchase'),
       (3, 'Software'),
       (4, 'Stores'),
       (5, 'Marketing');
5. We can now pass the variable to the procedure and Execute.
EXEC InsertDepartment @DepartmentTVP;
GO

Let's see if the data has been inserted into the Department table.
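A simple SELECT is enough to check (the expected output assumes the five rows above were inserted):

SELECT DepartmentID, DepartmentName
FROM Department;
-- Expected: 1 Accounts, 2 Purchase, 3 Software, 4 Stores, 5 Marketing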
Conclusion:
Table-valued parameters are a new parameter type in SQL Server 2008 that provide a more efficient way of passing a set of rows than using a temporary table or passing many individual parameters. They help keep complex business logic in a single routine, and they reduce round trips to the server, which improves performance.

Monday 2 September 2013

Optimizing SQL Query

  • Optimizing the WHERE clause - there are many cases where the index access path of a column in the WHERE clause is not used even though an index on that column has been created. Avoid such cases to make the best use of the indexes, which will ultimately improve performance. Some of these cases are: COLUMN_NAME IS NOT NULL (the ROWID for a NULL is not stored in an index), COLUMN_NAME NOT IN (value1, value2, value3, ...), COLUMN_NAME != expression, and COLUMN_NAME LIKE '%pattern' (whereas COLUMN_NAME LIKE 'pattern%' uses the index access path). Using expressions or functions on indexed columns will also prevent the index access path from being used, so use them wisely! (See the sketch after this list.)
  • Using WHERE instead of HAVING - the WHERE clause filters rows before grouping and aggregation, so it can take advantage of an index defined on the column(s) it references, whereas HAVING filters only after the groups have been built.
  • Using the leading index columns in the WHERE clause - the WHERE clause can use a composite index access path only if it references the leading column(s) of that composite index; otherwise the indexed access path will not be used.
  • Indexed scan vs. full table scan - an indexed scan is faster only if we are selecting a small fraction of a table's rows; otherwise a full table scan should be preferred. It is estimated that an indexed scan is slower than a full table scan if the SQL statement selects more than about 15% of the rows of the table. In such cases, use SQL hints to force a full table scan and suppress the use of the pre-defined indexes. Any guesses why a full table scan is faster when a large percentage of rows is accessed? Because an indexed scan causes multiple reads per row accessed, whereas a full table scan can read all the rows contained in a block in a single logical read operation.
  • Using ORDER BY for an indexed scan - the optimizer uses the indexed scan if the column specified in the ORDER BY clause has an index defined on it. It'll use indexed scan even if the WHERE doesn't contain that column (or even if the WHERE clause itself is missing). So, analyze if you really want an indexed scan or a full table scan and if the latter is preferred in a particular scenario then use 'FULL' SQL hint to force the full table scan.
  • Minimizing table passes - it normally results in a better performance for obvious reasons.
  • Joining tables in the proper order - the order in which tables are joined normally affects the number of rows processed by the JOIN operation, so ordering the tables properly may mean fewer rows are processed, which in turn improves performance. The key to deciding the proper order is to apply the most restrictive filtering condition in the early phases of a multi-table JOIN. For example, when joining a master table and a details table, it is usually better to access the master table first, because accessing the details table first may result in a larger number of rows being joined.
  • Simple is usually faster - yeah... instead of writing a very complex SQL statement, if we break it into multiple simple SQL statements then the chances are quite high that the performance will improve. Make use of the EXPLAIN PLAN and TKPROF tools to analyze both the conditions and stick to the complex SQL only if you're very sure about its performance.
  • Using ROWID and ROWNUM wherever possible - these special columns can be used to improve the performance of many SQL queries. The ROWID search is the fastest for Oracle database and this luxury must be enjoyed wherever possible. ROWNUM comes really handy in the cases where we want to limit the number of rows returned.
  • Usage of explicit cursors is better - explicit cursors perform better because implicit cursors result in an extra fetch operation. Implicit cursors are opened by the Oracle Server for INSERT, UPDATE, DELETE, and SELECT statements, whereas explicit cursors are opened by the writer of the query explicitly using the DECLARE, OPEN, FETCH, and CLOSE statements.
  • Reducing network traffic - Arrays and PL/SQL blocks can be used effectively to reduce the network traffic especially in the scenarios where a huge amount of data requires processing. For example, a single INSERT statement can insert thousands of rows if arrays are used. This will obviously result into fewer DB passes and it'll in turn improve performance by reducing the network traffic. Similarly, if we can club multiple SQL statements in a single PL/SQL block then the entire block can be sent to Oracle Server involving a single network communication only, which will eventually improve performance by reducing the network traffic.
  • Using Oracle parallel query option - Since Oracle 8, even the queries based on indexed range scans can use this parallel query option if the index is partitioned. This feature can result in an improved performance in certain scenarios.
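
Here is the sketch referred to in the first point: a hedged, Oracle-style example of index-friendly versus index-hostile predicates. The EMPLOYEES table and its indexes on LAST_NAME and HIRE_DATE are assumptions made for the illustration.

-- Index-friendly: a LIKE without a leading wildcard can use the index on LAST_NAME.
SELECT employee_id FROM employees WHERE last_name LIKE 'Sm%';

-- Index-hostile: a leading wildcard or a function on the indexed column suppresses the index.
SELECT employee_id FROM employees WHERE last_name LIKE '%mith';
SELECT employee_id FROM employees WHERE UPPER(last_name) = 'SMITH';

-- Keeping the indexed column "bare" lets the index on HIRE_DATE be used.
SELECT employee_id FROM employees
WHERE hire_date >= DATE '2013-01-01' AND hire_date < DATE '2013-01-02';   -- index usable
SELECT employee_id FROM employees
WHERE TRUNC(hire_date) = DATE '2013-01-01';                               -- function prevents index use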

Wednesday 28 August 2013

Optimising query performance in SQL Server 2008

Turn on SQL Profiler

This one’s easy, because most good developers already do it. If you turn on the profiler and execute your page, you should be able to see all the traffic caused by a single request. This is important, because you will be able to see a number of things. Firstly, you can look for candidates for queries that can be combined. You may also see situations that shouldn’t need to be done each time you request the page. You may also see situations where the exact same query is executed multiple times.

Review query plans to ensure optimally performing sql statements.

People have different ways of analysing and improving the queries that they need to execute. One thing I like to analyse is the query plan. The hardcore people look at the text output of the query plan, but I prefer the graphical view. From SQL Server Management Studio, select Query from the menu, then choose Show Actual Execution Plan. The next time you execute the query or stored procedure, a graphical representation of the query execution appears in a tab adjacent to the Results and Messages tabs. The rule of thumb is to look at the relative expense of each subsection of the query and see if you can improve the performance of the more expensive parts. You work from top to bottom and right to left, aiming to replace the icons (which represent the underlying query operators) with more efficient ones.

Check the order of columns in your where clause

Ensure the columns in your "where" clause cover the leading column(s) of your index, in the order the index defines them; otherwise the optimiser may not choose your index. I've seen plenty of cases where scans are performed instead of seeks simply because the columns in the where clause do not line up with the index.
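
A minimal sketch of the idea (the Orders table and index are invented for the example): the index can only be seeked when its leading column appears in the where clause.

-- Composite index: CustomerID is the leading column.
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Date
    ON dbo.Orders (CustomerID, OrderDate);

-- Seek: the predicate covers the leading index column.
SELECT OrderID FROM dbo.Orders
WHERE CustomerID = 42 AND OrderDate >= '20130101';

-- Scan: only the second column is filtered, so the index cannot be seeked.
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20130101';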

Ensure the where clause is ordered most restrictive to least restrictive.

This will make sure that the most efficient path is taken when matching data between indexes in your query. By restrictive, I mean that the data is more uniquely selectable. So a column with different data in every row is more restrictive than a column with much of the same data in every row. Also consider the size of the table, so that a table with less data in it may be selected first in a join over a table with more data in it. This can be a bit of a balancing act.

Remove Temp Tables

The creation of temp tables adds to the overhead required to run your overall query. In some scenarios, I have removed temporary tables and replaced them with fixed tables and had significant performance improvement.
If the temp table is created to enable the merging of data from similar data sources, then prefer a union instead. Unions are, in general, far cheaper than temp tables (see the sketch below).
#Temp tables are created in tempdb. Table variables (@tables) are also backed by tempdb; small ones may be served entirely from memory, but under memory pressure their pages spill to disk just like temp tables. tempdb requires disk writes and reads, and so will be slower than accessing data that is already cached from the original table.
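
Here is the sketch mentioned above: merging two similar sources with a union instead of staging them in a temp table (table names are invented for the example).

-- Instead of inserting both sources into #AllSales and then querying the temp table:
SELECT Region, Amount FROM dbo.OnlineSales WHERE SaleDate >= '20130101'
UNION ALL    -- UNION ALL skips the de-duplication sort; use UNION only if duplicates must be removed
SELECT Region, Amount FROM dbo.StoreSales  WHERE SaleDate >= '20130101';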


Remove Cursors

These are among the most expensive statements you can use. There are special cases where they should be used, but it's better to train yourself to use standard set-based statements rather than cursors.
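
As a sketch (with invented table and column names), a row-by-row cursor update can usually be collapsed into a single set-based statement:

-- Cursor version: one UPDATE per row.
DECLARE @OrderID INT;
DECLARE order_cursor CURSOR FOR
    SELECT OrderID FROM dbo.Orders WHERE Status = 'OPEN';
OPEN order_cursor;
FETCH NEXT FROM order_cursor INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'CLOSED' WHERE OrderID = @OrderID;
    FETCH NEXT FROM order_cursor INTO @OrderID;
END
CLOSE order_cursor;
DEALLOCATE order_cursor;

-- Set-based version: one statement does the same work.
UPDATE dbo.Orders SET Status = 'CLOSED' WHERE Status = 'OPEN';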

Reduce the number of joins in your queries

If you can significantly reduce the number of joins in your query, you will see a big improvement in the speed of the query. There are a couple of ways to do this. You could stage the data in a denormalised table, or in Enterprise Edition you can create a view and put an index on that view. Again, it depends on how immediate your requirement is for having the latest data. It is often acceptable for reporting tables to be rebuilt overnight, because a single day of lag often has no impact on the benefits associated with a particular report.
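
A hedged sketch of the Enterprise Edition indexed-view approach (the Customers and Orders tables and their columns are assumptions for the example):

-- The view pre-joins and pre-aggregates the data, so reporting queries no longer pay for the join at run time.
CREATE VIEW dbo.vCustomerOrderTotals
WITH SCHEMABINDING                        -- required before the view can be indexed
AS
SELECT  c.CustomerID,
        c.CustomerName,
        COUNT_BIG(*)  AS OrderCount,      -- COUNT_BIG(*) is mandatory in an indexed view with GROUP BY
        SUM(o.Amount) AS TotalAmount      -- assumes Amount is declared NOT NULL
FROM    dbo.Customers AS c
        INNER JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID
GROUP BY c.CustomerID, c.CustomerName;
GO

-- The unique clustered index materialises the view.
CREATE UNIQUE CLUSTERED INDEX IX_vCustomerOrderTotals
    ON dbo.vCustomerOrderTotals (CustomerID, CustomerName);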


Remove all declared variables

If everything in your stored procedure can be compiled with known parameter values, the query engine does not need to do extra work to figure out how the query will behave once the declared variables are taken into consideration, so it can choose a better plan. So how do you remove declared variables? Well, for starters, you can pass them in as parameters of your stored procedure call. If you find you really do need declared variables, you can create a second stored procedure that does the main work and pass the declared variables into that; the passed variables then become parameters of the second procedure.
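
A sketch of the two-procedure pattern described above (procedure, table and parameter names are invented):

-- Inner procedure: @FromDate arrives as a parameter, so the optimiser can sniff its value at
-- compile time instead of guessing at a locally declared variable.
CREATE PROCEDURE dbo.GetRecentOrdersWorker
    @FromDate DATETIME
AS
BEGIN
    SELECT OrderID, CustomerID, Amount
    FROM dbo.Orders
    WHERE OrderDate >= @FromDate;
END
GO

-- Outer procedure: computes the value it needs, then hands it over as a parameter.
CREATE PROCEDURE dbo.GetRecentOrders
AS
BEGIN
    DECLARE @FromDate DATETIME = DATEADD(DAY, -30, GETDATE());
    EXEC dbo.GetRecentOrdersWorker @FromDate = @FromDate;
END
GO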

Remove CTEs

CTEs look a bit like temporary tables, but they are not materialised: a CTE that is referenced more than once is re-evaluated each time, which can be disastrously inefficient compared with staging the data once.

Identify key lookups

Within the query plan, identify Key Lookups (these used to be called Bookmark Lookups). If there are any Key Lookups, they can be removed from the query plan by adding the looked-up columns as include columns on the non-clustered index to the right of the Key Lookup. Include columns are fantastic because, once the candidate rows are found in the index, there is no need to go back to the base table to retrieve the included columns: they are already in the index. In this case the speed of the query approaches that of a clustered index seek.
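
A hedged example of removing a Key Lookup with include columns (the Orders table and the existing IX_Orders_CustomerID index are assumptions for the illustration):

-- Before: the query seeks IX_Orders_CustomerID, then performs a Key Lookup back into the
-- clustered index to fetch OrderDate and Amount for every matching row.
SELECT OrderDate, Amount
FROM dbo.Orders
WHERE CustomerID = 42;

-- After: rebuilding the index with INCLUDE columns makes it covering, so the Key Lookup disappears.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, Amount)
    WITH (DROP_EXISTING = ON);    -- assumes an index with this name already exists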







Thursday 27 June 2013

Excel sheets protection

Excel sheets are mostly used for reporting purposes, and reports are mostly based on confidential data, so security is very important for Excel-based reports.

Here is the code in Visual Studio to password-protect an Excel workbook:
 1) Open a VB console application and add this code
 
Module Module1

    Sub Main()

        ' Paths of the existing workbook and the password-protected copy.
        Dim strFilename As String = "c:\dwreport.xls"
        Dim strNewFilename As String = "c:\protectedhehadwreport.xls"
        Dim strPassword As String = "yourpassword"

        Dim objExcel As New Microsoft.Office.Interop.Excel.Application
        Dim objWorkbook As Microsoft.Office.Interop.Excel.Workbook

        ' Open the workbook and suppress Excel's confirmation prompts.
        objWorkbook = objExcel.Workbooks.Open(strFilename)
        objExcel.DisplayAlerts = False

        ' Save a copy that requires the password to open, then close it.
        objWorkbook.SaveAs(strNewFilename, Password:=strPassword)
        objWorkbook.Close()

        ' Quit Excel so no orphaned EXCEL.EXE process is left running.
        objExcel.Quit()
        objExcel = Nothing

    End Sub

End Module
 
 
 
2) Go to Project, choose Add Reference, and add the Microsoft Excel object library from the COM tab.


Tuesday 18 June 2013

Infographics and SEO

There are many reasons why infographics could be good to market your content.
 The audience can quickly grasp loads of information without having to wade through long, picture-less descriptions.
Nothing makes it faster to get the point than a picture. Almost everything can be made into infographics no matter how technical it may be. There are lots of possibilities that you can explore by using them on your website.
 Graphics are more memorable than a chunk of text.
The first written languages were picture languages. Homo sapiens has a deeply embedded picture-recognition and memorising mechanism that, so far, nothing else can beat.
 Your infographic can be easily reposted therefore you get a link to your site which is good for your SEO!
Because the infographic can be embedded, it is really easy to attract more people who want to share it on their own blog. It is more likely that you get a relevant link as well.
If your infographic is well designed people are more likely to share it, so you get more traffic.
The most shared type of media is pictures, so you can't go wrong with them!
An infographic is more likely to go viral.
Great posts attract great attention, but infographics attract a lot of attention in the first place, so you double your chances of going viral.
 By breaking the topic down into graphics you can make your content more easily accessible to people who do not know the subject well.
This is probably the greatest feature of infographics. The information can be designed in such an easily comprehensible way that you can instantly turn anyone into a person who knows and can engage with the key points of the subject area.
They can  be shared easily through social sites such as Facebook, Twitter, Digg, Reddit, StumbleUpon and more. 

Monday 10 June 2013

To increase the length of username in orangehrm

To increase the length of the username in OrangeHRM, edit
C:\xampp\htdocs\orangehrm-3.0.1\symfony\plugins\orangehrmAdminPlugin\lib\form\systemuserform.php
to allow the increased length.

Also edit
D:\xampp\htdocs\orangehrm-3.0.1\symfony\web\webres_513fd0981da216.38969927\orangehrmAdminPlugin\js\systemusersuccess.js
to allow the longer length.

To change the mail format, edit
C:\xampp\htdocs\orangehrm-3.0.1\symfony\plugins\orangehrmLeavePlugin\modules\leave\templates\mail\en_US\apply

Tuesday 5 March 2013

Secondary namenode in hadoop

You might think that the SecondaryNameNode is a hot backup daemon for the NameNode. You’d be wrong. The SecondaryNameNode is a poorly understood component of the HDFS architecture, but one which provides the important function of lowering NameNode restart time. This blog post describes how to configure this daemon in a large-scale environment. The default Hadoop configuration places an instance of the SecondaryNameNode on the same node as the NameNode. A more scalable configuration involves configuring the SecondaryNameNode on a different machine.

About the SecondaryNameNode

The NameNode is responsible for the reliable storage and interactive lookup and modification of the metadata for HDFS. To maintain interactive speed, the filesystem metadata is stored in the NameNode’s RAM. Storing the data reliably necessitates writing it to disk as well. To ensure that these writes do not become a speed bottleneck, instead of storing the current snapshot of the filesystem every time, a list of modifications is continually appended to a log file called the EditLog. Restarting the NameNode involves replaying the EditLog to reconstruct the final system state.
The SecondaryNameNode periodically compacts the EditLog into a “checkpoint;” the EditLog is then cleared. A restart of the NameNode then involves loading the most recent checkpoint and a shorter EditLog containing only events since the checkpoint. Without this compaction process, restarting the NameNode can take a very long time. Compaction ensures that restarts do not incur unnecessary downtime.
The duties of the SecondaryNameNode end there; it cannot take over the job of serving interactive requests from the NameNode. Although, in the event of the loss of the primary NameNode, an instance of the NameNode daemon could be manually started on a copy of the NameNode metadata retrieved from the SecondaryNameNode.

Why should this run on a separate machine?

  1. Scalability. Creating the system snapshot requires about as much memory as the NameNode itself occupies. Since the memory available to the NameNode process is a primary limit on the size of the distributed filesystem, a large-scale cluster will require most or all of the available memory for the NameNode.
  2. Durability. When the SecondaryNameNode creates a checkpoint, it does so in a separate copy of the filesystem metadata. Moving this process to another machine also creates a copy of the metadata file on an independent machine, increasing its durability.

Configuring the SecondaryNameNode on a remote host

An HDFS instance is started on a cluster by logging in to the NameNode machine and running $HADOOP_HOME/bin/start-dfs.sh (or start-all.sh). This script starts a local instance of the NameNode process, logs into every machine listed in the conf/slaves file and starts an instance of the DataNode process, and logs into every machine listed in the conf/masters file and starts an instance of the SecondaryNameNode process. The masters file does not govern which nodes become NameNodes or JobTrackers; those are started on the machine(s) where bin/start-dfs.sh and bin/start-mapred.sh are executed. A more accurate filename might be “secondaries,” but that’s not currently the case.
  1. Put each machine where you intend to run a SecondaryNameNode in the conf/masters file, one per line. (Note: currently, only one SecondaryNameNode may be configured in this manner.)
  2. Modify the conf/hadoop-site.xml file on each of these machines to include the following property:

    <property>
      <name>dfs.http.address</name>
      <value>namenode.host.address:50070</value>
      <description>
        The address and the base port where the dfs namenode web ui will listen on.
        If the port is 0 then the server will start on a free port.
      </description>
    </property>

This second step is less obvious than the first and works around a subtlety in Hadoop’s data transfer architecture. Traffic between the DataNodes and the NameNode occurs over a custom RPC protocol; the port for this protocol is specified in the URI supplied to the fs.default.name property. The NameNode also runs a Jetty web servlet engine on port 50070. This servlet engine generates status pages detailing the NameNode’s operation. It also communicates with the SecondaryNameNode. The SecondaryNameNode actually performs an HTTP GET request to retrieve the current FSImage (checkpoint) and EditLog from the NameNode; it uses HTTP POST to upload the new checkpoint back to the NameNode. The conf/hadoop-default.xml file sets dfs.http.address to 0.0.0.0:50070; the NameNode listens on this host mask and port (by default, all inbound interfaces on port 50070), and the SecondaryNameNode attempts to use the same value as an address to connect to. It special-cases 0.0.0.0 as “localhost.” Running the SecondaryNameNode on a different machine requires telling that machine where to reach the NameNode.
Usually this setting could be placed in the hadoop-site.xml file used by all daemons on all nodes. In an environment such as Amazon EC2, though, where a node is known by multiple addresses (one public IP and one private IP), it is preferable to have the SecondaryNameNode connect to the NameNode over the private (unmetered bandwidth) IP address, while you connect to the public IP address for status pages. Specifying dfs.http.address as anything other than 0.0.0.0 on the NameNode will cause it to bind to only one address instead of all available ones.
In conclusion, larger deployments of HDFS will require a remote SecondaryNameNode, but doing so requires a subtle configuration tweak, to ensure that the SecondaryNameNode can communicate back to the remote NameNode.

Common errors and solutions

Error:
10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry... [8]
Solution:
 Make sure you have configured Hadoop's conf/hdfs-site.xml, setting the xceivers value to at least the following:

      <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
      </property>

Be sure to restart your HDFS after making the above configuration.


Error:
 2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception increateBlockOutputStream java.io.EOFException
 2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
 
Solution:
Apache HBase is a database. It uses a lot of files all at the same time. The default ulimit -n (i.e. the user file limit) of 1024 on most *nix systems is insufficient (on Mac OS X it is 256). Any significant amount of loading will lead you to this error. You may also notice errors such as:
      2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception increateBlockOutputStream java.io.EOFException
      2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
      
Do yourself a favor and change the upper bound on the number of file descriptors. Set it to north of 10k. The math runs roughly as follows: per ColumnFamily there is at least one StoreFile and possibly up to 5 or 6 if the region is under load. Multiply the average number of StoreFiles per ColumnFamily times the number of regions per RegionServer. For example, assuming that a schema had 3 ColumnFamilies per region with an average of 3 StoreFiles per ColumnFamily, and there are 100 regions per RegionServer, the JVM will open 3 * 3 * 100 = 900 file descriptors (not counting open jar files, config files, etc.)
You should also up the hbase user's nproc setting; under load, a low nproc setting can manifest as an OutOfMemoryError.
To be clear, upping the file descriptors and nproc for the user who is running the HBase process is an operating-system configuration, not an HBase configuration. Also, a common mistake is that administrators will up the file descriptors for a particular user but, for whatever reason, HBase is running as someone else. HBase prints the ulimit it is seeing as the first line of its logs. Ensure it is correct.

UserPrivilegedAction error: give chown (ownership) rights to hduser.
"session 0x0 for server null": network issue.
Clock sync error: set the same time on the master and all slaves to synchronise the clocks.
"Unable to read additional data from client sessionid": this error comes when slaves are removed and data is not replicated properly. Add the slaves back to recover the data.


If you are on Ubuntu you will need to make the following changes:
In the file /etc/security/limits.conf add a line like:
hadoop  -       nofile  32768
Replace hadoop with whatever user is running Hadoop and HBase. If you have separate users, you will need 2 entries, one for each user. In the same file set nproc hard and soft limits. For example:
hadoop soft/hard nproc 32000
In the file /etc/pam.d/common-session add as the last line in the file:
session required  pam_limits.so
Otherwise the changes in /etc/security/limits.conf won't be applied.
Don't forget to log out and back in again for the changes to take effect!



HBase will lose data unless it is running on an HDFS that has a durable sync implementation. DO NOT use Hadoop 0.20.2, Hadoop 0.20.203.0, or Hadoop 0.20.204.0, which DO NOT have this attribute. Currently only Hadoop versions 0.20.205.x or any release in excess of this version -- this includes hadoop-1.0.0 -- have a working, durable sync [7]. Sync has to be explicitly enabled by setting dfs.support.append equal to true on both the client side -- in hbase-site.xml -- and on the server side in hdfs-site.xml (the sync facility HBase needs is a subset of the append code path).

  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>

Add the following to hbase-site.xml as well:


 
  <property>
    <name>hbase.regionserver.class</name>
    <value>org.apache.hadoop.hbase.ipc.IndexedRegionInterface</value>
    <description>
      This configuration is required to enable indexing on
      hbase and to be able to create secondary indexes.
    </description>
  </property>

  <property>
    <name>hbase.regionserver.impl</name>
    <value>org.apache.hadoop.hbase.regionserver.tableindexed.IndexedRegionServer</value>
    <description>
      This configuration is required to enable indexing on
      hbase and to be able to create secondary indexes.
    </description>
  </property>

Thursday 17 January 2013

Executing Eclipse project

To open Eclipse, switch to the root user and cd to the location of Eclipse,
then type eclipse.
Eclipse should open up; create the classes there.

Create a MapReduce project and a Java class, export it to a jar file and save it in some location.
Let this location be /usr/mr.

Then run from the Hadoop bin directory:
hadoop jar /usr/mr/wc.jar  [src folder]  [target folder]
hadoop jar /usr/mr/wc.jar  [i/p file]  //when the output is saved to HBase tables or HDFS files specified in the code
hadoop jar /usr/mr/wc.jar  //when both locations are provided beforehand in the code

Wednesday 16 January 2013

Thrift installation

Apache Thrift Installation Tutorial:
The official documentation for installing and using Apache Thrift is currently somewhat lacking. The following are step-by-step instructions for installing Apache Thrift and getting the sample project code running on a fresh Ubuntu 10.10 installation.
1. Install the necessary dependencies.
# sudo apt-get install libssl-dev libboost-dev flex bison g++
2. Download Apache Thrift 0.7 at:
# wget http://archive.apache.org/dist/thrift/0.7.0/thrift-0.7.0.tar.gz
3. Untar the tarball to your project directory:
# cd ~/project
# tar -xzvf ~/thrift-0.7.0.tar.gz
4. Run configure (turning off support for other unused languages)
# cd thrift-0.7.0
# chmod u+x configure install-sh
# ./configure --prefix=${HOME}/project --exec-prefix=${HOME}/project --with-python=no --with-erlang=no --with-java=no --with-php=no --with-csharp=no --with-ruby=no
# make
# make install
5. Download the sample code from the course website:
# cd
# wget http://www.cs.uwaterloo.ca/~bernard/courses/cs454/sample-0.1.1.tar.gz
# cd project
# tar -xzvf ~/sample-0.1.1.tar.gz
6. Compile the WatDHT.thrift file:
# cd sample
# ~/project/bin/thrift --strict --gen cpp WatDHT.thrift
7. Replace the first 4 lines of the Makefile in the sample code with the following:
CXX = g++
CPPFLAGS = -g -fpermissive -Wall -I. -I${HOME}/project/include -I${HOME}/project/include/thrift -Igen-cpp
LDFLAGS = -L${HOME}/project/lib -lthrift -lpthread -lcrypto
LD = g++
8. Compile the sample code:
# make
9. Add to LD_LIBRARY_PATH (assuming you are using bash as your shell):
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${HOME}/project/lib