Sunday 30 December 2012

SSRS Reporting for months

In SSRS,

For reporting, the formatting of a text box can be changed by right-clicking the text box and going to Format. Click the ellipsis (...) button next to the format code and choose Currency to display a $ sign in front of costs.

By default, if you generate monthly reports, the months come out in alphabetical order (April, August, December and so on). To order them properly, right-click the text box,
go to Properties and turn on the interactive sort option.
Now right-click again and go to Edit Group; in Sorting give the column datepart(mm, date), and for the displayed value use datename(mm, date).



Tuesday 18 December 2012

Shared network drive and SSIS

When we copy a big file from a server to a remote location (a NAS drive, for example) through SSIS, a "network connection unavailable" error can come up.
To fix this, go to My Computer and map a network drive to that remote location. The job will then execute properly through SSIS.

Executing the package by itself is easy, but running it as a job from SQL Server gives an error. To resolve that:


Use the UNC path when specifying the destination; SQL Agent doesn't have a concept of "mapped drives". (A small example follows this list.)
Also, SQL Agent typically runs as "Local Service" or "Local System" and, as such, doesn't have rights to remote shares on other computers.
You have a couple of choices:
  • Run SQL Agent as a role account in the domain. Grant that account permission to write to the directory / share where you'd like the backups stored.
  • Run SQL Agent as "Network Service". It will authenticate to the sharing server with the domain computer account of the machine the service is running on. Grant that account permission to write to the directory / share where you'd like the backup stored.
  • If you don't have a domain, create an account with the same username and password on both the machine hosting SQL Agent and the machine hosting the backup files. Change SQL Agent to run as this "role" account, and grant that account permission to write to the directory / share where you'd like the backup stored. (The "poor man's domain"...)
  • Or else create a batch file pkgexec.bat containing "C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\dtexec.exe" /file c:\SceneVisitReports.dtsx and schedule c:\>pkgexec.bat in the Windows Task Scheduler.
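To illustrate the UNC-path advice above, the copy or backup step should reference the share directly rather than a mapped drive letter. The folder, server and share names below are hypothetical:

copy C:\Backups\*.bak \\fileserver\sqlbackups\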

Monday 17 December 2012

How to copy reports from sql server to a remote disk drive

For an SSIS package to work with a remote disk drive, go to My Computer and map the network location to a local drive letter.

Open a new Notepad file and write a command such as
copy c:\*.txt \\kp\nas\report
and save it as "filename.bat".

In the SSIS package, add an Execute Process Task, put this bat file in its Executable property and run it.


The Excel Connection Manager is not supported in the 64-bit version of SSIS



When I use the SSIS Excel Connection Manager in my 64-bit dev environment, it gives me the following error:
[Connection manager "Excel Connection Manager"] Error: SSIS Error Code DTS_E_OLEDB_EXCEL_NOT_SUPPORTED: The Excel Connection Manager is not supported in the 64-bit version of SSIS, as no OLE DB provider is available.
To fix this:
Go to Project –> Project Properties –> then set Run64BitRuntime = False.
That's it…
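Run64BitRuntime only affects execution inside the designer. When the package is run from the command line or a job step, the same workaround is to call the 32-bit dtexec explicitly (the executable referenced in the dtexec batch file above); the path below assumes SQL Server 2008 and the package name is hypothetical.

"C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\dtexec.exe" /file C:\MyExcelPackage.dtsx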

Tuesday 11 December 2012

Increasing java heap space

While running Jasper reports I got an "insufficient Java heap space" error.
Here is the solution for the same.

On Linux systems, set this:

export JVM_ARGS="-Xms1024m -Xmx1024m"

On Windows:

Running Java applications takes some memory during the process, known as Java memory (the Java heap). Frequently it is necessary to increase that heap to prevent throttling the performance of the application. The steps below walk through the Windows Control Panel; a command-line equivalent is sketched after them.
  1. Go to Control Panel. Click the "Start" button, then click "Control Panel".
  2. Select Programs. On the left side of the Control Panel click "Programs" (the entry written in green, not "Uninstall a program", which is in blue).
  3. Go to Java settings. In the next dialog click "Java", usually at the bottom of the other programs; the "Java Control Panel" dialog opens.
  4. Select the "Java" tab. Inside the Java tab, click the "View" button. It opens the "Java Runtime Environment Settings".
  5. Change the amount of heap. In the "Runtime Parameters" column change the value, or if it is blank decide on a new value for the Java memory.
  6. Modify the parameter. Double-click in the "Runtime Parameters" column and:
    • type -Xmx512m to assign 512 MB of memory to Java;
    • type -Xmx1024m to assign 1 GB;
    • type -Xmx2048m to assign 2 GB;
    • type -Xmx3072m to assign 3 GB, and so on.
    • Note that the parameter begins with a minus sign and ends with an m, and there is no blank space between the characters.
  7. Close the dialog box. Click "OK" on the "Java Runtime Environment Settings" dialog to close it.
  8. Close the Java dialog box. The "Apply" button in the "Java Control Panel" is now enabled; click "Apply" to finalise the new Java memory, then click "OK".
  9. Close the Windows 7 Control Panel.
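If Java is instead launched from a script or the command line (as with the Linux JVM_ARGS example above), the same flags can be passed directly to java; the jar name below is made up purely for illustration.

java -Xms1024m -Xmx1024m -jar reportrunner.jar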

Monday 10 December 2012

More about HDFS

HDFS Permissions and Security

Starting with Hadoop 0.16.1, HDFS has included a rudimentary file permissions system. This permission system is based on the POSIX model, but does not provide strong security for HDFS files. The HDFS permissions system is designed to prevent accidental corruption of data or casual misuse of information within a group of users who share access to a cluster. It is not a strong security model that guarantees denial of access to unauthorized parties.
HDFS security is based on the POSIX model of users and groups. Each file or directory has 3 permissions (read, write and execute) associated with it at three different granularities: the file's owner, users in the same group as the owner, and all other users in the system. As the HDFS does not provide the full POSIX spectrum of activity, some combinations of bits will be meaningless. For example, no file can be executed; the +x bits cannot be set on files (only directories). Nor can an existing file be written to, although the +w bits may still be set.
Security permissions and ownership can be modified using the bin/hadoop dfs -chmod, -chown, and -chgrp operations described earlier in this document; they work in a similar fashion to the POSIX/Linux tools of the same name.
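For example (the directory, user and group below are hypothetical):

bin/hadoop dfs -chmod 750 /user/shared         # owner rwx, group r-x, others none
bin/hadoop dfs -chown hduser /user/shared      # change the owner
bin/hadoop dfs -chgrp supergroup /user/shared  # change the group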
Determining identity - Identity is not authenticated formally with HDFS; it is taken from an extrinsic source. The Hadoop system is programmed to use the user's current login as their Hadoop username (i.e., the equivalent of whoami). The user's current working group list (i.e., the output of groups) is used as the group list in Hadoop. HDFS itself does not verify that this username is genuine to the actual operator.
Superuser status - The username which was used to start the Hadoop process (i.e., the username who actually ran bin/start-all.sh or bin/start-dfs.sh) is acknowledged to be the superuser for HDFS. If this user interacts with HDFS, he does so with a special username superuser. This user's operations on HDFS never fail, regardless of permission bits set on the particular files he manipulates. If Hadoop is shut down and restarted under a different username, that username is then bound to the superuser account.
Supergroup - There is also a special group named supergroup, whose membership is controlled by the configuration parameter dfs.permissions.supergroup.
Disabling permissions - By default, permissions are enabled on HDFS. The permission system can be disabled by setting the configuration option dfs.permissions to false. The owner, group, and permissions bits associated with each file and directory will still be preserved, but the HDFS process does not enforce them, except when using permissions-related operations such as -chmod.

Additional HDFS Tasks

Rebalancing Blocks

New nodes can be added to a cluster in a straightforward manner. On the new node, the same Hadoop version and configuration (conf/hadoop-site.xml) as on the rest of the cluster should be installed. Starting the DataNode daemon on the machine will cause it to contact the NameNode and join the cluster. (The new node should be added to the slaves file on the master server as well, to inform the master how to invoke script-based commands on the new node.)
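A rough sketch of bringing a new node online (the host name, and the assumption that commands are run from the Hadoop install directory, are mine):

echo "newnode" >> conf/slaves           # on the master, so the cluster start scripts know about it
bin/hadoop-daemon.sh start datanode     # on the new node; it contacts the NameNode and joins the cluster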
But the new DataNode will have no data on board initially; it is therefore not alleviating space concerns on the existing nodes. New files will be stored on the new DataNode in addition to the existing ones, but for optimum usage, storage should be evenly balanced across all nodes.
This can be achieved with the automatic balancer tool included with Hadoop. The Balancer class will intelligently balance blocks across the nodes to achieve an even distribution of blocks within a given threshold, expressed as a percentage. (The default is 10%.) Smaller percentages make nodes more evenly balanced, but may require more time to achieve this state. Perfect balancing (0%) is unlikely to actually be achieved.
The balancer script can be run by starting bin/start-balancer.sh in the Hadoop directory. The script can be provided a balancing threshold percentage with the -threshold parameter; e.g., bin/start-balancer.sh -threshold 5. The balancer will automatically terminate when it achieves its goal, or when an error occurs, or it cannot find more candidate blocks to move to achieve better balance. The balancer can always be terminated safely by the administrator by running bin/stop-balancer.sh.
The balancing script can be run when nobody else is using the cluster (e.g., overnight), but it can also be run in an "online" fashion while many other jobs are ongoing. To prevent the rebalancing process from consuming large amounts of bandwidth and significantly degrading the performance of other processes on the cluster, the dfs.balance.bandwidthPerSec configuration parameter can be used to limit the number of bytes/sec each node may devote to rebalancing its data store.
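A typical invocation, using the threshold and stop script mentioned above:

bin/start-balancer.sh -threshold 5   # keep every node within 5% of the cluster-wide average
bin/stop-balancer.sh                 # stop the balancer early if it gets in the way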

Copying Large Sets of Files

When migrating a large number of files from one location to another (either from one HDFS cluster to another, from S3 into HDFS or vice versa, etc), the task should be divided between multiple nodes to allow them all to share in the bandwidth required for the process. Hadoop includes a tool called distcp for this purpose.
By invoking bin/hadoop distcp src dest, Hadoop will start a MapReduce task to distribute the burden of copying a large number of files from src to dest. These two parameters may specify a full URL for the path to copy. e.g., "hdfs://SomeNameNode:9000/foo/bar/" and "hdfs://OtherNameNode:2000/baz/quux/" will copy the children of /foo/bar on one cluster to the directory tree rooted at /baz/quux on the other. The paths are assumed to be directories, and are copied recursively. S3 URLs can be specified with s3://bucket-name/key.
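For instance, using the example paths from the paragraph above (the host names, ports and S3 bucket are placeholders):

bin/hadoop distcp hdfs://SomeNameNode:9000/foo/bar/ hdfs://OtherNameNode:2000/baz/quux/
bin/hadoop distcp s3://bucket-name/key hdfs://SomeNameNode:9000/foo/bar/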

Decommissioning Nodes

In addition to allowing nodes to be added to the cluster on the fly, nodes can also be removed from a cluster while it is running, without data loss. But if nodes are simply shut down "hard," data loss may occur as they may hold the sole copy of one or more file blocks.
Nodes must be retired on a schedule that allows HDFS to ensure that no blocks are entirely replicated within the to-be-retired set of DataNodes.
HDFS provides a decommissioning feature which ensures that this process is performed safely. To use it, follow the steps below (a short command sketch follows the steps):
Step 1: Cluster configuration. If it is assumed that nodes may be retired in your cluster, then before it is started, an excludes file must be configured. Add a key named dfs.hosts.exclude to your conf/hadoop-site.xml file. The value associated with this key provides the full path to a file on the NameNode's local file system which contains a list of machines which are not permitted to connect to HDFS.
Step 2: Determine hosts to decommission. Each machine to be decommissioned should be added to the file identified by dfs.hosts.exclude, one per line. This will prevent them from connecting to the NameNode.
Step 3: Force configuration reload. Run the command bin/hadoop dfsadmin -refreshNodes. This will force the NameNode to reread its configuration, including the newly-updated excludes file. It will decommission the nodes over a period of time, allowing time for each node's blocks to be replicated onto machines which are scheduled to remain active.
Step 4: Shutdown nodes. After the decommission process has completed, the decommissioned hardware can be safely shutdown for maintenance, etc. The bin/hadoop dfsadmin -report command will describe which nodes are connected to the cluster.
Step 5: Edit excludes file again. Once the machines have been decommissioned, they can be removed from the excludes file. Running bin/hadoop dfsadmin -refreshNodes again will read the excludes file back into the NameNode, allowing the DataNodes to rejoin the cluster after maintenance has been completed, or additional capacity is needed in the cluster again, etc.
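Putting the commands from the steps above together (the excludes-file path and host name are assumptions):

echo "slave3" >> conf/excludes        # the file pointed to by dfs.hosts.exclude
bin/hadoop dfsadmin -refreshNodes     # NameNode rereads the excludes file and starts decommissioning
bin/hadoop dfsadmin -report           # check which nodes are connected / decommissioned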

Verifying File System Health

After decommissioning nodes, restarting a cluster, or periodically during its lifetime, you may want to ensure that the file system is healthy--that files are not corrupted or under-replicated, and that blocks are not missing.
Hadoop provides an fsck command to do exactly this. It can be launched at the command line like so:
  bin/hadoop fsck [path] [options]
If run with no arguments, it will print usage information and exit. If run with the argument /, it will check the health of the entire file system and print a report. If provided with a path to a particular directory or file, it will only check files under that path. If an option argument is given but no path, it will start from the file system root (/). The options fall into two types:
Action options specify what action should be taken when corrupted files are found. This can be -move, which moves corrupt files to /lost+found, or -delete, which deletes corrupted files.
Information options specify how verbose the tool should be in its report. The -files option will list all files it checks as it encounters them. This information can be further expanded by adding the -blocks option, which prints the list of blocks for each file. Adding -locations to these two options will then print the addresses of the DataNodes holding these blocks. Still more information can be retrieved by adding -racks to the end of this list, which then prints the rack topology information for each location. (See the next subsection for more information on configuring network rack awareness.) Note that the latter options do not imply the former; you must use them in conjunction with one another. Also, note that the Hadoop program uses -files in a "common argument parser" shared by the different commands such as dfsadmin, fsck, dfs, etc. This means that if you omit a path argument to fsck, it will not receive the -files option that you intend. You can separate common options from fsck-specific options by using -- as an argument, like so:
  bin/hadoop fsck -- -files -blocks
The -- is not required if you provide a path to start the check from, or if you specify another argument first such as -move.
By default, fsck will not operate on files still open for write by another client. A list of such files can be produced with the -openforwrite option.
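A few example invocations of the options above (the directory path is hypothetical):

bin/hadoop fsck /                                        # health report for the whole file system
bin/hadoop fsck /user/hduser -files -blocks -locations   # per-file block and DataNode detail under a path
bin/hadoop fsck -- -files -blocks                        # no start path, so separate the fsck options with --
bin/hadoop fsck / -openforwrite                          # list files still open for write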

Sunday 9 December 2012

Some tips for big data cluster

To disable iptables on a Linux box:

service iptables status
service iptables save
service iptables stop
chkconfig iptables off
service network restart

To leave the safe mode
hadoop dfsadmin -safemode leave

To stop a daemon of hbase
hbase-daemon.sh stop master

To kill a job
hadoop job -kill job_201209271339_0006

install java jdk rpm

rpm -Uvh jdk1.6.0_33-linux-i586.rpm

Uninstall Java
rpm -qa | grep jdk
rpm -e jdk1.6.0_33-fcs

To view the task log URL, replace the task ID with the attempt ID in the log URL.

Common errors
UserPriviledgedAction: give chown rights to hduser.
session 0x0 for server null: network issue.
Clock sync error: set the same time on the master and all slaves to synchronise the clocks.
Unable to read additional data from client sessionid: this error comes when slaves are removed and data is not replicated properly. Add the slaves back to recover the data.

Create an auxlib directory in Hive and copy these jar files into it (a copy sketch follows the list):
zookeeper jar
hive contrib jar
hbase jar
hive-hbase handler jar
guava jar
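A rough sketch of that copy, assuming HIVE_HOME and HBASE_HOME are set and the usual jar locations (exact file names and versions will differ on your install):

mkdir -p $HIVE_HOME/auxlib
cp $HBASE_HOME/hbase-*.jar $HIVE_HOME/auxlib/                  # hbase jar
cp $HBASE_HOME/lib/zookeeper-*.jar $HIVE_HOME/auxlib/          # zookeeper jar
cp $HBASE_HOME/lib/guava-*.jar $HIVE_HOME/auxlib/              # guava jar
cp $HIVE_HOME/lib/hive-contrib-*.jar $HIVE_HOME/auxlib/        # hive contrib jar
cp $HIVE_HOME/lib/hive-hbase-handler-*.jar $HIVE_HOME/auxlib/  # hbase-hive integration jar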


In Talend Open Source many jar files are missing; they can be taken from the Talend Data Integration tool.

For Talend to connect to Hive, hive-site.xml and hive-default.xml should be on the classpath. For Talend or Jaspersoft to work, start the Thrift server using the command

hive --service hiveserver

Add these lines to .bashrc to set the classpath (this sketch assumes HIVE_HOME and HADOOP_HOME point at your Hive and Hadoop installs):

for i in $HIVE_HOME/lib/*.jar; do
  CLASSPATH=$CLASSPATH:$i                             # append every Hive library jar
done
CLASSPATH=$CLASSPATH:$HADOOP_HOME/hadoop-core-*.jar   # the Hadoop core jar
CLASSPATH=$CLASSPATH:$HIVE_HOME/conf                  # so hive-site.xml is picked up
export CLASSPATH

In Talend the MySQL Java connector was missing, and only version 5.0 should be added to the plugins. Go to Modules, add it and refresh.

The HBase connection from Talend was not happening because localhost could not be resolved. Go to C:\Windows\System32\drivers\etc\hosts and add these lines:

127.0.0.1 localhost
IP  master
IP1 slave1
IP2 slave2

Similarly, add the hosts entry of your system on the master and the slaves.

The HBase connection is case sensitive and will throw a null pointer exception if case is not taken into consideration.

The namenode goes into safe mode even before the jobtracker, so set
dfs.safemode.threshold.pct=0 so that the namenode doesn't go into safe mode on startup.

While writing a Hive load, use a FIELDS TERMINATED BY clause that matches the file's delimiter to avoid ending up with an equal number of null columns.

Copying data from the event log to the HBase-Hive integration brings the reducer count to zero, as some row keys come in as null.

Delete the temp files to remove any data that might hinder starting the cluster.
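A sketch of that cleanup, assuming hadoop.tmp.dir is left at its default of /tmp/hadoop-<username> and the Hadoop user is hduser (run it on every node, with the cluster stopped):

rm -rf /tmp/hadoop-hduser/*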

To ssh without using a password, do this as root:
$ chmod go-w $HOME $HOME/.ssh
$ chmod 600 $HOME/.ssh/authorized_keys


and then, as hduser:
$ chown `whoami` $HOME/.ssh/authorized_keys

Change the machine name to slave1, slave2, etc. with
echo slave1 > /proc/sys/kernel/hostname


The most common problem that causes public keys to fail are permissions in the $HOME directory. Your $HOME directory cannot be writable by any user except the owner. Additionally, the .ssh directory and the authorized_keys file cannot be writable except by the owner. The ssh protocol will not report the problem but will silently ignore the authorized_keys file if any permissions are wrong.
To fix the destination public key handshake, you can do this (logged in as the remote user):
    chmod 755 $HOME $HOME/.ssh
    chmod 600 $HOME/.ssh/*

Alternatively, you can just remove the write capability with:
chmod go-w $HOME $HOME/.ssh
chmod go-w $HOME/.ssh/*
Also, the $HOME and $HOME/.ssh directories must be owned by the user and all the files in .ssh owned by the user. A common error is to create the .ssh directory and files as root and forget to assign the proper permissions and ownership. A better way is to login as the user, then run ssh-keygen (e.g. ssh-keygen -t rsa) to create not only the ssh keys but also the .ssh directory with correct permissions and ownership.
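A compact sketch of the whole passwordless-ssh setup, run as the hadoop user (the slave host name is a placeholder):

ssh-keygen -t rsa                                  # accept the defaults; creates ~/.ssh with correct ownership
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # authorise the key (copy it to each slave as well)
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh slave1                                         # should now log in without a password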

Tuesday 4 December 2012

Reporting on hadoop

Talend, an open source provider of tools for managing Big Data, provides a tool called Talend Open Studio for Big Data. It's a GUI-based data integration tool like SSIS. Behind the scenes this tool generates code for the Hadoop Distributed File System (HDFS), Pig, HBase, Sqoop and Hive. Tools like this really take Hadoop and Big Data to an extremely wide user base.

After you have the ways to build a highway to a mountain of data, the immediate need is to make meaning of that data. One of the front-runners of data visualization and analytics, Tableau, provides ways to create ad-hoc visualizations from extracts of data from Hadoop clusters, or straight live from the Hadoop clusters. Tableau facilitates both creating visualizations from in-memory data and staging extracts of data from Hadoop clusters into relational databases and creating visualizations from those.
Other analytics vendors like SnapLogic and Pentaho also provide tools for operating with Hadoop clusters which do not require developers to write code. Microsoft has an integrated platform for integration, reporting and analytics (in-memory/OLAP) and an IDE, SSDT (formerly BIDS).
If tools similar to Talend and Tableau are integrated into SSIS, SSAS, SSRS, the DB Engine and SSDT, then Microsoft is one of the best-positioned leaders to take Hadoop to a wide audience in their mainstream business. When platforms like Azure Data Market, Data Quality Services, Master Data Management, StreamInsight, SharePoint etc. join hands with tool and technology support integrated with SQL Server, it would be an unmatched way to extract intelligence out of Hadoop. Connectors for Hadoop have been the first baby step in this area; a lot of maturity here is still awaited.
Till then, look out for existing leaders in this area like Cloudera, MapR, Hortonworks, Apache and Greenplum for Hadoop distributions and implementation. And for Hadoop tools, software vendors like Talend, Tableau, SnapLogic and Pentaho can provide the required toolset.