In SSRS, the formatting of a text box can be changed by right-clicking the text box and going to Format. Click the ellipsis (...) next to the format code and choose Currency to display a $ sign before costs.
By default, if you generate monthly reports, the months come out in alphabetical order (April, August, December, and so on). To order them properly, right-click the text box,
go to Properties, and turn on the interactive sort option.
Now right-click again and go to Edit Group. In Sorting, give the sort column as datepart(mm, date) and the display column as datename(mm, date).
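For reference, the same idea expressed in T-SQL for the dataset feeding the report -- sort on the month number but display the month name. This is only a sketch; the Sales table and OrderDate column are assumptions:
SELECT DATENAME(MONTH, OrderDate) AS MonthName,   -- shown to the user
       DATEPART(MONTH, OrderDate) AS MonthNumber  -- used only for sorting
FROM Sales
GROUP BY DATENAME(MONTH, OrderDate), DATEPART(MONTH, OrderDate)
ORDER BY MonthNumber;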
Sunday, 30 December 2012
Tuesday, 18 December 2012
Shared network drive and SSIS
When we copy a big file from a server to a remote location (a NAS drive or similar) through SSIS, a "network connection unavailable" error can come up.
To fix this, go to My Computer and map a network drive to that remote location. The package will then execute properly from SSIS.
Executing the package directly is easy, but running it as a SQL Server Agent job still gives an error. To resolve that:
Use the UNC path when specifying the destination -- the SQL Agent doesn't have a concept of "mapped" drives.
Also, SQL Agent typically runs as "Local Service" or "Local System" and, as such, doesn't have rights to remote shares on other computers. You have a couple of choices: run the job step under a proxy/credential that does have rights to the share, or run the Agent service under a domain account that does.
Monday, 17 December 2012
How to copy reports from sql server to a remote disk drive
For an SSIS package to work with a remote disk drive, go to My Computer and map the network drive to a local drive letter.
Open Notepad, write the command below, and save it as "filename.bat":
copy c:\*.txt \\kp\nas\report
In the SSIS package, add an Execute Process Task, point its Executable property at this .bat file, and run it.
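A slightly more defensive version of the batch file, shown only as a sketch (the source folder is an example path), makes the Execute Process Task fail when the copy fails:
rem filename.bat -- copy the reports to the share and report failure to SSIS
copy "c:\reports\*.txt" "\\kp\nas\report"
if errorlevel 1 exit /b 1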
The Excel Connection Manager is not supported in the 64-bit version of SSIS
[Connection manager "Excel Connection Manager"] Error: SSIS Error Code DTS_E_OLEDB_EXCEL_NOT_SUPPORTED: The Excel Connection Manager is not supported in the 64-bit version of SSIS, as no OLE DB provider is available.
To fix this:
Go to Project –> Project Properties –> Debugging, then set Run64BitRuntime = False.
That's it…
Tuesday, 11 December 2012
Increasing java heap space
While running Jasper reports I got an insufficient Java heap space error. Here is the solution.
On Linux systems, set this:
export JVM_ARGS="-Xms1024m -Xmx1024m"
On Windows:
Running a Java application takes some memory during the process, which is known as Java memory (the Java heap). Frequently it is necessary to increase that heap to prevent throttling the performance of the application.
- 1. Go to Control Panel. Click the "Start" button, then click "Control Panel."
- 2. Select Programs. On the left side of Control Panel, click "Programs" -- the "Programs" heading written in green, not the "Uninstall a program" link in blue.
- 3. Go to Java settings. In the next dialog click "Java," usually at the bottom of the other programs; the "Java Control Panel" dialog opens.
- 4. Select the "Java" tab. Inside the Java tab, click the "View" button. It opens the "Java Runtime Environment Settings."
- 5. Change the amount of heap. In the "Runtime Parameters" column, change the value, or if it is blank decide on a new value for the Java memory.
- 6. Modify the parameter. To modify the parameter, double-click in the "Runtime Parameters" column and:
- type -Xmx512m to assign 512 MB of memory to Java.
- type -Xmx1024m to assign 1 GB of memory to Java.
- type -Xmx2048m to assign 2 GB of memory to Java.
- type -Xmx3072m to assign 3 GB of memory to Java, and so on.
- Note that the parameter begins with a minus sign and ends with an m.
- Also note that there is no blank space between the characters.
- 7. Close the dialog box. Click the "OK" button in the "Java Runtime Environment Settings" dialog to close it.
- 8. Close the Java dialog box. The "Apply" button in the "Java Control Panel" is now enabled. Click "Apply" to finalise the new Java memory, then click the "OK" button.
- 9. Close the Windows 7 Control Panel.
Monday, 10 December 2012
More about HDFS
HDFS Permissions and Security
Starting with Hadoop 0.16.1, HDFS has included a rudimentary file permissions system. This permission system is based on the POSIX model, but does not provide strong security for HDFS files. The HDFS permissions system is designed to prevent accidental corruption of data or casual misuse of information within a group of users who share access to a cluster. It is not a strong security model that guarantees denial of access to unauthorized parties.
HDFS security is based on the POSIX model of users and groups. Each file or directory has 3 permissions (read, write and execute) associated with it at three different granularities: the file's owner, users in the same group as the owner, and all other users in the system. As the HDFS does not provide the full POSIX spectrum of activity, some combinations of bits will be meaningless. For example, no file can be executed; the +x bits cannot be set on files (only directories). Nor can an existing file be written to, although the +w bits may still be set.
Security permissions and ownership can be modified using the bin/hadoop dfs -chmod, -chown, and -chgrp operations described earlier in this document; they work in a similar fashion to the POSIX/Linux tools of the same name.
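For instance, a few illustrative invocations (the path, user and group here are made up):
bin/hadoop dfs -chmod 750 /user/hduser/reports
bin/hadoop dfs -chown hduser:analysts /user/hduser/reports
bin/hadoop dfs -chgrp analysts /user/hduser/reports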
Determining identity - Identity is not authenticated formally with HDFS; it is taken from an extrinsic source. The Hadoop system is programmed to use the user's current login as their Hadoop username (i.e., the equivalent of whoami). The user's current working group list (i.e., the output of groups) is used as the group list in Hadoop. HDFS itself does not verify that this username is genuine to the actual operator.
Superuser status - The username which was used to start the Hadoop process (i.e., the username who actually ran bin/start-all.sh or bin/start-dfs.sh) is acknowledged to be the superuser for HDFS. If this user interacts with HDFS, he does so with a special username superuser. This user's operations on HDFS never fail, regardless of permission bits set on the particular files he manipulates. If Hadoop is shutdown and restarted under a different username, that username is then bound to the superuser account.
Supergroup - There is also a special group named supergroup, whose membership is controlled by the configuration parameter dfs.permissions.supergroup.
Disabling permissions - By default, permissions are enabled on HDFS. The permission system can be disabled by setting the configuration option dfs.permissions to false. The owner, group, and permissions bits associated with each file and directory will still be preserved, but the HDFS process does not enforce them, except when using permissions-related operations such as -chmod.
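As a sketch, the corresponding entry in conf/hadoop-site.xml would look roughly like this:
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>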
Additional HDFS Tasks
Rebalancing Blocks
New nodes can be added to a cluster in a straightforward manner. On the new node, the same Hadoop version and configuration (conf/hadoop-site.xml) as on the rest of the cluster should be installed. Starting the DataNode daemon on the machine will cause it to contact the NameNode and join the cluster. (The new node should be added to the slaves file on the master server as well, to inform the master how to invoke script-based commands on the new node.)
But the new DataNode will have no data on board initially; it is therefore not alleviating space concerns on the existing nodes. New files will be stored on the new DataNode in addition to the existing ones, but for optimum usage, storage should be evenly balanced across all nodes.
This can be achieved with the automatic balancer tool included with Hadoop. The Balancer class will intelligently balance blocks across the nodes to achieve an even distribution of blocks within a given threshold, expressed as a percentage. (The default is 10%.) Smaller percentages make nodes more evenly balanced, but may require more time to achieve this state. Perfect balancing (0%) is unlikely to actually be achieved.
The balancer script can be run by starting bin/start-balancer.sh in the Hadoop directory. The script can be provided a balancing threshold percentage with the -threshold parameter; e.g., bin/start-balancer.sh -threshold 5. The balancer will automatically terminate when it achieves its goal, or when an error occurs, or it cannot find more candidate blocks to move to achieve better balance. The balancer can always be terminated safely by the administrator by running bin/stop-balancer.sh.
The balancing script can be run when nobody else is using the cluster (e.g., overnight), but it can also be run in an "online" fashion while many other jobs are ongoing. To prevent the rebalancing process from consuming large amounts of bandwidth and significantly degrading the performance of other processes on the cluster, the dfs.balance.bandwidthPerSec configuration parameter can be used to limit the number of bytes/sec each node may devote to rebalancing its data store.
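For example, a conf/hadoop-site.xml entry along these lines caps each node at roughly 1 MB/s of rebalancing traffic (the value is in bytes per second; the number shown is only an example):
<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <value>1048576</value>
</property>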
Copying Large Sets of Files
When migrating a large number of files from one location to another (either from one HDFS cluster to another, from S3 into HDFS or vice versa, etc.), the task should be divided between multiple nodes to allow them all to share in the bandwidth required for the process. Hadoop includes a tool called distcp for this purpose. By invoking bin/hadoop distcp src dest, Hadoop will start a MapReduce task to distribute the burden of copying a large number of files from src to dest. These two parameters may specify a full URL for the path to copy. e.g., "hdfs://SomeNameNode:9000/foo/bar/" and "hdfs://OtherNameNode:2000/baz/quux/" will copy the children of /foo/bar on one cluster to the directory tree rooted at /baz/quux on the other. The paths are assumed to be directories, and are copied recursively. S3 URLs can be specified with s3://bucket-name/key.
Decommissioning Nodes
In addition to allowing nodes to be added to the cluster on the fly, nodes can also be removed from a cluster while it is running, without data loss. But if nodes are simply shut down "hard," data loss may occur as they may hold the sole copy of one or more file blocks.
Nodes must be retired on a schedule that allows HDFS to ensure that no blocks are entirely replicated within the to-be-retired set of DataNodes.
HDFS provides a decommissioning feature which ensures that this process is performed safely. To use it, follow the steps below:
Step 1: Cluster configuration. If it is assumed that nodes may be retired in your cluster, then before it is started, an excludes file must be configured. Add a key named dfs.hosts.exclude to your conf/hadoop-site.xml file (a sketch of this entry follows these steps). The value associated with this key provides the full path to a file on the NameNode's local file system which contains a list of machines which are not permitted to connect to HDFS.
Step 2: Determine hosts to decommission. Each machine to be decommissioned should be added to the file identified by dfs.hosts.exclude, one per line. This will prevent them from connecting to the NameNode.
Step 3: Force configuration reload. Run the command bin/hadoop dfsadmin -refreshNodes. This will force the NameNode to reread its configuration, including the newly-updated excludes file. It will decommission the nodes over a period of time, allowing time for each node's blocks to be replicated onto machines which are scheduled to remain active.
Step 4: Shutdown nodes. After the decommission process has completed, the decommissioned hardware can be safely shutdown for maintenance, etc. The bin/hadoop dfsadmin -report command will describe which nodes are connected to the cluster.
Step 5: Edit excludes file again. Once the machines have been decommissioned, they can be removed from the excludes file. Running bin/hadoop dfsadmin -refreshNodes again will read the excludes file back into the NameNode, allowing the DataNodes to rejoin the cluster after maintenance has been completed, or additional capacity is needed in the cluster again, etc.
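A minimal sketch of the Step 1 configuration; the excludes-file path is only an example:
<property>
  <name>dfs.hosts.exclude</name>
  <value>/home/hadoop/conf/excludes</value>
</property>
The excludes file itself is plain text with one hostname per line (e.g. slave3 and slave4 each on their own line).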
Verifying File System Health
After decommissioning nodes, restarting a cluster, or periodically during its lifetime, you may want to ensure that the file system is healthy--that files are not corrupted or under-replicated, and that blocks are not missing. Hadoop provides an fsck command to do exactly this. It can be launched at the command line like so:
bin/hadoop fsck [path] [options]
Action options specify what action should be taken when corrupted files are found. This can be -move, which moves corrupt files to /lost+found, or -delete, which deletes corrupted files.
Information options specify how verbose the tool should be in its report. The -files option will list all files it checks as it encounters them. This information can be further expanded by adding the -blocks option, which prints the list of blocks for each file. Adding -locations to these two options will then print the addresses of the DataNodes holding these blocks. Still more information can be retrieved by adding -racks to the end of this list, which then prints the rack topology information for each location. (See the next subsection for more information on configuring network rack awareness.) Note that the latter options do not imply the former; you must use them in conjunction with one another. Also, note that the Hadoop program uses -files in a "common argument parser" shared by the different commands such as dfsadmin, fsck, dfs, etc. This means that if you omit a path argument to fsck, it will not receive the -files option that you intend. You can separate common options from fsck-specific options by using -- as an argument, like so:
bin/hadoop fsck -- -files -blocks
By default, fsck will not operate on files still open for write by another client. A list of such files can be produced with the -openforwrite option.
Sunday, 9 December 2012
Some tips for big data cluster
To disable iptables on a Linux box:
service iptables status
service iptables save
service iptables stop
chkconfig iptables off
service network restart
To leave the safe mode
hadoop dfsadmin -safemode leave
To stop an HBase daemon:
hbase-daemon.sh stop master
To kill a job
hadoop job -kill job_201209271339_0006
To install the Java JDK RPM:
rpm -Uvh jdk1.6.0_33-linux-i586.rpm
To uninstall Java:
rpm -qa | grep jdk
rpm -e jdk1.6.0_33-fcs
To view a task log URL, replace the task ID with the attempt ID in the log URL.
Common errors:
UserPriviledgedAction: give chown rights to hduser.
session 0x0 for server null: a network issue.
Clock sync error: set the same time on the master and all slaves to synchronise the clocks.
Unable to read additional data from client sessionid: this error comes when slaves are removed and data is not replicated properly. Add the slaves back to recover the data.
Create an auxlib directory in Hive and copy these jar files into it:
the ZooKeeper jar
the hive-contrib jar
the HBase jar
the HBase-Hive handler jar
the Guava jar
For Talend Open Source, many jar files are missing; they can be taken from the Talend Data Integration tool.
For Talend to connect to Hive, hive-site.xml and hive-default.xml should be on the classpath. For Talend or Jaspersoft to work, start the Thrift server using the command:
hive --service hiveserver
Add these lines to .bashrc to set the classpath (HIVE_HOME and HADOOP_HOME point at your Hive and Hadoop installations):
for i in $HIVE_HOME/lib/*.jar; do
  CLASSPATH=$CLASSPATH:$i
done
CLASSPATH=$CLASSPATH:$HADOOP_HOME/hadoop-core-*.jar   # the Hadoop core jar
CLASSPATH=$CLASSPATH:$HIVE_HOME/conf
export CLASSPATH
In Talend the MySQL Java connector was missing, and only version 5.0 is accepted when adding it to the plugins. Go to Modules, add it, and refresh.
The HBase connection from Talend was not happening because localhost could not be resolved. Go to C:\Windows\System32\drivers\etc\hosts and add these lines:
127.0.0.1 localhost
IP master
IP1 slave1
IP2 slave2
Similarly, add the hosts entries for your own system on the master and the slaves.
The HBase connection is case sensitive and will throw a null pointer exception if case is not taken into account.
The namenode goes into safe mode even before the jobtracker, so set
dfs.namenode.threshold.percent=0 so that the namenode doesn't go into safe mode.
When loading data into Hive, make sure the table is declared with a FIELDS TERMINATED BY clause that matches the file; otherwise the load ends up with an equal number of NULL columns.
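A minimal sketch (the table and column names are made up) of declaring the delimiter at table creation time and then loading:
CREATE TABLE eventlog (id STRING, event_time STRING, msg STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
LOAD DATA INPATH '/user/hduser/eventlog.tsv' INTO TABLE eventlog;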
Copying data from the event log into the HBase-Hive integration sets the reducer count to zero, as some row keys come in as null.
Delete temp files to remove any data that might hinder starting the cluster.
To ssh without a password, do this as root:
$ chmod go-w $HOME $HOME/.ssh
$ chmod 600 $HOME/.ssh/authorized_keys
and then switch to hduser:
$ chown `whoami` $HOME/.ssh/authorized_keys
Change the machine name to slave1, slave2, etc. with:
echo slave1 > /proc/sys/kernel/hostname
The most common problem that causes public keys to fail is permissions in the $HOME directory. Your $HOME directory cannot be writable by any user except the owner. Additionally, the .ssh directory and the authorized_keys file cannot be writable except by the owner. The ssh protocol will not report the problem but will silently ignore the authorized_keys file if any permissions are wrong.
To fix the destination public key handshake, you can do this (logged in as the remote user):
chmod 755 $HOME $HOME/.ssh
chmod 600 $HOME/.ssh/*
Alternatively, you can just remove the write capability with:
chmod go-w $HOME $HOME/.ssh
chmod go-w $HOME/.ssh/*
Also, the $HOME and $HOME/.ssh directories must be owned by the user, and all the files in .ssh owned by the user. A common error is to create the .ssh directory and files as root and forget to assign the proper permissions and ownership. A better way is to log in as the user, then run ssh-keygen -t to create not only the ssh keys but also the .ssh directory with correct permissions and ownership.
Tuesday, 4 December 2012
Reporting on hadoop
Talend - an open source provider of tools for managing Big Data - provides a tool called Talend Open Studio for Big Data. It's a GUI-based data integration tool like SSIS. Behind the scenes this tool generates code for Hadoop Distributed File System (HDFS), Pig, HBase, Sqoop and Hive. These kinds of tools really take Hadoop and Big Data to an extremely wide user base.
After you have the ways to build a highway to a mountain of data, the immediate need is to make meaning of that data. One of the front-runners of data visualization and analytics, Tableau, provides ways to create ad-hoc visualizations from extracts of data from Hadoop clusters, or straight live from the Hadoop clusters. Both creating visualizations from in-memory data and staging extracts of data from Hadoop clusters into relational databases and creating visualizations from those are facilitated by Tableau.
Other analytics vendors like SnapLogic and Pentaho also provide tools for operating with Hadoop clusters which do not require developers to write code. Microsoft has an integrated platform for integration, reporting and analytics (in-memory/OLAP) and an IDE like SSDT (formerly BIDS).
If tools similar to Talend and Tableau are integrated into SSIS, SSAS, SSRS, the DB Engine and SSDT, then Microsoft is one of the best-positioned leaders to take Hadoop to a wide audience in their mainstream business. When platforms like Azure Data Market, Data Quality Services, Master Data Management, StreamInsight, SharePoint etc. join hands with tool and technology support integrated with SQL Server, it would be an unmatched way to extract intelligence out of Hadoop. Connectors for Hadoop have been the first baby step towards this area. Still, a lot of maturity in this area is awaited.
Till then, look out for existing leaders in this area like Cloudera, MapR, Hortonworks, Apache and Greenplum for Hadoop distributions and implementation. And for Hadoop tools, software vendors like Talend, Tableau, SnapLogic and Pentaho can provide the required toolset.
Wednesday, 28 November 2012
hive udf
Relational Operators
The following operators compare the passed operands and generate a TRUE or FALSE value depending on whether the comparison between the operands holds.
Operator | Operand types | Description |
---|---|---|
A = B | All primitive types | TRUE if expression A is equal to expression B otherwise FALSE |
A <=> B | All primitive types | Returns same result with EQUAL(=) operator for non-null operands, but returns TRUE if both are NULL, FALSE if one of the them is NULL (as of version 0.9.0) |
A == B | None! | Fails because of invalid syntax. SQL uses =, not == |
A <> B | All primitive types | NULL if A or B is NULL, TRUE if expression A is NOT equal to expression B otherwise FALSE |
A < B | All primitive types | NULL if A or B is NULL, TRUE if expression A is less than expression B otherwise FALSE |
A <= B | All primitive types | NULL if A or B is NULL, TRUE if expression A is less than or equal to expression B otherwise FALSE |
A > B | All primitive types | NULL if A or B is NULL, TRUE if expression A is greater than expression B otherwise FALSE |
A >= B | All primitive types | NULL if A or B is NULL, TRUE if expression A is greater than or equal to expression B otherwise FALSE |
A [NOT] BETWEEN B AND C | All primitive types | NULL if A, B or C is NULL, TRUE if A is greater than or equal to B AND A less than or equal to C otherwise FALSE. This can be inverted by using the NOT keyword. (as of version 0.9.0) |
A IS NULL | all types | TRUE if expression A evaluates to NULL otherwise FALSE |
A IS NOT NULL | All types | FALSE if expression A evaluates to NULL otherwise TRUE |
A LIKE B | strings | NULL if A or B is NULL, TRUE if string A matches the SQL simple regular expression B, otherwise FALSE. The comparison is done character by character. The _ character in B matches any character in A(similar to . in posix regular expressions) while the % character in B matches an arbitrary number of characters in A(similar to .* in posix regular expressions) e.g. 'foobar' like 'foo' evaluates to FALSE where as 'foobar' like 'foo_ _ _' evaluates to TRUE and so does 'foobar' like 'foo%' |
A RLIKE B | strings | NULL if A or B is NULL, TRUE if string A matches the Java regular expression B(See Java regular expressions syntax), otherwise FALSE e.g. 'foobar' rlike 'foo' evaluates to FALSE where as 'foobar' rlike '^f.*r$' evaluates to TRUE |
A REGEXP B | strings | Same as RLIKE |
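A small illustrative query using a few of these operators; the employees table and its columns are invented for the example:
SELECT name
FROM employees
WHERE salary BETWEEN 40000 AND 60000
  AND name LIKE 'A%'
  AND dept RLIKE '^eng';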
Arithmetic Operators
The following operators support various common arithmetic operations on the operands. All return number types; if any of the operands are NULL, then the result is also NULL.
Operator | Operand types | Description |
---|---|---|
A + B | All number types | Gives the result of adding A and B. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands. e.g. since every integer is a float, therefore float is a containing type of integer so the + operator on a float and an int will result in a float. |
A - B | All number types | Gives the result of subtracting B from A. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands. |
A * B | All number types | Gives the result of multiplying A and B. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands. Note that if the multiplication causes an overflow, you will have to cast one of the operands to a type higher in the type hierarchy. |
A / B | All number types | Gives the result of dividing B from A. The result is a double type. |
A % B | All number types | Gives the remainder resulting from dividing A by B. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands. |
A & B | All number types | Gives the result of bitwise AND of A and B. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands. |
A | B | All number types | Gives the result of bitwise OR of A and B. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands. |
A ^ B | All number types | Gives the result of bitwise XOR of A and B. The type of the result is the same as the common parent(in the type hierarchy) of the types of the operands. |
~A | All number types | Gives the result of bitwise NOT of A. The type of the result is the same as the type of A. |
Logical Operators
The following operators provide support for creating logical expressions. All of them return boolean TRUE, FALSE, or NULL depending upon the boolean values of the operands. NULL behaves as an "unknown" flag, so if the result depends on the state of an unknown, the result itself is unknown.
Operator | Operand types | Description |
---|---|---|
A AND B | boolean | TRUE if both A and B are TRUE, otherwise FALSE. NULL if A or B is NULL |
A && B | boolean | Same as A AND B |
A OR B | boolean | TRUE if either A or B or both are TRUE; FALSE OR NULL is NULL; otherwise FALSE |
A || B | boolean | Same as A OR B |
NOT A | boolean | TRUE if A is FALSE or NULL if A is NULL. Otherwise FALSE. |
! A | boolean | Same as NOT A |
Complex Type Constructors
The following functions construct instances of complex types.
Constructor Function | Operands | Description |
---|---|---|
map | (key1, value1, key2, value2, ...) | Creates a map with the given key/value pairs |
struct | (val1, val2, val3, ...) | Creates a struct with the given field values. Struct field names will be col1, col2, ... |
named_struct | (name1, val1, name2, val2, ...) | Creates a struct with the given field names and values. |
array | (val1, val2, ...) | Creates an array with the given elements |
create_union | (tag, val1, val2, ...) | Creates a union type with the value that is being pointed to by the tag parameter |
Operators on Complex Types
The following operators provide mechanisms to access elements in Complex Types.
Operator | Operand types | Description |
---|---|---|
A[n] | A is an Array and n is an int | Returns the nth element in the array A. The first element has index 0 e.g. if A is an array comprising of ['foo', 'bar'] then A[0] returns 'foo' and A[1] returns 'bar' |
M[key] | M is a Map<K, V> and key has type K | Returns the value corresponding to the key in the map e.g. if M is a map comprising of {'f' -> 'foo', 'b' -> 'bar', 'all' -> 'foobar'} then M['all'] returns 'foobar' |
S.x | S is a struct | Returns the x field of S. e.g for struct foobar {int foo, int bar} foobar.foo returns the integer stored in the foo field of the struct. |
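As an illustration, assuming a table with an array column, a map column and a struct column (all names made up):
SELECT my_array[0], my_map['all'], my_struct.foo
FROM complex_table;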
Built-in Functions
Mathematical Functions
The following built-in mathematical functions are supported in hive; most return NULL when the argument(s) are NULL:
Return Type | Name(Signature) | Description |
---|---|---|
BIGINT | round(double a) | Returns the rounded BIGINT value of the double |
DOUBLE | round(double a, int d) | Returns the double rounded to d decimal places |
BIGINT | floor(double a) | Returns the maximum BIGINT value that is equal or less than the double |
BIGINT | ceil(double a), ceiling(double a) | Returns the minimum BIGINT value that is equal or greater than the double |
double | rand(), rand(int seed) | Returns a random number (that changes from row to row) that is distributed uniformly from 0 to 1. Specifying the seed will make sure the generated random number sequence is deterministic. |
double | exp(double a) | Returns e^a where e is the base of the natural logarithm |
double | ln(double a) | Returns the natural logarithm of the argument |
double | log10(double a) | Returns the base-10 logarithm of the argument |
double | log2(double a) | Returns the base-2 logarithm of the argument |
double | log(double base, double a) | Returns the base-"base" logarithm of the argument |
double | pow(double a, double p), power(double a, double p) | Returns a^p |
double | sqrt(double a) | Returns the square root of a |
string | bin(BIGINT a) | Returns the number in binary format (see [http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_bin]) |
string | hex(BIGINT a) hex(string a) | If the argument is an int, hex returns the number as a string in hex format. Otherwise if the number is a string, it converts each character into its hex representation and returns the resulting string. (see [http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_hex]) |
string | unhex(string a) | Inverse of hex. Interprets each pair of characters as a hexadecimal number and converts to the character represented by the number. |
string | conv(BIGINT num, int from_base, int to_base) | Converts a number from a given base to another (see [http://dev.mysql.com/doc/refman/5.0/en/mathematical-functions.html#function_conv]) |
double | abs(double a) | Returns the absolute value |
int double | pmod(int a, int b) pmod(double a, double b) | Returns the positive value of a mod b |
double | sin(double a) | Returns the sine of a (a is in radians) |
double | asin(double a) | Returns the arc sine of a if -1<=a<=1 or NULL otherwise |
double | cos(double a) | Returns the cosine of a (a is in radians) |
double | acos(double a) | Returns the arc cosine of a if -1<=a<=1 or NULL otherwise |
double | tan(double a) | Returns the tangent of a (a is in radians) |
double | atan(double a) | Returns the arctangent of a |
double | degrees(double a) | Converts value of a from radians to degrees |
double | radians(double a) | Converts value of a from degrees to radians |
int double | positive(int a) positive(double a) | Returns a |
int double | negative(int a) negative(double a) | Returns -a |
float | sign(double a) | Returns the sign of a as '1.0' or '-1.0' |
double | e() | Returns the value of e |
double | pi() | Returns the value of pi |
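A quick sketch exercising a few of these functions (some_table is just a stand-in, since older Hive versions require a FROM clause):
SELECT round(3.14159, 2), floor(2.7), ceil(2.1), sqrt(16.0), pow(2.0, 10.0)
FROM some_table LIMIT 1;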
Collection Functions
The following built-in collection functions are supported in hive:
Return Type | Name(Signature) | Description |
---|---|---|
int | size(Map<K.V>) | Returns the number of elements in the map type |
int | size(Array<T>) | Returns the number of elements in the array type |
array<K> | map_keys(Map<K.V>) | Returns an unordered array containing the keys of the input map |
array<V> | map_values(Map<K.V>) | Returns an unordered array containing the values of the input map |
boolean | array_contains(Array<T>, value) | Returns TRUE if the array contains value |
array<T> | sort_array(Array<T>) | Sorts the input array in ascending order according to the natural ordering of the array elements and returns it (as of version 0.9.0) |
Type Conversion Functions
The following type conversion functions are supported in hive:
Return Type | Name(Signature) | Description |
---|---|---|
binary | binary(string|binary) | Casts the parameter into a binary |
Expected "=" to follow "type" | cast(expr as |
Converts the results of the expression expr to
|
Date Functions
The following built-in date functions are supported in hive:
Return Type | Name(Signature) | Description |
---|---|---|
string | from_unixtime(bigint unixtime[, string format]) | Converts the number of seconds from unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of that moment in the current system time zone in the format of "1970-01-01 00:00:00" |
bigint | unix_timestamp() | Gets current time stamp using the default time zone. |
bigint | unix_timestamp(string date) | Converts time string in format yyyy-MM-dd HH:mm:ss to Unix time stamp, return 0 if fail: unix_timestamp('2009-03-20 11:30:01') = 1237573801 |
bigint | unix_timestamp(string date, string pattern) | Convert time string with given pattern (see [http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html]) to Unix time stamp, return 0 if fail: unix_timestamp('2009-03-20', 'yyyy-MM-dd') = 1237532400 |
string | to_date(string timestamp) | Returns the date part of a timestamp string: to_date("1970-01-01 00:00:00") = "1970-01-01" |
int | year(string date) | Returns the year part of a date or a timestamp string: year("1970-01-01 00:00:00") = 1970, year("1970-01-01") = 1970 |
int | month(string date) | Returns the month part of a date or a timestamp string: month("1970-11-01 00:00:00") = 11, month("1970-11-01") = 11 |
int | day(string date) dayofmonth(date) | Return the day part of a date or a timestamp string: day("1970-11-01 00:00:00") = 1, day("1970-11-01") = 1 |
int | hour(string date) | Returns the hour of the timestamp: hour('2009-07-30 12:58:59') = 12, hour('12:58:59') = 12 |
int | minute(string date) | Returns the minute of the timestamp |
int | second(string date) | Returns the second of the timestamp |
int | weekofyear(string date) | Return the week number of a timestamp string: weekofyear("1970-11-01 00:00:00") = 44, weekofyear("1970-11-01") = 44 |
int | datediff(string enddate, string startdate) | Return the number of days from startdate to enddate: datediff('2009-03-01', '2009-02-27') = 2 |
string | date_add(string startdate, int days) | Add a number of days to startdate: date_add('2008-12-31', 1) = '2009-01-01' |
string | date_sub(string startdate, int days) | Subtract a number of days to startdate: date_sub('2008-12-31', 1) = '2008-12-30' |
timestamp | from_utc_timestamp(timestamp, string timezone) | Assumes given timestamp is UTC and converts to given timezone (as of Hive 0.8.0) |
timestamp | to_utc_timestamp(timestamp, string timezone) | Assumes given timestamp is in given timezone and converts to UTC (as of Hive 0.8.0) |
Conditional Functions
Return Type | Name(Signature) | Description |
---|---|---|
T | if(boolean testCondition, T valueTrue, T valueFalseOrNull) | Return valueTrue when testCondition is true, returns valueFalseOrNull otherwise |
T | COALESCE(T v1, T v2, ...) | Return the first v that is not NULL, or NULL if all v's are NULL |
T | CASE a WHEN b THEN c [WHEN d THEN e]* [ELSE f] END | When a = b, returns c; when a = d, return e; else return f |
T | CASE WHEN a THEN b [WHEN c THEN d]* [ELSE e] END | When a = true, returns b; when c = true, return d; else return e |
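For instance (the students table and its columns are invented for the example):
SELECT if(cgpa >= 8.0, 'distinction', 'pass'),
       COALESCE(nickname, name, 'unknown'),
       CASE WHEN cgpa >= 9.0 THEN 'A' WHEN cgpa >= 8.0 THEN 'B' ELSE 'C' END
FROM students;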
String Functions
The following built-in string functions are supported in hive:
Return Type | Name(Signature) | Description |
---|---|---|
int | ascii(string str) | Returns the numeric value of the first character of str |
string | concat(string|binary A, string|binary B...) | Returns the string or bytes resulting from concatenating the strings or bytes passed in as parameters in order. e.g. concat('foo', 'bar') results in 'foobar'. Note that this function can take any number of input strings. |
array<struct<string,double>> | context_ngrams(array<array<string>>, array<string>, int K, int pf) | Returns the top-k contextual N-grams from a set of tokenized sentences, given a string of "context". See StatisticsAndDataMining for more information. |
string | concat_ws(string SEP, string A, string B...) | Like concat() above, but with custom separator SEP. |
string | concat_ws(string SEP, array<string>) | Like concat_ws() above, but taking an array of strings. (as of Hive 0.9.0) |
int | find_in_set(string str, string strList) | Returns the first occurrence of str in strList where strList is a comma-delimited string. Returns null if either argument is null. Returns 0 if the first argument contains any commas. e.g. find_in_set('ab', 'abc,b,ab,c,def') returns 3 |
string | format_number(number x, int d) | Formats the number X to a format like '#,###,###.##', rounded to D decimal places, and returns the result as a string. If D is 0, the result has no decimal point or fractional part. (as of Hive 0.10.0) |
string | get_json_object(string json_string, string path) | Extract json object from a json string based on json path specified, and return json string of the extracted json object. It will return null if the input json string is invalid. NOTE: The json path can only have the characters [0-9a-z_], i.e., no upper-case or special characters. Also, the keys *cannot start with numbers.* This is due to restrictions on Hive column names. |
boolean | in_file(string str, string filename) | Returns true if the string str appears as an entire line in filename. |
int | instr(string str, string substr) | Returns the position of the first occurrence of substr in str |
int | length(string A) | Returns the length of the string |
int | locate(string substr, string str[, int pos]) | Returns the position of the first occurrence of substr in str after position pos |
string | lower(string A) lcase(string A) | Returns the string resulting from converting all characters of A to lower case e.g. lower('fOoBaR') results in 'foobar' |
string | lpad(string str, int len, string pad) | Returns str, left-padded with pad to a length of len |
string | ltrim(string A) | Returns the string resulting from trimming spaces from the beginning(left hand side) of A e.g. ltrim(' foobar ') results in 'foobar ' |
array<struct<string,double>> | ngrams(array<array<string>>, int N, int K, int pf) | Returns the top-k N-grams from a set of tokenized sentences, such as those returned by the sentences() UDAF. See StatisticsAndDataMining for more information. |
string | parse_url(string urlString, string partToExtract [, string keyToExtract]) | Returns the specified part from the URL. Valid values for partToExtract include HOST, PATH, QUERY, REF, PROTOCOL, AUTHORITY, FILE, and USERINFO. e.g. parse_url('http://facebook.com/path1/p.php?k1=v1&k2=v2#Ref1', 'HOST') returns 'facebook.com'. Also a value of a particular key in QUERY can be extracted by providing the key as the third argument, e.g. parse_url('http://facebook.com/path1/p.php?k1=v1&k2=v2#Ref1', 'QUERY', 'k1') returns 'v1'. |
string | printf(String format, Obj... args) | Returns the input formatted according to printf-style format strings (as of Hive 0.9.0) |
string | regexp_extract(string subject, string pattern, int index) | Returns the string extracted using the pattern. e.g. regexp_extract('foothebar', 'foo(.*?)(bar)', 2) returns 'bar'. Note that some care is necessary in using predefined character classes: using '\s' as the second argument will match the letter s; '\\s' is necessary to match whitespace, etc. The 'index' parameter is the Java regex Matcher group() method index. See docs/api/java/util/regex/Matcher.html for more information on the 'index' or Java regex group() method. |
string | regexp_replace(string INITIAL_STRING, string PATTERN, string REPLACEMENT) | Returns the string resulting from replacing all substrings in INITIAL_STRING that match the java regular expression syntax defined in PATTERN with instances of REPLACEMENT, e.g. regexp_replace("foobar", "oo|ar", "") returns 'fb'. Note that some care is necessary in using predefined character classes: using '\s' as the second argument will match the letter s; '\\s' is necessary to match whitespace, etc. |
string | repeat(string str, int n) | Repeat str n times |
string | reverse(string A) | Returns the reversed string |
string | rpad(string str, int len, string pad) | Returns str, right-padded with pad to a length of len |
string | rtrim(string A) | Returns the string resulting from trimming spaces from the end(right hand side) of A e.g. rtrim(' foobar ') results in ' foobar' |
array<array<string>> | sentences(string str, string lang, string locale) | Tokenizes a string of natural language text into words and sentences, where each sentence is broken at the appropriate sentence boundary and returned as an array of words. The 'lang' and 'locale' are optional arguments. e.g. sentences('Hello there! How are you?') returns ( ("Hello", "there"), ("How", "are", "you") ) |
string | space(int n) | Return a string of n spaces |
array<string> | split(string str, string pat) | Split str around pat (pat is a regular expression) |
map<string,string> | str_to_map(text[, delimiter1, delimiter2]) | Splits text into key-value pairs using two delimiters. Delimiter1 separates text into K-V pairs, and Delimiter2 splits each K-V pair. Default delimiters are ',' for delimiter1 and '=' for delimiter2. |
string | substr(string|binary A, int start) substring(string|binary A, int start) | Returns the substring or slice of the byte array of A starting from start position till the end of string A e.g. substr('foobar', 4) results in 'bar' (see [http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substr]) |
string | substr(string|binary A, int start, int len) substring(string|binary A, int start, int len) | Returns the substring or slice of the byte array of A starting from start position with length len e.g. substr('foobar', 4, 1) results in 'b' (see [http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substr]) |
string | translate(string input, string from, string to) | Translates the input string by replacing the characters present in the from string with the corresponding characters in the to string. This is similar to the translate function in PostgreSQL. If any of the parameters to this UDF are NULL, the result is NULL as well (available as of Hive 0.10.0) |
string | trim(string A) | Returns the string resulting from trimming spaces from both ends of A e.g. trim(' foobar ') results in 'foobar' |
string | upper(string A) ucase(string A) | Returns the string resulting from converting all characters of A to upper case e.g. upper('fOoBaR') results in 'FOOBAR' |
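A small illustration (the customers table and its columns are placeholders):
SELECT concat(first_name, ' ', last_name),
       upper(substr(city, 1, 3)),
       regexp_replace(phone, '[^0-9]', '')
FROM customers;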
Misc. Functions
Return Type | Name(Signature) | Description |
---|---|---|
varies | java_method(class, method[, arg1[, arg2..]]) | Synonym for reflect (as of Hive 0.9.0) |
varies | reflect(class, method[, arg1[, arg2..]]) | Use this UDF to call Java methods by matching the argument signature (uses reflection). (as of Hive 0.7.0) |
xpath
Tuesday, 27 November 2012
Pig Basics
Pig raises the level of abstraction for processing large datasets. MapReduce allows you, as the programmer, to specify a map function followed by a reduce function, but working out how to fit your data processing into this pattern, which often requires multiple MapReduce stages, can be a challenge. With Pig, the data structures are much richer, typically being multivalued and nested, and the set of transformations you can apply to the data are much more powerful.
Pig is made up of two pieces:
• The language used to express data flows, called Pig Latin.
• The execution environment to run Pig Latin programs. There are currently two environments: local execution in a single JVM and distributed execution on a Hadoop cluster.
A Pig Latin program is made up of a series of operations, or transformations, that are applied to the input data to produce output. Taken as a whole, the operations describe a data flow, which the Pig execution environment translates into an executable representation and then runs. Under the covers, Pig turns the transformations into a series of MapReduce jobs
Installing and Running Pig
Download latest version of Pig from the following link (Pig Installation).
$ tar xzf pig-0.7.0.tar.gz
set pig environment variables
$ export PIG_INSTALL=/home/user1/pig-0.7.0
$ export PATH=$PATH:$PIG_INSTALL/bin
You also need to set the JAVA_HOME environment variable to point to a suitable Java installation.
Pig has two execution types or modes:
1) local mode : Pig runs in a single JVM and accesses the local filesystem. This mode is suitable only for small datasets.
$ pig -x local
grunt>
This starts Grunt, the Pig interactive shell
2) MapReduce mode : In MapReduce mode, Pig translates queries into MapReduce jobs and runs them on a Hadoop cluster. The cluster may be a pseudo- or fully distributed cluster.
set the HADOOP_HOME environment variable for finding which Hadoop client to run.
$ pig or $ pig -x mapreduce , runs pig in MapReduce mode
Running Pig Programs
There are three ways of executing Pig programs, all of which work in both local and MapReduce mode
Script : Pig can run a script file that contains Pig commands. For example, pig script.pig runs the commands in the local file script.pig:
$ pig script.pig
Grunt : Grunt is an interactive shell for running Pig commands. It is also possible to run Pig scripts from within Grunt using run and exec (see the example below).
Embedded : You can run Pig programs from Java using the PigServer class, much like you can use JDBC to run SQL programs from Java.
PigPen is an Eclipse plug-in that provides an environment for developing Pig programs.
PigTools and EditorPlugins for pig can be downloaded from PigTools
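For instance, the max_cgpa.pig script shown below could be launched from inside Grunt like this (a sketch; exec works the same way but runs the script in a separate batch context, so its aliases are not visible to the shell):
grunt> run max_cgpa.pig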
Example of Pig in Interactive Mode (Grunt)
max_cgpa.pig
-- max_cgpa.pig: Finds the maximum cgpa of a user
records = LOAD 'pigsample.txt'
AS (name:chararray, spl:chararray, cgpa:float);
filtered_records = FILTER records BY cgpa > 0 AND cgpa < 10;
grouped_records = GROUP filtered_records BY spl;
max_cgpa = FOREACH grouped_records GENERATE group, MAX(filtered_records.cgpa);
STORE max_cgpa INTO 'output/cgpa_out';
Above pig script finds the maximum cgpa of a specialization.
pigsample.txt ( Input to the pig )
raghu ece 9
kumar cse 8.5
biju ece 8
mukul cse 8.6
ashish ece 7.0
subha cse 8.3
ramu ece -8.3
rahul cse 11.4
budania ece 5.4
The first column represents the name, the second column the specialization, and the third column the cgpa; by default each column is separated by a tab.
$ pig max_cgpa.pig
Output :
(cse,8.6F)
(ece,9.0F)
Analysis :
Statement : 1
records = LOAD 'pigsample.txt' AS (name:chararray, spl:chararray, cgpa:float);
Load the input file into memory from the file system (HDFS, local, or Amazon S3). The name:chararray notation describes the field's name and type; chararray is like a Java String, and a float is like a Java float.
grunt> DUMP records;
(raghu,ece,9.0F)
(kumar,cse,8.5F)
(biju,ece,8.0F)
(mukul,cse,8.6F)
(ashish,ece,7.0F)
(subha,cse,8.3F)
(ramu,ece,-8.3F)
(rahul,cse,11.4F)
(budania,ece,5.4F)
Each input line is converted into a tuple, with the columns separated by commas.
grunt> DESCRIBE records;
records: {name: chararray,spl: chararray,cgpa: float}
Statement : 2
filtered_records = FILTER records BY cgpa > 0 AND cgpa < 10;
grunt> DUMP filtered_records;
This filters out all the records whose cgpa is negative (less than 0) or greater than 10:
(raghu,ece,9.0F)
(kumar,cse,8.5F)
(biju,ece,8.0F)
(mukul,cse,8.6F)
(ashish,ece,7.0F)
(subha,cse,8.3F)
(budania,ece,5.4F)
grunt> DESCRIBE filtered_records;
filtered_records: {name: chararray,spl: chararray,cgpa: float}
Statement : 3
The third statement uses the GROUP function to group the records relation by the specialization field.
grouped_records = GROUP filtered_records BY spl;
grunt> DUMP grouped_records;
(cse,{(kumar,cse,8.5F),(mukul,cse,8.6F),(subha,cse,8.3F)})
(ece,{(raghu,ece,9.0F),(biju,ece,8.0F),(ashish,ece,7.0F),(budania,ece,5.4F)})
grunt> DESCRIBE grouped_records;
grouped_records: {group: chararray,filtered_records: {name: chararray,spl: chararray,cgpa: float}}
We now have two rows, or tuples, one for each specialization in the input data. The first field in each tuple is the field being grouped by (the specialization), and the second field is a bag of tuples for that specialization. A bag is just an unordered collection of tuples, which in Pig Latin is represented using curly braces.
By grouping the data in this way, we have created a row per specialization, so now all that remains is to find the maximum cgpa for the tuples in each bag.
Statement : 4
max_cgpa = FOREACH grouped_records GENERATE group, MAX(filtered_records.cgpa);
FOREACH processes every row to generate a derived set of rows, using a GENERATE clause to define the fields in each derived row. In this example, the first field is group, which is just the specialization. The second field is a little more complex.
The filtered_records.cgpa reference is to the cgpa field of the filtered_records bag in the grouped_records relation. MAX is a built-in function for calculating the maximum value of fields in a bag. In this case, it calculates the maximum cgpa for the fields in each filtered_records bag.
grunt> DUMP max_cgpa;
(cse,8.6F)
(ece,9.0F)
grunt> DESCRIBE max_cgpa;
max_cgpa : {group: chararray,float}
Statement : 5
STORE max_cgpa INTO 'output/cgpa_out'
This command redirects the output of the script to a file (Local or HDFS) instead of printing the output on the console .
We've successfully calculated the maximum cgpa for each specialization.
With the ILLUSTRATE operator (e.g. grunt> ILLUSTRATE max_cgpa;), Pig provides a tool for generating a reasonably complete and concise sample dataset:
--------------------------------------------------------------------
| records | name: bytearray | spl: bytearray | cgpa: bytearray |
--------------------------------------------------------------------
| | kumar | cse | 8.5 |
| | mukul | cse | 8.6 |
| | ramu | ece | -8.3 |
--------------------------------------------------------------------
----------------------------------------------------------------
| records | name: chararray | spl: chararray | cgpa: float |
----------------------------------------------------------------
| | kumar | cse | 8.5 |
| | mukul | cse | 8.6 |
| | ramu | ece | -8.3 |
----------------------------------------------------------------
-------------------------------------------------------------------------
| filtered_records | name: chararray | spl: chararray | cgpa: float |
-------------------------------------------------------------------------
| | kumar | cse | 8.5 |
| | mukul | cse | 8.6 |
-------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------
| grouped_records | group: chararray | filtered_records: bag({name: chararray,spl: chararray,cgpa: float}) |
----------------------------------------------------------------------------------------------------------------
| | cse | {(kumar, cse, 8.5), (mukul, cse, 8.6)} |
----------------------------------------------------------------------------------------------------------------
-------------------------------------------
| max_cgpa | group: chararray | float |
-------------------------------------------
| | cse | 8.6 |
EXPLAIN max_cgpa
Use the above command to see the logical and physical plans created by Pig.
Monday, 26 November 2012
Delete a temp table in sql server if it already exists
IF OBJECT_ID('tempdb..##tmp') IS NOT NULL
BEGIN
    DROP TABLE ##tmp
END
CREATE TABLE ##tmp ( TableName varchar(255), DifferenceInCounts int )
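The same guard works for a session-local temp table; this is just a sketch with an assumed name and the same columns:
IF OBJECT_ID('tempdb..#tmp') IS NOT NULL
BEGIN
    DROP TABLE #tmp
END
CREATE TABLE #tmp ( TableName varchar(255), DifferenceInCounts int )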
Thursday, 15 November 2012
Date Functions in Hive
Date data types do not exist in Hive. In fact the dates are treated as strings in Hive. The date functions are listed below.
UNIX_TIMESTAMP()
This function returns the number of seconds from the Unix epoch (1970-01-01 00:00:00 UTC) using the default time zone.
UNIX_TIMESTAMP( string date )
This function converts the date in format 'yyyy-MM-dd HH:mm:ss' into Unix timestamp. This will return the number of seconds between the specified date and the Unix epoch. If it fails, then it returns 0.
Example: UNIX_TIMESTAMP('2000-01-01 00:00:00') returns 946713600
UNIX_TIMESTAMP( string date, string pattern )
This function converts the date to the specified date format and returns the number of seconds between the specified date and Unix epoch. If it fails, then it returns 0.
Example: UNIX_TIMESTAMP('2000-01-01 10:20:30','yyyy-MM-dd') returns 946713600
FROM_UNIXTIME( bigint number_of_seconds [, string format] )
The FROM_UNIX function converts the specified number of seconds from Unix epoch and returns the date in the format 'yyyy-MM-dd HH:mm:ss'.
Example: FROM_UNIXTIME( UNIX_TIMESTAMP() ) returns the current date including the time. This is equivalent to SYSDATE in Oracle.
TO_DATE( string timestamp )
The TO_DATE function returns the date part of the timestamp in the format 'yyyy-MM-dd'.
Example: TO_DATE('2000-01-01 10:20:30') returns '2000-01-01'
YEAR( string date )
The YEAR function returns the year part of the date.
Example: YEAR('2000-01-01 10:20:30') returns 2000
MONTH( string date )
The MONTH function returns the month part of the date.
Example: MONTH('2000-03-01 10:20:30') returns 3
DAY( string date ), DAYOFMONTH( date )
The DAY or DAYOFMONTH function returns the day part of the date.
Example: DAY('2000-03-01 10:20:30') returns 1
HOUR( string date )
The HOUR function returns the hour part of the date.
Example: HOUR('2000-03-01 10:20:30') returns 10
MINUTE( string date )
The MINUTE function returns the minute part of the timestamp.
Example: MINUTE('2000-03-01 10:20:30') returns 20
SECOND( string date )
The SECOND function returns the second part of the timestamp.
Example: SECOND('2000-03-01 10:20:30') returns 30
WEEKOFYEAR( string date )
The WEEKOFYEAR function returns the week number of the date.
Example: WEEKOFYEAR('2000-03-01 10:20:30') returns 9
DATEDIFF( string date1, string date2 )
The DATEDIFF function returns the number of days between the two given dates.
Example: DATEDIFF('2000-03-01', '2000-01-10') returns 51
DATE_ADD( string date, int days )
The DATE_ADD function adds the number of days to the specified date
Example: DATE_ADD('2000-03-01', 5) returns '2000-03-06'
DATE_SUB( string date, int days )
The DATE_SUB function subtracts the number of days from the specified date.
Example: DATE_SUB('2000-03-01', 5) returns '2000-02-25'
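Putting a few of these together in one query; the events table and its event_time string column (in 'yyyy-MM-dd HH:mm:ss' format) are invented for the illustration:
SELECT TO_DATE(event_time),
       YEAR(event_time), MONTH(event_time), DAY(event_time),
       DATEDIFF(TO_DATE(event_time), '2000-01-01'),
       FROM_UNIXTIME(UNIX_TIMESTAMP())
FROM events;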