Friday 27 September 2013

DynamoDB: Pros and Cons



DynamoDB
Pros
·         Scalable, simple, distributed, and flexible, with tunable consistency.
·         Hosted and fully managed by Amazon.
·         Plans are cheap: a free tier allows more than 40 million database operations per month, and pricing is based on provisioned throughput.
·         Automatic data replication across multiple AWS availability zones to protect data and provide high uptime
·         Average service-side latencies in single-digit milliseconds.
·         A single API call lets you atomically increment or decrement numerical attributes.
·         Uses proven secure methods to keep your data safe.
·         The AWS Management Console lets you monitor your tables' operational metrics, backed by Amazon's infrastructure support.
·         Tightly integrated with Amazon Elastic MapReduce (Amazon EMR).
·         Composite key support
·         Offers conditional updates.
·         Supports Hadoop integration (MapReduce, Hive).
·         Has the properties of a distributed hash table, making it a notable performance booster compared to MySQL.
·         Consistent performance
·         Low learning curve
·         It comes with a decent object mapper for Java


Cons

·         Poor documentation
·         Limited data types: it doesn't accept binary data, so you cannot store images, byte arrays, etc. You can work around this by base64-encoding everything as strings, but base64 encoding produces larger data, and you end up counting your bytes so you don't hit the 64KB item limit!
·         Poor query comparison operators
·         Unable to do complex queries
·         64KB limit on row size
·         1MB limit on querying
·         Deployable only on AWS.
·         Indexes on column values are not supported, and neither are secondary indexes.
·         Integrated caching is not well explained in the documentation.
·         When you create a table programmatically (or even via the AWS Console), the table doesn't become available instantly.
·         In an RDBMS we get ACID guarantees; DynamoDB provides no such guarantee.
·         DynamoDB is an expensive, extremely low-latency solution, and a poor fit if you are trying to store more than 64KB per item.
·         Indexing - Changing or adding keys on-the-fly is impossible without creating a new table.
·         Queries - querying data is extremely limited, especially for non-indexed data. Joins are of course impossible, so you have to manage complex data relations in your code/cache layer.
·         Backup - tedious backup procedure as compared to the slick backup of RDS
·         GUI - bad UX, limited search, no fun.
·         Speed - response time is problematic compared to RDS. You find yourself building elaborate caching mechanisms to compensate for it in places where you would have settled for RDS's internal caching.
·         A big limitation of DynamoDB and other non-relational databases is the lack of multiple indices.
·         DynamoDB is great for lookups by key, not so good for queries, and abysmal for queries with multiple predicates. (Esp. for Eventlog tables)
·         DynamoDB does not support transactions in the traditional SQL sense. Each write operation is atomic to an item. A write operation either successfully updates all of the item's attributes or none of its attributes.
·         Once you hit the read or write limit, your requests are denied until enough time has elapsed.
·         No server-side scripts
·         No triggers
·         No Foreign Keys

Monday 16 September 2013

Difference between varchar and nvarchar

Differences:

1. Character Data Type

Varchar - Non-Unicode data
NVarchar - Unicode data

2. Character Size

Varchar - 1 byte per character
NVarchar - 2 bytes per character

3. Maximum Length

Varchar - 8,000 characters
NVarchar - 4,000 characters

4. Storage Size

Varchar - actual length (in bytes)
NVarchar - 2 times actual length (in bytes)

5. Example



DECLARE @FirstName AS VARCHAR(50) = 'Techie'
SELECT @FirstName AS FirstName,
       DATALENGTH(@FirstName) AS Length

Result:
FirstName Length
Techie    6


DECLARE @FirstName AS NVARCHAR(50) = N'Techie'
SELECT @FirstName AS FirstName,
       DATALENGTH(@FirstName) AS Length

Result:
FirstName Length
Techie    12




 * The abbreviation for Varchar is Variable Length character String.

* The abbreviation of NVarchar is uNicode Variable Length character String.
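To see the Unicode difference in action, here is a small sketch (an illustration assuming a database with a default Latin collation; the sample string is arbitrary). The varchar variable silently degrades characters it cannot represent to '?', while the nvarchar variable preserves them:

DECLARE @v VARCHAR(50) = N'привет'   -- non-Unicode: stored as '??????' under a Latin collation
DECLARE @n NVARCHAR(50) = N'привет'  -- Unicode: the characters are preserved
SELECT @v AS VarcharValue, @n AS NVarcharValue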

Monday 9 September 2013

Difference between Table Variable and temporary table


Scope - Table variables: current batch. Temporary tables: current session, including nested stored procedures (global temporary tables: all sessions).
Usage - Table variables: UDFs, stored procedures, triggers, batches. Temporary tables: stored procedures, triggers, batches.
Creation - Table variables: DECLARE statement only. Temporary tables: CREATE TABLE statement or SELECT INTO statement.
Table name - Table variables: maximum 128 characters. Temporary tables: maximum 116 characters.
Column data types - Table variables: can use user-defined data types and XML collections. Temporary tables: user-defined data types and XML collections must be in tempdb to use.
Collation - Table variables: string columns inherit collation from the current database. Temporary tables: string columns inherit collation from the tempdb database.
Indexes - Table variables: can only have the indexes that are automatically created by PRIMARY KEY and UNIQUE constraints in the DECLARE statement. Temporary tables: indexes can be added after the table has been created.
Constraints - Table variables: PRIMARY KEY, UNIQUE, NULL, CHECK, but they must be part of the DECLARE statement; FOREIGN KEY not allowed. Temporary tables: PRIMARY KEY, UNIQUE, NULL, CHECK, either in the CREATE TABLE statement or added afterwards; FOREIGN KEY not allowed.
Post-creation DDL (indexes, columns) - Table variables: not allowed. Temporary tables: allowed.
Data insertion - Table variables: INSERT statement (SQL 2000: cannot use INSERT ... EXEC). Temporary tables: INSERT statement, including INSERT ... EXEC, and SELECT INTO.
Explicit identity values (SET IDENTITY_INSERT) - Table variables: not supported. Temporary tables: supported.
TRUNCATE TABLE - Table variables: not allowed. Temporary tables: allowed.
Destruction - Table variables: automatic at the end of the batch. Temporary tables: explicit with DROP TABLE, or automatic when the session ends (global: also when no other session has statements using the table).
Transactions - Table variables: locks last only for the length of the update against the table variable, so less transaction overhead. Temporary tables: locks last for the length of the transaction, so more overhead than table variables.
Stored procedure recompilations - Table variables: not a cause. Temporary tables: creating the table and inserting data can cause procedure recompilations.
Rollbacks - Table variables: not affected (data is not rolled back). Temporary tables: affected (data is rolled back).
Statistics - Table variables: the optimizer cannot create statistics on columns, so it treats a table variable as having one row when creating execution plans. Temporary tables: the optimizer can create statistics on columns and uses the actual row count when generating execution plans.
Passing to stored procedures - Table variables: SQL 2008 onwards only, with a predefined user-defined table type. Temporary tables: cannot be passed, but remain in scope for nested procedures.
Explicitly named objects (indexes, constraints) - Table variables: not allowed. Temporary tables: allowed, but be aware of multi-user issues.
Dynamic SQL - Table variables: must be declared inside the dynamic SQL. Temporary tables: can be created prior to calling the dynamic SQL and used inside it.
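A minimal T-SQL sketch contrasting the two (the table and column names are illustrative):

-- Table variable: created with DECLARE, scoped to the current batch
DECLARE @Orders TABLE (OrderID INT PRIMARY KEY, Amount MONEY);
INSERT INTO @Orders VALUES (1, 100.00), (2, 250.00);
SELECT * FROM @Orders;

-- Temporary table: created with CREATE TABLE, scoped to the current session
CREATE TABLE #Orders (OrderID INT PRIMARY KEY, Amount MONEY);
INSERT INTO #Orders VALUES (1, 100.00), (2, 250.00);
CREATE INDEX IX_Orders_Amount ON #Orders (Amount); -- post-creation DDL: allowed only on the temp table
SELECT * FROM #Orders;
DROP TABLE #Orders;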

Wednesday 4 September 2013

Introduction to Table-Valued Parameters with Example

Table-Valued Parameters 
To use a table-valued parameter we need to follow the steps shown below:
  1. Create a table type and define the table structure.
  2. Create a stored procedure that has a parameter of the table type.
  3. Declare a table type variable and reference the table type.
  4. Use the INSERT statement to populate the variable.
  5. Pass the variable to the procedure.
For example, let's create a Department table and pass a table-type variable to a procedure that inserts data. Afterwards we will query the table and confirm that the entire content of the table-valued parameter was inserted.
Department:
CREATE TABLE Department
(
DepartmentID INT PRIMARY KEY,
DepartmentName VARCHAR(30)
)
GO
1. Create a TABLE TYPE and define the table structure:
CREATE TYPE DeptType AS TABLE
(
DeptId INT, DeptName VARCHAR(30)
);
GO

2. Create a STORED PROCEDURE that has a parameter of table type:
CREATE PROCEDURE InsertDepartment
    @InsertDept_TVP DeptType READONLY
AS
INSERT INTO Department (DepartmentID, DepartmentName)
SELECT * FROM @InsertDept_TVP;
GO
Important points to remember:
-  Table-valued parameters must be passed as READONLY parameters to SQL routines. You cannot perform DML operations like UPDATE, DELETE, or INSERT on a table-valued parameter in the body of a routine.
-  You cannot use a table-valued parameter as the target of a SELECT INTO or INSERT EXEC statement. A table-valued parameter can be in the FROM clause of SELECT INTO, or in the INSERT EXEC string or stored procedure.
3. Declare a table type variable and reference the table type.
DECLARE @DepartmentTVP AS DeptType;
4. Use the INSERT statement to populate the variable.
INSERT INTO @DepartmentTVP (DeptId, DeptName)
VALUES (1, 'Accounts'),
       (2, 'Purchase'),
       (3, 'Software'),
       (4, 'Stores'),
       (5, 'Marketing');
5. We can now pass the variable to the procedure and execute it.
EXEC InsertDepartment @DepartmentTVP;
GO

Let's check that the data was inserted into the Department table.
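A quick verification query (a simple sketch; assuming the steps above ran successfully, it returns the five rows we passed in):

SELECT DepartmentID, DepartmentName
FROM Department;
-- Expected: (1, Accounts), (2, Purchase), (3, Software), (4, Stores), (5, Marketing)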
Conclusion:
Table-valued parameters are a new parameter type in SQL Server 2008 that provide a more efficient way of passing a table-type variable than using a temporary table or passing many individual parameters. They make it easier to keep complex business logic in a single routine, and they reduce round trips to the server, improving performance.

Monday 2 September 2013

Optimizing SQL Query

  • Optimizing the WHERE clause - there are many cases where the index access path of a column in the WHERE clause is not used even though an index on that column has been created. Avoid such cases to make the best use of your indexes, which will ultimately improve performance. Some of these cases are: COLUMN_NAME IS NOT NULL (the ROWID of a NULL is not stored in an index), COLUMN_NAME NOT IN (value1, value2, value3, ...), COLUMN_NAME != expression, and COLUMN_NAME LIKE '%pattern' (whereas COLUMN_NAME LIKE 'pattern%' uses the index access path). Using expressions or functions on indexed columns will also prevent the index access path from being used. So, use them wisely! (See the first sketch after this list.)
  • Using WHERE instead of HAVING - a WHERE clause may take advantage of the index defined on the column(s) it references, and it filters rows before they are grouped, whereas HAVING filters only after grouping. (See the second sketch after this list.)
  • Using the leading index columns in the WHERE clause - the WHERE clause can use a composite index access path only if it specifies the leading column(s) of that composite index; otherwise the indexed access path won't be used. (See the third sketch after this list.)
  • Indexed scan vs full table scan - an indexed scan is faster only if we are selecting just a few rows of a table; otherwise a full table scan should be preferred. It's estimated that an indexed scan is slower than a full table scan if the SQL statement selects more than 15% of the rows of the table. So, in all such cases use SQL hints to force a full table scan and suppress the use of the pre-defined indexes. Okay... any guesses why a full table scan is faster when a large percentage of rows is accessed? Because an indexed scan causes multiple reads per row accessed, whereas a full table scan can read all the rows contained in a block in a single logical read operation.
  • Using ORDER BY for an indexed scan - the optimizer uses an indexed scan if the column specified in the ORDER BY clause has an index defined on it. It'll use an indexed scan even if the WHERE clause doesn't contain that column (or even if the WHERE clause itself is missing). So, analyze whether you really want an indexed scan or a full table scan, and if the latter is preferred in a particular scenario, use the 'FULL' SQL hint to force the full table scan.
  • Minimizing table passes - it normally results in a better performance for obvious reasons.
  • Joining tables in the proper order - the order in which tables are joined normally affects the number of rows processed by the JOIN operation, so proper ordering of tables in a JOIN may mean processing fewer rows, which will in turn improve performance. The key to deciding the proper order is to place the most restrictive filtering condition in the early phases of a multiple-table JOIN. For example, when joining a master table and a details table, it's better to access the master table first, as accessing the details table first may result in more rows getting joined.
  • Simple is usually faster - yeah... instead of writing one very complex SQL statement, if we break it into multiple simple SQL statements, the chances are quite high that the performance will improve. Make use of the EXPLAIN PLAN and TKPROF tools to analyze both approaches, and stick to the complex SQL only if you're very sure about its performance.
  • Using ROWID and ROWNUM wherever possible - these special pseudo-columns can be used to improve the performance of many SQL queries. A ROWID search is the fastest access path in an Oracle database, and this luxury should be enjoyed wherever possible. ROWNUM comes in really handy when we want to limit the number of rows returned. (See the fourth sketch after this list.)
  • Usage of explicit cursors is better - explicit cursors perform better, as implicit cursors result in an extra fetch operation. Implicit cursors are opened by the Oracle Server for INSERT, UPDATE, DELETE, and SELECT statements, whereas explicit cursors are opened by the writers of the query explicitly using the DECLARE, OPEN, FETCH, and CLOSE statements.
  • Reducing network traffic - arrays and PL/SQL blocks can be used effectively to reduce network traffic, especially in scenarios where a huge amount of data requires processing. For example, a single INSERT statement can insert thousands of rows if arrays are used. This obviously results in fewer DB passes, which in turn improves performance by reducing network traffic. Similarly, if we can club multiple SQL statements into a single PL/SQL block, the entire block can be sent to the Oracle Server in a single network communication, which will eventually improve performance. (See the fifth sketch after this list.)
  • Using Oracle parallel query option - Since Oracle 8, even the queries based on indexed range scans can use this parallel query option if the index is partitioned. This feature can result in an improved performance in certain scenarios.
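First sketch - index-friendly vs index-unfriendly WHERE predicates. This is a minimal illustration; the employees table and its index on last_name are assumptions, not from the original post:

-- Assume: CREATE INDEX emp_last_name_idx ON employees (last_name);

-- Index-unfriendly: a leading wildcard cannot use the index access path
SELECT * FROM employees WHERE last_name LIKE '%son';

-- Index-friendly: a trailing wildcard can use the index access path
SELECT * FROM employees WHERE last_name LIKE 'John%';

-- Index-unfriendly: a function wrapped around the indexed column
SELECT * FROM employees WHERE UPPER(last_name) = 'SMITH';

-- Index-friendly: leave the indexed column bare
SELECT * FROM employees WHERE last_name = 'Smith';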
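Second sketch - moving a filter from HAVING into WHERE so rows are eliminated before grouping (same illustrative schema):

-- Slower: all departments are grouped first, then the result is filtered
SELECT department_id, COUNT(*)
FROM employees
GROUP BY department_id
HAVING department_id = 10;

-- Faster: the filter is applied before grouping and can use an index on department_id
SELECT department_id, COUNT(*)
FROM employees
WHERE department_id = 10
GROUP BY department_id;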
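Third sketch - the leading column of a composite index (the index definition is an illustrative assumption):

-- Assume: CREATE INDEX emp_dept_job_idx ON employees (department_id, job_id);

-- Can use the composite index: the leading column is present
SELECT * FROM employees WHERE department_id = 10 AND job_id = 'CLERK';
SELECT * FROM employees WHERE department_id = 10;

-- Typically cannot use it: the leading column is missing
SELECT * FROM employees WHERE job_id = 'CLERK';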
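Fourth sketch - ROWNUM used to limit the number of rows returned (the classic Oracle top-N pattern):

-- Return only the 10 most recently hired employees
SELECT *
FROM (SELECT * FROM employees ORDER BY hire_date DESC)
WHERE ROWNUM <= 10;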
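Fifth sketch - reducing network traffic with a bulk-bound array insert, a minimal PL/SQL sketch (the departments table and its columns are illustrative assumptions):

DECLARE
  TYPE name_tab IS TABLE OF VARCHAR2(30) INDEX BY PLS_INTEGER;
  v_names name_tab;
BEGIN
  -- Build the rows locally; the values are illustrative
  FOR i IN 1 .. 1000 LOOP
    v_names(i) := 'Dept_' || i;
  END LOOP;
  -- One bulk-bound INSERT sends all 1000 rows to the server together
  FORALL i IN 1 .. 1000
    INSERT INTO departments (department_id, department_name)
    VALUES (i, v_names(i));
END;
/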