Locking and Blocking (3) – Update and Exclusive Locks

Update and Exclusive locks are mostly used by the Data Manipulation Language (DML) statements DELETE and UPDATE. First of all, let's have a quick glance below:

use tempdb
set nocount on
-- set up a small test table
if OBJECT_ID('abc') is not null
	drop table abc
create table abc(id int)
insert into abc values(1),(2),(3)
begin transaction
delete abc where id = 1
-- list the locks held by this session while the delete is still open
exec sp_lock @@spid
rollback transaction
/*
spid   dbid   ObjId       IndId  Type Resource                         Mode     Status
------ ------ ----------- ------ ---- -------------------------------- -------- ------
52     2      1365579903  0      RID  1:172:0                          X        GRANT
52     2      1365579903  0      PAG  1:172                            IX       GRANT
52     2      1365579903  0      TAB                                   IX       GRANT
52     1      439672614   0      TAB                                   IS       GRANT
*/

From the result, you can ignore the last line: the IS lock on the master database (dbid = 1) is taken while sp_lock looks up values in master, such as the names RID, PAG, and TAB. While the data is being deleted, an IX (Intent Exclusive) lock is granted on the table (TAB), another IX lock on the page where the record is located, and an X (eXclusive) lock on the row. After the transaction is rolled back or committed, the locks on those resources are released. It seems as if the Update lock never shows up. That is true as long as you use this tool or sys.dm_tran_locks to check the locking behaviour; you will see more locks using the approach below.
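
For reference, sys.dm_tran_locks shows the same information as sp_lock. A minimal sketch, run inside the open transaction before the ROLLBACK (the column selection here is just a convenience, not the full column list):

-- locks held by the current session while the DELETE is still uncommitted
select resource_type, resource_database_id, resource_description,
       request_mode, request_status
from sys.dm_tran_locks
where request_session_id = @@spid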

Read more

Locking and Blocking (2) – Shared Locks and Lock Resource

I talked about the tools that can be used for analyzing locking behavior in the previous post. In this post, I am going to describe Shared locks and Intent Shared locks, and how to extract the index key hash value and the index from the values returned by Extended Events. Shared locks and Intent Shared locks are used while reading data from tables. They are called "shared" because if one session holds an S lock on a record, other sessions can hold an S lock on the same record at the same time. You may think this lock is useless. That would be true if SQL Server only supported those two types of locks; more lock types will come in later posts. The S lock conflicts with the Exclusive (X) lock: while a Shared lock is held, an Exclusive lock has to wait. Think about this question: would you allow others to drop a database you are connected to? No, because there is a Shared lock on the database while a connection is on it.
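
As a quick illustration of that last point, every open connection holds a database-level S lock, which is what blocks a DROP DATABASE. A minimal sketch of one way to see it:

use tempdb
-- the current connection holds a database-level S lock on the database it is in
select resource_type, request_mode, request_status
from sys.dm_tran_locks
where request_session_id = @@spid
  and resource_type = 'DATABASE'
-- expect one granted S lock with resource_type DATABASE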

Read more

Locking and Blocking (1) – Tools

I am going to start a series of posts about locking and blocking in SQL Server, which is one of my favorite topics. As SQL Server developers, many professionals focus more on programming languages, database design, features (like Service Broker), writing SQL to manipulate data effectively, and process design. Those are all valid concerns, but one thing that quite frequently gets neglected is locking. Especially after SQL Server 2005, snapshot isolation levels are used more often than the others, and it can seem as if SQL Server never locks anything. Actually, that is not the case: from a locking perspective, nothing changed between the versions before and after 2005. Once you fully understand the locking and blocking mechanism, you will find you can easily write high-concurrency applications, prevent excessive blocking, and design more sophisticated concurrent SQL process controls.

Read more

SQL Saturday #92 @Portland

SQL Saturday 92 Portland, Oct 8, 2011. It was a great experience presenting one of my favorite topics, Multi-threading TSQL. About 20 folks attended the session; thanks for that. Here you can download the slide deck for my session. Alternatively, you can download it from SQL Saturday's website: http://www.sqlsaturday.com/92/schedule.aspx

Restore Database With Partitions Excluded (or Selected Partitions)

Refreshing a QA environment is not easy work when your backup files are terabytes in size. The challenges are not only space but also time. Some tables are never involved in the QA phase, so restoring them wastes both time and space. If you put such tables in separate filegroups, you have the option of not restoring them; if they are still accessed by QA tests, you can rename the existing corrupted table (whose filegroups were not restored) and create an empty table in its place to keep the logic correct. Now, if there is a huge table holding the historical data of the system and QA only needs the latest 2 partitions' worth of data for testing, most DBAs in this case will restore the entire database first, switch the unneeded partitions out of the table, then drop the table and its filegroups. What if a space limitation only allows you to restore part of the database? The solution would be:

  • Restore the database with the primary filegroup and the desired filegroups
  • Create an empty table with the same structure as the original, including the partition scheme
  • Switch the desired partitions from the existing table to the empty table
  • Rename the tables

That’s a big work especially when you have hundreds of partitions across multiple tables. Here I post my script for that. It supports full, differential, log, and point in time retore, as well as multiple partitions over tables.

Read more

Procedures for API Cursor

SQL Server API cursors were designed for data access components years ago. They share the same framework as T-SQL cursors but offer more powerful features, such as creating a cursor over the result set returned by a procedure. This article lists the procedures for API cursors. They are undocumented and deprecated, so please use them with caution. The major part of this article is copied from http://jtds.sourceforge.net/apiCursors.html.
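
As a flavor of what the article covers, here is a minimal sketch using three of those procedures. Since they are undocumented, treat the scroll, concurrency, and fetch option values below as illustrative rather than authoritative:

declare @cur int, @rowcount int
-- open a server-side API cursor over an ad hoc statement
exec sp_cursoropen @cur output,
     N'select name from sys.objects order by name',
     @scrollopt = 1,                 -- keyset cursor (illustrative value)
     @ccopt = 1,                     -- read-only concurrency (illustrative value)
     @rowcount = @rowcount output
-- fetch the next 10 rows, then release the cursor
exec sp_cursorfetch @cur, 2, 0, 10   -- 2 = fetch next (illustrative value)
exec sp_cursorclose @cur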

Read more

SQL Saturday #92 @Portland

SQL Saturday 92 Portland, Oct 8, 2011. I am going to present "Multi-threading TSQL" in Portland on Oct 8th, 2011, http://www.sqlsaturday.com/92/eventhome.aspx. This session will tell you how to use pure T-SQL to implement parallel SQL processing with complex ordering, such as how to start 2 threads at the beginning, synchronize them, then start another n threads for … Read more

SQL Server Transaction Log Buffers

The transaction log is a very important component of SQL Server: it keeps transactions durable even in the event of a system failure. Log records are generated when a modification takes place on data pages in memory that will eventually be written to the data files of a database. Insert, delete, and update DML operations generate transaction log records. DDL operations, such as ALTER TABLE and CREATE PROCEDURE, which cause changes to the system tables, generate transaction log records as well. Transaction log records are generated by transactions, and any modification of persistent data in the data files is at least implicitly part of a transaction. As far as I know, table variables are not transactional, even though SQL Server actually writes their data into the data files of tempdb (this is one reason why table variables are faster).
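
A quick way to see that last point about table variables, as a minimal sketch:

declare @tv table (id int)
create table #tt (id int)
begin transaction
insert into @tv values (1)
insert into #tt values (1)
rollback transaction
-- the temp table insert is rolled back; the table variable insert is not,
-- because table variable modifications do not take part in the user transaction
select (select count(*) from @tv) as tv_rows,  -- returns 1
       (select count(*) from #tt) as tt_rows   -- returns 0
drop table #tt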

Read more

A Procedure for Creating Database Snapshot

A database snapshot is a great SQL Server built-in feature that lets programmers and DBAs quickly create a read-only copy of an operational database in many situations. Sometimes, creating a database snapshot is daunting work, especially when you have hundreds of database files. Here I give you the code to generate a database snapshot automatically with … Read more
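
For context, creating a snapshot by hand uses CREATE DATABASE … AS SNAPSHOT, which is the statement such a procedure would generate. A minimal sketch with illustrative database, logical file, and path names:

create database Sales_Snapshot_20111008
on ( name = Sales_Data, filename = 'D:\Snapshots\Sales_Data_20111008.ss' )
as snapshot of Sales
-- one (name, filename) entry is required for every data file of the source database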

Cold and Hot Backup

There is always a chance that a database will suffer a severe system failure and become corrupted, caused, for instance, by memory failure, hard drive failure, or power failure. Once a database gets corrupted, it is almost hopeless to recover it without a backup, so it is very important to back up the databases of any database system. There are two big categories in the backup world – cold backup and hot backup.

Read more