Different areas within a business increasingly demand that things get done quickly. One way is to optimize a process so it runs faster; another is to put more hands on it so the load is spread out, each person works on a few smaller parts of a bigger task, and the task finishes within the desired time frame. The same approaches exist in the design of SQL Server processes. There are always ways to tune a procedure: indexes, table structures, processing order, statistics, plan guides, and so on. However, you may eventually find that those do not always work as you expected. That is where splitting a monster process into multiple smaller independent tasks and running them in parallel comes into play. In this series, I am going to talk about ways to schedule those tasks and control the order of execution with T-SQL. Error handling will not be discussed here, since the code can always be written so that it handles errors and completes successfully.

Using SQL Agent jobs to schedule multiple tasks is an obvious and simple way to run tasks in parallel. There is an array of procedures built into msdb that allow you to control jobs. You could use them directly in your process design, but this is not recommended: decoupling the business process from the system process provides more flexibility and scalability. To separate them, the two procedures below need to be created, so that if the underlying scheduling system ever has to be changed, the business processes need fewer changes.

  1. CreateTask
  2. TaskStatus

CreateTask creates and starts a task. We don't need a job cleanup process here, since we can define the job to remove itself after its first run. The code of CreateTask is

create procedure CreateTask (@TaskName sysname, @Handle uniqueidentifier output, @TaskBody nvarchar(max))
as
begin
	set nocount on
	if isnull(@TaskName, '') = '' -- send an error back if there is no task name
	begin
		raiserror('No task name', 16, 1)
		return
	end
	if isnull(@TaskBody, '') = '' -- send an error back if there is no task body
	begin
		raiserror('No task body', 16, 1)
		return
	end
	declare @InternalTaskName varchar(128), @DatabaseName sysname
	-- build a unique job name: <spid>-<task name>-<guid>
	-- (note: a very long task name can push this past 128 characters and truncate the guid)
	select	@InternalTaskName = cast(@@spid as nvarchar(20)) + '-' + @TaskName + '-' + cast(newid() as varchar(50)),
			@DatabaseName = DB_NAME(), @Handle = null

	begin transaction
	-- @delete_level = 3: SQL Server Agent removes the job automatically after its first run
	exec msdb.dbo.sp_add_job @job_name = @InternalTaskName, @delete_level = 3, @job_id = @Handle output
	-- inject "set context_info" so the task session can be traced back to its creator;
	-- varchar(258) holds the full hex literal: '0x' plus 2 characters per byte of varbinary(128)
	select @TaskBody = 'set context_info ' + convert(varchar(258), cast(@InternalTaskName as varbinary(128)), 1) + ';
	go
	' + @TaskBody
	exec msdb.dbo.sp_add_jobserver @job_id = @Handle, @server_name = '(LOCAL)'
	exec msdb.dbo.sp_add_jobstep @job_id = @Handle, @step_name = 'Task', @database_name = @DatabaseName, @command = @TaskBody
	exec msdb.dbo.sp_start_job @job_id = @Handle
	commit
end
  1. @TaskName: the name of the task; it does not need to be unique
  2. @Handle: the job id returned by sp_add_job
  3. @TaskBody: the T-SQL code of the task body
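
For example, a minimal call looks like this (the waitfor task body is just a placeholder of my own):

declare @h uniqueidentifier
exec CreateTask 'Demo', @h output, 'waitfor delay ''00:00:05''' -- the task starts running as a background job
select @h as TaskHandle -- keep the handle so the task can be checked later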

There are two important tricks here. One is the @delete_level = 3 argument of sp_add_job, which tells SQL Server Agent to remove the job automatically after its first run. The second trick is that I inject set context_info into the task body. Context Info is a binary(128) value that can be attached to the caller session. It can be read by the function context_info() within the session that set it, or accessed from sys.dm_exec_sessions.

select session_id, context_info from sys.dm_exec_sessions

This allows DBAs to identify the running tasks and link them back to the creator session.
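
Because the injected value is just the internal task name cast to varbinary, it can be cast back to readable text. A minimal sketch (the column alias and the filter are my own additions; trailing zero bytes may show up after the name):

select	session_id,
		cast(context_info as varchar(128)) as InternalTaskName -- decode the injected task name
from	sys.dm_exec_sessions
where	context_info <> 0x -- skip sessions whose context info was never set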

TaskStatus checks the status of the job by checking the existence of the job, since we know the job is started as soon as it's created and removed automatically once it completes.

create function dbo.TaskStatus(@Handle uniqueidentifier)
returns bit
as
begin
	-- 1 = the job still exists, so the task is running or queued; 0 = the job deleted itself, so the task is done
	return case when exists(select 1 from msdb.dbo.sysjobs where job_id = @Handle) then 1 else 0 end
end
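
If you only need to wait on a single task, a simple polling loop is enough; the one-second delay is my own choice to keep the polling light:

declare @h uniqueidentifier
exec CreateTask 'SingleTask', @h output, 'waitfor delay ''00:00:10'''
while dbo.TaskStatus(@h) = 1 -- keep polling until the job has deleted itself
	waitfor delay '00:00:01' -- poll once per second instead of spinning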

Now we can run a test as shown below

set nocount on
declare @SynchronizationTable table(Handle uniqueidentifier primary key)
declare @handle uniqueidentifier, @i int
select @i = 1
while @i <= 3 -- start 3 tasks; each one just waits 10 seconds
begin
	exec CreateTask 'MyTest', @handle output, 'waitfor delay ''00:00:10'''
	insert into @SynchronizationTable values(@handle)
	select @i = @i + 1
end
while (1=1) -- wait until every task has completed
begin
	select @handle = null
	select top 1 @handle = Handle from @SynchronizationTable
	if @handle is null
		break; -- all tasks are done
	if dbo.TaskStatus(@handle) = 0
		delete @SynchronizationTable where Handle = @handle
	else
		waitfor delay '00:00:01' -- avoid spinning while the task is still running
end

In the first loop, we start 3 tasks and want them to run at the same time. In the second loop, the caller process waits until all the jobs complete. This script took 20 seconds.
What would happen if you changed the first loop to @i <= 50? It really will create 50 jobs and run them, and you will discover that creating those jobs and notifying SQL Agent to start them is not a lightweight process. On my SQL Server 2008 R2, it took almost 4 minutes to end in an error; the synchronization script never returned, and the job that caused the error was never started. While the jobs are being created, even refreshing the job list in SSMS is a struggle.

Msg 22022, Level 16, State 1, Line 0
Unable to post notification to SQLServerAgent (reason: The maximum number of pending SQLServerAgent notifications has been exceeded.  The notification will be ignored.)

This is definitely a sign that the solution is unstable. It may not happen on your machine, but there is always a chance that some limit is reached and your original design breaks down. So this is not a recommended way to launch multiple threads from a session, although it is acceptable if the parallelism of your process is low and such parallel executions are requested rarely. I should also mention that every auto-deleted job is logged in the SQL Agent logs; if this runs very often, the log will grow too fast and cause issues.
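
If you do stay with this approach despite those limits, one mitigation is to cap how many tasks are pending at once so SQL Agent is never flooded. Here is a sketch of a throttled version of the earlier test (the cap of 5 is an arbitrary choice of mine):

set nocount on
declare @SynchronizationTable table(Handle uniqueidentifier primary key)
declare @handle uniqueidentifier, @i int, @MaxParallel int
select @i = 1, @MaxParallel = 5 -- never keep more than 5 jobs alive at once
while @i <= 50
begin
	-- prune finished tasks, then either wait at the cap or create the next task
	delete s from @SynchronizationTable s where dbo.TaskStatus(s.Handle) = 0
	if (select count(*) from @SynchronizationTable) >= @MaxParallel
		waitfor delay '00:00:01'
	else
	begin
		exec CreateTask 'MyTest', @handle output, 'waitfor delay ''00:00:10'''
		insert into @SynchronizationTable values(@handle)
		select @i = @i + 1
	end
end
-- the remaining handles can then be drained exactly as in the earlier test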

Sorry for giving you a non-scalable and weaker solution in the first post of this series, but that doesn't mean it's unusable. More robust solutions are coming in the rest of the posts.


One thought on “Parallel Task Scheduling (1) – Jobs”

  1. Thanks for this, Jon. I’m looking forward to reading the rest of this series tonight.

    I work with a more primitive version of this idea in my own organization.

    The system has one huge table that contains all the operational data. Queries to this table come from all over the system.

    There is a queue table that contains updates to the operational data. Inserts to this table come from all over the system.

    The rows in the queue table are to be merged into the huge table. The queue allows the system to process updates in batches rather than one at a time.

    The SQL Agent has several almost identical jobs that process the queue in parallel.

    Each job is a T-SQL script that calls a stored procedure. The only difference in each script is the identifier value each script passes into the procedure. Job 1 passes ID 1, job 2 passes ID 2, and so on.

    The stored procedure is a loop. It maintains some metadata about each job identifier, reads some rows from the queue table, and merges the rows into the huge table.

    Choosing the correct level of parallelism is left up to my team, the system administrators.

    To decrease the level of parallelism, we stop and disable one of the existing jobs.

    To increase the level of parallelism up to a previous maximum, we enable and start one of the disabled jobs.

    To increase the level of parallelism beyond a previous maximum, we create a new job as a copy of an old one and replace the old identifier with a new one.

    It’s not the most elegant system, but it works well enough in practice.

    By the way, I discovered your blog thanks to Paul Fryer’s review of your SQL Saturday talk in Portland:
    http://fryerblog.com/post/11358861869/multi-threading-with-tsql
