Stopping MSDB bloat due to frequent backups

I’ve always wanted to catch a truly fat fish. In fall many species of fish feed heavily to put on fat for the winter, and when catching them you can see the stomach pushing the belly of the fish out and making them look quite bloated. This is the reason that fishing in the cold wind and rain of late fall is worth the pain and sacrifice.
I recently ran into an issue where I noticed the MSDB database on an instance was growing quite bloated. The MSDB database contains data related to SQL Agent jobs, Database Mail, SSIS packages, backups, and maintenance plans. In most cases it should be fairly small, but I observed it at close to 4 GB.
20150325 Bloated MSDB Initial Size
To find what was taking all the space I ran this query:

USE msdb

SELECT  OBJECT_NAME(i.OBJECT_ID) AS objectName ,
        i.[name] AS indexName ,
        SUM(a.total_pages) AS totalPages ,
        SUM(a.used_pages) AS usedPages ,
        SUM(a.data_pages) AS dataPages ,
        ( SUM(a.total_pages) * 8 ) / 1024 AS totalSpaceMB ,
        ( SUM(a.used_pages) * 8 ) / 1024 AS usedSpaceMB ,
        ( SUM(a.data_pages) * 8 ) / 1024 AS dataSpaceMB
FROM    sys.indexes i
        INNER JOIN sys.partitions p ON i.OBJECT_ID = p.OBJECT_ID
                                       AND i.index_id = p.index_id
        INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
GROUP BY i.OBJECT_ID ,
        i.index_id ,
        i.[name]
ORDER BY SUM(a.total_pages) DESC ,
        OBJECT_NAME(i.OBJECT_ID)
GO

20150325 Bloated MSDB Biggest Space Users
This showed the backupfile table to be taking over half the database space, with some other associated backup tables taking up quite a bit of the rest. Looking in MSDN, I found that the backupset table contains a row for each backup set, and a backup set contains the backup from a single, successful backup operation. At the bottom of the MSDN page I found this: “To reduce the number of rows in this table and in other backup and history tables, execute the sp_delete_backuphistory stored procedure.” I followed the hyperlink to the MSDN page describing this stored procedure. That page describes it as follows: “Reduces the size of the backup and restore history tables by deleting the entries for backup sets older than the specified date. Additional rows are added to the backup and restore history tables after each backup or restore operation is performed; therefore, we recommend that you periodically execute sp_delete_backuphistory.”
I decided to get a baseline of how many rows are currently in the backupset table, then start deleting rows using this stored procedure. For this instance we never have to look at backups older than a month at the very most, but to be safe I would prefer not to delete backups that are too recent. I decided I wanted to reduce the record count by about 75%, and then monitor growth to see how much data I should actually keep.
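A couple of quick counts against the backup history tables give that baseline; this is only a sketch, but backupset and backupfile are usually the tables worth watching:

SELECT COUNT(*) AS backupset_rows FROM msdb.dbo.backupset;
SELECT COUNT(*) AS backupfile_rows FROM msdb.dbo.backupfile;
SELECT COUNT(*) AS backupmediafamily_rows FROM msdb.dbo.backupmediafamily;
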
Here is the initial rowcount:
20150325 Bloated MSDB Initial Rowcount
Next I ran the stored procedure to delete everything older than January 1st, 2011:

 EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = '1/1/2011'

Then I checked the rowcount again:
20150325 Bloated MSDB 2011 rowcount
I ran this several more times, keeping less history each time:

 EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = '1/1/2012'

20150325 Bloated MSDB 2012 rowcount

 EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = '1/1/2013'

20150325 Bloated MSDB 2013 rowcount
I quickly realized much of this bloat is due to a change in the backup configuration we made late last year. In order to facilitate disaster recovery we had to abandon our native SQL backups (NO!) and go with a third-party image backup system. I didn’t like this because I prefer to be in charge of the backups instead of leaving them to an outside vendor, but our cloud backup/disaster recovery area was only able to handle image backups, so I had no real option. Since we are busiest late in the year, we ended up having to change the transaction log backups so they run every minute. This is the only way the image backup would work without slowing the system, since it requires an I/O freeze when it takes the backup, and the backup captures all changes since the last backup. Smaller intervals mean fewer changes and a shorter I/O freeze.
After trimming the backup history to about three months:
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = '1/1/2015'
20150325 Bloated MSDB 2015 rowcount
This trimmed the rowcount enough that there is now significant empty space in the database. Next I set up a SQL Agent job that runs once a month and deletes all records older than three months. Now I don’t have to worry about MSDB continuing to grow.
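The monthly job step itself boils down to a single call; here is a sketch, using the rolling three-month window described above:

 DECLARE @oldest_date DATETIME = DATEADD(MONTH, -3, GETDATE()); --Keep a rolling three months of history
 EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @oldest_date;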


Using Certificates for Cross-Database Access

Having lockable compartments inside your boat is unfortunately pretty important. A couple of years ago I was traveling back from a fishing trip with my brother-in-law and we stopped at a gas station. I remained in the truck while he went in to get something to drink. We had gotten up very early to fish that morning, and I was not fully awake, but not all the way asleep either. Anyway, someone either could tell I wasn’t paying attention or just didn’t care, and they took my tackle box right out of the back of the boat. I had failed to keep it locked up, and as a result I had quite a bit of tackle taken.
Security is a key concern for database administrators. It seems like every week we hear about another large corporation that had customer data stolen. One way to restrict access to data within the database is through the use of database users. A database user has specific permissions set up, and these permissions can be customized from the database level all the way down to specific columns within a table. However, what should you do if you want a user in one database to be able to access data in a different database? In this post I will show how to sign stored procedures with certificates to enable access to objects in other databases.
The first thing to do is to create a certificate. You can put in a password to make it secure, and give it an expiration date far in the future.

CREATE CERTIFICATE ReportingComponentsExecutor_Cert
ENCRYPTION BY PASSWORD = 'Y3110wP3r¢h'
WITH SUBJECT = 'Gugg: For accessing object in other databases', --Goes into the cert metadata
START_DATE = '20150320', EXPIRY_DATE = '20200320'

The next step is to create a user that will be linked to that certificate in the target database. Since this is the user who will have the permissions in the target database, the user needs to be created in that database.

CREATE USER [ReportingComponentsExecutor_User] FOR CERTIFICATE [ReportingComponentsExecutor_Cert]
GO

Next we need to give this user the permissions on the database objects that will be needed.

USE TargetDB
GRANT SELECT ON TargetDB.dbo.myTable TO ReportingComponentsExecutor_User;
GRANT UPDATE ON TargetDB.dbo.myTable TO ReportingComponentsExecutor_User;

Lastly, add the certificate to the stored procedure that will be getting the data from the target database.

ADD SIGNATURE TO ReportingComponents.dbo.GetDataFromTargetDB BY CERTIFICATE ReportingComponentsExecutor_Cert WITH PASSWORD = 'Y3110wP3r¢h'

Any change made to the stored procedure will invalidate the signature, so you will have to re-sign it after every change. It can be a good idea to make a note in the stored procedure’s header so anyone making changes will remember to re-sign it. I also like to set up a table with a list of stored procedures that should be signed, and send out an alert each day for any that have become unsigned. Here is a rough example that uses a hardcoded list instead of a table. This uses the stored procedure I showed here that sends dbmail in html table format:

CREATE PROCEDURE [dbo].[VerifySPSignatures] 

AS
BEGIN

   DECLARE @SignedSPs TABLE (SP VARCHAR(100))
   INSERT INTO @SignedSPs
           ( SP )
   VALUES  ( 'InsertPendingTransIntoOMS' ),
           ('UnlockReplenishmentBatch'),
           ('InsertSameShipAddressIntoPickSlip'),
           ('UpdatePLMStyleGradeRuleValues'),
           ('InsertPendingReplenIntoOMS'),
           ('UpdateAdditionalPOInfo'),
           ('UnredeemTestGiftCardsForCSTraining')
   
   ;WITH cte AS (
   SELECT  cer.name AS [Certificate] ,
           pro.name AS [Stored Procedure]
   FROM    ReportingComponents.sys.procedures pro
           INNER JOIN ReportingComponents.sys.crypt_properties cry ON cry.major_id = pro.OBJECT_ID
           INNER JOIN ReportingComponents.sys.certificates cer ON cer.thumbprint = cry.thumbprint)

   SELECT  s.SP
   INTO ##CertSPTemp
   FROM @SignedSPs s
   LEFT OUTER JOIN cte ON cte.[Stored Procedure] = s.SP
   WHERE cte.[Stored Procedure] IS NULL

   EXEC DBA.dbo.SendDBEmailwithTabularQuery 
       @qSELECT = N'SELECT',
       @fieldlist = N'SP',
       @qFROM = N'FROM ##CertSPTemp',
       @recipients = N'itsupport@domain.com',
       @subject = N'Reporting Stored Procedures Missing Security Certificate',
       @Title = 'From DBA.dbo.VerifySPSignatures: These need to be re-signed using stored procedures in DBA.'

   DROP TABLE ##CertSPTemp
   
END

I also created a stored procedure in my DBA database to help sign these stored procedures as necessary. Since it is in the DBA database it is only accessible to the DBAs, so I don’t have to worry about it being abused.

CREATE PROCEDURE [dbo].[AddSPToOMSCert]
@StoredProcedureSchemaandName VARCHAR(255), --ReportingComponents stored procedure name with schema included
@TargetDBObjectSchemaandName VARCHAR(255), --TargetDB table or view name with schema included
@AccessType VARCHAR(25) --SELECT, DELETE, UPDATE, or INSERT

AS
BEGIN

   DECLARE @SignSQL NVARCHAR(1000)
   DECLARE @FullSignSQL NVARCHAR(1000)
   DECLARE @PermSQL NVARCHAR(1000)
   
   --Sign the stored procedure using the existing certificate
   SELECT @SignSQL = 'ADD SIGNATURE TO ReportingComponents.' + @StoredProcedureSchemaandName + ' BY CERTIFICATE ReportingComponentsExecutor
    WITH PASSWORD = ''''P@ssword123'''''
   SELECT @FullSignSQL = 'USE ReportingComponents; EXEC sp_executesql N''' + @SignSQL + '''';
   EXEC sp_executesql @FullSignSQL

   --Add permission to the target object
   SELECT @PermSQL = 'USE TargetDB; GRANT ' + @AccessType + ' ON TargetDB.' + @TargetDBObjectSchemaandName + ' TO ReportingComponentsExecutor'
   EXEC sp_executesql @PermSQL
		
END

GO

This simplifies the process of signing a stored procedure.
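For example, signing the procedure from the earlier example and granting it read access to the table it queries would look something like this (names reused from above):

EXEC DBA.dbo.AddSPToOMSCert
    @StoredProcedureSchemaandName = 'dbo.GetDataFromTargetDB' ,
    @TargetDBObjectSchemaandName = 'dbo.myTable' ,
    @AccessType = 'SELECT'
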
I love looking at examples to demonstrate how something works, so let’s set one up. Let’s assume we have two databases, Gamefish and Baitfish. I have a user in Gamefish, Muskie, that I want to be able to read a table in Baitfish, but only through the use of a stored procedure. Here’s how I’d set it up. It is important to note that the same certificate must exist in both databases, which means we must export the cert created in the first database and import it into the second.

--Create the databases
USE MASTER
CREATE DATABASE Baitfish;
CREATE DATABASE Gamefish;
GO

--Create the login that will be testing permissions
CREATE LOGIN [Muskie] WITH PASSWORD=N'Password123'
GO

--Create the cert
USE Baitfish
CREATE CERTIFICATE CrossDBReportingCert
ENCRYPTION BY PASSWORD = 'B1u3g1LL'
WITH SUBJECT = 'Gugg: For accessing object in other databases', --Goes into the cert metadata
START_DATE = '20150320', EXPIRY_DATE = '20200320'
GO

--Export the cert so we can import it into the other database
BACKUP CERTIFICATE CrossDBReportingCert TO FILE = 'C:\temp\CrossDBReportingCert.cer'
WITH PRIVATE KEY (FILE = 'C:\temp\CrossDBReportingCert.pvk' ,
                  ENCRYPTION BY PASSWORD = 'B1u3g1LL',
                  DECRYPTION BY PASSWORD = 'B1u3g1LL')
GO

--Create the user off the cert
CREATE USER CrossDBReportingUser FROM CERTIFICATE CrossDBReportingCert
GO

--Create the table in baitfish
CREATE TABLE dbo.prey (id INT IDENTITY(1,1), fishname VARCHAR(50))
INSERT INTO dbo.prey
        ( fishname )
VALUES  ( 'whitefish'),('cisco'),('sucker')
GO

--Grant read-only permissions to the new user for dbo.prey
GRANT SELECT ON dbo.prey TO CrossDBReportingUser
GO

--Switch over to the Gamefish Database
USE Gamefish
GO

--Create a user who will have no permissions in the Baitfish database, but full access to the Gamefish DB
CREATE USER [Muskie] FOR LOGIN [Muskie]
EXEC sp_addrolemember N'db_owner', N'Muskie'
GO

--Create a Stored Procedure to get the fish names out of Baitfish.dbo.Prey
CREATE PROCEDURE dbo.GetBaitfishPrey
AS

BEGIN
   SELECT fishname
   FROM Baitfish.dbo.prey
END

GO

--Cert must exist in both databases
CREATE CERTIFICATE CrossDBReportingCert FROM FILE = 'C:\temp\CrossDBReportingCert.cer'
WITH PRIVATE KEY (FILE = 'C:\temp\CrossDBReportingCert.pvk',
                  DECRYPTION BY PASSWORD = 'B1u3g1LL',
                  ENCRYPTION BY PASSWORD = 'B1u3g1LL')
GO

-- Delete the file with the certificate, we don't need it anymore.
EXEC MASTER..xp_cmdshell 'DEL C:\temp\CrossDBReportingCert.*', 'no_output'
GO

--Run as Muskie user
EXECUTE AS LOGIN = 'Muskie'
GO

--Try to get results before signing procedure with certificate
EXEC dbo.GetBaitfishPrey
GO

20150324 Cross DB Permissions Failed

REVERT
GO

--Next sign procedure with certificate
ADD SIGNATURE TO dbo.GetBaitfishPrey BY CERTIFICATE CrossDBReportingCert WITH PASSWORD = 'B1u3g1LL'
GO

--Run as Muskie user
EXECUTE AS LOGIN = 'Muskie'
GO

--Successfully get results from stored procedure
EXEC dbo.GetBaitfishPrey
GO

20150324 Cross DB Permissions Success

REVERT
GO

--Cleanup
USE MASTER
DROP LOGIN Muskie
DROP DATABASE Gamefish;
DROP DATABASE Baitfish;
GO

So there is an example that I believe is easy to understand. Use this as a model to set up your own cross database permissions.

Stop Empty Reports from being Emailed in Reporting Services

My grandpa owns a cabin on a medium-sized lake up in Oneida County. This lake is very clear (often described as gin-clear), so fishing it presents certain challenges. Since the water is clear, you need to make sure your presentation looks natural; lures with bright colors like fire-tiger are usually not the way to go. You also often have to fish deeper, since the sunlight penetrates further and weeds can grow at greater depths. Since I grew up fishing this lake, I got used to clear-lake fishing, and I feel like I’m a better clear-lake fisherman. Maybe that’s why I prefer fishing Okauchee over Pewaukee here in southeast Wisconsin. Pewaukee and Okauchee are the only two lakes in SE WI that are rated class A muskie lakes by the Wisconsin DNR (Department of Natural Resources). Pewaukee gets all the publicity, but Okauchee is a great lake too. Pewaukee has fairly cloudy water and is a shallower lake, while Okauchee is a clear, deep lake. I’ve had much better success on Okauchee, and that’s why it’ll be the first lake I get on when fishing opens on May 3rd.
SQL Server Reporting Services (SSRS) has the ability to send reports out on a regular basis. This is a very helpful tool because we can put the data right in the users’ inboxes, making it easy for them to get and use the data. The scheduling functionality in SSRS is pretty flexible, but one important piece that’s missing is the ability to not send empty reports. If the SQL statements used to build the report produce zero rows, an empty report gets emailed out to the recipients. I decided to look for a way to prevent this from happening. To demonstrate, let’s create a bit of database structure. First I created a table with a few records:

CREATE TABLE dbo.FavoriteLakes
    (
      id INT IDENTITY(1, 1) ,
      LakeName VARCHAR(50)
    )
INSERT  INTO dbo.FavoriteLakes
        ( LakeName )
VALUES  ( 'Two Sisters' ),
        ( 'Little Arbor Vitae' ),
        ( 'Okauchee' ),
        ( 'Pewaukee' ),
        ( 'Little Green Lake' ),
        ( 'Lake Tomahawk' )

Next I created a stored procedure that will return the results of this table. The stored procedure will be the data source for our SSRS report.

CREATE PROCEDURE [dbo].DisplayFavoriteLakes

AS
BEGIN
   SELECT LakeName
   FROM dbo.FavoriteLakes
END

Next I created a very simple SSRS report using BIDS. The report used an embedded data set with the stored procedure as the source, and had a textbox title and a table with a single row for the data.
20150318 SSRS Scheduler Mod
After deploying the report I can see it on the intranet site.
20150318 SSRS Scheduler Mod intranet
Next I set up a subscription for this report, but I stop before actually pressing OK. It will send me an email every five minutes.
20150318 SSRS Scheduler Mod Schedule
Before actually creating the subscription I needed to take a screenshot of the top of my job list in SQL Server Agent. The jobs whose names look like GUIDs are the jobs associated with scheduled subscriptions in SSRS.
20150318 SSRS Scheduler Before Job Scheduled
Next I hit OK in Reporting Services to create the subscription. Then I refreshed the SQL Server Agent jobs and took another look. Now I see one extra job, so I know this is the one that is associated with the subscription I just created.

20150318 SSRS Scheduler AFter Job Scheduled

Upon going into the job, the first thing I see is a warning not to modify it, since doing so can result in database incompatibilities. To edit it I had to change the owner from the service account that runs SQL Agent to sa. It is important to note that any later changes to the job, or deleting it, would require me to change the owner back.
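If you prefer to script that ownership change rather than click through SSMS, msdb’s sp_update_job can do it; the job name below is a hypothetical placeholder for the GUID-named job the subscription created:

EXEC msdb.dbo.sp_update_job
    @job_name = N'1B2C3D4E-5F60-4711-8899-AABBCCDDEEFF' , --Hypothetical GUID-style name; substitute your subscription's job
    @owner_login_name = N'sa'
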
20150318 SSRS Scheduler Job Summary
When I go into the single job step for this job, I see a single line of SQL:

 [LW_ReportDB].dbo.AddEvent @EventType = 'TimedSubscription',
    @EventData = 'e20a1691-792b-4f8f-938e-ac4b5ed25b69'

20150318 SSRS Scheduler Mod Job Step
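That GUID in @EventData identifies the subscription. If you ever need to match a GUID back to its report, the ReportServer catalog can be queried; this is a sketch, and the ReportServer schema is undocumented and can vary between SSRS versions:

SELECT  c.Name AS ReportName ,
        s.SubscriptionID ,
        s.Description
FROM    ReportServer.dbo.Subscriptions s
        INNER JOIN ReportServer.dbo.[Catalog] c ON c.ItemID = s.Report_OID
ORDER BY c.Name
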
I’m going to change that SQL to the following:

SELECT TOP 1 1
FROM DBMaint.dbo.FavoriteLakes

IF @@ROWCOUNT > 0
BEGIN
   EXEC [LW_ReportDB].dbo.AddEvent @EventType = 'TimedSubscription',
       @EventData = 'e20a1691-792b-4f8f-938e-ac4b5ed25b69'
END

This takes the existing code and only calls it if there is at least one record in the dbo.FavoriteLakes table.
Both before and after this change I received the following email:
20150318 SSRS Scheduler Mod Email
However, after clearing out the table:

TRUNCATE TABLE DBMaint.dbo.FavoriteLakes

I am no longer receiving the email. I’ve now made the subscription dependent upon having a result set.

Oracle Invalid Objects by Schema

We’ve had a spell of warm weather for the last couple weeks, and I am optimistic we’ll have an early ice-off this spring. The weather has been into the fifties and sixties for the last couple weeks, and even hit seventy yesterday. Last year many lakes up in Oneida and Vilas counties were still covered in ice when the main fishing season opened on May 3rd. This year the ice may be off by April 1st! Although the fishing season doesn’t open for most fish (walleye, northern pike, muskie, bass) until May 2nd, panfish (bluegill, yellow perch) are open all year round. It’s a good time to get some fishing in with the kids.

My oldest son with a freshly caught bluegill

Passing on a love for fishing is something I’m hoping to do successfully. I have a couple buddies who still fish regularly with their dads, and I’m hoping when my kids are grown up they’ll still want to spend time fishing with me.
Today I received an alert in Oracle Enterprise Manager. The alert says “Invalid Objects by Schema”, “4 object(s) are invalid in the TAURUS schema”.
20150317 Oracle Invalid Objects OEM
This is a fairly common alert, and it arises when objects that other objects depend on are changed. Usually for me this happens when a table that a stored procedure or view uses as a data source changes. To fix, simply recompile the stored procedure or view. First you’ll need to find which objects are invalid. You can use this SQL statement to find the objects:

SELECT owner,
  OBJECT_NAME,
  object_type,
  status
FROM dba_objects
WHERE status<>'VALID'
ORDER BY owner;

20150317 Oracle Invalid Objects Listed
To fix you just need to recompile the object. You can use this syntax:

--Recompile as sysdba
ALTER VIEW VEXT_MISSING_OPS compile;

Now any invalid objects can be recompiled and your database will stay running smoothly!
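If several objects go invalid at once, it can be quicker to generate the recompile statements in bulk, or to recompile the whole schema. Here is a sketch; note that object types with separately compiled bodies, such as packages, need slightly different ALTER syntax:

--Generate one recompile statement per invalid view, procedure, function, or trigger
SELECT 'ALTER ' || object_type || ' ' || owner || '.' || object_name || ' COMPILE' AS fix_sql
FROM dba_objects
WHERE status <> 'VALID'
AND object_type IN ('VIEW', 'PROCEDURE', 'FUNCTION', 'TRIGGER');

--Or recompile just the invalid objects in a single schema
EXEC DBMS_UTILITY.compile_schema(schema => 'TAURUS', compile_all => FALSE);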

Compression in SQL Server

There is one week each year that is absolutely terrible for walleye fishing, and that week is when the mayflies hatch. Mayflies are a type of fly similar to a dragonfly. They live in the immature stage for a full year underwater, then burrow into the ground underneath the water to change into the final fly stage. They usually hatch all in a one- to two-day span and emerge from the water for a day or two before reproducing and then dying. When this hatch takes place walleye feed heavily on them, and trying to catch a walleye during this period is almost impossible due to the amount of food available to them. The density of the mayflies can be so great that it is sometimes seen on weather radar.
The density of data on a SQL Server page can be increased through the use of compression. Compression in SQL Server is a powerful tool. Not only does it reduce the amount of disk space being used, saving money on storage, but since reading from disk is the slowest part of retrieving data from a database, it will also speed up your queries (fewer reads = less time). The one downside is that compression will increase CPU usage by a small amount (I’ve seen about a 3% increase), so if you’re strapped for CPU you’re going to have some decisions to make.
Compression in SQL Server comes in two different flavors: row compression and page compression. Row compression works by storing each value in the smallest format that will accurately hold it. For example, an INTEGER column takes 4 bytes. If I’m storing the number 23 in an INTEGER column, most of those bytes are zeros; row compression stores that value in a single byte, saving 3 bytes. This clearly only helps fixed-length columns, so if your table is mostly variable-length data it may not provide much space savings.
Page compression gives you row compression, but it also includes a compression information structure in each data page. There are two parts to the compression information structure: a column prefix section and a data dictionary section. The column prefix section holds values that are the leading part of a column’s values and are repeated within that column. For example, if a text field storing muskie lure manufacturers has “Muskie Innovations”, “Muskie Mania”, and “Muskie Mayhem”, the prefix section can hold “Muskie ” and each record then just needs a pointer to the prefix plus the remainder of the string. The dictionary section acts the same way, but it only stores complete matches. So if I am storing addresses (city, state, zip) and most of them are in Wisconsin, the data dictionary would store “Wisconsin” once and each record with that value would just include a pointer to that dictionary entry.
So there’s a pretty good summary of how compression works in SQL Server, now let’s look at how to set it up. You can set up compression when you are first creating a table with this syntax:

CREATE TABLE [dbo].[BaitPurchases](
   [ID] [INT] IDENTITY(1,1) NOT NULL,
   [Baitname] [VARCHAR](255) NOT NULL,
   [Purchasedate] [DATETIME2](7) NULL,
   [Price] [MONEY] NULL
) ON [PRIMARY]
WITH (DATA_COMPRESSION = PAGE) --OR ROW
GO

You can also add compression to an existing table like this:

ALTER TABLE dbo.FishCaught REBUILD WITH (DATA_COMPRESSION = PAGE) --OR ROW

Microsoft also includes a handy stored procedure for estimating how much of a space savings you will gain by compressing a table. Here is the syntax for that SP:

EXEC sp_estimate_data_compression_savings
  'dbo','LiveBait',NULL,NULL,'page'

It is also important to note that nonclustered index compression is not tied to table (clustered indexes and heaps) compression. This means you will need to compress the table and each nonclustered index separately. I’ve created the following SQL to generate the SQL statements needed to compress a table and all its non-clustered indexes:

DECLARE @tablename SYSNAME = 'AverageFishLength'

--Statement to compress the table itself (clustered index or heap)
SELECT 'ALTER TABLE dbo.' + @tablename + ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'
UNION ALL
--Statements to compress each nonclustered index
SELECT 'ALTER INDEX ' + ind.name + ' ON dbo.' + OBJECT_NAME(ind.OBJECT_ID) + ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE, ONLINE = ON)'
FROM sys.indexes ind
WHERE OBJECT_NAME(ind.OBJECT_ID) = @tablename
AND type_desc = 'NONCLUSTERED'

Lastly, I’ll share my results when compressing some key tables for one of our largest databases here. The total database used space went from 84.3 GB to 33.2 GB, a space savings of almost 61%!
Here is a screenshot of the spreadsheet where I tracked my results. I ended up compressing the 48 largest tables in a database with over 500 tables.

20150312 Compression

Max Degree of Parallelism and Cost Threshold for Parallelism

A couple years ago I took a fishing trip up to Canada with my Pop, brother-in-law, and his Pop. We went to Lost Bay Resort on Cliff lake in Northwestern Ontario. It was an outstanding experience, staying in a cabin, spending time with other men, and doing a whole lot of fishing. I really fell in love with Muskie fishing while I was up there. Between the four of us (two to a boat) we probably saw over 100 of the toothy predators, though we managed to boat only three of them. I caught my current personal best, this 46″ beast:
PB Muskie
Let’s say we knew of ten must-fish locations on the lake, and we wanted to make sure to fish each of them one day. If we had a single boat to fish them, we would move from one place to another until we’d hit all ten locations. If the locations are big and each takes a while to thoroughly fish, we’re going to be spending quite a bit of time. A more efficient way to make sure we are fishing each of these ten locations would be to use multiple boats. If we have two boats then each needs to cover just five of the locations. If we have five boats each one can focus on just two locations. From a fishing perspective, this makes more sense as the more water that gets covered, the better chance of catching fish.

From a SQL Server perspective, this concept, called parallelism, also makes a lot of sense. If we can split a task among multiple workers, or threads, the task will finish in a much shorter time. To facilitate this process, SQL Server has a couple of settings that can be adjusted. The first is called Max Degree of Parallelism, or MAXDOP. This setting limits the number of processors used in parallel plan execution. The default is 0, which will use all available processors; the most that can be used for a single parallel plan is 64, and the setting itself accepts values up to 32,767. The second setting is called Cost Threshold for Parallelism. This controls how expensive a plan must be before the query optimizer considers using a parallel plan. This is an important part of the equation because threads that finish their part of the query have to sit and wait for the other threads to finish. Using our fishing analogy, if you have five boats fishing ten locations and one boat has two very large locations that each take a half day to fish, while all the others have small locations that take a mere couple of hours, you’re going to have four boats sitting around doing nothing while that last boat fishes for the rest of the day.

So you can see that Max Degree of Parallelism and Cost Threshold for Parallelism must be managed together to maximize system efficiency. The only way to know the appropriate values for each is to know your system! You’ll see lots of sources around the net saying to turn off parallelism for systems that are primarily OLTP. This is, in my opinion, bad advice! Use the Cost Threshold for Parallelism to make sure simple queries aren’t using parallelism; don’t turn it off completely. This is like saying you’re usually going to be moving around the lake with a trolling motor, so don’t bother to bring along an outboard. Even if you are using the trolling motor 99% of the time, when it does come time to move all the way across the lake, you’re going to be glad you have that outboard available.
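If you would rather script these settings than change them in SSMS, sp_configure handles both; they are advanced options, so 'show advanced options' has to be enabled first. The values below are placeholders to show the syntax only; the right numbers depend entirely on your workload.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 25; --Placeholder value
EXEC sp_configure 'max degree of parallelism', 8;       --Placeholder value
RECONFIGURE;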

First let’s take a look at where these server properties are shown in SSMS.

20150305 MAXDOP Properties SSMS

On this sandbox server, I have the Cost Threshold for Parallelism set to 1 and the Max Degree of Parallelism set to 8. This means that for any query whose cost is over 1, the query optimizer will try to use parallelism. Well, how much is 1? To answer that, we need to look at a query’s execution plan. Here’s a simple query using a couple of tables from previous blog posts:

SELECT *
FROM Sandbox.dbo.AverageFishLength afl
INNER JOIN Sandbox.dbo.FishCaught fc ON fc.FishType = afl.Fishtype
WHERE afl.AvgLength > 12

I use Control + M before executing so I can see the actual execution plan, and then run the query. To see the cost relating to the Cost Threshold for Parallelism for this query, I hover over the leftmost icon, the SELECT operator. A small tooltip pops up that includes the Estimated Subtree Cost:

20150305 MAXDOP Simply Query Subtree Cost

Since my Subtree Cost here is 0.0084115, which is less than 1, the Query Optimizer will not attempt to use parallelism on this query. In order to get a complex plan, I’m going to take a few tables that have been created in previous posts and do a cross join:

SELECT *
FROM Sandbox.dbo.ClusteredIndex CI
CROSS JOIN Sandbox.dbo.NonClusteredIndex NCI
WHERE CI.ID <= 15;

Looking at the actual execution plan for this query, I can see the estimated subtree cost is now much higher (shown below), putting it over the threshold of 1, so the Query Optimizer will try to use parallelism on this query.

20150305 MAXDOP Complex Query Subtree Cost

So that’s how to see the cost associated with specific queries. However, to set the Cost Threshold to Parallelism to an appropriate number, we need a general idea of the cost of all our queries. To accomplish this, I’ve created the following table:

CREATE TABLE [dbo].[PlanCacheStatementCost](
   [PCSCID] [INT] IDENTITY(1,1) NOT NULL,
   [StatementText] [VARCHAR](8000) NOT NULL,
   [StatementSubTreeCost] [FLOAT] NOT NULL,
   [UseCounts] [INT] NOT NULL,
   [StatementEstRows] [INT] NOT NULL,
   [Avg_Worker_Time] [INT] NULL,
   [Avg_Logical_Reads] [INT] NULL,
   [Avg_Elapsed_Time] [INT] NULL,
   [CaptureDateTime] [DATETIME] NOT NULL
) ON [PRIMARY]

And this stored procedure to fill the table:

CREATE PROCEDURE [dbo].[CapturePlanCacheStatementCost] 

AS
BEGIN
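   --Overall flow: round the capture time to the nearest minute, pull every cached plan whose
   --estimated subtree cost is above the threshold (5 here) into #StatementSubTreeCost, then shred
   --those plans again for the statement text and row estimates, join to averaged runtime stats
   --from sys.dm_exec_query_stats by query_hash, and save the results to dbo.PlanCacheStatementCost.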
   
   DECLARE @CaptureDate DATETIME
   SELECT @CaptureDate = DATEADD(minute, DATEDIFF(minute, 0, DATEADD(second, 30 - DATEPART(second, GETDATE() + '00:00:30.000'), GETDATE())), 0)
   
    CREATE TABLE #StatementSubTreeCost
        (
          StatementSubTreeCost FLOAT ,
          StatementId INT ,
          UseCounts BIGINT ,
          plan_handle VARBINARY(64)
        );

    WITH XMLNAMESPACES
   (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
   INSERT INTO #StatementSubTreeCost (StatementSubTreeCost, StatementId, UseCounts, plan_handle)
   SELECT
   c.value('@StatementSubTreeCost', 'float') AS StatementSubTreeCost
   ,c.value('@StatementId', 'float') AS StatementId
   ,cp.UseCounts
   ,cp.plan_handle
   FROM sys.dm_exec_cached_plans AS cp
   CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
   CROSS APPLY qp.query_plan.nodes('//StmtSimple') t(c)
   WHERE cp.cacheobjtype = 'Compiled Plan'
   AND qp.query_plan.exist('//StmtSimple') = 1
   AND c.value('@StatementSubTreeCost', 'float') > 5
   ORDER BY c.value('@StatementSubTreeCost', 'float') DESC;

       WITH XMLNAMESPACES
   (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
   ,cQueryStats
   AS (
   SELECT query_hash
   ,SUM(total_worker_time / NULLIF(qs.execution_count,0)) AS avg_worker_time
   ,SUM(total_logical_reads / NULLIF(qs.execution_count,0)) AS avg_logical_reads
   ,SUM(total_elapsed_time / NULLIF(qs.execution_count,0)) AS avg_elapsed_time
   FROM sys.dm_exec_query_stats qs
   GROUP BY query_hash
   )
   INSERT INTO dbo.PlanCacheStatementCost
           ( StatementText ,
             StatementSubTreeCost ,
             UseCounts ,
             StatementEstRows ,
             Avg_Worker_Time ,
             Avg_Logical_Reads ,
             Avg_Elapsed_Time ,
             CaptureDateTime
           )

   SELECT
   LEFT(c.value('@StatementText', 'varchar(max)'),8000) AS StatementText,
   s.StatementSubTreeCost
   , s.usecounts
   , CAST(c.value('@StatementEstRows', 'float') AS BIGINT) AS StatementEstRows
   , qs.avg_worker_time
   , qs.avg_logical_reads
   , qs.avg_elapsed_time
   , @CaptureDate
   FROM #StatementSubTreeCost s
   CROSS APPLY sys.dm_exec_query_plan(s.plan_handle) AS qp
   CROSS APPLY qp.query_plan.nodes('//StmtSimple') t(c)
   LEFT OUTER JOIN cQueryStats qs ON c.value('xs:hexBinary(substring(@QueryHash,3))','binary(8)') = query_hash
   WHERE c.value('@StatementId', 'float') = s.StatementId
   ORDER BY c.value('@StatementSubTreeCost', 'float') DESC, s.StatementSubTreeCost DESC

    DROP TABLE #StatementSubTreeCost
   
END

This stored procedure will grab every execution plan in the plan cache that has an estimated subtree cost over five and put it into the table. The five will have to be adjusted based on your system; running it a couple of times will give you a good idea of what that number should be. You want to see the expensive plans, but you don’t want to be inundated. Note that this is a very expensive query and should only be run in a Production environment with great care! I then set up a SQL Agent job to run this stored procedure on a regular basis. Now I can look through the “expensive” queries and decide whether I’d want them to run in parallel or not, and adjust the Cost Threshold for Parallelism accordingly.
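Reviewing what the job has collected is just a query against the table; a minimal sketch, sorting the most expensive statements to the top:

SELECT TOP 50
       StatementSubTreeCost ,
       UseCounts ,
       Avg_Elapsed_Time ,
       LEFT(StatementText, 200) AS StatementStart ,
       CaptureDateTime
FROM   dbo.PlanCacheStatementCost
ORDER BY StatementSubTreeCost DESC
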
20150305 MAXDOP Expensive Queries
One last thing to note is the CXPACKET wait type. This measures the amount of time that worker threads spend waiting for the other threads assigned to the same task to finish. So in our example of the five boats, where four of them are waiting half a day for the last one to finish fishing, CXPACKET would measure the time those four boats are waiting. High CXPACKET waits can indicate you have the Cost Threshold for Parallelism set too low, but you have to know how to interpret them. Just because CXPACKET is high doesn’t mean you have a problem. Indeed, by turning off parallelism (setting Max Degree of Parallelism to 1) you can eliminate the CXPACKET wait completely, but then all your complex queries will run more slowly than they could. You want to set the Cost Threshold for Parallelism to a number where complex queries get the parallelism they need, but you don’t have a bunch of worker threads sitting around waiting for others to finish. You can see the CXPACKET wait with this query:

SELECT  wait_type ,
        waiting_tasks_count ,
        wait_time_ms ,
        max_wait_time_ms ,
        signal_wait_time_ms 
FROM    sys.dm_os_wait_stats
WHERE   wait_type ='CXPACKET'

The numbers returned are cumulative, so you need to track these numbers over time to see how fast the waits are increasing.
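One simple way to do that is to snapshot the DMV into a table on a schedule and compare consecutive rows. A minimal sketch, assuming a dbo.WaitStatsHistory table created just for this purpose:

CREATE TABLE dbo.WaitStatsHistory
    (
      CaptureDateTime DATETIME NOT NULL DEFAULT GETDATE() ,
      wait_type NVARCHAR(60) NOT NULL ,
      waiting_tasks_count BIGINT NOT NULL ,
      wait_time_ms BIGINT NOT NULL ,
      signal_wait_time_ms BIGINT NOT NULL
    )

--Run on a schedule; the difference between consecutive rows shows how quickly CXPACKET is accumulating
INSERT INTO dbo.WaitStatsHistory
        ( wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms )
SELECT  wait_type ,
        waiting_tasks_count ,
        wait_time_ms ,
        signal_wait_time_ms
FROM    sys.dm_os_wait_stats
WHERE   wait_type = 'CXPACKET'
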
I hope this post has helped you understand how parallelism can help make queries run faster, and that using the correct settings means knowing your system and the queries that are routinely being run.

Striped Backups in SQL Server

Trolling is a common technique for trying to catch fish. Trolling involves motoring slowly through the water while pulling one or more fishing lines. Usually rod holders secure the rod in the boat and allow you to space the lures apart. You’ll set a line deep, a line shallow, and a line in a medium depth, and you can also use different lure types and different colors. With this great variety of bait in the water, you have a much better chance of giving the fish something that they are interested in biting. I’ve never been a fan of trolling because you generally aren’t holding the rods, so you don’t feel the fish strike. I kind of view it as motoring around in circles, then reeling in a fish when it strikes. That said, I certainly understand how it can be more efficient than casting.
Similarly, SQL Server gives us the ability to be more efficient with our backups by writing them out with multiple lines, or striping them. Using this technique, the backups are written to multiple files. The total space used is about the same as a single file, but since each file can be written to and read from simultaneously, the backups and restores may end up being faster if I/O throughput is a bottleneck. Before I show an example, there is one important caveat to mention:

  • All of the backup files are necessary to do a restore. These aren’t like a mirrored RAID where you get redundancy. Therefore, if you lose one file, the entire backup is broken.

Ok, now on to the demonstration. To make a good test you’re going to want a fairly large database that has a high percentage of space used. If you need to artificially fill a database for testing, try using my script below to create a table and fill it with junk data:


CREATE TABLE StripedBackupTest
    (
      ID UNIQUEIDENTIFIER DEFAULT NEWID()
                          PRIMARY KEY ,
      FixedNumber DECIMAL(38, 10) ,
      MuskieFishingRocks CHAR(100) ,
      ReallyLongBlob VARCHAR(MAX) ,
     ReallyLongBlob2 VARCHAR(MAX) ,
     ReallyLongBlob3 VARCHAR(MAX)
    )
    CREATE INDEX IX_AllCols ON dbo.StripedBackupTest 
    (
    MuskieFishingRocks,
    FixedNumber,
    ID
    ) INCLUDE (ReallyLongBlob)
   
DECLARE @i INT = 1
WHILE @i <= 100000
   BEGIN
       INSERT INTO dbo.StripedBackupTest
               ( FixedNumber ,
                 MuskieFishingRocks ,
                 ReallyLongBlob ,
                 ReallyLongBlob2 ,
                 ReallyLongBlob3
               )
       SELECT  ABS(CAST(NEWID() AS BINARY(6))%1000000000)*1000000000.0 
                   + ABS(CAST(NEWID() AS BINARY(6))%1000000000)
                   + ABS(CAST(NEWID() AS BINARY(6))%1000000000)*.0000000001, --A random number with 18 digits left of the decimal and 10 right of the decimal
               'MuskieFishingRocks',
               REPLICATE(CONVERT(VARCHAR(255),NEWID()) + CHAR(10) + CHAR(13),500),
               REPLICATE(CONVERT(VARCHAR(255),NEWID()) + CHAR(10) + CHAR(13),500),
               REPLICATE(CONVERT(VARCHAR(255),NEWID()) + CHAR(10) + CHAR(13),500)
       SET @i = @i + 1 --Advance the loop counter
   END

I’m going to use an existing test database that is almost 28 GB in size, with almost 4 GB of free space. This gives me about 24 GB in used space.
20150219 Striped Backups Final DB Size
First I’ll take a normal, single-file backup to get a baseline.

BACKUP DATABASE cwi_warehouse TO  DISK = N'T:\TempBackups\SingleBackupTest.bak' 
   WITH  COPY_ONLY, NOFORMAT, NOINIT,  NAME = N'DB_Test-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO

20150219 Striped Backups Single File Backup Time
You can see this backup took 334 seconds to run. It’s also a great idea to perform a RESTORE VERIFYONLY consistency check on a backup when you take it. I ran that as well, and it took an additional 3:51:

DECLARE @backupSetId AS INT
SELECT  @backupSetId = position
FROM    msdb..backupset
WHERE   database_name = N'cwi_warehouse'
        AND backup_set_id = ( SELECT    MAX(backup_set_id)
                              FROM      msdb..backupset
                              WHERE     database_name = N'cwi_warehouse'
                            )
IF @backupSetId IS NULL
    BEGIN
        RAISERROR(N'Verify failed. Backup information for database ''cwi_warehouse'' not found.', 16, 1)
    END
RESTORE VERIFYONLY FROM  DISK = N'T:\TempBackups\SingleBackupTest.bak' WITH  FILE = @backupSetId,  NOUNLOAD,  NOREWIND
GO

20150219 Striped Backups Single File Verify Time

The last step is to run a full restore:

USE [master]
RESTORE DATABASE [cwi_warehouse] FROM  DISK = N'T:\TempBackups\SingleBackupTest.bak' WITH  FILE = 1,  NOUNLOAD,  REPLACE,  STATS = 5

GO

20150219 Striped Backups Single File Restore Time

This ran in 306 seconds. Now we have our baseline. It can be a good idea to run each statement three times and take the average when calculating a baseline, just because other things can affect the results, such as other processes, network traffic, etc. Next I’ll run a striped backup using four separate files:

BACKUP DATABASE cwi_warehouse TO  DISK = N'T:\TempBackups\MultiBackup_1.bak' ,
   DISK = N'T:\TempBackups\MultiBackup_2.bak' ,
   DISK = N'T:\TempBackups\MultiBackup_3.bak' ,
   DISK = N'T:\TempBackups\MultiBackup_4.bak' 
   WITH  COPY_ONLY, NOFORMAT, NOINIT,  NAME = N'DB_Test-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO

This time the backup ran in 340 seconds.
20150219 Striped Backups Multi File Backup Time
Next I run the backup verification.

DECLARE @backupSetId AS INT
SELECT  @backupSetId = position
FROM    msdb..backupset
WHERE   database_name = N'cwi_warehouse'
        AND backup_set_id = ( SELECT    MAX(backup_set_id)
                              FROM      msdb..backupset
                              WHERE     database_name = N'cwi_warehouse'
                            )
IF @backupSetId IS NULL
    BEGIN
        RAISERROR(N'Verify failed. Backup information for database ''cwi_warehouse'' not found.', 16, 1)
    END
RESTORE VERIFYONLY FROM  DISK = N'T:\TempBackups\MultiBackup_1.bak',
   DISK = N'T:\TempBackups\MultiBackup_2.bak' ,
   DISK = N'T:\TempBackups\MultiBackup_3.bak' ,
   DISK = N'T:\TempBackups\MultiBackup_4.bak'  WITH  FILE = @backupSetId,  NOUNLOAD,  NOREWIND
GO

20150219 Striped Backups Multi File Verify Time
This ran in 3:37.
Lastly I run the full Restore.

USE [master]
RESTORE DATABASE [cwi_warehouse] FROM  DISK = N'T:\TempBackups\MultiBackup_1.bak',
   DISK = N'T:\TempBackups\MultiBackup_2.bak' ,
   DISK = N'T:\TempBackups\MultiBackup_3.bak' ,
   DISK = N'T:\TempBackups\MultiBackup_4.bak' WITH  FILE = 1,  NOUNLOAD,  REPLACE,  STATS = 5

GO

20150219 Striped Backups Multi File Restore Time
This ran in 352 seconds.
You can see that the multi-file backup configuration didn’t give us any significant time savings. The last point to make is that multi-file backups can be a huge advantage when backup disk space is an issue, for example when you have a particularly large backup and need to spread it across multiple physical disks to get it to fit. You can see here that the sum of the four backup files was about equal in size to the single backup file.

20150219 Striped Backups Single File Size

20150219 Striped Backups Multi File Size

If I had four disks with five GB available on each, I would not be able to do a backup of this database with a single file, but could with multiple files. Just remember, all the files are needed to be able to restore the database.