Never Assume – An Index Defragmentation Story

I remember muskie fishing a couple of years ago with my brother-in-law. We were fishing down a shoreline and moved to a part of the lake where I'd never even seen a muskie before. I was wondering how much longer I should wait before asking him if he wanted to move on to a new location. Suddenly he got a vicious strike, and shortly after he landed a fat 38″ muskie. Sometimes you think you know where the fish will be, but you never really know, so you've got to check.

This week I got an email from one of our vendors reminding me that I had declined their database maintenance script. I didn't see the need for it, since I run maintenance over all the databases on the instance at the same time: I update statistics and rebuild or reorganize indexes based on fragmentation levels and page counts. Their maintenance also included a file shrink, which is usually a waste of resources: size the files correctly the first time! I responded that the software was performing fine and that my maintenance plan covered everything theirs would have.

Then, just for kicks and giggles, I decided to check the fragmentation on the database in question. It is a small database, and only 6 tables are over 1,000 pages, so fragmentation probably wouldn't ever cause a noticeable performance impact, but I was surprised to see some pretty high numbers. I went to my defragmentation log and could not find a record of any of those tables being defragmented in the recent past. I checked the SQL Agent job and found it was running as expected. Then I found the issue: the stored procedure I use to loop through the tables and defragment them takes a @databases parameter that can be filled in if you only want to defragment selected databases. Since there were a few I wanted to skip, I listed all the others, but the parameter was defined as varchar(128), which was woefully short. The defragmentation routine was only getting the first several databases I had specified, and the rest were never touched.
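A quick sketch of the failure mode (the database names and the parameter size here are made up for illustration): assigning a string longer than a varchar variable's declared length silently truncates it, with no error raised, which is exactly how a too-short @databases list loses databases off the end.

```sql
-- Illustrative only: a parameter declared too short for the list it must hold
DECLARE @databases varchar(30);

-- A comma-separated list of databases to maintain, longer than 30 characters
SET @databases = 'SalesDB,HRDB,VendorDB,ReportingDB,ArchiveDB';

-- SQL Server truncates silently: only 'SalesDB,HRDB,VendorDB,Reportin' survives,
-- so ReportingDB is mangled and ArchiveDB is dropped from the list entirely
SELECT @databases;
```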
I quickly corrected this by extending the parameter to varchar(4000), then ran the job overnight to catch up. Now fragmentation is back down where it should be. The lesson in this story is to check and double-check. As a DBA you can never be too careful, and you don't really know something until you can prove it by looking at the raw data.


Creating Drill Down Reports in SQL Server Reporting Services

The wait finally ended! The Okauchee boat launch, closed for remodeling for most of the year, has opened back up. I had a chance to try it out last weekend, and I was pretty happy with it. They kept the existing ramp from the previous configuration, and the pitch is absolutely perfect. They cleared out a bunch of trees and a disgusting bathroom and replaced them with extra parking spots, some grass, and a permanent port-a-john. I felt they could have fit a few more parking spots if they had made less of the area grass. My main complaint is that there is no way to stop and clear weeds after pulling out of the lake without blocking parked trailers or rigs trying to drive out. As for the fishing, I spent about three hours on the lake chasing muskies with a buddy. We managed three fairly aggressive follows, but the fish were all pretty small. All in all, it was a great morning, and it was awesome being back on Okauchee.

Okauchee boat launch
Today I had to learn to make a slightly more complex report in SQL Server Reporting Services (SSRS) than I’m used to. In order to mimic the functionality of an Excel pivot table, I needed to make use of the SSRS drilldown functionality. The first step is to create a dataset that you want to display. For my purposes, I am using two predefined business intelligence views, one that has invoice data and one that has product data:

SELECT  p.ProductClass ,
        p.ProductDepartment ,
        p.ProductNumber ,
        SUM(i.[Total Amount]) AS TotalAmount ,
        SUM(i.[Ship Qty]) AS ShippedQty
FROM    BI.dbo.FACT_Invoices i
        INNER JOIN BI.dbo.DIM_Products p ON p.[ITM Number] = i.[ITM Number]
                                            AND p.[SKU Code] = i.[SKU Code]
WHERE   i.[Inv Date] BETWEEN '2015-09-03'
                     AND     '2015-09-04'
GROUP BY p.ProductClass ,
        p.ProductDepartment ,
        p.ProductNumber

The next step is to set up a new report in SSRS and define the Data Source and your Data Set.
20151016 SSRS data defined

Next, add a tablix to the report. Put your metric data in the right-hand columns and your highest-level category in the left column.

20151016 SSRS tablix part 1

You need to add a child group for each field that you want to drill into. I'll start by adding the Product Department.

20151016 SSRS add child group
20151016 SSRS add department

This will automatically add the group to your tablix.

20151016 SSRS tablix part 2

Next, pull up the properties for the row group you just added.

20151016 SSRS child row group properties

Go to Visibility and choose Hide. Check the box labeled "Display can be toggled by this report item:" and choose the row that belongs to the parent group.

20151016 SSRS child group by visibility

Next, I add the Product Number group and change its properties the same way, this time making the Department row the toggling report item. Then I format the report to make it look polished:

20151016 SSRS final design

After deploying, you can see there are plus signs before any unexpanded Classes and Departments, and minus signs before any that I have expanded.

20151016 SSRS final report

I hope this has provided a simple example of how to create drilldown reports in SSRS.
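Under the covers, the dialog settings described above are just stored in the report's RDL. A minimal sketch of what the Visibility element on a child row group might look like (element names come from the RDL schema; the group and textbox names here, like textbox_ProductDepartment, are hypothetical and would match whatever your report actually contains):

```xml
<TablixMember>
  <Group Name="ProductNumber">
    <GroupExpressions>
      <GroupExpression>=Fields!ProductNumber.Value</GroupExpression>
    </GroupExpressions>
  </Group>
  <Visibility>
    <!-- Start hidden; the parent group's textbox displays the +/- toggle -->
    <Hidden>true</Hidden>
    <ToggleItem>textbox_ProductDepartment</ToggleItem>
  </Visibility>
</TablixMember>
```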

Oracle Ends Standard Edition and Standard Edition One

A Wisconsin fishing license costs $20 a year for any resident age 16 or older. There are discounts for first-time purchasers, kids ages 16 and 17, active members of the armed forces, and senior citizens. Personally, I have no problem paying $20 a year to fish as much as I want. The money goes to support the WI DNR in their efforts to manage and improve our fisheries. Included in this management is fish stocking, which is very important because many lakes have no natural reproduction of some types of fish. It also gives each licensee a stake in the success and beauty of our natural resources. We care more about something when we have to pay for it.

WI fishing license
Oracle has really been getting on my nerves with their licensing lately. Twice already this year I've been harassed in an effort to get us to give them more money. The first time was a periodic license review with the threat of a full audit behind it. Before I had even explained our specific situation, the salesman was already trying to push me into buying Enterprise licenses that would have cost tens of thousands of extra dollars. We are a small shop with a very limited number of Oracle database users, and Enterprise Edition is not needed.
The second time was a response to a change Oracle has made to their Standard Edition licensing model. Why was this change made? As far as I can tell, Oracle executives were sitting around trying to think of ways to squeeze more money out of their customers. Here is an announcement of the change from an official Oracle blog. Basically, they have decided to retire the Standard Edition and Standard Edition One (SE1) license types at the end of the year. Any licenses purchased after that will have to be SE2 licenses, which cost more than SE1 and require us to upgrade from our current database version to the latest version, 12c. There are also additional limitations on the licenses that are designed to push customers toward the more expensive editions. For example, the new hardware limit on CPUs is 2 physical sockets with no more than 16 threads. Having researched these new limitations before my call with them, I was bristling inwardly as they probed me with questions about our current environment. "I just want to understand your environment," the Oracle guy told me. I can just picture how that conversation goes for those who are unaware: "Oh, your database server has four sockets? You're going to need to upgrade to Enterprise Edition or risk being out of compliance." The Oracle guy also implied that they are trying to get into hardware sales as well. When you're trying to nickel and dime me to death for the use of your software, why would I want to get my hardware from you too?
These license concerns are a real shame because the software itself is very good. Having learned to be a DBA on SQL Server, I still prefer Microsoft’s RDBMS, but I can definitely appreciate all Oracle’s RDBMS has to offer. I’m not sure if it’s worth the headache involved in dealing with Oracle’s greed and desire to rule the world.

Restore latest backup and all transaction logs

Last weekend was the 2015 Rhinelander Hodag Muskie Challenge, the first fishing tournament I've ever competed in. It was a cool experience, and although we didn't record a fish, I plan to keep trying. My team was my brother-in-law Jason and me. We came up two days early to prefish the lakes and try to identify a pattern that would work once the tournament started. Jason caught two fish while we were prefishing, a huge 48″ on Thursday and a 42″ on Friday. The bigger one was an enormous fish, and neither words nor the picture below really do it justice.

Jason's 48

It was one of those things where you just had to be there to understand the real size of the fish and the excitement of the moment. Unfortunately, once the tournament started on Saturday morning we only managed to get follows, no biters. The winning team ended up landing three fish, including two legal (40″+) ones. Congrats to them; that is a great accomplishment. It was interesting to see how the weather affected the fishing. We faced high pressure on Saturday and Sunday, and by Sunday it really appeared to have slowed the fish down. About 30 fish were caught during the 10 hours of fishing on Saturday, but only 6 during the 5 hours on Sunday. 36 fish among 80 teams isn't that many, and it helps to realize that even some of the best muskie fishermen out there struggle under tough conditions.

I recently set up an HA passive node for our production SQL Server 2008 R2 instance. Microsoft recommends mirroring at most 10 databases per server because the mirroring process takes up processor threads. I mirrored our 10 most important databases, but in the event of an outage we would probably want some of the other databases to be current on the backup server as well, so I created a script to generate the restore statements for those databases. The script finds the latest full backup and each transaction log backup taken after that full backup. It is important to note that each restore except the last should be WITH NORECOVERY, while the last should be WITH RECOVERY. I prefer very frequent transaction log backups for minimum data loss, so the potentially large number of logs to restore makes the script necessary to bring the databases up to date without spending a ton of time setting up the restores manually. The script places all the restore statements, in the correct order, in the RestoreStatements column; the database_name column is just used for sorting and can be disregarded. This approach requires that you restore msdb to the mirror, and it will only generate statements for transaction log backups taken before that last msdb backup. It also works best if the backups are stored on a network location that is accessible from both the principal and mirror servers.

WITH    latestfullbackups
          AS ( SELECT   bu.database_name ,
                        bu.backup_set_id ,
                        bmf.physical_device_name ,
                        bu.backup_start_date ,
                        ROW_NUMBER() OVER ( PARTITION BY bu.database_name ORDER BY bu.backup_set_id DESC ) rn
               FROM     msdb.dbo.backupset bu
                        INNER JOIN msdb.dbo.backupmediafamily bmf ON bmf.media_set_id = bu.media_set_id
               WHERE    bu.database_name NOT IN ( 'master', 'msdb', 'model', 'anymirroreddbs' ) --add the databases you already mirror
                        AND bmf.device_type <> 7 --exclude virtual devices
                        AND bu.type = 'D' --full backups
             ) ,
        alltlogsafterlastfull
          AS ( SELECT   bu.database_name ,
                        bu.backup_set_id ,
                        bmf.physical_device_name ,
                        ROW_NUMBER() OVER ( PARTITION BY bu.database_name ORDER BY bu.backup_set_id DESC ) rn
               FROM     msdb.dbo.backupset bu
                        INNER JOIN msdb.dbo.backupmediafamily bmf ON bmf.media_set_id = bu.media_set_id
                        INNER JOIN latestfullbackups ON latestfullbackups.database_name = bu.database_name
               WHERE    bmf.device_type <> 7
                        AND bu.type = 'L' --transaction log backups
                        AND bu.backup_start_date > latestfullbackups.backup_start_date
                        AND latestfullbackups.rn = 1
             )
    --Latest full backup - leave the database restoring
    SELECT  'RESTORE DATABASE [' + latestfullbackups.database_name
            + '] FROM DISK = N''' + latestfullbackups.physical_device_name
            + ''' WITH FILE = 1, NORECOVERY, NOUNLOAD, REPLACE, STATS = 5;'
            + CHAR(10) + 'GO' AS RestoreStatements ,
            latestfullbackups.database_name ,
            latestfullbackups.backup_set_id
    FROM    msdb.dbo.backupset bs
            INNER JOIN msdb.dbo.backupmediafamily bmf ON bmf.media_set_id = bs.media_set_id
            INNER JOIN latestfullbackups ON latestfullbackups.backup_set_id = bs.backup_set_id
                                            AND latestfullbackups.database_name = bs.database_name
                                            AND latestfullbackups.rn = 1
    UNION ALL
    --All tlog backups after the latest full except the last one
    SELECT  'RESTORE LOG [' + alltlogsafterlastfull.database_name
            + '] FROM DISK = N''' + alltlogsafterlastfull.physical_device_name
            + ''' WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 5;'
            + CHAR(10) + 'GO' ,
            alltlogsafterlastfull.database_name ,
            alltlogsafterlastfull.backup_set_id
    FROM    alltlogsafterlastfull
    WHERE   alltlogsafterlastfull.rn <> 1
    UNION ALL
    --Last tlog backup - recover the database
    SELECT  'RESTORE LOG [' + alltlogsafterlastfull.database_name
            + '] FROM DISK = N''' + alltlogsafterlastfull.physical_device_name
            + ''' WITH FILE = 1, RECOVERY, NOUNLOAD, STATS = 5;'
            + CHAR(10) + 'GO' ,
            alltlogsafterlastfull.database_name ,
            alltlogsafterlastfull.backup_set_id
    FROM    alltlogsafterlastfull
    ORDER BY 2 , --database_name
            3    --backup_set_id, so the statements come out in restore order
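To make the intent concrete, here is what the generated sequence might look like for a single database with one full backup and two later log backups. The server, share, database, and file names are invented for illustration; the key point is that everything restores WITH NORECOVERY except the final log restore, which recovers the database:

```sql
RESTORE DATABASE [SalesDB] FROM DISK = N'\\backupserver\sql\SalesDB_FULL.bak' WITH FILE = 1, NORECOVERY, NOUNLOAD, REPLACE, STATS = 5;
GO
RESTORE LOG [SalesDB] FROM DISK = N'\\backupserver\sql\SalesDB_log_1.trn' WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 5;
GO
RESTORE LOG [SalesDB] FROM DISK = N'\\backupserver\sql\SalesDB_log_2.trn' WITH FILE = 1, RECOVERY, NOUNLOAD, STATS = 5;
GO
```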