Moving the SQL Server Installation to a Different Drive

Following fishing regulations is very important.  We as a society are called to be responsible stewards of our natural resources, and that includes fish.  Overfishing, poaching, and spreading invasive species can all decimate a lake’s fish population, ruining it for everyone else.  I was disheartened to see a news article this week about a man caught with over 2,500 panfish in his freezer.  The legal limit is 50 per species, so he would have been allowed to possess 150 fish.  Hopefully the story of his guilt will dissuade other poachers, but given his rather light sentence, I doubt that will be the case.

I recently needed to install SQL Server Analysis Services (SSAS) on our test server to begin experimenting with it.  However, the C drive, where SQL Server was installed, had only a few hundred MB of space left.  When installing SSAS on an existing instance of SQL Server, you are forced to use the same drive, and I didn’t have enough space.  I decided to move the existing installation from the C drive to the D drive, which had plenty of available space.

There isn’t any way to move the existing installation, so I was forced to uninstall SQL Server on the C drive, then install it on the D drive.  Here are the steps I followed:

  1. Take a backup of all the databases, just in case.  This is always a good first step when making any significant changes to your environment.
  2. Run the Uninstall Wizard through Windows Control Panel to remove all SQL Server components.
  3. Reinstall SQL Server on the D drive.  I found I had to use an actual ISO to do the install rather than the extracted contents of the ISO.  When I tried to use the extracted contents I kept running into errors about missing .msi files.
  4. Apply any service packs and patches so the installation is at least at the same version as the uninstalled instance.  If you skip this step you will not be able to restore/attach any of your existing databases to the new instance.
  5. At this point I expected to be able to move my existing master database file into the new default data folder, but I found it had disappeared!  The uninstall must have deleted it.
  6. Instead, I started up SQL Server with the -m parameter, added through the SQL Server Advanced Properties in SQL Server Configuration Manager.  This starts SQL Server in single-user mode, which is required before you can restore the master database.
  7. Now restore the last backup of the master database:
    C:\> sqlcmd  
    1> RESTORE DATABASE master FROM DISK = 'Z:\SQLServerBackups\master.bak' WITH REPLACE;  
    2> GO
  8. When the restore is complete the service will stop.  Remove the -m parameter and start SQL back up.
  9. At this point everything came up as expected.  There were a few cleanup tasks to complete before I was finished:
  • Reconfigure Reporting Services.
    • During the install I had chosen to install but not configure so that I could plug into my existing SSRS databases.
  • Configure PowerShell
    • The msdb.dbo.syssubsystems table contains information about PowerShell that SQL Server uses when executing a PS script.  Mine was pointing to a subsystem dll and agent exe that were in the old installation location.  I updated this directly in the table with an UPDATE statement, as sketched below.
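
For reference, here is a minimal sketch of the kind of UPDATE I ran.  The paths below are placeholders (check the current values on your instance and substitute your new installation’s actual paths), and restart SQL Server Agent afterward so it picks up the change.

-- Check where the PowerShell subsystem currently points
SELECT  subsystem ,
        subsystem_dll ,
        agent_exe
FROM    msdb.dbo.syssubsystems
WHERE   subsystem = 'PowerShell';

-- Repoint it at the new installation location (placeholder paths)
UPDATE  msdb.dbo.syssubsystems
SET     subsystem_dll = 'D:\<new install path>\Binn\SQLPOWERSHELLSS.DLL' ,
        agent_exe = 'D:\<new tools path>\SQLPS.exe'
WHERE   subsystem = 'PowerShell';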

Once complete, SQL was ready to use, I had SSAS installed, and I opened up an additional 3 GB of hard drive space on the C drive, relieving the fear of crashing the OS.

Querying Oracle through PowerShell

I’ve been dealing with some back pain this summer/fall that has made fishing uncomfortable.  I’m still able to do it, but when you’re in constant pain, it takes the fun out of doing pretty much everything but lying down.  That, along with the general busyness of the summer and fall, has kept the fishing to a minimum.  However, I did get out for a weekend and found that while musky fishing was difficult, slip-bobber fishing was better.  I’d never slip-bobber fished until this year, but I have found it’s really enjoyable.  You can get the bait right above the bottom at any depth, and that presentation seems to work well when the fish aren’t as aggressive.

This brings me to my second note – I love catching smallmouth bass.  They fight really hard for their size, with deep runs and long battles.  Below is one of two that I caught at dusk on a weedy finger that stuck out into the main basin of the lake.

[Image: late-summer smallmouth bass]

As you’d expect, querying an Oracle database from PowerShell isn’t as easy as querying SQL Server.  PowerShell and SQL Server are both Microsoft technologies, so they play together pretty nicely.  With Oracle, there are a couple of ways to connect to the database: the OLE DB connector or the Oracle Client.  I tried using the OLE DB connector first, but found I was unable to get it going.  So I switched over to the Oracle Client, which was a bit more complicated in the setup, but worked.

In order to use the Oracle Client, you have to install it on your system first.  I will warn you that this is not a minor install.  The installer itself is 600 MB, and you need to make sure you grab the correct version (32 vs 64 bit), depending on your OS.  Once installed you should be ready to go.  Here is the basic pattern for creating the connection and querying data:

#Create connection to Oracle
$OracleConnectionString = "Data Source=OracleDatabaseServer;User Id=dgugg;Password=myPass;Integrated Security=No"
$OracleSchedulerQueryString = "select IMAGE_ID, DESCRIPTION from FMSH_IMAGE WHERE Rownum < 11 ORDER BY IMAGE_ID DESC"
#Load the Oracle Client assembly into this session
[System.Reflection.Assembly]::LoadWithPartialName("System.Data.OracleClient") | Out-Null
$OracleConnection = New-Object System.Data.OracleClient.OracleConnection($OracleConnectionString)
$OracleCommand = New-Object System.Data.OracleClient.OracleCommand($OracleSchedulerQueryString, $OracleConnection)
$OracleConnection.Open()
#Execute the query into a data reader
$OraImages = $OracleCommand.ExecuteReader()
$OraCounter = $OraImages.FieldCount
While ($OraImages.Read()) {
    #Write out each field name and value for the current row
    for ($o = 0; $o -lt $OraCounter; $o++) {
        Write-Output ("$($OraImages.GetName($o)): $($OraImages.GetValue($o))")
    }
}
$OracleConnection.Close()

Here is the gist of the code:

  • Define a connection string to the Oracle database.
  • Define your SQL statement
  • Load the Oracle Client assembly into this instance of PowerShell
  • Define and create your Oracle Connection Object
  • Define and create your Oracle command
  • Open your connection to Oracle
  • Execute your SQL into a recordset
  • Loop through the recordset from beginning to end
    • In each loop, print out the field name and field value

There are a few gotchas to keep in mind:

  1. Don’t end your SQL statement with a semicolon.  I did this and couldn’t figure out why it wasn’t working.  Once I took it out it was smooth sailing.
  2. Don’t forget to escape PowerShell special characters.  I was doing a query from the V$Database view and again ran into problems.  Once I escaped the dollar sign ($) with a backtick (writing v`$database instead of v$database), everything once again started working.

Stay tuned for my final blog in this series where I will use this connection to Oracle as well as the previously posted connection to SQL Server to create a workbook that has daily monitoring of my database health stats.

Coding Challenge – Advent of Code

Although my true passion will always be fishing, I’ve started bow hunting this year. It isn’t quite as fun (although maybe it’ll get better once I actually see some deer), but it’s still a nice way to get out and enjoy nature. Especially during this time of year when it’s too cold to launch a boat but not yet cold enough to ice fish, it’s great to have another way to get out of the house. I’ve found that muskie fishing is probably closer to deer hunting than it is to other types of fishing; you are waiting all day for just the chance to see a muskie/deer, but when you do it’s very exciting. It takes a great deal of thought and preparation to be successful.

I was recently shown a set of challenges called Advent of Code. It’s a fun way to practice coding and keep your skills fresh. I mainly code in SQL, but used to do quite a bit in Access and Excel VBA, and a small amount in C# and PowerShell. I’d like to use this Advent of Code challenge to brush up on my skills and hopefully learn a bit more about PowerShell, which can be a big asset in administering SQL Server databases.
The first challenge is to interpret a set of instructions where parenthesis symbols are used to tell Santa to go up or down in a house; an opening parenthesis “(” tells Santa to go up a floor, and a closing parenthesis “)” tells him to go down a floor. In the two-part question, we need to figure out first what floor he ends up on and second which step first takes him below level 0, into the basement. I solved this one by pasting the instructions, which were about 7,000 characters long, into a cell in Excel and then running some VBA code to get the answers. I got both parts correct on my first try. Here’s the code I used:

'These subs live in the worksheet's code module, so Me refers to the
'sheet holding the puzzle input in cell A1.

'Part 1: walk the instructions and track the net floor
Sub Solver()

    Dim i As Integer
    Dim flr As Integer
    
    For i = 1 To Len(Me.Cells(1, 1).Value)
        If Mid(Me.Cells(1, 1).Value, i, 1) = "(" Then
            flr = flr + 1   'Opening parenthesis: up a floor
        Else
            flr = flr - 1   'Closing parenthesis: down a floor
        End If
    Next i
    
    Debug.Print flr

End Sub

'Part 2: find the first instruction that takes Santa into the basement
Sub Solver2()

    Dim i As Integer
    Dim flr As Integer
    
    For i = 1 To Len(Me.Cells(1, 1).Value)
        If Mid(Me.Cells(1, 1).Value, i, 1) = "(" Then
            flr = flr + 1
        Else
            flr = flr - 1
        End If
        If flr = -1 Then
            Debug.Print i   'Position of the first step below floor 0
            Exit Sub
        End If
    Next i

End Sub

If you’re looking for a fun challenge or a way to grow your coding skills, definitely check out the Advent of Code.

How to Prove Database Concepts

Although it is still late fall, there is usually plenty of snow on the ground by now here in Wisconsin. I say usually because this year is different. We’ve experienced a warm fall and the temperatures outside right now are still in the 40s. It made fall fishing more comfortable but less consistent. The tough thing right now is knowing that these beautiful late fall days can’t be enjoyed while muskie fishing since the season ended December 1st. Most of the lakes in Wisconsin are still open (no ice), but they can only be enjoyed by the walleye and panfishermen. I’ve started bow hunting this year and that is keeping me in the outdoors, but it isn’t nearly as enjoyable as being out on the water, patrolling the depths for an elusive trophy. Fishing will open again on May 7th, which feels like an eternity away.
Disagreements in the workplace are inevitable. They don’t have to be contentious, and they are almost always positive in the long run. They force each side to reinforce their knowledge by proving out their position. If I believe using a 1:1 virtual machine for my database server is a better option, I need to be able to describe why in a factual and eloquent manner. If I believe the resource costs of virtualization aren’t worth the benefits, I’d better be prepared to back that up. I like to make my case by stating my opinion, showing the basis for that opinion in the documentation (MSDN, Technet), and then showing an example.
Recently I had a call with one of our software vendors. There was a disagreement about whether the use of variable length fields would be a benefit to the database – all their tables were using fixed length fields (char instead of varchar). Below is an email I sent them that I felt demonstrated the benefits of variable length fields.

Hi [Recipients]


I felt like you weren’t really convinced by my declaration yesterday on some of the database issues, so I figured I’d prove them out.  I realize it’s probably too late to make any changes in V14, but my hope is that you will consider these suggestions when building out V15.  I’ll start with the issue with fixed length fields.


As I explained yesterday, the slowest operation for any relational database management system is Disk I/O.  Once the data is read, it goes into the buffer pool and can be read, updated, inserted, or deleted in memory, which is very fast.  Even high-performance storage such as SSDs is slower than operations in memory.  With this in mind, one key way to improve performance in a database is to minimize the amount of information being read from disk.  This can be done by not storing empty spaces in fixed length fields and instead using variable length fields.  Microsoft’s guidance on when to use variable versus fixed length fields is:

  • Use char when the sizes of the column data entries are consistent.
  • Use varchar when the sizes of the column data entries vary considerably.
  • Use varchar(max) when the sizes of the column data entries vary considerably, and the size might exceed 8,000 bytes.

So as an example, the miscX fields, which are the company defined fields, will not only vary wildly but also often be blank when they are not being used.  This would be a perfect candidate for variable length fields since instead of storing 40 empty spaces of data which would require 40 bytes of disk space, you’d be storing a zero length string, which would require 2 bytes.  Variable length fields can cause fragmentation problems when updates to the fields cause the row to exceed 8060 bytes (your extremely wide tables exacerbate this problem).  When this happens one or more of the variable length fields are pushed off the page onto separate data pages that are normally used for LOB data.  However, SQL Server will stop you from creating tables with fixed length rows greater than 8060 bytes anyway, so you can easily avoid this scenario by enforcing your existing field size limits.  We also talked about the large object data types yesterday, such as the deprecated text fields that are in your database.  Text fields are automatically put on pages reserved for LOB data whether they contain any value or not.  Varchar(max) fields are different.  Once the data in these fields exceeds 8,000 bytes, it will automatically be pushed off the data page onto a separate type of page used to store large data objects.
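
To make this concrete, here is a quick throwaway demo (a temp table, not part of your schema) that shows the storage difference directly with DATALENGTH:

CREATE TABLE #FixedVsVariable
(
    FixedField char(40) NOT NULL ,
    VariableField varchar(40) NOT NULL
);

INSERT INTO #FixedVsVariable VALUES ('Red', 'Red'), ('', '');

-- char pads to the declared length, so FixedBytes is always 40;
-- varchar stores only the characters present (3 and 0 here), plus a
-- small per-column offset overhead in the row itself
SELECT  DATALENGTH(FixedField) AS FixedBytes ,
        DATALENGTH(VariableField) AS VariableBytes
FROM    #FixedVsVariable;

DROP TABLE #FixedVsVariable;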


So let’s take a look at some of the existing performance metrics, and see how they would improve based on the suggested changes.  I’ll use the tablename table as the example, but this would apply to any tables with fixed length fields, and especially to those tables with miscX fields.

Here is the current size of the tablename table, using Query 1:

[Screenshot: current size of the tablename table]

You can see that we are averaging just under two rows per page.  We are also using 849 pages for text data, most of which are blank in our database.

Here are the stats for pulling all data from the table.  I ran it three times to account for data entering the buffer pool.

[Screenshot: trace results for the original table]

CPU is averaging 344 ms, with about 93K reads and a duration of over a second and a half.


Next I’ll create a new version of the table using variable length fields instead of fixed length.  The DDL is in Query 2.  It is important to note that I am also replacing the text fields with varchar(max).  This allows the smaller LOB data to sit on the same page as the row data until it becomes too big to fit.  I load it with everything in tablename (minus the useless spaces) in Query 3.
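
Incidentally, keeping smaller varchar(max) values in-row is the default behavior, and you can verify or toggle it with sp_tableoption.  A quick sketch:

-- 0 (the default) stores varchar(max) values in-row until they outgrow the page
EXEC sp_tableoption 'dbo.tablename_improved', 'large value types out of row', 0;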

Here is the current size of the tablename_improved table, again using the logic from Query 1:

[Screenshot: size of the tablename_improved table]

You can see the size has gone down by 80% on the clustered index and 66% on the nonclustered indexes.  We’re also no longer using any LOB pages.


Now look at the improvement in our query performance:

[Screenshot: trace results for the improved table]

The CPU has decreased by about 50%, the reads have decreased by 99.5% (mostly due to the text -> varchar(max) change), and the duration has decreased by 66%, a great improvement.

Because every data page that is read must first be pulled into the buffer pool, and because the buffer pool is limited by the amount of memory on the server and shared by all databases on the server, pulling all this extra blank data into memory has a negative performance impact not only on your PLM application, but on all applications that are using the SQL Server instance.
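
If you want to see that sharing in action, this standard DMV query shows how much of the buffer pool each database is currently holding:

SELECT  DB_NAME(database_id) AS DatabaseName ,
        COUNT(*) * 8 / 1024 AS BufferPoolMB
FROM    sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY BufferPoolMB DESC;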

I hope this has shown you the value of using variable length fields instead of fixed length fields.

Thank you


Appendix:

Query 1

SELECT  OBJECT_NAME(i.OBJECT_ID) AS 'tableName' ,
        i.name AS 'indexName' ,
        i.type_desc ,
        MAX(p.partition_number) AS 'partitions' ,
        SUM(p.rows) AS 'rows' ,
        SUM(au.data_pages) AS 'dataPages' ,
        CAST(SUM(p.rows)*1.0 / SUM(au.data_pages) AS DECIMAL(10,1)) AS 'rowsPerPage',
              SUM(ps.lob_reserved_page_count) AS 'LOBPages'
FROM    sys.indexes AS i
        JOIN sys.partitions AS p ON i.OBJECT_ID = p.OBJECT_ID
                                    AND i.index_id = p.index_id
              JOIN sys.dm_db_partition_stats ps ON ps.index_id = i.index_id AND ps.OBJECT_ID = i.OBJECT_ID AND ps.partition_id = p.partition_id
        JOIN sys.allocation_units AS au ON p.hobt_id = au.container_id
WHERE   OBJECT_NAME(i.OBJECT_ID) = 'tablename_improved'
        AND au.type_desc = 'IN_ROW_DATA'
GROUP BY OBJECT_NAME(i.OBJECT_ID) ,
        i.name ,
        i.type_desc
ORDER BY rowsPerPage;


Query 2

CREATE TABLE [dbo].[tablename_improved](
       [season] [varchar](15) NOT NULL,
       [style] [varchar](30) NOT NULL,
       [designmemo] [varchar](MAX) NOT NULL,
       [designimg] [varchar](50) NOT NULL,
       [id_style] [int] IDENTITY(1,1) NOT NULL,
       [expdate] [datetime] NOT NULL,
       [origexpdate] [datetime] NOT NULL,
       [status] [varchar](50) NOT NULL,
       [cycledays] [int] NOT NULL,
       [laststep] [varchar](50) NOT NULL,
       [dayslate] [int] NOT NULL,
       [inidate] [datetime] NOT NULL,
       [stgscode] [varchar](50) NOT NULL,
       [completed] [bit] NOT NULL,
       [datecompleted] [datetime] NOT NULL,
       [Revision] [varchar](10) NOT NULL,
       [Desce] [varchar](50) NOT NULL,
       [Publish] [bit] NOT NULL,
       [DesnImage] [varchar](MAX) NOT NULL,
       [DistSpecs] [bit] NOT NULL,
       [LastDate] [datetime] NOT NULL,
       [WasRevised] [bit] NOT NULL,
       [misc1] [varchar](30) NOT NULL,
       [misc2] [varchar](30) NOT NULL,
       [misc3] [varchar](30) NOT NULL,
       [misc4] [varchar](30) NOT NULL,
       [misc5] [varchar](30) NOT NULL,
       [misc6] [varchar](30) NOT NULL,
       [misc7] [varchar](30) NOT NULL,
       [misc8] [varchar](30) NOT NULL,
       [misc9] [varchar](30) NOT NULL,
       [misc10] [varchar](30) NOT NULL,
       [misc11] [varchar](30) NOT NULL,
       [misc12] [varchar](30) NOT NULL,
       [misc13] [varchar](30) NOT NULL,
       [misc14] [varchar](30) NOT NULL,
       [misc15] [varchar](30) NOT NULL,
       [misc16] [varchar](30) NOT NULL,
       [misc17] [varchar](30) NOT NULL,
       [misc18] [varchar](30) NOT NULL,
       [misc19] [varchar](30) NOT NULL,
       [misc20] [varchar](30) NOT NULL,
       [prototype] [bit] NOT NULL,
       [startdate] [datetime] NULL,
       [wholecost] [numeric](14, 4) NOT NULL,
       [WholePrice] [numeric](14, 4) NOT NULL,
       [mainimage] [varchar](10) NOT NULL,
       [scale] [varchar](15) NOT NULL,
       [ManufPrice] [numeric](14, 4) NOT NULL,
       [RetailPrice] [numeric](14, 4) NOT NULL,
       [WholeMargin] [numeric](14, 4) NOT NULL,
       [RetailMargin] [numeric](14, 4) NOT NULL,
       [manufcost] [numeric](14, 4) NOT NULL,
       [manufmargin] [numeric](14, 4) NOT NULL,
       [retailcost] [numeric](14, 4) NOT NULL,
       [Forecast] [numeric](10, 0) NOT NULL,
       [unit] [varchar](4) NOT NULL,
       [gartype] [varchar](5) NOT NULL,
       [PDMisc1] [varchar](30) NOT NULL,
       [PDMisc2] [varchar](30) NOT NULL,
       [PDMisc3] [varchar](30) NOT NULL,
       [stylestatus] [varchar](15) NOT NULL,
       [hasColorWay] [bit] NOT NULL,
       [chartcode] [varchar](10) NOT NULL,
       [chartDescription] [varchar](50) NOT NULL,
       [LabPacknotes] [varchar](MAX) NULL,
       [Constnotes] [varchar](MAX) NULL,
       [colwnotes] [varchar](MAX) NULL,
       [labnotes] [varchar](MAX) NULL,
       [ProtoSeason] [varchar](15) NOT NULL,
       [ProtoStyle] [varchar](30) NOT NULL,
       [groupName] [varchar](20) NOT NULL,
       [sizespecs] [varchar](MAX) NULL,
       [colorspecs] [varchar](MAX) NULL,
       [cost] [numeric](8, 2) NOT NULL,
       [saleprice] [numeric](8, 2) NOT NULL,
       [ldpcost] [numeric](10, 4) NOT NULL,
       [retailconfirmed] [bit] NOT NULL,
       [costconfirmed] [bit] NOT NULL,
       [plannedunits] [numeric](11, 0) NOT NULL,
       [id_calendar] [int] NOT NULL,
       [morefields] [varchar](MAX) NULL,
       [wboardMaster] [numeric](1, 0) NOT NULL,
       [optLocked] [numeric](1, 0) NOT NULL,
       [RHReady] [bit] NOT NULL,
       [lastexported] [datetime] NOT NULL,
       [lastedited] [datetime] NOT NULL,
       [imgfront] [varchar](10) NOT NULL,
       [imgBack] [varchar](10) NOT NULL,
       [imgSide] [varchar](10) NOT NULL,
       [imgDetail] [varchar](10) NOT NULL,
       [Frontxt] [varchar](30) NOT NULL,
       [backtxt] [varchar](30) NOT NULL,
       [sidetxt] [varchar](30)  NOT NULL,
       [detailtxt] [varchar](30) NOT NULL,
       [misc21] [varchar](30) NOT NULL,
       [misc22] [varchar](30) NOT NULL,
       [misc23] [varchar](30) NOT NULL,
       [misc24] [varchar](30) NOT NULL,
       [misc25] [varchar](30) NOT NULL,
       [misc26] [varchar](30) NOT NULL,
       [misc27] [varchar](30) NOT NULL,
       [misc28] [varchar](30) NOT NULL,
       [misc29] [varchar](30) NOT NULL,
       [misc30] [varchar](30) NOT NULL,
       [misc31] [varchar](30) NOT NULL,
       [misc32] [varchar](30) NOT NULL,
       [misc33] [varchar](30) NOT NULL,
       [misc34] [varchar](30) NOT NULL,
       [misc35] [varchar](30) NOT NULL,
       [misc36] [varchar](30) NOT NULL,
       [misc37] [varchar](30) NOT NULL,
       [misc38] [varchar](30) NOT NULL,
       [misc39] [varchar](30) NOT NULL,
       [misc40] [varchar](30) NOT NULL,
       [descrev] [varchar](10) NOT NULL,
       [batchNo] [varchar](10) NOT NULL,
       [collab_issues] [int] NOT NULL,
       [prob] [varchar](250) NOT NULL,
       [Attachments] [int] NOT NULL,
       [CreatedDate] [datetime] NOT NULL,
       [Createdby] [varchar](25) NOT NULL,
       [Editedby] [varchar](25) NOT NULL,
       [LastTabEdited] [varchar](25) NOT NULL,
       [CreatedOn] [datetime] NOT NULL,
       [ModifiedBy] [varchar](25) NOT NULL,
       [ModifiedOn] [datetime] NOT NULL,
       [RowVersion] [int] NOT NULL,
       [hts] [varchar](20) NOT NULL,
       [TargetFOB] [money] NOT NULL,
       [QuotedFOB] [money] NOT NULL,
       [ActualFOB] [money] NOT NULL,
       [CurrentFOB] [money] NOT NULL,
       [LandedCost] [money] NOT NULL,
       [xTargetFOB] [money] NOT NULL,
       [xQuotedFOB] [money] NOT NULL,
       [xActualFOB] [money] NOT NULL,
       [xCurrentFOB] [money] NOT NULL,
       [xLandedCost] [money] NOT NULL,
       [xRetailPrice] [money] NOT NULL,
       [xWholePrice] [money] NOT NULL,
       [xWholeMargin] [money] NOT NULL,
       [isset] [bit] NOT NULL,
       [sidemeasurement] [bit] NOT NULL,
       [LastSeason] [varchar](15) NULL,
       [LastStyle] [varchar](30) NULL,
       [lineplanorder] [numeric](6, 0) NOT NULL,
CONSTRAINT [PK_tablename_improved] PRIMARY KEY CLUSTERED 
(
       [id_style] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY],
CONSTRAINT [Design Styles Improved: Style/Season Duplicated] UNIQUE NONCLUSTERED 
(
       [style] ASC,
       [season] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

CREATE NONCLUSTERED INDEX [tablenameseastyle_improved] ON [dbo].[tablename_improved]
(
[season] ASC,
[style] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY];
GO

CREATE NONCLUSTERED INDEX [ix_tablenameStyleSeasonID_calendar_imp] ON [dbo].[tablename_improved]
(
[style] ASC,
[season] ASC
)
INCLUDE (     [id_calendar]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY];
GO

I didn’t paste in all the constraints, but I did add them to the table.

Query 3

SET IDENTITY_INSERT dbo.tablename_improved ON;

INSERT  INTO eSPSv14Prod.dbo.tablename_improved
( season ,
style ,
designmemo ,
designimg ,
id_style ,
expdate ,
origexpdate ,
status ,
cycledays ,
laststep ,
dayslate ,
inidate ,
stgscode ,
completed ,
datecompleted ,
Revision ,
Desce ,
Publish ,
DesnImage ,
DistSpecs ,
LastDate ,
WasRevised ,
misc1 ,
misc2 ,
misc3 ,
misc4 ,
misc5 ,
misc6 ,
misc7 ,
misc8 ,
misc9 ,
misc10 ,
misc11 ,
misc12 ,
misc13 ,
misc14 ,
misc15 ,
misc16 ,
misc17 ,
misc18 ,
misc19 ,
misc20 ,
prototype ,
startdate ,
wholecost ,
WholePrice ,
mainimage ,
scale ,
ManufPrice ,
RetailPrice ,
WholeMargin ,
RetailMargin ,
manufcost ,
manufmargin ,
retailcost ,
Forecast ,
unit ,
gartype ,
PDmisc1 ,
PDMisc2 ,
PDMisc3 ,
stylestatus ,
hasColorWay ,
CHARtcode ,
CHARtDescription ,
LabPacknotes ,
Constnotes ,
colwnotes ,
labnotes ,
ProtoSeason ,
ProtoStyle ,
groupName ,
sizespecs ,
colorspecs ,
cost ,
saleprice ,
ldpcost ,
retailconfirmed ,
costconfirmed ,
plannedunits ,
id_calendar ,
morefields ,
wboardMaster ,
optLocked ,
RHReady ,
lastexported ,
lastedited ,
imgfront ,
imgBack ,
imgSide ,
imgDetail ,
Frontxt ,
backtxt ,
sidetxt ,
detailtxt ,
misc21 ,
misc22 ,
misc23 ,
misc24 ,
misc25 ,
misc26 ,
misc27 ,
misc28 ,
misc29 ,
misc30 ,
misc31 ,
misc32 ,
misc33 ,
misc34 ,
misc35 ,
misc36 ,
misc37 ,
misc38 ,
misc39 ,
misc40 ,
descrev ,
batchNo ,
collab_issues ,
prob ,
Attachments ,
CreatedDate ,
CreatedBy ,
Editedby ,
LastTabEdited ,
CreatedOn ,
ModifiedBy ,
ModifiedOn ,
RowVersion ,
hts ,
TargetFOB ,
QuotedFOB ,
ActualFOB ,
CurrentFOB ,
LandedCost ,
xTargetFOB ,
xQuotedFOB ,
xActualFOB ,
xCurrentFOB ,
xLandedCost ,
xRetailPrice ,
xWholePrice ,
xWholeMargin ,
isset ,
sidemeasurement ,
LastSeason ,
LastStyle ,
lineplanorder
)
SELECT  RTRIM(LTRIM(season)) ,
RTRIM(LTRIM(style)) ,
RTRIM(LTRIM(CAST(designmemo AS VARCHAR(MAX)))) ,
RTRIM(LTRIM(designimg)) ,
RTRIM(LTRIM(id_style)) ,
RTRIM(LTRIM(expdate)) ,
RTRIM(LTRIM(origexpdate)) ,
RTRIM(LTRIM(status)) ,
RTRIM(LTRIM(cycledays)) ,
RTRIM(LTRIM(laststep)) ,
RTRIM(LTRIM(dayslate)) ,
RTRIM(LTRIM(inidate)) ,
RTRIM(LTRIM(stgscode)) ,
RTRIM(LTRIM(completed)) ,
RTRIM(LTRIM(datecompleted)) ,
RTRIM(LTRIM(Revision)) ,
RTRIM(LTRIM(Desce)) ,
RTRIM(LTRIM(Publish)) ,
RTRIM(LTRIM(CAST(DesnImage AS VARCHAR(MAX)))) ,
RTRIM(LTRIM(DistSpecs)) ,
RTRIM(LTRIM(LastDate)) ,
RTRIM(LTRIM(WasRevised)) ,
RTRIM(LTRIM(misc1)) ,
RTRIM(LTRIM(misc2)) ,
RTRIM(LTRIM(misc3)) ,
RTRIM(LTRIM(misc4)) ,
RTRIM(LTRIM(misc5)) ,
RTRIM(LTRIM(misc6)) ,
RTRIM(LTRIM(misc7)) ,
RTRIM(LTRIM(misc8)) ,
RTRIM(LTRIM(misc9)) ,
RTRIM(LTRIM(misc10)) ,
RTRIM(LTRIM(misc11)) ,
RTRIM(LTRIM(misc12)) ,
RTRIM(LTRIM(misc13)) ,
RTRIM(LTRIM(misc14)) ,
RTRIM(LTRIM(misc15)) ,
RTRIM(LTRIM(misc16)) ,
RTRIM(LTRIM(misc17)) ,
RTRIM(LTRIM(misc18)) ,
RTRIM(LTRIM(misc19)) ,
RTRIM(LTRIM(misc20)) ,
RTRIM(LTRIM(prototype)) ,
RTRIM(LTRIM(startdate)) ,
RTRIM(LTRIM(wholecost)) ,
RTRIM(LTRIM(WholePrice)) ,
RTRIM(LTRIM(mainimage)) ,
RTRIM(LTRIM(scale)) ,
RTRIM(LTRIM(ManufPrice)) ,
RTRIM(LTRIM(RetailPrice)) ,
RTRIM(LTRIM(WholeMargin)) ,
RTRIM(LTRIM(RetailMargin)) ,
RTRIM(LTRIM(manufcost)) ,
RTRIM(LTRIM(manufmargin)) ,
RTRIM(LTRIM(retailcost)) ,
RTRIM(LTRIM(Forecast)) ,
RTRIM(LTRIM(unit)) ,
RTRIM(LTRIM(gartype)) ,
RTRIM(LTRIM(PDMisc1)) ,
RTRIM(LTRIM(PDMisc2)) ,
RTRIM(LTRIM(PDMisc3)) ,
RTRIM(LTRIM(stylestatus)) ,
RTRIM(LTRIM(hasColorWay)) ,
RTRIM(LTRIM(CHARtcode)) ,
RTRIM(LTRIM(CHARtDescription)) ,
RTRIM(LTRIM(CAST(LabPacknotes AS VARCHAR(MAX)))) ,
RTRIM(LTRIM(CAST(Constnotes AS VARCHAR(MAX)))) ,
RTRIM(LTRIM(CAST(colwnotes AS VARCHAR(MAX)))) ,
RTRIM(LTRIM(CAST(labnotes AS VARCHAR(MAX)))) ,
RTRIM(LTRIM(ProtoSeason)) ,
RTRIM(LTRIM(ProtoStyle)) ,
RTRIM(LTRIM(groupName)) ,
RTRIM(LTRIM(CAST(sizespecs AS VARCHAR(MAX)))) ,
RTRIM(LTRIM(CAST(colorspecs AS VARCHAR(MAX)))) ,
RTRIM(LTRIM(cost)) ,
RTRIM(LTRIM(saleprice)) ,
RTRIM(LTRIM(ldpcost)) ,
RTRIM(LTRIM(retailconfirmed)) ,
RTRIM(LTRIM(costconfirmed)) ,
RTRIM(LTRIM(plannedunits)) ,
RTRIM(LTRIM(id_calendar)) ,
RTRIM(LTRIM(CAST(morefields AS VARCHAR(MAX)))) ,
RTRIM(LTRIM(wboardMaster)) ,
RTRIM(LTRIM(optLocked)) ,
RTRIM(LTRIM(RHReady)) ,
RTRIM(LTRIM(lastexported)) ,
RTRIM(LTRIM(lastedited)) ,
RTRIM(LTRIM(imgfront)) ,
RTRIM(LTRIM(imgBack)) ,
RTRIM(LTRIM(imgSide)) ,
RTRIM(LTRIM(imgDetail)) ,
RTRIM(LTRIM(Frontxt)) ,
RTRIM(LTRIM(backtxt)) ,
RTRIM(LTRIM(sidetxt)) ,
RTRIM(LTRIM(detailtxt)) ,
RTRIM(LTRIM(misc21)) ,
RTRIM(LTRIM(misc22)) ,
RTRIM(LTRIM(misc23)) ,
RTRIM(LTRIM(misc24)) ,
RTRIM(LTRIM(misc25)) ,
RTRIM(LTRIM(misc26)) ,
RTRIM(LTRIM(misc27)) ,
RTRIM(LTRIM(misc28)) ,
RTRIM(LTRIM(misc29)) ,
RTRIM(LTRIM(misc30)) ,
RTRIM(LTRIM(misc31)) ,
RTRIM(LTRIM(misc32)) ,
RTRIM(LTRIM(misc33)) ,
RTRIM(LTRIM(misc34)) ,
RTRIM(LTRIM(misc35)) ,
RTRIM(LTRIM(misc36)) ,
RTRIM(LTRIM(misc37)) ,
RTRIM(LTRIM(misc38)) ,
RTRIM(LTRIM(misc39)) ,
RTRIM(LTRIM(misc40)) ,
RTRIM(LTRIM(descrev)) ,
RTRIM(LTRIM(batchNo)) ,
RTRIM(LTRIM(collab_issues)) ,
RTRIM(LTRIM(prob)) ,
RTRIM(LTRIM(Attachments)) ,
RTRIM(LTRIM(CreatedDate)) ,
RTRIM(LTRIM(Createdby)) ,
RTRIM(LTRIM(Editedby)) ,
RTRIM(LTRIM(LastTabEdited)) ,
RTRIM(LTRIM(CreatedOn)) ,
RTRIM(LTRIM(ModifiedBy)) ,
RTRIM(LTRIM(ModifiedOn)) ,
RTRIM(LTRIM(RowVersion)) ,
RTRIM(LTRIM(hts)) ,
RTRIM(LTRIM(TargetFOB)) ,
RTRIM(LTRIM(QuotedFOB)) ,
RTRIM(LTRIM(ActualFOB)) ,
RTRIM(LTRIM(CurrentFOB)) ,
RTRIM(LTRIM(LandedCost)) ,
RTRIM(LTRIM(xTargetFOB)) ,
RTRIM(LTRIM(xQuotedFOB)) ,
RTRIM(LTRIM(xActualFOB)) ,
RTRIM(LTRIM(xCurrentFOB)) ,
RTRIM(LTRIM(xLandedCost)) ,
RTRIM(LTRIM(xRetailPrice)) ,
RTRIM(LTRIM(xWholePrice)) ,
RTRIM(LTRIM(xWholeMargin)) ,
RTRIM(LTRIM(isset)) ,
RTRIM(LTRIM(sidemeasurement)) ,
RTRIM(LTRIM(LastSeason)) ,
RTRIM(LTRIM(LastStyle)) ,
RTRIM(LTRIM(lineplanorder))
FROM    eSPSv14Prod.dbo.tablename;

SET IDENTITY_INSERT dbo.tablename_improved OFF;

Note: the trim functions can’t be used on text fields, but they can be used on varchar(max), which is the preferred LOB text data type (hence the CASTs in the SELECT above).

Hardware Headaches

Beginning this year, I noticed my muskie reel running very rough. Instead of a smooth, quiet spin, you could hear the gears pushing each other as I cranked on the handle. Even worse, the reel would often skip, or reverse, when I would be yanking with the rod at the same time as I was pulling in line by turning the handle. I realized this was not going to go away or fix itself, so I sent the reel in to Badger Reel Repair. They were able to fix the reel, but they had to replace many of the interior parts. I was informed rather directly that my current reel, an Abu Garcia Ambassadeur 6600 C4, was not built to handle the type of fishing that I do. Ripping big rubber baits such as bulldawgs as well as large bucktails like twin 10s will wreak havoc on my reel. As a result, I’m in the market for a new reel. It’s kind of disappointing because my C4 is a $140 reel, and I’d expect that to be able to stand up to anything, but I guess that is not the case. My prime choices for a new reel so far are the Shimano Calcutta 400B, the Abu Garcia Revo Toro NaCl, and the new Abu Garcia Revo Toro S, which doesn’t come out until September. Although it will stink to shell out $200 – $300 on a new reel, it will be fun to use some new equipment, and to compare its performance to that of my current reel.

[Image: Shimano Calcutta 400B]
It can be pretty frustrating when equipment (or hardware) doesn’t work the way it should. During our recent server upgrade, I put forth great effort to ensure everything would go smoothly. I created a set of procedures and practiced those procedures multiple times. I closely inspected the current server to find out what needed to be migrated and what could be left and forgotten. I did everything I could think of to make this a success and a positive experience for the business.
The day of the migration went smoothly. There were a few minor hiccups, the worst of them involving database mail, but overall everything went fine. I did the upgrade on a Saturday, and everything seemed gravy into the next week. Then on Wednesday night, the new SQL Server errored out. I checked the logs and they said the data and log files were inaccessible. I went to check for the files and found the drives were gone! A reboot of the server brought them back, and after running DBCC CHECKDB on all databases I was able to tell we were back online. A check of the Windows Event Logs showed one error at that time: “The driver detected a controller error on \Device\RaidPort5”. This isn’t the type of error that most DBAs can troubleshoot. In fact, it eventually took several IBM techs to get to the bottom of the issue.
We tried updating some firmware, but the issue occurred again the following night. This time our system administrator vendor brought in an IBM tech to troubleshoot. After multiple hours on hold with IBM support, the IBM tech found the RAID card to be loose in the server node. He clipped it back in tightly and we gave it another go. That night, the issue occurred yet again. At this point, the disruptions to our production environment were just too great. Later that evening, the Friday after the initial upgrade, we failed back over to our old database server. The issue is still ongoing, but we are hopeful that a replacement RAID controller will fix it.
One frustrating thing is that the issue didn’t really happen for the month and a half we had the new server in standby, and it hasn’t happened since we took it out of production. We don’t really know how to tell if the issue is fixed without putting it back into production, where it causes severe problems when it bombs. The other frustration is how this incident affects other people’s impression of my work. I tried very hard to make the process go smoothly, but in the end I was derailed by something that was never in my control. All the users know is that I upgraded the server, and suddenly we are having system outages. I wish there were a lesson here, but in retrospect I can’t think of anything I could have done differently. The only way we may have been able to catch this is if we had been able to simulate a production-type load on the new server before we put it into production (which we don’t have the capability to do at this small employer), and even then, the failures occurred off hours when the load was very minimal. We may not have found it regardless.

Query Completion Time Estimate

Thunderstorms can be a dangerous temptation to a fisherman. Like King David gazing upon Bathsheba as she bathed, a fisherman looks toward the ominous, incoming clouds and sees opportunity. When a storm rolls into an area, it causes a big drop in barometric pressure. This is the weight of the atmosphere pushing down on everything, including the water. When that weight drops quickly, it causes a huge rise in fish activity. I’ve caught some great fish right before a storm strikes. I can remember being right outside a bay on a lake up north as a storm rolled in. We hadn’t caught much all day, but a wall of thick black clouds was approaching quickly. I managed to hook into a nice-sized walleye using a Lindy Rig, but lost it right before I could get it in the net. After that we had to get off the water. The key is to be smart about the situation. The lightning that accompanies these storms can be fatal, and out in the middle of a lake is not a good place to be. Fishing right before a storm is a balancing act of staying long enough to hook into some fish but leaving before conditions become dangerous.
[Image: northern caught ahead of a storm]
Similarly, as Database Administrators we are often forced to perform tasks within some tight windows. Software updates can involve both schema and data changes that need to be deployed (and possibly rolled back, if the software update is unsuccessful) within a short timeframe. Moving databases from one location to another is another task that may need to be done within a small window of time. One very interesting challenge that I encountered during my first few months as a DBA was a significant table change on a very large production table. This table held over a billion rows of detail records, and had an integer for the primary key/identity field. A little less than a month before it happened, we noticed we were running out of integer values for this field. We needed to change the field to a bigint, but this change required a significant outage.
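
If you’re curious whether any of your own identity columns are creeping toward the same cliff, a quick check looks like this (the table name is illustrative):

-- Compare the current identity value to the int ceiling
SELECT  IDENT_CURRENT('dbo.DetailRecords') AS CurrentIdentity ,
        2147483647 - CAST(IDENT_CURRENT('dbo.DetailRecords') AS bigint) AS ValuesRemaining;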

It is helpful to have a way to measure progress when performing changes that take a long time. I found this query which has been helpful in estimating the time required to do some longer running tasks. I’ve found it works well for database backups and restores.

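-- Note: percent_complete is only populated for certain operations,
-- such as BACKUP, RESTORE, DBCC checks, shrinks, and rollbacks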
SELECT  percent_complete ,
        CONVERT(NUMERIC(10, 2), estimated_completion_time / 1000.0 / 60) AS Estimated_Minutes_Remaining ,
        start_time ,
        session_id ,
        status ,
        command ,
        sql_handle ,
        database_id ,
        blocking_session_id
FROM    sys.dm_exec_requests
WHERE   command <> 'TASK MANAGER'
        AND status <> 'background'
ORDER BY 1 DESC

Sending Alerts when Reporting Services Jobs Fail

There’s a lot of gear to remember when you head out for a fishing trip. You need your rods/reels, lures, live bait, net, fishing license, sunscreen, bug spray, hat, water, snacks, beer, and one of the most important things to remember – put the plug in the boat before you launch! A recent fishing trip to Lake Winnebago got off to a rocky start. First, I did forget to put the plug in. Once I noticed water pouring in I quickly screwed in the plug and got the bilge pump running. Next, after we motored out about a half mile to an offshore reef I looked in the cooler only to discover that I’d left the worms at home. We had to motor all the way back and swing by a nearby gas station to pick up some more.
It can be frustrating when we are expecting something to be there, but it’s not. We have several SQL Server Reporting Services (SSRS) reports that are scheduled to run at certain times throughout the week. These reports are sent to internal users for making everyday business decisions. Unfortunately, if the reports fail for any reason, they are simply not sent. I wanted to make sure everyone was getting their reports, so I set up a SQL Agent job to alert me to any failures. I was able to take advantage of the ReportDB.dbo.Subscriptions table to look for statuses that weren’t “Mail sent to” or “New Subscrip”.

SELECT  c.Name ,
        S.LastRunTime ,
        S.LastStatus ,
        S.Description ,
        c.Path
FROM    LW_ReportDB.dbo.Subscriptions S
        LEFT OUTER JOIN LW_ReportDB.dbo.Catalog c ON c.ItemID = S.Report_OID
WHERE   LEFT(S.LastStatus, 12) != 'Mail sent to'
        AND LEFT(S.LastStatus, 12) != 'New Subscrip' 

This query will show any scheduled subscriptions that didn’t run as expected. Using the stored procedure I created to send emails in HTML table format, I run the query like this:

EXEC DBMaint.dbo.SendDBEmailwithTabularQuery 
   @qSELECT = N'SELECT', 
    @fieldlist = N'C.Name|S.LastRunTime|S.LastStatus|S.Description|C.Path', 
    @qFROM = N'FROM LW_ReportDB.dbo.Subscriptions AS S
				LEFT OUTER JOIN LW_ReportDB.dbo.[Catalog] AS C
				ON C.ItemID = S.Report_OID',
    @qWHERE = N'WHERE LEFT (S.LastStatus, 12) != ''Mail sent to''
				AND LEFT (S.LastStatus, 12) != ''New Subscrip''',
    @recipients = N'dgugg@domain.com', 
    @subject = N'SSRS Reports that have failed'

And I get results like this:
[Screenshot: SSRS report failure notification]
The last step is just to set up the query in a SQL Agent job to run each morning. Now when I get in each day I can investigate any failures before the report user has a chance to even notice it failed.
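
If it helps, here is a minimal sketch of that Agent job setup.  The job name and schedule are illustrative, and the step command is just the EXEC shown above:

USE msdb;
GO

EXEC dbo.sp_add_job @job_name = N'Alert - Failed SSRS Subscriptions';

EXEC dbo.sp_add_jobstep
    @job_name = N'Alert - Failed SSRS Subscriptions',
    @step_name = N'Email failed subscription list',
    @subsystem = N'TSQL',
    @command = N'EXEC DBMaint.dbo.SendDBEmailwithTabularQuery /* parameters as shown above */';

-- freq_type 4 with freq_interval 1 = run every day; start time is 6:00 AM
EXEC dbo.sp_add_jobschedule
    @job_name = N'Alert - Failed SSRS Subscriptions',
    @name = N'Daily Morning Check',
    @freq_type = 4,
    @freq_interval = 1,
    @active_start_time = 060000;

-- Register the job on the local server
EXEC dbo.sp_add_jobserver @job_name = N'Alert - Failed SSRS Subscriptions';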