Index a Computed Column to Improve Query Performance

This summer started out with a below-average fishing volume. The birth of my fourth child in June used up much of the time I would normally spend fishing. As the summer grew late, I was able to get out a bit more, but it’s mostly been a small lake near my house. I’ve only gotten down to Pewaukee to muskie fish once, though I am happy to say I did have some luck that outing. The fish came in hot behind my lure, but veered off when it got to the side of the boat. As I began the figure eight, I turned to my brother-in-law to let him know I saw a fish and to estimate the size. While I was looking at him the fish came back and struck. After a brief but spirited fight, we netted the fish, a 40.5″ muskie. She was very healthy and I feel blessed to have gotten one my first time out this summer.

20180728 Pewaukee Musky

Doing a figure-eight while musky fishing is something you should do on every cast, just like making sure every query you’re executing is using an index as expected.  In some cases though, indexes can be hard to use.  I wrote an earlier post about SARGability – making sure your indexes can be used.  As I explained in that post, some things, such as functions on the columns being searched, can prevent SQL Server from doing a seek on the index.
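
As a quick refresher, the difference looks something like this sketch against a hypothetical dbo.Customers table with an index on LastName:

--Non-SARGable: the function wrapped around the column forces an index scan
SELECT c.CustomerID
FROM dbo.Customers AS c
WHERE LEFT(c.LastName, 3) = 'Smi';

--SARGable: the bare column lets SQL Server seek on the LastName index
SELECT c.CustomerID
FROM dbo.Customers AS c
WHERE c.LastName LIKE 'Smi%';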

I was recently facing a situation like that while loading a data warehouse table.  These loads involve a large number of rows and can be very time consuming, so it is important to make the process as efficient as possible.  In this case, I was matching customers in one table to customers in another by comparing their addresses and cities.  The relevant part of the join was:

REPLACE(LEFT(Table1.StreetAddress, 10) + LEFT(Table1.City, 5), ' ', '') = REPLACE(LEFT(Table2.StreetAddress, 10) + LEFT(Table2.City, 5), ' ', '')

Both the REPLACE and LEFT functions prevent SQL Server from seeking an index on those columns.  An initial test run of the join between the two big tables came in at about 3 hours, far longer than I could afford.

To solve the problem, I created a view that added a computed column to hold the match code:

CREATE VIEW [cwi].[Customer Sold To Indexed Address Match]
WITH SCHEMABINDING

AS
SELECT dncomp AS [CMP Company]
     , dnjynb AS [CST Customer #]
     , [...] AS [Many other columns]
     , REPLACE(LEFT(dnl1tx, 10) + LEFT(dnl5tx, 5), ' ', '') AS AddressMatch
FROM vendorschema.vendortable;
GO

The important thing to note here is the SCHEMABINDING.  This tells SQL Server that the view depends on the underlying object (in this case, vendorschema.vendortable) staying the same.  If I were to go into vendorschema.vendortable and attempt to drop the dnl1tx column, SQL Server would return an error informing me that the cwi.[Customer Sold To Indexed Address Match] view depends on that column, and the drop would fail.
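
As a quick illustration, with the schema-bound view in place an attempt like this (don't actually run it against your vendor's table) gets rejected rather than executed:

--SQL Server refuses the drop and names the dependent schema-bound view
--instead of silently breaking it.
ALTER TABLE vendorschema.vendortable DROP COLUMN dnl1tx;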

Once I have the computed column set up, I can build an index on it just as I would any other column:

CREATE UNIQUE CLUSTERED INDEX [IX_lwt_CustSoldToIndexAddressMatch_AddressMatch] ON [cwi].[Customer Sold To Indexed Address Match]
(
   [AddressMatch] ASC,
   [CMP Company] ASC,
   [CST Customer #] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO

Now when I run the same query that previously took about 3 hours, the results come back in a minute and a half. That’s an improvement any DBA can be proud of!
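
For reference, the join now compares the precomputed AddressMatch column instead of repeating the expression on Table1's side. This is just a sketch, and on editions other than Enterprise you'll generally need the NOEXPAND hint for the optimizer to use the view's index:

SELECT m.[CMP Company]
     , m.[CST Customer #]
FROM cwi.[Customer Sold To Indexed Address Match] AS m WITH (NOEXPAND)
    INNER JOIN Table2
        ON m.AddressMatch = REPLACE(LEFT(Table2.StreetAddress, 10) + LEFT(Table2.City, 5), ' ', '');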

Why triggers sometimes suck

We took our annual trip Up North this last week. My son really wanted to get his first walleye this year, and I did everything I could to make it happen. The lake we were on is certainly not a “numbers” lake for any species except dinky bluegill. However, on our second-to-last night we did manage to hook up with a great 24″ walleye. In many ways, seeing the thrill on kids’ faces as they catch a big fish is even more fun than catching the fish yourself.
Nate Walleye

Triggers are an extremely useful tool in SQL Server. However, I mostly hate them. Like any tool, they have a proper place and are quite effective when used properly. Unfortunately, in practice they are misused more often than not.

Homer Fixes Camera

Tools used improperly

I’m not saying many third party databases have been created by Homer Simpson, but for some of them, it wouldn’t surprise me.  I came across this “interesting” setup while investigating a deadlocking issue on a third party vendor’s database:

CREATE TABLE [cwi].[dwtran]
(
    [ibcomp] [DECIMAL](3, 0) NOT NULL
  , [ibponr] [DECIMAL](12, 0) NOT NULL
  , [ibe2vn] [VARCHAR](3) NOT NULL
  , [ibkggm] [VARCHAR](80) NOT NULL
  , [ibhody] [DECIMAL](7, 0) NOT NULL
  , [ibectm] [DECIMAL](6, 0) NOT NULL
  , [ibamry] [VARCHAR](1) NOT NULL
  , [ibhpdy] [DECIMAL](7, 0) NOT NULL
  , [ibedtm] [DECIMAL](6, 0) NOT NULL
  , [ibhqdy] [DECIMAL](7, 0) NOT NULL
  , [ibeetm] [DECIMAL](6, 0) NOT NULL
  , [ibkncf] [VARCHAR](10) NOT NULL
  , [id] [INT] IDENTITY(1, 1) NOT NULL
) ON [PRIMARY];
GO

CREATE TRIGGER [cwi].[assign_key]
ON cwi.dwtran
AFTER INSERT
AS
BEGIN
    UPDATE dw
    SET dw.ibponr = id
    FROM cwi.dwtran dw
        INNER JOIN Inserted i ON i.id = dw.id;
END;
GO

What is this trigger doing? It’s grabbing the id identity value for the row that just got inserted and setting the ibponr column to the exact same value.  This setup would probably work ok on a system that always accesses the table serially.  However, this table is a change history table that captures all transactions in a busy order management system for insertion into a data warehouse.  There are several application servers accessing the database at the same time.  The trigger was deadlocking with subsequent INSERTs, causing transactions to fail.  Additionally, each insert into the table also required an update, effectively costing two write operations for a single insert.

In this case, since the incremental data warehouse load only ran once per day, I was able to create a SQL Agent job that updates the missing ibponr records in a single set-based statement.  I ran the SQL Agent job right before the data warehouse load.  In a perfect world, the data warehouse load (which is run by the application, not in the database) would update those ibponr values as its first step.
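
The job itself boils down to one UPDATE, roughly like this sketch (it assumes rows the trigger never touched still carry a default of 0 in ibponr):

--Catch up any rows whose ibponr was never assigned, in a single set-based update.
UPDATE dw
SET dw.ibponr = dw.id
FROM cwi.dwtran AS dw
WHERE dw.ibponr = 0;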

Another reason I dislike triggers is that when they cause a rollback, the error can be vague and the reason for the rollback is often hard to find.
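
As a hypothetical example, a validation trigger like this one rolls back the whole transaction, and all the calling application sees is a generic message that the transaction ended in a trigger and the batch was aborted, with nothing about which rule was violated:

CREATE TRIGGER cwi.check_company
ON cwi.dwtran
AFTER INSERT
AS
BEGIN
    --Reject inserts carrying an unexpected company code.
    IF EXISTS (SELECT 1 FROM Inserted WHERE ibcomp NOT BETWEEN 1 AND 999)
    BEGIN
        --The caller only gets the generic trigger-rollback error,
        --with no hint about which trigger fired or why.
        ROLLBACK TRANSACTION;
    END;
END;
GO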

One last thing to keep in mind from a development perspective is that triggers have to be written with the assumption that a single statement can insert multiple rows.  The trigger above gets this right by joining to the whole Inserted table, but plenty of triggers are written as if Inserted holds exactly one row, and that isn’t always the case!
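
For contrast, here’s a hypothetical single-row version of the same trigger; when one INSERT statement adds many rows, the trigger still fires only once and quietly updates just one of them:

CREATE TRIGGER cwi.assign_key_single_row
ON cwi.dwtran
AFTER INSERT
AS
BEGIN
    DECLARE @id INT;

    --If the INSERT added multiple rows, this grabs an arbitrary one of them.
    SELECT @id = id
    FROM Inserted;

    UPDATE cwi.dwtran
    SET ibponr = @id
    WHERE id = @id;
END;
GO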

Comparing SQL Server Logins on Mirror Servers

I think it is great to pass on a hobby like fishing to the future generations.  I really enjoy taking my kids out fishing, and I’m glad they seem to enjoy it as much as they do.  Unfortunately, at the younger ages you often have to keep a close eye on them to make sure they aren’t going to hook themselves or cast into a tree.

SQL Server database mirroring also requires a rather watchful eye.  One problem that can occur is when a login is created on the principal server but not on the mirror.  Mirroring does not automatically move logins from the principal to the mirror servers, so monitoring is required to make sure the logins stay in sync.

This can be done with a linked server from one server to the other, but we may not always want to allow a linked server for security reasons.  Instead, I’ve created a Powershell script to compare the two servers and send an email if the mirror server is missing logins that the principal server has.  I use a Windows Scheduled Task to run this script on a regular basis.

$MirrorSQLLoginList = Invoke-SQLCmd -query "SELECT s.name FROM master.sys.syslogins AS s INNER JOIN master.sys.server_principals AS sp ON sp.sid = s.sid WHERE sp.is_disabled = 0 ORDER BY s.name;" -ServerInstance "MirrorServer" | select -Expand name
$PrincipalSQLLoginList = Invoke-SQLCmd -query "SELECT s.name FROM master.sys.syslogins AS s INNER JOIN master.sys.server_principals AS sp ON sp.sid = s.sid WHERE sp.is_disabled = 0 ORDER BY s.name;" -ServerInstance "PrincipalServer" | select -Expand name
$MissingLogins = $PrincipalSQLLoginList | Where {$MirrorSQLLoginList -NotContains $_}
if ($MissingLogins.count -gt 0)
{
    $AlertEmail = new-object Net.Mail.MailMessage
    $AlertEmail.IsBodyHTML = $true
    $AlertEmail.From = "Powershell@mydomain.com"
    $AlertEmail.To.Add("dgugg@mydomain.com")
    $AlertEmail.Subject = "Logins on the primary database server missing from the mirror"
    $AlertEmail.Body = ($MissingLogins -join "<br/>")
    $smtp = new-object Net.Mail.SmtpClient("mydomain-com.mail.protection.outlook.com", 25)
    $smtp.Send($AlertEmail)
}


A deeper dive into XML in SQL Server

When I was younger, I remember my family taking a camping trip near a lake with trout. As best I can remember, the lake was stocked with rainbow and brown trout. When we visited the lake to fish there was a crowd of people along one of the shores. There was a mass of small trout in the water, and everyone was trying to get one to bite. Very few of the people were succeeding. Seeing the fish there, I joined the group, desperate to get one of the 8″ – 10″ fish to bite. Suddenly, from a shoreline halfway across the lake, I heard a triumphant cry. Looking up, I saw a man holding an enormous trout in the air. He’d caught the fish by avoiding the crowd of people and small fish, and exploring a new shoreline. I learned two lessons that day. First, if you target smaller fish because you know where they are, you’ll never catch the big ones. Second, if you never explore unfamiliar water you’ll miss out on some good fishing.

One of our developers was recently pulling data from SQL Server to create a catalog feed for a website. This feed included our products, which have a parent-child hierarchy between a base product and the skus beneath that product. His plan was to create two stored procedures – one for the base product and one for the skus. Then in his code he was going to take the results of those two stored procedures and merge them into a single XML document.

I had recently taken a SQL quiz and run into the hierarchyid data type, and thought this might be a good chance to experiment with it. After doing some initial work, I found that hierarchyid was overkill for this situation. The hierarchyid data type seems to work best when you have several levels you want to be able to delve into. Instead, I decided to build the feed directly in SQL using the FOR XML clause. I have used this clause a few times before, but never to build something as complex as this document. Here are some things I did or observed that may be helpful:

1. Concatenating XML
The final output document had several sections in it that I was forced to create with separate SELECT statements. I couldn’t get them into the same XML document by unioning or concatenating them together. Instead I had to place each one into a variable, then SELECT the variables together in a separate statement that used the FOR XML clause.

DECLARE @xmlheader XML, @xmlnonskuitem AS XML, @xmlsizeonlyitem AS XML, @xmlcoloronlyitem AS XML, @xmlcolorandsizeitem AS XML, @xmlskus AS XML;

SELECT @xmlheader =  ...;
SELECT @xmlnonskuitem = ...;
SELECT @xmlcoloronlyitem = ...;
SELECT @xmlsizeonlyitem = ...;
SELECT @xmlcolorandsizeitem = ...;
SELECT @xmlskus = ...;

SELECT @xmlheader, @xmlnonskuitem, @xmlcoloronlyitem, @xmlsizeonlyitem, @xmlcolorandsizeitem, @xmlskus
--End root
FOR XML RAW, ROOT;

2. Building depth into the XML tree
The XML we were creating had several child elements, some of which needed to be repeated for each variation, and some of which were used to encapsulate all the variants for a particular base product. To get this layering, I had to create some of the elements as plain text that could be converted to XML. Complicating this, I needed to create XML inside of that text and join the pieces together. The general pattern is to convert the innermost XML to varchar, concatenate that with the hard-coded elements, then convert the whole thing back to XML.

SELECT @xmlsizeonlyitem
    =
(
    SELECT RTRIM(LTRIM(i.[Field2])) AS "@product-id"
         , CONVERT(
                      XML
                    , '<attributes><variation-attribute attribute-id="size" variation-attribute-id="size"><display-name>Size</display-name><variation-attribute-values>'
                      + CONVERT(VARCHAR(MAX)
                              , (
                                    SELECT LTRIM(RTRIM(s.size)) AS [@value]
                                         , RTRIM(LTRIM(s.size)) AS [display-value]
                                    FROM db.SCHEMA.tblSku AS s
                                    WHERE s.Field1 = 125
                                          AND s.Field2 = i.Field2
                                    FOR XML PATH('variation-attribute-value'), TYPE
                                )
                               ) + '</variation-attribute-values></variation-attribute></attributes><variants>'
                      + CONVERT(VARCHAR(MAX)
                              , (
                                    SELECT RTRIM(LTRIM(s.Field2)) + ' ' + RTRIM(LTRIM(s.Field3)) AS [@product-id]
                                    FROM db.SCHEMA.tblSku AS s
                                    WHERE s.Field1 = 125
                                          AND s.Field2 = i.Field2
                                    FOR XML PATH('variant'), TYPE
                                )
                               ) + '</variants>'
                  )
    FROM db.SCHEMA.tblBaseItem i
    WHERE i.Field1 = 125
          AND i.Field2 = 'BaseItemId'
    GROUP BY i.Field1
           , i.Description
    FOR XML PATH('product')
);

3. Elements versus attributes
It’s very easy to change the field you are returning from the database into either an element or an attribute. For attributes, you put the @ sign at the front of the name. For elements, you just leave the name as is.

SELECT @xmlcolorandsizeitem =
(
    SELECT RTRIM(LTRIM(i.[Field2])) AS "@product-id"
         , RTRIM(LTRIM(i.Description)) AS [display-name]
    FROM db.SCHEMA.baseitemtable i
    WHERE i.field1 = 125
          AND i.field2 = '7972'
    GROUP BY i.field1
           , i.Description
    FOR XML PATH('product')
);

20180216 XML Elements vs Attributes
4. Renaming ROOT and ROW
You can rename the ROOT element to something else by putting the new name in the parentheses after ROOT.
You can also rename the row element, which defaults to ROW, by putting the name in the parentheses after RAW or PATH. This does not work in EXPLICIT or AUTO modes, which don’t have a generic ROW element.

FOR XML RAW ('catalog'), ROOT ('catalogs');

20180216 XML Root and Raw.PNG
5. Adding namespace
To add a namespace to the XML, you can use the WITH XMLNAMESPACES clause. This only works for RAW, AUTO, or PATH modes.

WITH XMLNAMESPACES('http://www.company.com/xml/impex/catalog/2106-1-31' AS Co)

20180216 XML Namespace
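
In context, the clause goes directly in front of the SELECT that produces the XML, and the prefix can then be used in the element names. A minimal sketch, reusing the hypothetical column names from above:

WITH XMLNAMESPACES ('http://www.company.com/xml/impex/catalog/2106-1-31' AS Co)
SELECT RTRIM(LTRIM(i.Field2)) AS [Co:product-id]
FROM db.SCHEMA.baseitemtable AS i
WHERE i.field1 = 125
FOR XML PATH('Co:product'), ROOT('catalogs');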

I hope these tips give you a better understanding of how to work with XML in SQL.

Inserting into SQL Server from Powershell using parameters

Sometimes fishing, especially muskie fishing, just doesn’t go your way. I recently took a trip to Northern Wisconsin to do some hunting and fishing. I spent five days beating the water to a froth with my lures, but ended up seeing only a single follow. When you’re faced with that level of failure, it’s common to search everywhere for reasons. Was the weather too warm? Was I fishing in the wrong spots? Was I using the wrong types of lures? Was I fishing too fast? Was I fishing too slow? Am I just a bad fisherman? Muskie fishing is all about developing patterns, but it’s awfully tough to find a pattern when you aren’t seeing any fish. I’m going to chalk my failure on this trip up to poor weather. Although it’s now fall, the temperatures have been setting record highs. I’m thinking the fish are just waiting for the water to start cooling so they can binge feed and put on the winter fat. But who knows? Maybe I am just a bad fisherman.

Bad Weather

Bad weather chased me off the water after just two hours of fishing on one day.

I recently constructed an IT dashboard. This dashboard, built in SSRS, compiled data from several sources into a SQL Server database where it could be quickly grabbed by the reports. The part of this task that was new for me was grabbing performance counters from an array of remote servers and inserting them into the SQL Server table. I was able to make use of a Powershell System.Data.SqlClient.SqlCommand for this. Below I’ll show how I did it.

First I need a table to hold the data. For this specific set of metrics I’m going to be collecting memory used by the servers.

CREATE TABLE [dbo].[IT_DB_Memory](
   [ServerName] [VARCHAR](255) NOT NULL,
   [MemoryUsed_GB] [DECIMAL](5, 2) NOT NULL,
   [CaptureDateTime] [DATETIME2](7) NOT NULL,
 CONSTRAINT [PK_IT_DB_Memory] PRIMARY KEY CLUSTERED 
(
   [ServerName] ASC,
   [CaptureDateTime] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

Now that I have a table, I will build the powershell script. This is going to run over multiple servers. First I need to set up an array with the list of servers I plan to survey.

$serverarray = @(('Muskie'),('NorthernPike'),('Walleye'))

That’s right, I name my servers after fish. Next I’ll create a connection to SQL Server.

$sqlConn = New-OBJECT SYSTEM.DATA.SqlClient.SQLConnection
$sqlConn.ConnectionString = "Server=SmallmouthBass;Database=DBAReporting;Integrated Security=True;"
$sqlConn.OPEN()

Now I create the command that will be run. Note the SQL Server parameters, distinguished by the @ at the front of their names.

$sqlCmnd = New-OBJECT SYSTEM.DATA.SqlClient.SqlCommand
$sqlCmnd.CONNECTION = $SqlConn
$sqlCmnd.CommandText = "
SET NOCOUNT ON;

INSERT INTO DBAReporting.dbo.IT_DB_Memory
    ( ServerName
    , MemoryUsed_GB
    , CaptureDateTime )
VALUES
    ( @ServerName
    , @MemoryUsed / 1073741824
    , GETDATE());"

Next I’ll actually create those parameters in the Powershell SQL command.

$sqlCmnd.Parameters.ADD((New-OBJECT DATA.SQLClient.SQLParameter("@ServerName",[Data.SQLDBType]::VarChar, 255))) | OUT-NULL
$sqlCmnd.Parameters.ADD((New-OBJECT DATA.SQLClient.SQLParameter("@MemoryUsed",[Data.SQLDBType]::DECIMAL, 5,2))) | OUT-NULL

This next step is what does the actual work. I’ll loop through the array and use the GET-COUNTER command to get the memory used. The way I have it set up will sample the memory five times, once per second, and then return the average of those five samples.

foreach ($server in $serverarray) {
    $sqlCmnd.Parameters[0].Value = $server
    $Memory = GET-COUNTER -COUNTER "\Memory\Committed Bytes" -SampleInterval 1 -MaxSamples 5 -ComputerName $server |
        select -ExpandProperty countersamples | select -ExpandProperty cookedvalue | Measure-Object -Average
    $sqlCmnd.Parameters[1].Value = $Memory.Average
    $sqlCmnd.ExecuteScalar()
}

The last step in Powershell is simply to close the database connection.

$SqlConn.Close()

Now I can set this to run on a regular basis using Windows Task Scheduler, and I’ll have a history of how my application servers are using memory throughout the day.
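
Once some samples have accumulated, a quick query like this sketch summarizes memory use per server per day:

SELECT ServerName
     , CAST(CaptureDateTime AS DATE) AS CaptureDate
     , AVG(MemoryUsed_GB) AS AvgMemoryUsed_GB
     , MAX(MemoryUsed_GB) AS MaxMemoryUsed_GB
FROM dbo.IT_DB_Memory
GROUP BY ServerName
       , CAST(CaptureDateTime AS DATE)
ORDER BY ServerName
       , CaptureDate;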

Tally Tables

Occasionally I like to take a break from Muskie fishing and spend time catching some easier species. This is especially true when I’m taking friends out fishing. Not many people like to cast for hours with only a few follows to show for it. Last month I took my brother Steve out onto a smaller lake about five minutes from my house. This lake is overrun with invasive weeds, and I tend to think of it as a garbage lake. However, we had a great time catching fish. My brother caught several bass and a bonus walleye, while I managed this fat 30″ pike. The pike took a good 5 minutes to get in the boat since I was using fairly light tackle and we had no net.

Little Cedar Northern Pike.jpg

SQL is a set-based language. It is built with the idea that the engine will handle any looping in the background, without the author needing to specify the best way to loop. There are a few rare exceptions, but if you are creating a loop in SQL, you are usually doing something wrong, or at least doing it much less efficiently than you could. One great way to get around loops is to create a Tally Table. Popularized by SQL Server legend Jeff Moden in 2008, the Tally Table is simply a table with a single column of very well indexed sequential numbers.

If you’re a programmer or developer, you’re probably going to think of something like this to build a Tally Table:

--Create the Tally Table
CREATE TABLE #Tally
(
    N INT
  , CONSTRAINT PK_Tally_N
        PRIMARY KEY CLUSTERED (N)
);

--Set up a increment counter
DECLARE @TallyCounter INT;
SET @TallyCounter = 1;

--Fill the Tally Table with a Loop
WHILE @TallyCounter <= 11000
BEGIN
    INSERT INTO #Tally
    (
        N
    )
    VALUES (@TallyCounter);

    SET @TallyCounter = @TallyCounter + 1;
END;

Running on my server, this code took an average of 432 milliseconds while requiring 22,426 reads and 407 CPU. A more efficient way to generate the table looks like this:

--Create and populate table
SELECT TOP 11000
    IDENTITY(INT, 1, 1) AS N
INTO #Tally
FROM MASTER.sys.syscolumns sc1
   , MASTER.sys.syscolumns sc2;

--Add Primary Key Clustered
ALTER TABLE #Tally
ADD CONSTRAINT PK_Tally_N 
PRIMARY KEY CLUSTERED (N) WITH FILLFACTOR = 100;

This took only 73 milliseconds to run, and required only 885 reads and 78 CPU.
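
If you want to compare the two approaches on your own server, you can wrap each version in SET STATISTICS IO and SET STATISTICS TIME to capture similar read, CPU, and elapsed-time numbers:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

--Create and populate table
SELECT TOP 11000
    IDENTITY(INT, 1, 1) AS N
INTO #Tally
FROM MASTER.sys.syscolumns sc1
   , MASTER.sys.syscolumns sc2;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

--Drop the table before testing the looping version
DROP TABLE #Tally;
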
In Oracle this is even easier to create:

CREATE TABLE tempTally AS
SELECT LEVEL AS N
FROM DUAL
CONNECT BY LEVEL <= 11000
ORDER BY LEVEL;

So now we’ve got a table full of sequential numbers from 1 to 11,000. What can we use this for?
From a programmer or developer perspective, loops are often used with strings. Let’s say we want to step through and display each character in a string. With a loop, you’d do something like this:

DECLARE @StepThroughMe VARCHAR(100), @i INT;
SELECT @StepThroughMe = 'Looping through this string is a waste of time.', @i = 1;

WHILE @i <= LEN(@StepThroughMe)
BEGIN
    SELECT @i, SUBSTRING(@StepThroughMe, @i, 1);
    SELECT @i = @i + 1;
END;

Using a Tally table, you can do it in a way that is simpler to write and runs in less than a tenth of the time:

DECLARE @TallyThroughMe VARCHAR(100);
SELECT @TallyThroughMe = 'Using a Tally Table is an efficient use of time.'

SELECT t.N, SUBSTRING(@TallyThroughMe, t.N, 1)
FROM #Tally AS t
WHERE t.N <= LEN(@TallyThroughMe);

One other way I used this was to create the Date table in my data warehouse.

WITH cte
AS (SELECT DATEADD(DAY, N - 1, '2000-01-01') AS Date
    FROM #Tally
   )
SELECT YEAR(cte.Date) * 10000 + MONTH(cte.Date) * 100 + DAY(cte.Date) AS DateKey
     , cte.Date
     , YEAR(cte.Date) AS YEAR
     , DATEPART(QUARTER, cte.Date) AS Quarter
     , MONTH(cte.Date) AS MONTH
     , RIGHT('0' + CAST(MONTH(cte.Date) AS VARCHAR(2)), 2) + '. ' + DATENAME(MONTH, cte.Date) AS MonthName
     , DATEPART(ww, cte.Date) + 1 - DATEPART(ww, CAST(DATEPART(mm, cte.Date) AS VARCHAR) + '/1/' + CAST(DATEPART(yy, cte.Date) AS VARCHAR)) AS WeekOfMonth
     , DATEPART(wk, cte.Date) AS WeekOfYear
     , DATEPART(dw, cte.Date) AS DayOfWeek
     , RIGHT('0' + CAST(DATEPART(dw, cte.Date) AS VARCHAR(2)), 2) + '. ' + DATENAME(dw, cte.Date) AS DayOfWeekName
     , DAY(cte.Date) AS DayOfMonth
     , DATEPART(DAYOFYEAR, cte.Date) AS DayOfYear
     , CASE
           WHEN DATEPART(QUARTER, cte.Date) IN ( 1, 2 ) THEN
               'Spring'
           ELSE
               'Fall'
       END AS RetailSeason
FROM cte;

This worked for loading my permanent table, but you could also use the same approach to fill a temp table or table variable and join it to a data set to get a full range of dates, even when the data set is missing rows on some of those dates.
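
For example, here’s a sketch that joins a year of generated dates to a hypothetical dbo.Sales table, returning a zero for days that have no sales rows at all:

WITH Dates
AS (SELECT DATEADD(DAY, N - 1, '2018-01-01') AS Date
    FROM #Tally
    WHERE N <= 365
   )
SELECT d.Date
     , ISNULL(SUM(s.SaleAmount), 0) AS TotalSales
FROM Dates AS d
    LEFT JOIN dbo.Sales AS s
        ON s.SaleDate = d.Date
GROUP BY d.Date
ORDER BY d.Date;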

Tally tables can be used to improve performance in a number of different scenarios. Next time you’re not sure whether you may need a loop, stop and consider whether your situation may benefit from a Tally Table.

SQL Agent Properties

It’s been a rather windy summer thus far, making it less fun to be out on the water.  I don’t know many guys who enjoy being blasted by the wind while bobbing up and down on the waves for hours on end.  I went out on Pewaukee Lake a few weeks ago with a buddy from work.  We had picked the day in advance since it was supposed to be dry and calm.  We got the dry, but not the calm.  We had a stiff wind blowing out of the west that drove us back into the western half of the lake after trying to fish the narrows in the middle.

I spent the day focusing my fishing efforts on making the bait look real.  I tried hard to avoid retrieving the lure in a rhythmic fashion.  My efforts paid off with a nice upper-30s muskie:

June 2017 Pewaukee Muskie.jpg

My fishing buddy hooked one a short time later, but couldn’t keep it pinned and we lost it.

Recently, I blogged about migrating the SQL Server installation onto a different drive.  I did find one problem after this move that I had to address: I ran into a problem with the SQL Agent and wasn’t able to diagnose the issue.  If I remember correctly it was actually an SSRS subscription that failed, and I needed details to find out why.  It turns out the SQL Agent has its own set of properties, and its error log was still pointing back at the previous location on the C: drive, which no longer existed.  There is a stored procedure you can execute to see those properties, in addition to looking at them in the SSMS UI:

EXEC msdb..sp_get_sqlagent_properties

20170712 SQL Agent Properties.PNG

Lastly, just update the value with the corresponding SET stored procedure and restart the SQL Agent:

EXEC msdb..sp_set_sqlagent_properties 
   @errorlog_file = N'D:\SQLSERVERINSTALL\SQLAGENTERRORLOG.log'

Now your SQL Agent properties have been updated.