Growing your DBA Job Skills

I prefer to fish for musky.  They are huge fish that put up a great fight, and holding one up for a picture is one of the most rewarding feelings in fishing.  They also take forever to catch.  Sometimes it's fun to target fish that are a bit easier to catch.  Walleye are a bit easier, but they can be very finicky and often require great finesse.  If I'm looking for some fun action, I like to turn to smallmouth bass.  Smallmouth put up a great fight: they jump, they dive deep, and they are very powerful for their size.  Earlier this summer my son caught this great one on a fishing trip we took up north.

[Image: Nate with his smallmouth bass]

Technology changes fast. That statement is obvious to us all, yet its implications are sometimes hard or uncomfortable to consider. If you work in IT, you can’t afford to be complacent. If you are, advances in technology will leave you in the dust and your job skills will no longer be marketable. There are many ways to stay current with technology, including college courses, blog posts, online tutorials, and on-the-job work. Regardless of how you choose to stay current in your field, you need to do something.
I have decided to move on from my current employer. I am doing this for a variety of reasons, but one is to round out my job skills. My current employer has a small environment: I manage one production SQL Server along with its warm-standby SQL Server, one development SQL Server, and a small production Oracle database with its corresponding development database. I also work pretty heavily in business intelligence, writing reports in SSRS, Power BI, and PowerPivot, and I spend a portion of my time doing non-DBA work. The thing I dislike most about my current role is the third-party application support. My current IT team is myself (the DBA), one developer, one desktop support technician, and one IT manager.
My new employer is much different. I will be working on a team with three other SQL Server DBAs, and we will be managing thousands of SQL Servers. I will need to work closely with members of other teams who support servers, operating systems, and storage devices. It is going to be a huge change for me. I probably won't be doing the same variety of work that I do now, but I do expect the work to challenge me to improve as an employee and as a DBA, and I am excited for that challenge. I will most likely be working harder and longer hours than in my current role. My SQL Server skills will grow much deeper, even as my overall exposure gets narrower. I'm OK with this trade-off, as long as I continue to grow.

So find ways to learn new things and grow.  If nothing else, spend 15 to 30 minutes each week checking out some DBA websites, such as Brent Ozar, SQLServerCentral, or SQLPerformance.  It will make your job easier and make you more valuable to your employer.


Index a Computed Column to Improve Query Performance

This summer started out with below-average fishing volume. The birth of my fourth child in June used up much of the time I would normally spend fishing. As the summer grew late I was able to get out a bit more, but mostly on a small lake near my house. I've only gotten down to Pewaukee to muskie fish once, though I am happy to say I had some luck on that outing. The fish came in hot behind my lure but veered off when it got to the side of the boat. As I began the figure eight, I turned to my brother-in-law to let him know I had seen a fish and to estimate its size. While I was looking at him, the fish came back and struck. After a brief but spirited fight, we netted the fish, a 40.5″ muskie. She was very healthy, and I feel blessed to have gotten one my first time out this summer.

[Image: 40.5″ Pewaukee muskie, July 2018]

Doing a figure eight while musky fishing is something you should do on every cast, just like making sure every query you execute is using an index as expected.  In some cases, though, indexes can be hard to use.  I wrote an earlier post about SARGability – making sure your indexes can be used.  As I explained in that post, some things, such as functions applied to the columns being searched, can prevent SQL Server from doing a seek on the index.

I was recently facing a situation like that while loading a data warehouse table.  These loads involve a large number of rows and can be very time consuming, so it is important to make the performance as efficient as possible.  In this case, I was matching customers in one table to customers in another by comparing their addresses and cities.  The relevant part of the join was:

REPLACE(LEFT(Table1.StreetAddress, 10) + LEFT(Table1.City, 5), ' ', '') = REPLACE(LEFT(Table2.StreetAddress, 10) + LEFT(Table2.City, 5), ' ', '')

Both the REPLACE and LEFT functions prevent SQL Server from seeking the index.  An initial test run of the join on the two big tables came in at about three hours, far longer than I could afford.
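
As a simplified illustration of the SARGability problem (the table, column, and index names here are hypothetical, not from the actual warehouse load): wrapping the indexed column in a function forces a scan, while an equivalent predicate on the bare column allows a seek.

-- Hypothetical index for the illustration
CREATE INDEX IX_Customer_StreetAddress ON dbo.Customer (StreetAddress);

-- Non-SARGable: the LEFT() hides the column from the optimizer, so every row gets scanned
SELECT CustomerId
FROM dbo.Customer
WHERE LEFT(StreetAddress, 10) = '123 Main S';

-- SARGable: the bare column can be compared against a prefix range, so the index can be seeked
SELECT CustomerId
FROM dbo.Customer
WHERE StreetAddress LIKE '123 Main S%';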

To solve the problem, I created a view that added a computed column to hold the match code:

CREATE VIEW [cwi].[Customer Sold To Indexed Address Match]
WITH SCHEMABINDING

AS
SELECT dncomp AS [CMP Company]
     , dnjynb AS [CST Customer #]
    , [...] AS [Many other columns]
    , REPLACE(LEFT(dnl1tx, 10) + LEFT(dnl5tx, 5), ' ', '') AS AddressMatch
FROM vendorschema.vendortable;
GO

The important thing to note here is the SCHEMABINDING.  This tells SQL Server that the view depends on the referenced object (in this case, vendorschema.vendortable) and that the object's schema cannot be changed out from under it.  If I were to go into vendorschema.vendortable and attempt to drop the dnl1tx column, SQL Server would raise an error informing me that the cwi.[Customer Sold To Indexed Address Match] view depends on that object, and the drop would fail.
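
A quick way to see SCHEMABINDING doing its job is to attempt the drop (this is only a sketch; don't run it against a real vendor table):

-- Fails while the schema-bound view exists: SQL Server reports that
-- [cwi].[Customer Sold To Indexed Address Match] depends on this column.
ALTER TABLE vendorschema.vendortable DROP COLUMN dnl1tx;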

Once I have the computed column set up, I can build an index on it just as I would any other column:

CREATE UNIQUE CLUSTERED INDEX [IX_lwt_CustSoldToIndexAddressMatch_AddressMatch] ON [cwi].[Customer Sold To Indexed Address Match]
(
   [AddressMatch] ASC,
   [CMP Company] ASC,
   [CST Customer #] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO

Now when I run the same query that previously took about three hours, the results are returned in a minute and a half. That's an improvement any DBA can be proud of!
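
For reference, the relevant part of the join can then be written against the precomputed AddressMatch column. This is only a sketch – the column list and the Table2 side are placeholders, and the NOEXPAND hint is included because editions below Enterprise won't match an indexed view automatically:

SELECT m.[CST Customer #]   -- placeholder column list
     , t2.CustomerKey
FROM cwi.[Customer Sold To Indexed Address Match] AS m WITH (NOEXPAND)
    INNER JOIN Table2 AS t2
        ON m.AddressMatch = REPLACE(LEFT(t2.StreetAddress, 10) + LEFT(t2.City, 5), ' ', '');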

Why triggers sometimes suck

We took our annual trip Up North this last week. My son really wanted to get his first walleye this year, and I did everything I could to make it happen. The lake we were on is certainly not a "numbers" lake for any species except dinky bluegill. However, on our second-to-last night we did manage to hook up with a great 24″ walleye. In many ways, seeing the thrill on kids' faces as they catch a big fish is even more fun than catching the fish yourself.
[Image: Nate with his 24″ walleye]

Triggers are an extremely useful tool in SQL Server. However, I mostly hate them. Like any tool, they have a proper place and are quite effective when used correctly. Unfortunately, in actual use they are misused more often than not.

[Image: Homer Simpson fixing a camera – tools used improperly]

I’m not saying many third party databases have been created by Homer Simpson, but for some of them, it wouldn’t surprise me.  I came across this “interesting” setup while investigating a deadlocking issue on a third party vendor’s database:

CREATE TABLE [cwi].[dwtran]
(
    [ibcomp] [DECIMAL](3, 0) NOT NULL
  , [ibponr] [DECIMAL](12, 0) NOT NULL
  , [ibe2vn] [VARCHAR](3) NOT NULL
  , [ibkggm] [VARCHAR](80) NOT NULL
  , [ibhody] [DECIMAL](7, 0) NOT NULL
  , [ibectm] [DECIMAL](6, 0) NOT NULL
  , [ibamry] [VARCHAR](1) NOT NULL
  , [ibhpdy] [DECIMAL](7, 0) NOT NULL
  , [ibedtm] [DECIMAL](6, 0) NOT NULL
  , [ibhqdy] [DECIMAL](7, 0) NOT NULL
  , [ibeetm] [DECIMAL](6, 0) NOT NULL
  , [ibkncf] [VARCHAR](10) NOT NULL
  , [id] [INT] IDENTITY(1, 1) NOT NULL
) ON [PRIMARY];
GO

CREATE TRIGGER [cwi].[assign_key]
ON cwi.dwtran
AFTER INSERT
AS
BEGIN
    UPDATE dw
    SET dw.ibponr = id
    FROM cwi.dwtran dw
        INNER JOIN Inserted i ON i.id = dw.id;
END;
GO

What is this trigger doing? It's grabbing the id identity value for the row that just got inserted and setting the ibponr column to that exact same value.  This setup would probably work OK on a system that always accesses the table serially.  However, this table is a change history table that captures all transactions in a busy order management system for insertion into a data warehouse, and several application servers access the database at the same time.  The trigger was deadlocking with subsequent INSERTs, causing transactions to fail.  Additionally, every insert into the table also required an update, effectively doing two writes for each new row.

In this case, since the incremental data warehouse load only ran once per day, I was able to create a SQL Agent job that updates the missing ibponr records in a single statement.  I ran the SQL Agent job right before the data warehouse load.  In a perfect world, the data warehouse load (which is run by the application, not in the database) would update those ibponr values as its first step.
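
The job itself boils down to a single set-based statement along these lines. This is only a sketch: it assumes the trigger has been removed and that rows still needing a key arrive with ibponr = 0 (or whatever placeholder the application supplies), so the predicate would need to match the real data.

-- Backfill ibponr with the identity value for any rows the trigger used to handle.
-- Assumption: unassigned rows arrive with ibponr = 0.
UPDATE cwi.dwtran
SET ibponr = id
WHERE ibponr = 0;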

Another reason I dislike triggers is that when they cause a rollback, the error can be vague and the reason for the rollback is often hard to find.

One last thing to keep in mind from a development perspective is that triggers have to be written to handle multiple rows being inserted in a single statement.  The trigger above gets this right by joining to the Inserted pseudo-table, but triggers that pull a single value out of Inserted into a variable assume only one row will be inserted at a time, and that isn't always the case!
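
For contrast, here is the kind of hypothetical single-row pattern to avoid. Because it pulls one value from Inserted into a variable, a multi-row INSERT would leave every row but one without its key:

CREATE TRIGGER cwi.assign_key_single_row
ON cwi.dwtran
AFTER INSERT
AS
BEGIN
    DECLARE @id INT;

    -- Only one of the newly inserted ids lands in the variable...
    SELECT @id = id
    FROM Inserted;

    -- ...so only that one row gets its ibponr assigned.
    UPDATE cwi.dwtran
    SET ibponr = @id
    WHERE id = @id;
END;
GO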

Comparing SQL Server Logins on Mirror Servers

I think it is great to pass on a hobby like fishing to future generations.  I really enjoy taking my kids out fishing, and I'm glad they seem to enjoy it as much as they do.  Unfortunately, at younger ages you have to keep a close eye on them to make sure they don't hook themselves or cast into a tree.

SQL Server database mirroring also requires a rather watchful eye.  One problem that can occur is when a login is created on the principal server but not on the mirror.  Mirroring does not automatically move logins from the principal to the mirror servers, so monitoring is required to make sure the logins stay in sync.

This can be done with a linked server from one server to the other, but we may not always want to allow a linked server for security reasons.  Instead, I've written a PowerShell script that compares the two servers and sends an email if the mirror server is missing any logins that the principal server has.  I use a Windows scheduled task to run this script on a regular basis.

# Get the list of enabled logins from each server (Invoke-Sqlcmd comes from the SQL Server PowerShell module)
$MirrorSQLLoginList = Invoke-Sqlcmd -Query "SELECT s.name FROM master.sys.syslogins AS s INNER JOIN master.sys.server_principals AS sp ON sp.sid = s.sid WHERE sp.is_disabled = 0 ORDER BY s.name;" -ServerInstance "MirrorServer" | Select-Object -ExpandProperty name
$PrincipalSQLLoginList = Invoke-Sqlcmd -Query "SELECT s.name FROM master.sys.syslogins AS s INNER JOIN master.sys.server_principals AS sp ON sp.sid = s.sid WHERE sp.is_disabled = 0 ORDER BY s.name;" -ServerInstance "PrincipalServer" | Select-Object -ExpandProperty name
# Any login that exists on the principal but not on the mirror
$MissingLogins = $PrincipalSQLLoginList | Where-Object {$MirrorSQLLoginList -NotContains $_}
if ($MissingLogins.Count -gt 0)
{
    # Email the list of missing logins, one per line
    $AlertEmail = New-Object Net.Mail.MailMessage
    $AlertEmail.IsBodyHTML = $true
    $AlertEmail.From = "Powershell@mydomain.com"
    $AlertEmail.To.Add("dgugg@mydomain.com")
    $AlertEmail.Subject = "Logins on the primary database server missing from the mirror"
    $AlertEmail.Body = ($MissingLogins -join "<br/>")
    $smtp = New-Object Net.Mail.SmtpClient("mydomain-com.mail.protection.outlook.com", 25)
    $smtp.Send($AlertEmail)
}
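
When the alert does fire, the missing login still has to be created on the mirror by hand. One detail worth calling out (it isn't part of the script above) is that a SQL login should be recreated with the same SID it has on the principal, so the database users inside the mirrored database still map to it after a failover. A sketch with placeholder values:

-- On the principal: look up the login's SID
SELECT name, sid
FROM sys.sql_logins
WHERE name = 'SomeAppLogin';   -- placeholder login name

-- On the mirror: recreate the login with the matching SID
-- (the password and SID below are placeholders, not real values)
CREATE LOGIN [SomeAppLogin]
WITH PASSWORD = 'Pl@ceholder-Passw0rd!',
     SID = 0x2F5B8C334BF8354E8F1D0E2F35C9AE7B;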


Working with SFTP sites in SSIS

The year is almost half over and I haven't fished at all yet. I did no ice fishing this year, and I haven't gotten out in the first few weeks of the open-water season either. Life has been busy, with a small increase in work responsibilities along with a large increase in family busyness. I hope to get out soon, and if I do I will provide a fishing report. The spring was cold, so I expect a few weeks to pass before the fishing really gets going this year anyway.

I often use SSIS for collecting files. My current employer pulls data from several different sources, and since my background is much more SQL-based than programming-based, I default to SSIS. One thing to note when working in SSIS is that while Integration Services has an FTP task, that task doesn't support pulling files from an SFTP site. This can be confusing the first time you try it: there is no explicit warning that you are using the wrong component. You can adjust the port to the standard SFTP port, but the connection to the site still fails.

To get around this limitation, I use the scripting capabilities of WinSCP. WinSCP is a great tool – I love the UI for interacting with the ftp/sftp sites, the scripting language is intuitive and easy to use, and the options for more secure protocols have always met my needs.

To demonstrate how it works, I will show a package I set up to extract data and put it on an sftp site. The package has three components. The first is a data flow task which extracts the data from SQL Server and places it on the file system as a flat file. The second task is an Execute Process task which kicks off a batch file that runs the WinSCP script to transfer the file. The third task archives the exported file to an Archive folder. The process is set up to run daily, so each file has the date appended to its name.
[Screenshot: SSIS package layout with the three tasks]
The batch file is quite simple and just kicks off the WinSCP scripting program with the pre-written script. Here is an example of such a batch file:

cd/
cd "Program Files (x86)\WinSCP"
WinSCP.com -script="C:\Project\3rd Party Vendor Unsubscribe Feed\lib\winscp ftp script.txt"
exit

Finally, the WinSCP script is fairly simple too. First make sure to turn off any interactive prompts, since this will be run in an automated fashion. Then connect to the sftp site with the “open” command and place the file out there with the “put” command. (Similarly, to pull files you can use the “get” command). The WinSCP site has many great examples for writing scripts to work with ftp/sftp files and folders. The WinSCP scripting language is even robust enough to allow for variables, such as the %TIMESTAMP#yyyymmdd% one you can see in my script below.

# Automatically abort script on errors
option batch abort
# Disable overwrite confirmations that conflict with the previous
option confirm off
open sftp://username:password@sftp.domain.com:22 -hostkey="ssh-rsa 2048 XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX"
put "C:\Project\3rd Party Vendor Unsubscribe Feed\Files\UnsubscribeEmails_%TIMESTAMP#yyyymmdd%.txt" "/subscriptions/unsub/"
exit

Using this simple technique you can still use SSIS to transfer files to and from an sftp site.

How Index Fill Factor affects new rows

Fishing is not what it used to be. I recently began reading the Wisconsin Outdoor News regularly, and it has helped me understand just how pervasive fish stocking is. Without it, the population of fishermen in Wisconsin, which includes me, would quickly overwhelm the population of fish, leaving our lakes and rivers largely empty. Stocking fish isn't a bad thing; it allows more people to enjoy fishing, and it allows for bigger bag limits and greater participation. But I do believe we should not trick ourselves into thinking that fishing is the same now as it was before we settled and tamed our great state. Many of the lakes we enjoy fishing have species of fish that are not native to that body of water. So yes, it's great that my local lake has a nice population of walleye (most of which have been stocked), but the natural balance between the species of fish in the lake has been artificially disrupted to cater to fishermen who prize walleye above all else. Is it fair to fishermen who enjoy fishing for other species, or to the ecosystem of the lake, to maintain this stocking? That's a tough question to ask. We just don't know what the long-term effects of maintaining such an unnatural balance will be.

[Image: fish stocking]
One thing I like about SQL Server is that you can test almost anything, so you can be sure what the long-term effects of a change will be. I recently answered a trivia question regarding index fill factor incorrectly, so I decided to test the question out in SQL Server. The question asked whether the fill factor affects newly inserted rows in a table. The fill factor of an index controls how much empty space the database engine will attempt to leave on each page, for the purpose of minimizing page splits during updates or inserts.

For example, if I have a page that is 100% full and someone updates a varchar(1000) field from a value using 10 bytes to a value using 250 bytes, SQL Server has some work to do. It can't just fill in the empty space on the page because there is none. It has to take some of the rows on the page, move them to another page, link the original page to the new page, and then perform the update. This is a lot of extra work, and it increases index fragmentation. So if you have a table where you expect many updates, you might intentionally leave some blank space on each page. That way, updates that increase the space used by a row can simply expand into the free space without causing a page split.
Now, I assumed that when SQL Server inserts new rows at the end of a table, it inserts them respecting the fill factor defined on the table's index. However, when I submitted that answer to the trivia question, it was marked incorrect. So I decided to set up a test: create an index with a low fill factor, then insert a bunch of rows at the end. If the overall space used, as seen in sys.dm_db_index_physical_stats, goes up, then we can be confident the new rows are being packed onto pages at close to 100%. If it stays about the same, then we know the new rows are being inserted honoring the fill factor defined on the index.

First I create a new table for the test:

CREATE TABLE [dbo].[GuggTest_20180411](
   [id] [int] IDENTITY(1,1) NOT NULL,
   [somevarchardata] [varchar](100) NULL,
   [somenumericdata] [decimal](10, 2) NULL,
 CONSTRAINT [PK_GuggTest_20180411] PRIMARY KEY CLUSTERED 
(
   [id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 20) ON [PRIMARY]
) ON [PRIMARY]
GO

Next I’m going to fill it in with some data:

DECLARE @i INT = 0;

WHILE @i < 100000
BEGIN
    INSERT INTO dbo.GuggTest_20180411
    (
        somevarchardata
      , somenumericdata
    )
    SELECT CONVERT(VARCHAR(100), NEWID())
         , ROUND(RAND(CHECKSUM(NEWID())) * (100000000), 2);

    SELECT @i = @i + 1;
END;

Now I’ll run the system dmv to see the existing page space usage:

[Screenshot: index page space usage before the rebuild]

The leaf level is where we expect to see the page space used at around 20%; to find the leaf level, look for the lowest index level in the DMV output.  We can see that even now the average space used is close to 100%, despite the index having been created with a fill factor of 20.  To get it down to 20%, I'll rebuild the index:

ALTER INDEX [PK_GuggTest_20180411] ON [dbo].[GuggTest_20180411] REBUILD PARTITION = ALL WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 20)

Now I’ll rerun the DMV to see the space used, which should be closer to 20%.

[Screenshot: index page space usage after the rebuild]

So we can see the average space used is now just over 20%.  (As an aside, we can also see the page count has ballooned from 770 to 3,704.  This shows the danger of reducing the fill factor: reading every row in this table just became roughly 4.8 times as expensive!)

I’ll now insert another 10,000 rows into this table.  If the new rows go in with a fill factor of 20%, the average page space used metric should remain around 20%.  If they go in at 100%, I expect the average space used will go up significantly.

DECLARE @i INT = 0;

WHILE @i < 100000
BEGIN
    INSERT INTO dbo.GuggTest_20180411
    (
        somevarchardata
      , somenumericdata
    )
    SELECT CONVERT(VARCHAR(100), NEWID())
         , ROUND(RAND(CHECKSUM(NEWID())) * (100000000), 2);

    SELECT @i = @i + 1;
END;

And now I check the DMV one last time:

[Screenshot: index page space usage after the additional inserts]

We can see the average space used increased to about 34%, indicating the new rows are not inserted honoring the specified fill factor, as I had incorrectly assumed they would be. The fill factor is only applied when an index is created, rebuilt, or reorganized.

Reading Transaction Log Backups

Like in business, where advances in technology fuel greater efficiencies, advances in fishing technology make for greater fishing opportunities. Just look at all the different ways to rig live bait. My preferred method for fishing close to, but not right on, the bottom is a Lindy Rig. This rig is great when there isn't much vegetation – it has a small bobber between a weight and the hook which lifts the bait off the bottom. The weight is usually a walking slip sinker, which has a large hole for the line to flow through so it doesn't get hung up. The rig is made so the fish can grab the bait and swim away with it without feeling any resistance. Then when the fish stops to eat the bait, you set the hook. It's great fun to sit there with your bail open and a finger on the line, trying to feel the fish make an initial nibble on the bait before grabbing it and running. Next comes the excitement as line peels off your reel while the fish runs. Lastly, once it stops, you reel in steadily until you feel the weight of the fish on the end, then set the hook sharply. Using a Lindy Rig is a fun and exciting way to fish with live bait.
Last week we ran into a situation where a user had changed a large amount of data in our OMS – but we didn't know who or why. We weren't even sure whether the changes had been made through the OMS UI or via a direct database insert. Obviously, having change tracking turned on would have made the investigation easier, but that doesn't help you after the fact. The change had taken place the previous day, so it was my job to try to figure out who made it. My only option was to read the transaction log backups.

There is an undocumented system function in SQL Server called fn_dump_dblog which allows you to pull some basic information out of a transaction log backup. Calling the function is a bit interesting. The first two parameters are the starting and ending LSN, which can usually be left NULL to read the whole backup. The third parameter is either DISK or TAPE, indicating where your backup is stored. The fourth parameter is the backup number within the file; for most of us this will be 1, since we're not using multi-backup media sets. The fifth parameter is the backup's file path and name. After that there are 63 more parameters that can't be omitted, so each one has to be passed as DEFAULT! An initial call looks like this:

SELECT *
FROM fn_dump_dblog (NULL, NULL, N'DISK', 1, N'\\NASServer\SQLBackup\INSTANCEServer\Database\LOG\InstanceServer_database_LOG_20180306_120001.trn',
                    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
                    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
                    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
                    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
                    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT)

For my particular issue, I needed to find all INSERTs into a specific table. I used the fn_dump_dblog function in a common table expression, filtering the Operation column on inserts and the PartitionId on the table I was interested in. Then I joined that CTE to another SELECT from the function on the Transaction ID, filtered on the LOP_BEGIN_XACT operation to get when each INSERT transaction began. I was able to use the SUSER_SNAME function on the Transaction SID column to reveal the login that had made the INSERTs:

WITH CTE
AS
       (SELECT [Transaction ID], COUNT(*) AS InsertedRows
       FROM fn_dump_dblog (NULL, NULL, N'DISK', 1, N'\\NASServer\SQLBackup\InstanceServer\DatabaseName\LOG\InstanceServer_DatabaseName_LOG_20180306_120001.trn',
       DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
       DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
       DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
       DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
       DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT)
       WHERE Operation = ('LOP_INSERT_ROWS')
       AND [PartitionId] IN (SELECT sp.partition_id
                            FROM DatabaseName.sys.objects so
                            INNER JOIN DatabaseName.sys.partitions sp ON so.OBJECT_ID = sp.OBJECT_ID
                            WHERE name = 'InsertedOnTable'
                           AND sp.index_id = 1)
       GROUP BY [Transaction ID]
       )
SELECT [Current LSN], a.[Transaction ID], [Transaction Name], [Operation], [Begin Time], SUSER_SNAME([TRANSACTION SID]) AS LoginName, CTE.InsertedRows
FROM fn_dump_dblog (NULL, NULL, N'DISK', 1, N'\\NASServer\SQLBackup\InstanceServer\DatabaseName\LOG\InstanceServer_DatabaseName_LOG_20180306_120001.trn',
   DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
   DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
   DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
   DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
   DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT) AS a
INNER JOIN cte ON a.[Transaction ID] = cte.[Transaction ID]
WHERE Operation = ('LOP_BEGIN_XACT')

[Screenshot: results of querying the transaction log backup]
Change tracking would give more detail and be easier to query, but at least for now I was able to see that the login making the change was the one the application uses to connect to the database, which confirmed that the change was indeed made through the UI.