Casting and retrieving big, heavy muskie lures for a full day can actually be quite a bit of work. Lures weighing up to a pound can be tough to cast, and lures with heavy action take a lot of effort to pull through the water. Using such large, heavy gear also requires standing instead of sitting, and (not to sound like a wimp but…) a full day of standing adds to the fatigue.
Not all fishing is such hard work. There is something both satisfying and relaxing about sitting back with your feet up, enjoying a cold beer, and holding a rod waiting to feel the sharp tap of a fish bite. Walleye fishing often provides this type of relaxing leisure time. Walleye are a popular fish to catch due to their taste, though they are also quite challenging. They have a reputation for being a finicky fish and will refuse many different presentations before choosing one to bite. Their bite is also quite subtle, and if you aren’t paying attention you can often miss it. By the time you realize you have a fish on your line, the walleye will have felt the hook and spit out the bait. Another complication is that small panfish will often find and steal your bait before the walleye get a chance. Many a time I have tried to set the hook on what I thought was a good-sized walleye, only to pull up a small bluegill, or even worse, a hook with only a small scrap of worm because the bluegills have pulled it apart and eaten the rest.
How nice it would be to have some way to watch and monitor the bait, so it wouldn’t be stolen by smaller fish and we wouldn’t miss the walleye when they do show up. Many database administrators have the same feelings about their databases. We want to be able to see exactly what is changing, when, and who is making those changes. While we try to lock down our environments so most, if not all, of the changes have to go through us, there are often business requirements that force us to give others the ability to make minor changes. This is especially true in lower environments, where developers are often tasked with creating tables or stored procedures that will be migrated up to production.
Fortunately, SQL Server gives us the ability to capture all DDL changes through the use of a DDL trigger. DDL stands for Data Definition Language, and it refers to statements that work against objects rather than the data within them. Some examples are DROP TABLE, CREATE PROCEDURE, and ALTER VIEW. Using a DDL trigger, you can capture these statements, including who ran them, from where, and at what time. Here’s Microsoft’s explanation of DDL triggers. The first thing to do is to create a table to hold all the captured events:
CREATE TABLE Sandbox.dbo.DDLEvents
(
    ID INT IDENTITY(1, 1)
  , EventDate DATETIME2
  , EventType NVARCHAR(100)
  , EventDDL NVARCHAR(MAX)
  , EventXML XML
  , DatabaseName NVARCHAR(MAX)
  , SchemaName NVARCHAR(255)
  , ObjectName NVARCHAR(255)
  , HostName NVARCHAR(255)
  , IPAddress VARCHAR(32)   -- wide enough for the VARCHAR(32) value the trigger inserts
  , ProgramName NVARCHAR(1000)
  , LoginName NVARCHAR(255)
)
GO
Next we need to create the actual trigger.
USE [master]
GO
CREATE TRIGGER [DDLTrigger_CaptureEvents]
ON ALL SERVER
WITH EXECUTE AS 'sa'
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE,
    CREATE_INDEX, ALTER_INDEX, DROP_INDEX,
    CREATE_TABLE, ALTER_TABLE, DROP_TABLE,
    CREATE_FUNCTION, ALTER_FUNCTION, DROP_FUNCTION,
    CREATE_ROLE, ALTER_ROLE, DROP_ROLE,
    CREATE_LOGIN, ALTER_LOGIN, DROP_LOGIN,
    CREATE_TRIGGER, ALTER_TRIGGER, DROP_TRIGGER,
    CREATE_USER, ALTER_USER, DROP_USER,
    CREATE_VIEW, ALTER_VIEW, DROP_VIEW
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @EventData XML = EVENTDATA();
    DECLARE @ip VARCHAR(32) =
    (
        SELECT client_net_address
        FROM sys.dm_exec_connections
        WHERE session_id = @@SPID
    );

    INSERT Sandbox.dbo.DDLEvents
    (
        EventDate
      , EventType
      , EventDDL
      , EventXML
      , DatabaseName
      , SchemaName
      , ObjectName
      , HostName
      , IPAddress
      , ProgramName
      , LoginName
    )
    SELECT GETDATE()
         , @EventData.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(100)')
         , @EventData.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'NVARCHAR(MAX)')
         , @EventData
         , @EventData.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'NVARCHAR(MAX)')
         , @EventData.value('(/EVENT_INSTANCE/SchemaName)[1]', 'NVARCHAR(255)')
         , @EventData.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(255)')
         , HOST_NAME()
         , @ip
         , PROGRAM_NAME()
         , @EventData.value('(/EVENT_INSTANCE/LoginName)[1]', 'NVARCHAR(255)');
END
A couple of important notes on this CREATE statement:
- This trigger must be created in the master database
- I added the WITH EXECUTE AS 'sa' clause. Without it, each user who executes a statement that fires the trigger would need permissions on everything the trigger touches. Since the events go into a holding table, granting everyone permission on that table would be a huge hassle. It is VERY IMPORTANT to understand that if the user/process has insufficient permissions, the entire transaction will be rolled back. This can also be difficult to troubleshoot, as rollbacks caused by insufficient permissions within triggers often do not surface helpful error messages.
- The FOR CREATE_PROCEDURE, ALTER_PROCEDURE, … list specifies which events fire the trigger. Maybe you’re only interested in finding out who is dropping tables, in which case you’d only need to list DROP_TABLE.
- The IP address of the user/process executing the statement is not part of EVENTDATA(), so it has to be retrieved from a separate DMV, sys.dm_exec_connections.
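As a side note on the event list above, SQL Server also supports event groups, which let one keyword stand in for a whole family of events. Below is a minimal sketch (not from the original post; the trigger name is illustrative) of a narrower trigger that only logs table DDL by using the DDL_TABLE_EVENTS group, reusing the same DDLEvents table:

```sql
USE [master]
GO
-- Sketch: a narrower server-level trigger using an event group.
-- DDL_TABLE_EVENTS covers CREATE_TABLE, ALTER_TABLE, and DROP_TABLE.
CREATE TRIGGER [DDLTrigger_TableEvents]   -- illustrative name
ON ALL SERVER
WITH EXECUTE AS 'sa'
FOR DDL_TABLE_EVENTS
AS
BEGIN
    SET NOCOUNT ON;

    -- EVENTDATA() must be captured into a variable before calling .value()
    DECLARE @EventData XML = EVENTDATA();

    INSERT Sandbox.dbo.DDLEvents (EventDate, EventType, EventDDL, EventXML, LoginName)
    SELECT GETDATE()
         , @EventData.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(100)')
         , @EventData.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'NVARCHAR(MAX)')
         , @EventData
         , @EventData.value('(/EVENT_INSTANCE/LoginName)[1]', 'NVARCHAR(255)');
END
```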
One last step after creating the trigger is to enable it:
ENABLE TRIGGER [DDLTrigger_CaptureEvents] ON ALL SERVER
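As a quick sanity check (not in the original post), you can confirm the trigger exists and is enabled by querying the sys.server_triggers catalog view:

```sql
-- is_disabled = 0 means the trigger will fire
SELECT name, is_disabled
FROM sys.server_triggers
WHERE name = 'DDLTrigger_CaptureEvents';
```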
Now after running a few DDL statements I see them logged in my table:
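If you want to try it yourself, something along these lines should fire the trigger and show the captured events (the throwaway table name here is just for illustration):

```sql
-- Fire the trigger with a throwaway DDL statement, then inspect the log.
CREATE TABLE Sandbox.dbo.TriggerTest (ID INT);
DROP TABLE Sandbox.dbo.TriggerTest;

SELECT TOP 10 EventDate, EventType, LoginName, HostName, EventDDL
FROM Sandbox.dbo.DDLEvents
ORDER BY EventDate DESC;
```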
So there is one way to tell who’s doing what DDL in your database. Next time you need to confront one of your “bluegill” teammates, you’ll have proof that they’ve been messing around with your bait/database.