Experimenting with SQL Server Memory

In my “career” as a muskie fisherman, I’ve caught or seen fish in some very unlikely places. I’ve caught fish over very deep water with seemingly no nearby structure, as well as in water as shallow as just a few feet. I’ve had muskies strike or chase northern pike and walleye as I brought them into the boat. I’ve had bites when I was expecting nothing. The key to catching more fish is being willing to test new locations, techniques, and lures.

I recently wondered what would happen if I was forced to drastically reduce memory available to SQL Server. Most DBAs know that if SQL Server is low on available memory and there is RAM to spare on the server, you can increase the memory SQL Server can use in the server properties. But what happens if you reduce the memory SQL Server can use below what it is currently using? I decided to try it out in our Test environment.

The initial maximum server memory setting was 7 GB:


Looking at the SQL Server process in Windows Task Manager, I can see it is using all of that plus a bit more:


I turned the max down to 4.5 GB:
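
The screenshots show the change being made through the Server Properties dialog in SSMS, but the same setting can be changed in T-SQL. Here's a sketch (the setting is specified in megabytes, so 4.5 GB is 4608 MB):

-- Lower max server memory to 4.5 GB (4608 MB)
-- This is an advanced option, so show advanced options must be on first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4608;
RECONFIGURE;

The change takes effect immediately, no service restart required, which is exactly why the behavior described below is worth understanding.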


Looking at Windows Task Manager shortly after the change, I can see the amount of memory being used by SQL Server has been reduced as expected.


If I look at my Memory Usage tracking at that time, I can see the buffer pool was the area that was directly affected. The buffer pool is where SQL Server caches data pages for quick reading and writing.


So the next time you need to adjust the max memory downward, be aware that this will immediately reduce the number of pages in the buffer pool, and queries may temporarily run slower while those pages are read back in from disk.
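
If you want to watch this happen yourself, one way is to check how much of the buffer pool each database is holding before and after the change. Here's a sketch using the sys.dm_os_buffer_descriptors DMV (each buffered page is 8 KB):

-- Approximate buffer pool usage per database, in MB
-- Note: this DMV returns one row per buffered page, so it can be
-- slow to query on servers with very large amounts of memory
SELECT  DB_NAME(database_id) AS Database_Name ,
        COUNT(*) * 8 / 1024 AS Buffer_Pool_MB
FROM    sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY Buffer_Pool_MB DESC;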

Testing Disk Speed

It has been a rather busy couple of months. We’ve had lots of big projects at work and lots of departmental changes to go with them. Additionally, it’s finally summer and I’ve been fishing as much as possible. This being my second year with a boat, I’ve been making the most of my free time, including lots of fishing with the kids at nearby Silver and Little Cedar lakes, and a fair amount of muskie fishing with friends and family down at Pewaukee Lake.
20150708 Northen Cropped
Actually, just last night I was fishing with a couple of friends on Pewaukee. We didn’t have much luck, but we did find a 34″ northern pike floating in the middle of the lake. It clearly hadn’t been dead more than an hour or two. It had some thin, deep cuts toward the back of it, probably the result of another fish biting it.
Anyway, my boat is fairly small for three guys, and even with the motor at full throttle the best speed we could make was about 18 mph. It felt pretty slow, since I had recently been out with a couple of different people in their boats, one that could get up to 40 mph and another that hit 45 mph (and may have been able to go faster). There are things you can do to increase your speed while moving across the lake, such as making sure your boat is level, but the size of the engine really does make a big difference.
The week before the big database server upgrade that I was recently working on, I decided to test the disk speed on the new server. I was confident we’d be getting a big boost because we were going from spinning disks on the old server to SSDs, which are supposed to be up to 40X faster. I was pretty excited to run some tests and see what kind of improvement I got from it.
I decided to use a tool called diskspd to measure the speeds. This tool is available from Microsoft TechNet. Although it was not created specifically for SQL Server, it can be configured to run effective tests on SQL Server disks. I created the following batch file to run tests on the OS disks, the data file disks, the log file disks, and the tempdb disks.

@echo off
echo Automated I/O testing with Diskspd, Microsoft tool available at https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223
REM Author: 		Dave Gugg
REM Create Date: 	2015-05-06
REM Description: This script will automate I/O testing on a new database server.  It is a good idea to
REM           	 run this test on both the new and old server (make sure old server database is offline)
REM			 	 to get an idea of how I/O will change when moving a SQL Server instance to a new server.

REM Description of applicable diskspd arguments:
REM -b: Block size.  SQL Server uses 8K pages, but also sometimes reads full extents (64K) and read-aheads (512K).
REM -c: Size of the workload files to create.  Compare this to your mdf or ldf file sizes.
REM -d: Test duration, in seconds
REM -h: Disable hardware and software caching, from within the Windows OS layer.  This mimics SQL Server behavior.
REM -L: Capture latency info.
REM -o: Outstanding I/Os (queue depth) per worker thread.  Make this higher to increase intensity of test.
REM -r: Random reads/writes.  Use random for data file drives.
REM -s: Sequential reads/writes.  Use sequential for log file drives.
REM -t: Worker threads.  Set this to number of cores on server.
REM -w: Write percentage.  The remaining percentage will be reads, so setting this to 20 will test based on 80% reads and 20% writes.
REM -Z: Separate read and write buffers, and size the write source buffer.  A larger buffer will have more randomized data to write.

cd /d "C:\Diskspeed"

echo Starting OS drive tests
REM System databases are kept on the OS drive

diskspd -b8K -d60 -h -L -o8 -t32 -r -w5 -Z250M -c5G c:\DiskSpeed\io.dat>OS_IO_8k_result.txt
diskspd -b64K -d60 -h -L -o8 -t32 -r -w5 -Z250M -c5G c:\DiskSpeed\io.dat>OS_IO_64k_result.txt
diskspd -b512K -d60 -h -L -o8 -t32 -r -w5 -Z250M -c5G c:\DiskSpeed\io.dat>OS_IO_512k_result.txt

echo OS drive tests complete
echo Starting data drive tests
REM Data drives hold the mdf and ndf files

diskspd -b8K -d60 -h -L -o8 -t32 -r -w20 -Z1G -c50G D:\SQLData\io.dat>DF_IO_8k_result.txt
diskspd -b64K -d60 -h -L -o8 -t32 -r -w20 -Z1G -c50G D:\SQLData\io.dat>DF_IO_64k_result.txt
diskspd -b512K -d60 -h -L -o8 -t32 -r -w20 -Z1G -c50G D:\SQLData\io.dat>DF_IO_512k_result.txt

echo Data drive tests complete
echo Starting log drive tests
REM Log drives hold the ldf files

diskspd -b8K -d60 -h -L -o8 -t32 -si -w90 -Z5G -c10G E:\SQLLogs\io.dat>LF_IO_8k_result.txt
diskspd -b64K -d60 -h -L -o8 -t32 -si -w90 -Z5G -c10G E:\SQLLogs\io.dat>LF_IO_64k_result.txt
diskspd -b512K -d60 -h -L -o8 -t32 -si -w90 -Z5G -c10G E:\SQLLogs\io.dat>LF_IO_512k_result.txt

echo Log drive tests complete
echo Starting tempdb drive tests
REM Tempdb needs its own drive since it can get very intense usage

diskspd -b8K -d60 -h -L -o8 -t32 -r -w33 -Z5G -c20G G:\TempdbData\io.dat>TF_IO_8k_result.txt
diskspd -b64K -d60 -h -L -o8 -t32 -r -w33 -Z5G -c20G G:\TempdbData\io.dat>TF_IO_64k_result.txt
diskspd -b512K -d60 -h -L -o8 -t32 -r -w33 -Z5G -c20G G:\TempdbData\io.dat>TF_IO_512k_result.txt

echo Tempdb drive tests complete
echo All testing complete, press any key to exit...
pause > nul

Next I took the SQL Server service offline on our current production server (during a maintenance window) so I could run the tests while both servers were mostly idle. The first thing I noticed was that OS drive performance did not improve. This made sense, since the OS drives on the new server were not SSDs but the built-in spinning disks. Here are the results from the rest of the tests:

20150723 Diskspd results

Overall, I saw about a 20X improvement in speed. It will be interesting to see how this improves the performance of the applications using this database server, as well as ad-hoc queries.

How to get Read/Write percentages for a SQL Server database

The 2015 fishing opener started out with some frustration, but ended up being quite successful for me. I got up at 4:40 AM, which was just a bit later than I had wanted. I also took a bit too much time showering and eating breakfast, so I got to the boat launch on Okauchee Lake at 6:10 AM, 25 minutes later than I had intended. Now this was my first fishing opener with a boat, and I clearly underestimated how many other people were as pumped up about it as I was, because the boat launch was full! I was disappointed because I had really wanted to fish Okauchee, but the public launch on Pewaukee at Naga-Waukee is only a ten-minute drive from there, so I decided to head over to Pewaukee. Upon arrival, I was shocked to see that this launch, which is much larger than Okauchee’s with 71 trailer parking places, was also full. I ended up going over to Smokey’s Bait Shop on the far east end of the lake and launching there.

I’d never fished the east end of Pewaukee before, but I knew it was shallow and weedy all over. As I motored out from the launch, I discovered good weeds almost immediately and decided this was as good a spot as any to search out some muskies. The wind was coming from the southwest, and I drifted toward the north shore of the lake. I saw a muskie pretty quickly, and although it didn’t seem to be following my bait, it was still a good sign that there were fish in the area. The weeds were pretty high in this area, and I was cleaning weeds off the hooks of my perch-colored Shallow Raider after almost every cast. I began to get close to shore and was about to stop fishing so I could motor back toward the middle of the lake when I saw a hard weedline stretching parallel to the shore. The weeds seemed to completely stop about 20 to 30 feet from shore. This looked like an excellent ambush location, so I fired off a cast west along the weedline. As the lure got to the boat, I looked down to see a beautiful muskie following the bait closely.
She bit the bait and gave a head shake, and I quickly pulled the rod tip up and drove the hook into her mouth. The fight wasn’t too intense, although she did walk me completely around the boat once. I netted and unhooked her, got a couple of pictures, and then released her back into the water.
20150502 Muskie Enhanced[2]
The rest of the outing was good, but I didn’t land any more muskies. I saw three others in the water, saw four other boats land muskies, and also landed a pretty solid largemouth bass. It was a fun day, and great to be back out on the water after a long winter.

We are going to be replacing our production SQL Server in the next couple of months. We have already picked out the new hardware, and I have begun the process of installing SQL Server and running some tests. One important test that should always be run when replacing a server is disk I/O, especially if you are not using a SAN. There is a great tool that can be used to measure disk I/O, but I will demonstrate that in a later post. First we need some preliminary information, the most important piece being the read/write ratio of your databases. Fortunately, SQL Server tallies each and every read and write in the form of index stats in the sys.dm_db_index_usage_stats DMV. Using this view, we can get a good idea of the percentage of reads and writes in each database.

I wrote this query, which takes the sum of all reads, both system and user, and divides it by the sum of all reads and writes to get the read percentage. Similarly, it takes the sum of all writes and divides it by the sum of all reads and writes to get the write percentage. Lastly, I group by database so I can see the numbers for each database on the instance. This wouldn’t be required if you wanted an overall picture of all databases on the instance, but I know which databases are my most important and largest, and I really want to know the read/write percentages on those.

SELECT  DB_NAME(s.database_id) AS Database_Name ,
        -- Adding 0.000 forces decimal rather than integer division
        SUM(s.user_seeks + 0.000
            + s.user_scans
            + s.user_lookups
            + s.system_seeks
            + s.system_scans
            + s.system_lookups) / SUM(s.user_seeks
                                      + s.user_scans
                                      + s.user_lookups
                                      + s.user_updates
                                      + s.system_seeks
                                      + s.system_scans
                                      + s.system_lookups
                                      + s.system_updates) AS Read_Percentage ,
        SUM(s.user_updates + 0.000
            + s.system_updates) / SUM(s.user_seeks
                                      + s.user_scans
                                      + s.user_lookups
                                      + s.user_updates
                                      + s.system_seeks
                                      + s.system_scans
                                      + s.system_lookups
                                      + s.system_updates) AS Write_Percentage
FROM    sys.dm_db_index_usage_stats AS s
GROUP BY s.database_id
ORDER BY DB_NAME(s.database_id);

20150506 Read Write Percentages
A couple of notes on this query:

  • The stats are cumulative since the last SQL service restart.  The longer you wait to run this since the last server reboot or service restart, the more meaningful the stats will be.
  • The DMV used in this query does not exist on versions prior to SQL Server 2005.
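
To judge how much history the stats actually cover, you can check when the service last started. Here's a sketch (the sqlserver_start_time column requires SQL Server 2008 or later):

-- When did the SQL Server service last start?
-- The index usage stats accumulate from this point forward.
SELECT sqlserver_start_time
FROM sys.dm_os_sys_info;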

Stay tuned for an upcoming post where I will use the results of this query to measure disk I/O for a new server compared to the current server.