How Index Fill Factor affects new rows

Fishing is not what it used to be. I recently began reading the Wisconsin Outdoor News regularly, and it has helped me understand just how pervasive fish stocking is. Without it, the population of fishermen in Wisconsin, which includes me, would quickly overwhelm the population of fish, leaving our lakes and rivers largely empty. Stocking fish isn’t a bad thing; it allows for bigger bag limits and greater participation, and it lets more people enjoy fishing. But I don’t believe we should trick ourselves into thinking that fishing is the same now as it was before we settled and tamed our great state. Many of the lakes we enjoy fishing hold species that are not native to that body of water. So yes, it’s great that my local lake has a nice population of walleye (most of which have been stocked), but the natural balance between the species in the lake has been artificially disrupted to cater to fishermen who prize walleye above all else. Is it fair to fishermen who enjoy fishing for other species, or even to the ecosystem of the lake, to maintain this stocking? That’s a tough question. We just don’t know what the long-term effects of maintaining such an unnatural balance will be.

One thing I like about SQL Server is that you can test almost anything, so you can be sure what the long-term effects of a change will be. I recently answered a trivia question regarding index fill factor incorrectly, so I decided to test the question out in SQL Server. The question asked whether the fill factor affects newly inserted rows on a table. The fill factor of an index controls how much empty space the database engine leaves on each leaf-level page when the index is created or rebuilt, for the purpose of minimizing page splits during later updates or inserts.

For example, if I have a page that is 100% full and someone updates a varchar(1000) field from a value using 10 bytes to a value using 250 bytes, SQL Server has some work to do. It can’t just fill in the empty space on the page because there is none. It has to move some of the rows on the page to a new page, create a pointer on the original page to that new page, and then perform the update. This is a lot of extra work, and it increases index fragmentation. So if you have a table where you expect many updates, you might intentionally leave some blank space on each page. That way an update that increases the space used by a row can simply push the extra data into the free space without causing a page split.
Now, I assumed that when SQL Server inserts new rows at the end of a table, it inserts those rows honoring the fill factor defined on the table’s index. However, when I submitted that answer to the trivia question, it was marked incorrect. So I decided to set up this test: create an index with a low fill factor, then insert a bunch of rows at the end of the table. If the overall space used, as seen in the sys.dm_db_index_physical_stats DMV, goes up, then we can be confident the new rows are being inserted on pages filled close to 100%. If it stays about the same, then we know the new rows are being inserted at the fill factor defined on the index.

First I create a new table for the test:

CREATE TABLE [dbo].[GuggTest_20180411](
   [id] [int] IDENTITY(1,1) NOT NULL,
   [somevarchardata] [varchar](100) NULL,
   [somenumericdata] [decimal](10, 2) NULL,
 CONSTRAINT [PK_GuggTest_20180411] PRIMARY KEY CLUSTERED 
(
   [id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 20) ON [PRIMARY]
) ON [PRIMARY]
GO

Next I’m going to fill it in with some data:

DECLARE @i INT = 0;

WHILE @i < 100000
BEGIN
    INSERT INTO dbo.GuggTest_20180411
    (
        somevarchardata
      , somenumericdata
    )
    SELECT CONVERT(VARCHAR(100), NEWID())
         , ROUND(RAND(CHECKSUM(NEWID())) * (100000000), 2);

    SELECT @i = @i + 1;
END;

Now I’ll run the system DMV to see the existing page space usage:

20180411 Index Usage Before Rebuild
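
The query behind these screenshots isn’t shown in the original post, but something along these lines should produce the same numbers (a sketch; DETAILED mode is used because the default LIMITED mode leaves avg_page_space_used_in_percent NULL):

SELECT index_level
     , page_count
     , avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.GuggTest_20180411'), NULL, NULL, 'DETAILED') -- leaf level is index_level = 0
ORDER BY index_level;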

The leaf level is where we expect to see the page space used at around 20%. To find the leaf level, look for the lowest index level in the DMV output (index_level = 0). We can see that even now the average space used is close to 100%, despite the index having been created with a fill factor of 20% (the index was created on an empty table, so there were no existing pages to spread out). To get it down to 20%, I’ll rebuild the index:

ALTER INDEX [PK_GuggTest_20180411] ON [dbo].[GuggTest_20180411] REBUILD PARTITION = ALL WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 20)

Now I’ll rerun the DMV to see the space used, which should be closer to 20%.

20180411 Index Usage After Rebuild.png

So we can see the average space used is now just over 20%. (As an aside, we can also see the page count has ballooned from 770 to 3704. This shows the danger of reducing the fill factor: selecting all rows from this table just became roughly 4.8 times as expensive!)

I’ll now insert another 100,000 rows into this table. If the new rows go in at a fill factor of 20%, the average page space used metric should remain around 20%. If they go in at 100%, I expect the average space used to go up significantly.

DECLARE @i INT = 0;

WHILE @i < 100000
BEGIN
    INSERT INTO dbo.GuggTest_20180411
    (
        somevarchardata
      , somenumericdata
    )
    SELECT CONVERT(VARCHAR(100), NEWID())
         , ROUND(RAND(CHECKSUM(NEWID())) * (100000000), 2);

    SELECT @i = @i + 1;
END;

And now I check the DMV one last time:

20180411 Index Usage After Insert.png

We can see the space used increased to about 34%, indicating that new rows are not inserted according to the index’s fill factor, as I had assumed, but instead fill each page as completely as possible.


Reading Transaction Log Backups

Like in business, where advances in technology fuel greater efficiencies, advances in fishing technology make for greater fishing opportunities. Just look at all the different ways to rig live bait. My preferred method for fishing close to, but not right on, the bottom is a Lindy Rig. This rig is great when there isn’t much vegetation – it has a small bobber between a weight and the hook which lifts the bait off the bottom. The weight is usually a walking slip sinker, which has a large hole for the line to flow through so it doesn’t get hung up. The rig is made so the fish can grab the bait and swim away with it without feeling any resistance. Then, when the fish stops to eat the bait, you set the hook. It’s great fun to sit there with your bail open and a finger on the line, trying to feel the fish make an initial nibble on the bait before grabbing it and running. Next comes the excitement as line peels off your reel while the fish runs. Lastly, once it stops, you reel in steadily until you feel the weight of the fish on the end of the line, then set the hook sharply. Using a Lindy Rig is a fun and exciting way to fish with live bait.
Last week we ran into a situation where a user had changed a large amount of data in our OMS – but we didn’t know who or why. We weren’t even sure whether the changes had been made through the OMS UI or via a direct database insert. Obviously, having change tracking turned on would have made the investigation easier, but that doesn’t help you after the fact. The change had taken place the previous day, so it was my job to try to figure out who made it. My only option was to read the transaction log backups.

There is an undocumented system function in SQL Server called fn_dump_dblog which allows you to get some basic information from a transaction log backup. Calling the function is a bit interesting – the first two parameters are the starting and ending LSN, which can usually just be left NULL to read the whole backup. The third parameter is either DISK or TAPE, indicating where your backup is stored. The fourth parameter is the backup number within the file; for most of us this will be 1, since we’re not using multi-backup media sets. The fifth parameter is the file path and name. After that there are 63 non-optional DEFAULT parameters! So an initial call looks like this:

SELECT *
FROM fn_dump_dblog (NULL, NULL, N'DISK', 1, N'\\NASServer\SQLBackup\INSTANCEServer\Database\LOG\InstanceServer_database_LOG_20180306_120001.trn',
                    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
                    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
                    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
                    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
                    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT)

For my particular issue, I needed to find all INSERTs into a specific table. I used the fn_dump_dblog function in a common table expression, filtering the operation on inserts (LOP_INSERT_ROWS) and the PartitionId on the table I was interested in. Then I joined that CTE to another SELECT from the function on the Transaction ID, and filtered that outer query on the LOP_BEGIN_XACT operation to get when each INSERT transaction began. I was able to use the SUSER_SNAME function on the Transaction SID column to reveal the login that had made the INSERTs:

WITH CTE
AS
       (SELECT [Transaction ID], COUNT(*) AS InsertedRows
       FROM fn_dump_dblog (NULL, NULL, N'DISK', 1, N'\\NASServer\SQLBackup\InstanceServer\DatabaseName\LOG\InstanceServer_DatabaseName_LOG_20180306_120001.trn',
       DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
       DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
       DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
       DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
       DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT)
       WHERE Operation = ('LOP_INSERT_ROWS')
       AND [PartitionId] IN (SELECT sp.partition_id
                            FROM DatabaseName.sys.objects so
                            INNER JOIN DatabaseName.sys.partitions sp ON so.OBJECT_ID = sp.OBJECT_ID
                            WHERE name = 'InsertedOnTable'
                           AND sp.index_id = 1)
       GROUP BY [Transaction ID]
       )
SELECT [Current LSN], a.[Transaction ID], [Transaction Name], [Operation], [Begin Time], SUSER_SNAME([TRANSACTION SID]) AS LoginName, CTE.InsertedRows
FROM fn_dump_dblog (NULL, NULL, N'DISK', 1, N'\\NASServer\SQLBackup\InstanceServer\DatabaseName\LOG\InstanceServer_DatabaseName_LOG_20180306_120001.trn',
   DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
   DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
   DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
   DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
   DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT) AS a
INNER JOIN cte ON a.[Transaction ID] = cte.[Transaction ID]
WHERE Operation = ('LOP_BEGIN_XACT')

20180315 Query Tlog Backup
Change tracking would give more detail and be easier to query, but at least for now I was able to see that the login making the change was the one the application uses to connect to the database, so I could confirm the change was indeed made through the UI.
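
For reference, enabling SQL Server’s built-in change tracking (assuming that is the feature meant above, and reusing the DatabaseName and InsertedOnTable names from the query above) would look something like this sketch; it records which rows changed and when, though not who changed them:

ALTER DATABASE DatabaseName
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);  -- retention values are arbitrary

ALTER TABLE dbo.InsertedOnTable
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);  -- the table must have a primary key

-- Later, list rows changed since a given version (0 = everything tracked so far).
SELECT ct.*
FROM CHANGETABLE(CHANGES dbo.InsertedOnTable, 0) AS ct;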

Documentation – The ‘Hit-by-a-Bus’ Handbook

A huge part of success in fishing is preparation. Without careful preparation fishing will be slow, you can lose or break equipment, and you can even put yourself in danger. Several of my previous posts have detailed how preparing by looking at weather, depths, fishing reports, etc. can improve your fishing “luck”. Another part of preparation is inspecting your fishing equipment before use. I can remember at least three times when I’ve cast my lure right off the end of my line. It’s great to see a bomb cast really sail out there, but it’s not so great the moment you realize it’s no longer attached to your line. Lastly, let’s not forget trying to unhook a deep hookset on a northern pike when you neglected to make sure your pliers were in your tackle box.
I was considering preparation this week after I read this great post by Monica Rathbun on how to survive as a Lone DBA. One of her great tips is to create a ‘Hit-by-a-Bus’ handbook. This would hold all the information you need to pass on to the next DBA if you were to suddenly leave the job. With this documentation, the next DBA should be able to step right in without missing a beat. As I considered my current environment, it became clear to me that, were I to suddenly leave my position, the next DBA would have a lot of questions that would require deep, time-consuming investigation to answer. As such, I’ve decided to make a major commitment to improving my documentation. From a DBA perspective, this will include detailed documentation on:

  • Daily/Weekly/Monthly/Ad-Hoc tasks. How to do them and why they are done.
  • SQL Agent Jobs. What are they doing and how to troubleshoot if they break. How urgently do they need to be fixed?
  • SSIS packages. What are they doing at a quick glance? Stepping through an SSIS package to figure out what it is doing can be all kinds of frustrating.
  • Backups. What is our strategy? Where are backups going and how would I address various restore situations?

I believe this documentation will bring value both for myself (because my memory is not great) and for my successor, whenever that day may come.

A deeper dive into XML in SQL Server

When I was younger, I remember my family taking a camping trip near a lake with trout. As best I can remember, the lake was stocked with rainbow and brown trout. When we visited the lake to fish, there was a crowd of people along one of the shores. There was a mass of small trout in the water, and everyone was trying to get one to bite. Very few were succeeding. Seeing the fish there, I joined the group, desperate to get one of the 8″–10″ fish to bite. Suddenly, from a shoreline halfway across the lake, I heard a triumphant cry. Looking up, I saw a man holding an enormous trout in the air. He’d caught the fish by avoiding the crowd of people and small fish and exploring a new shoreline. I learned two lessons that day. First, if you target smaller fish because you know where they are, you’ll never catch the big ones. Second, if you never explore unfamiliar water, you’ll miss out on some good fishing.
One of our developers was recently pulling data from SQL Server to create a catalog feed for a website. This feed included our products, which have a parent-child hierarchy between a base product and the skus beneath it. His plan was to create two stored procedures – one for the base products and one for the skus – and then merge the results of those two stored procedures into a single XML document in his code.
I had recently taken a SQL quiz and run into the hierarchyid data type, and thought this might be a good opportunity to experiment with it. After some initial work, I found that hierarchyid was overkill for this situation; it seems to work best when you have several levels you want to be able to delve into. Instead, I decided to attempt the feed directly in SQL using the FOR XML clause. I have used this clause a few times before, but never to build something as complex as this document. Here are some things I did or observed that may be helpful:
1. Concatenating XML
The final output document had several sections that I was forced to create with separate SELECT statements. I couldn’t combine them into the same XML document by unioning or concatenating them. Instead, I had to place each one into a variable, then SELECT the variables together in a separate statement that used the FOR XML clause.

DECLARE @xmlheader XML, @xmlnonskuitem AS XML, @xmlsizeonlyitem AS XML, @xmlcoloronlyitem AS XML, @xmlcolorandsizeitem AS XML, @xmlskus AS XML;

SELECT @xmlheader =  ...;
SELECT @xmlnonskuitem = ...;
SELECT @xmlcoloronlyitem = ...;
SELECT @xmlsizeonlyitem = ...;
SELECT @xmlcolorandsizeitem = ...;
SELECT @xmlskus = ...;

SELECT @xmlheader, @xmlnonskuitem, @xmlcoloronlyitem, @xmlsizeonlyitem, @xmlcolorandsizeitem, @xmlskus
--End root
FOR XML RAW;

2. Building depth into the XML tree
The XML we were creating had several child elements, some of which needed to be repeated for each variation and some of which were used to encapsulate all variants for a particular base product. To get this layering, I had to create some of the elements as varchar strings that could be converted to XML. Complicating this, I needed to create XML inside of those text strings and join them together. The general pattern is to convert the innermost XML to varchar, concatenate that with the hard-coded elements, and then convert the whole thing back to XML.

SELECT @xmlsizeonlyitem
    =
(
    SELECT RTRIM(LTRIM(i.[Field2])) AS "@product-id"
         , CONVERT(
                      XML
                    , '<attributes><variation-attribute attribute-id="size" variation-attribute-id="size"><display-name>Size</display-name><variation-attribute-values>'
                      + CONVERT(VARCHAR(MAX)
                              , (
                                    SELECT LTRIM(RTRIM(s.size)) AS [@value]
                                         , RTRIM(LTRIM(s.size)) AS [display-value]
                                    FROM db.SCHEMA.tblSku AS s
                                    WHERE s.Field1 = 125
                                          AND s.Field2 = i.Field2
                                    FOR XML PATH('variation-attribute-value'), TYPE
                                )
                               ) + '</variation-attribute-values></variation-attribute></attributes><variants>'
                      + CONVERT(VARCHAR(MAX)
                              , (
                                    SELECT RTRIM(LTRIM(s.Field2)) + ' ' + RTRIM(LTRIM(s.Field3)) AS [@product-id]
                                    FROM db.SCHEMA.tblSku AS s
                                    WHERE s.Field1 = 125
                                          AND s.Field2 = i.Field2
                                    FOR XML PATH('variant'), TYPE
                                )
                               ) + '</variants>'
                  )
    FROM db.SCHEMA.tblBaseItem i
    WHERE i.Field1 = 125
          AND i.Field2 = 'BaseItemId'
    GROUP BY i.Field1
           , i.Description
    FOR XML PATH('product')
);

3. Elements versus attributes
It’s very easy to control whether a field you return from the database becomes an element or an attribute. For attributes, you put an @ sign at the front of the name. For elements, you just leave the name as is.

SELECT @xmlcolorandsizeitem =
(
    SELECT RTRIM(LTRIM(i.[Field2])) AS "@product-id"
         , RTRIM(LTRIM(i.Description)) AS [display-name]
    FROM db.SCHEMA.baseitemtable i
    WHERE i.field1 = 125
          AND i.field2 = '7972'
    GROUP BY i.field1
           , i.Description
    FOR XML PATH('product')
);

20180216 XML Elements vs Attributes
4. Renaming ROOT and ROW
You can rename the ROOT element to something else by putting the new name in the parentheses after ROOT.
You can also rename the row element, which defaults to row, by putting the name in the parentheses after RAW or PATH. This does not work in EXPLICIT or AUTO modes, which don’t use a row element.

FOR XML RAW ('catalog'), ROOT ('catalogs');

20180216 XML Root and Raw.PNG
5. Adding namespace
To add a namespace to the XML, you can use the WITH XMLNAMESPACES clause. This only works for RAW, AUTO, or PATH modes.

WITH XMLNAMESPACES('http://www.company.com/xml/impex/catalog/2106-1-31' AS Co)

20180216 XML Namespace
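
The post only shows the WITH clause itself; in a full query the declared prefix is then used in the element names. Here is a minimal sketch in the same style as the earlier examples, with hypothetical table and column names:

WITH XMLNAMESPACES ('http://www.company.com/xml/impex/catalog/2106-1-31' AS Co)
SELECT RTRIM(LTRIM(i.[Field2])) AS [@product-id]          -- attribute, no namespace prefix
     , RTRIM(LTRIM(i.Description)) AS [Co:display-name]   -- element placed in the Co namespace
FROM db.[SCHEMA].tblBaseItem AS i
WHERE i.Field1 = 125
FOR XML PATH('Co:product'), ROOT('catalog');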

I hope these tips give you a better understanding of how to work with XML in SQL.

Managing Exchange in Office 365 with PowerShell

I have a lot of memories of fishing growing up.  Some are great and some are not.  One memory I have was seeing a giant muskie in the water while fishing as a child.  I was very inexperienced, and didn’t know anything about muskie fishing.  Well, the fish in the water was just sitting down near the bottom.  I cast a shad rap out a ways, then retrieved it past the muskie’s face.  The fish acted as if it didn’t even see the lure.  I figured I’d try a different lure, so I pushed the swivel lock to open it up and it broke.  The whole wire simply snapped in half.  I was crestfallen.  Here I had a giant fish right below me, and I had no way to put a lure onto my line.  You see, this was before I had learned to tie a fishing knot.  Disappointed, I came back to the dock with a story about how I missed out on a fish.

I used that experience as a lesson though, and soon learned to tie a fishing knot so well that I could do it in seconds without even looking.

Working in IT can often mean a range of duties and responsibilities, particularly with smaller companies. Today I was asked to fix an issue in Office 365. The user had a shared mailbox she could send out of, but the emails did not show up in her Sent folder. After some online searching I found this Microsoft blog post that described how to enable this feature for an O365 shared mailbox. I’ve worked with PowerShell, but I’ve never administered O365 with it, so it was a good learning experience.

First I had to connect to Exchange Online, following the instructions on this TechNet page. I used the Get-Credential command to input my O365 credentials.

20180212 Get Credential.PNG

Next I had to create a session with Exchange Online.

20180212 Connect To Exchange

Next I had to import that session into my local session.

20180212 Import PS Session

From here I attempted to run the Set-Mailbox command, but I came up with an error:

20180212 SetMailbox Failure

This was because I was using the wrong account. I have a local user account and a local admin account. Well, my local user account is a Global Admin in O365 (go figure), but my local admin account is nothing. So I closed everything down, making sure to use the Remove-PSSession $Session command as suggested by the page I linked to earlier. Then I went through all the steps again, this time entering my user account instead of my admin account. This time, the command was successful:

20180212 SetMailbox Success
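
Put together, the whole session looked roughly like the sketch below. The connection steps follow the classic remote-PowerShell approach from the linked TechNet page, and the Set-Mailbox switches are the ones described in the Microsoft blog post for copying sent items into a shared mailbox; the shared mailbox name itself is a placeholder.

# Prompt for the O365 credentials (use the account that is a Global Admin).
$UserCredential = Get-Credential

# Create and import a remote session to Exchange Online.
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
Import-PSSession $Session -DisableNameChecking

# Copy messages sent as (or on behalf of) the shared mailbox into its Sent Items folder.
# 'SharedMailboxName' is a placeholder for the actual shared mailbox.
Set-Mailbox -Identity SharedMailboxName -MessageCopyForSentAsEnabled $true -MessageCopyForSendOnBehalfEnabled $true

# Clean up the remote session when finished.
Remove-PSSession $Session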

To see whether you have permission to make this change, you can check if your user is a Global Admin in the O365 portal:

20180212 O365 Global Admin.PNG

Using this method I was able to solve the user’s problem.

Meltdown and Spectre

I’m a huge fan of comedian Brian Regan. I love that his humor is clean, and he is a genuinely funny guy. One of his older routines has a bit on watching fishing on TV. He asks his audience, “Do you ever watch fishing on TV for like 15 minutes and then just go: ‘Boy, I’d better get a life. I’m watching fishing. I’m not even fishing, I’m watching fishing. I’m too lazy to fish… I’m taping fishing, to watch again later!'” While it does seem silly to watch fishing instead of going out to do it, I really enjoy being able to watch fishing during the winter months. When you can’t be out in a boat, it’s nice to be able to remember the warmth of the sun and the kiss of fresh lake air on your skin, not to mention the thrill of a strong fish tugging from the end of your line. In addition to the sweet memories these shows can evoke, they can also teach new tactics that can be used to improve your techniques. It’s important to pay attention to what the professionals are doing in order to improve yourself.
As DBAs, we should also be paying attention to news in the industry. A few weeks ago we were met with news of a couple of security flaws that have the potential to cause big problems. These flaws, named Meltdown and Spectre, exploit vulnerabilities in the processor, so they affect almost all computers. As a DBA you have at least two places to address this: in the server OS and in SQL Server. Here is Microsoft’s guidance for protecting SQL Server from these vulnerabilities. It’s important to note that Microsoft advises evaluating the performance impact of these patches. They do fix the vulnerabilities, but they have the potential to degrade performance, at least a bit. Environments that use certain extensibility mechanisms with SQL Server, such as linked servers or CLR, are the most impacted. On the OS side, unfortunately, as of the time of writing this blog we still don’t have OS patches for two versions of Windows Server – 2008 and 2012. If you are using either of these versions, check back on this page often to find out when the patches become available, and get them applied.

Moving UNDO and TEMP files in Oracle

This summer I had the pleasure of reading a couple of really good muskie fishing books. The first was Musky Strategy by row-trolling legend Tom Gelb. There were a number of cool things in this book, but the main thing that struck me was how scientific Tom’s approach to muskie fishing is. He went so far as to test the depth his lures ran while trolling by rowing parallel to a shoreline over and over, from deep to shallow, to see when the lure started contacting the bottom. The book was a fun read, and it showed how you can succeed without all the modern electronic tools as long as you’re willing to put in the time. The second was Time on the Water by Bill Gardner. This book is a story rather than a nonfiction book meant to teach tactics; it tells of one man’s quest to catch a big muskie while fishing an entire season in northern Wisconsin. It was a fun read, and the main takeaway was just how difficult it is to catch a big muskie, even when fishing for them every day. If you like fishing, check out these books!
Loss of free space on a disk is something that we as DBAs are always dealing with. Perhaps we can ask our SAN admin for more space, but in many cases we cannot. In those cases, moving database files from one logical drive to another can answer the challenge. Note that care should be taken when considering a move: drives can have different speeds, and there may be a reason a database file is on one drive and not another.
I recently needed to move a couple Oracle system tablespace files (UNDO and TEMP) from the C: drive (where they never should have been put in the first place) to a different drive.
20171215 OracleDB Drive Space
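
Before making any changes, it’s worth confirming exactly which files belong to those tablespaces. A quick check against the Oracle dictionary views might look like this (a sketch; the LIKE filter on the undo tablespace name is an assumption):

-- Temp files and undo datafiles, with their current locations and sizes.
SELECT file_name, tablespace_name, bytes / 1024 / 1024 AS size_mb
FROM dba_temp_files
UNION ALL
SELECT file_name, tablespace_name, bytes / 1024 / 1024 AS size_mb
FROM dba_data_files
WHERE tablespace_name LIKE 'UNDO%';
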
The method for doing this is relatively simple: create a replacement tablespace for each on the other drive, point the database at the new tablespace (the default temporary tablespace for TEMP, the active undo tablespace for UNDO), and then drop the original tablespace.
CREATE TEMPORARY TABLESPACE temp2 TEMPFILE 'D:\APP\ORACLE\ORADATA\LEGENDOPRD\TEMP03.DBF' SIZE 30G AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
DROP TABLESPACE TEMP INCLUDING CONTENTS AND DATAFILES;

CREATE UNDO TABLESPACE undotbs2 DATAFILE 'D:\APP\ORACLE\ORADATA\LEGENDOPRD\UNDO02.DBF' SIZE 20G;
ALTER SYSTEM SET undo_tablespace = undotbs2;
DROP TABLESPACE UNDOTBS1 INCLUDING CONTENTS;

I found that after doing this I still needed to delete the old files from the OS folder, and I couldn’t do that until the Oracle database service had been recycled. After that, though, my drive space was much healthier.