Removing a Schema from Oracle

I took my kids to the park on the local lake here this weekend.  While we went mainly to play on the playground, we also brought along a fishing pole and some worms.  I didn’t think the fish had moved up in the shallows yet, but we brought the pole just in case.  Unfortunately, I was right.  The weeds were still brown and low, and there was no sign of any fish.  They should be coming up pretty soon though.  We have a little over two weeks until the fishing opener!

We recently stopped using a third party software that had a back-end in our Oracle database.  After waiting a sufficient amount of time, I was finally able to remove the user and data from the database.  I’d never done a large data removal like this from Oracle before.  My first step was to export a copy of the data using data pump.  Although the data should never be needed again, it doesn’t hurt to grab one last copy before deleting it into oblivion.

(Screenshot: the expdp export of the schema.)
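For reference, a schema-level Data Pump export looks roughly like this; the connection string, directory, and file names below are placeholders rather than the exact command I ran:

expdp system/password@ORCL schemas=USERTODROP directory=DATA_PUMP_DIR dumpfile=usertodrop.dmp logfile=usertodrop_exp.log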

I zipped that up and placed it into archive storage, probably never to be touched again.

Next I dropped the user.  In order to remove a user that still owns objects, you must include the CASCADE option.

DROP USER usertodrop CASCADE;

Lastly, I noticed that the tablespaces created to house that schema's objects were still there, so I needed to drop them as well. I made a mistake the first time around by omitting the INCLUDING CONTENTS AND DATAFILES clause. Without it, the datafiles stay on disk and you have to delete them manually through the OS; in my case, I had to wait several hours for the Oracle service to relinquish its lock on the files. The second time around I used the proper command, and the disk space was returned to the server immediately:

DROP TABLESPACE ts_usertodrop_pd INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE ts_usertodrop_td INCLUDING CONTENTS AND DATAFILES;
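
If you do end up cleaning up datafiles by hand, the data dictionary will tell you which files belong to a tablespace before you drop it. Something along these lines (using one of the tablespace names from above):

SELECT file_name FROM dba_data_files WHERE tablespace_name = 'TS_USERTODROP_PD';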

After this, the user was gone!


Data Warehouse Fact Tables

Being an avid fisherman, I love good fishing art.  I found the picture above using a Google search.  The pictures online are good, but aren’t comparable to seeing them in person.  I recently saw a picture of a muskie called Windfall Muskie, painted by Terry Doughty.  This is the best painting of a muskie I’ve seen so far, a beautiful piece of art.  I love seeing how an artist can capture the thrill of fishing on a canvas.

Previously, I described how to begin setting up a data warehouse by choosing a business process, selecting the grain, and building the dimension tables.  This post will focus on the last step, building the fact tables.  The fact tables contain a foreign key link to each associated dimension table as well as the measures that we will be using in the reports.  The measures are usually numeric values that can be aggregated in the report.  For example, for retail orders I may want to measure product price, discount, tax paid, and other metrics related to price and cost paid in an order.

Fact tables are usually very long and narrow, as opposed to dimension tables, which are generally short and wide.  A fact table holds one row for each transaction in your business process, based on your chosen grain.  For retail orders, each row in the fact table might be an order line.  The table would have an Order ID, Order Line Number, Line Price, Quantity Purchased, and then foreign keys to all the descriptive dimension tables, such as Customer, Product, Purchase Date, Purchase Location, etc.
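
As a rough sketch (not the exact DDL from my environment, and the column types are just reasonable guesses), a fact table matching that description might look like the following, with surrogate keys pointing back at the dimension tables from my previous post:

CREATE TABLE [dbo].[factOrderPlaced](
   [OrderID] [INT] NOT NULL,
   [OrderLineNumber] [INT] NOT NULL,
   [CustomerKey] [INT] NOT NULL,
   [ProductKey] [INT] NOT NULL,
   [OrderDateKey] [INT] NOT NULL,
   [OrdChanKey] [INT] NOT NULL,
   [OrderQuantity] [INT] NULL,
   [SalePrice] [DECIMAL](18, 2) NULL,
 CONSTRAINT [PK_factOrderPlaced] PRIMARY KEY CLUSTERED 
(
   [OrderID] ASC,
   [OrderLineNumber] ASC
)
) ON [PRIMARY]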

Once the fact table is built and loaded, you can easily find totals at a level aggregated by any of the descriptors.  For example, you can get the quantity purchased by store, the total order value by customer, or the most popular product by date.  Here is an example of how you can join the fact to dimension tables:

SELECT  dc.CustomerState
      , dp.LongSkuDivision
      , dd.[Month]
      , oc.OrderChannel
      , SUM(fop.OrderQuantity) AS OrderQuantity
      , SUM(fop.SalePrice) AS SalePrice
FROM    dbo.factOrderPlaced fop
        INNER JOIN dbo.dimCustomer dc ON dc.CustomerKey = fop.CustomerKey
        INNER JOIN dbo.dimProduct dp ON dp.ProductKey = fop.ProductKey
        INNER JOIN dbo.dimDate dd ON dd.DateKey = fop.OrderDateKey
        INNER JOIN dbo.dimOrderChannel oc ON oc.OrdChanKey = fop.OrdChanKey
GROUP BY dc.CustomerState
      , dp.LongSkuDivision
      , dd.[Month]
      , oc.OrderChannel;

I hope this three part series has given you a basic idea of how to go about starting a data warehouse.

Data Warehouse Dimension Tables

There is a joke among fisherman about what boat really means. The common acronyms are Break Out Another Thousand or Bankruptcy On A Trailer. These jokes are funny, but also true – owning a boat is expensive. This winter I decided to take the boat in to the marina to have it tuned up. It was having trouble starting consistently, but aside from that it ran pretty well. Now that I’ve gotten it back, I’m anxious to take it out and see how much better it performs. It feels good to know that it should be running better, but it did set me back a pretty penny.
This is part 2 in my 3 part series on the high level overview of setting up a data warehouse. My previous post dealt with the first two decisions, selecting the business process and choosing the level of granularity. This part will focus on identifying and building the dimension tables. Dimensions provide the who, what, when, where, why, and how context surrounding the business process. For my example business process, retail orders, these will include:

  • Who is making each purchase?
  • What are they buying?
  • Which purchase channel or store are they purchasing it from?
  • When are they making the purchase?

To answer these questions, I’ve identified four distinct dimension tables that will be needed: Customer, Product, Order Channel, and Date.
I’ll go over a few best practices for creating dimension tables, provide examples of the first three, and then describe the Date table in a bit more detail.
Each table needs a primary key that will be linked to the fact table through the use of a foreign key. The best practice is to use a surrogate key. Although using the system of record’s natural key may seem like a good idea, it can cause major complications down the line should anything in the source system change. A surrogate key also allows for easy integration of multiple source systems, improves performance, and supports attribute change tracking (I may get into slowly changing dimensions in a later post). That said, I would still include the natural key as an attribute in the dimension table.
Each table should be as denormalized as possible. Using a star schema, with all or most dimension tables joining only to the fact table, will get much better performance than the snowflake schema, where a dimension may have to go through one or more other dimension tables to eventually be related to the fact table. As an example, if you have a product table where each product has a department, category, and color, you would include the descriptions for these right in the dimension table rather than having a separate product department, product category, and product color tables that link to the product dimension table. Although this uses more space in the database, the space used by dimension tables pales in comparison to the fact tables. Additionally, the performance is much better without having to do all the additional joins to get the data that is needed.
Each table needs to have a row for the unknown value. This will allow us to report when we don’t have all the information. For example, let’s say someone returns a product that we have never sold. The employee working in the returns area accepts it back by mistake. Now to get rid of it, we list it as a non-SKU’d product for a low price. We still want to report on the sale in our data warehouse, but we don’t have any real product to tie it to. In that case, we’d tie it to a product called “Unknown”. By doing this we can keep NULLs out of our fact table foreign keys.
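
As an illustration, here is how the unknown member might be seeded for the product dimension defined below, using -1 as the reserved key (the same convention the date dimension load uses later in this post); the attribute values are just placeholders:

--Seed the unknown member so fact rows never need NULL foreign keys.
SET IDENTITY_INSERT dbo.dimProduct ON;

INSERT INTO dbo.dimProduct
        ( ProductKey , Style , StyleDescription , SkuCode , SkuDescription ,
          ColorCode , ColorName , LongSkuDivision , LongSkuDepartment , CountryOfOrigin )
VALUES  ( -1 , 'Unknown' , 'Unknown' , 'Unknown' , 'Unknown' ,
          'Unk' , 'Unknown' , 'Unknown' , 'Unknown' , 'Unknown' );

SET IDENTITY_INSERT dbo.dimProduct OFF;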
The last thing I’ll note on dimension tables is the load process. Because of the relative simplicity of our system, I do the ETL entirely within stored procedures in SQL Server. Many data warehouses will require the use of Integration Services (SSIS), and there are scenarios where it is unavoidable, but I would recommend sticking with stored procedures when possible, for simplicity’s sake. I have fact tables in my environment that are truncated and reloaded each night, as well as fact tables that are incrementally loaded (only new and changed rows are loaded). Because of this, I am required to preserve the surrogate keys in my dimension tables. With that in mind, my load process is as follows (a simplified sketch follows the list):

  1. Load any newly created attributes (new products, customers, etc.)
  2. Update any aggregate info (customer lifetime order value, product last purchase date, etc.).
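
Here’s a simplified sketch of what those two steps might look like for dimCustomer. The staging table stg.Customer is hypothetical, and the aggregate update assumes the factOrderPlaced table I’ll cover in the fact table post:

--1. Load any newly created customers from staging.
INSERT INTO dbo.dimCustomer
        ( CustomerID , CustomerName , CustomerAddress , CustomerCity , CustomerStateAbbrv ,
          CustomerCountry , CustomerZIP , CustomerEmail )
SELECT  s.CustomerID , s.CustomerName , s.CustomerAddress , s.CustomerCity , s.CustomerStateAbbrv ,
        s.CustomerCountry , s.CustomerZIP , s.CustomerEmail
FROM    stg.Customer s
WHERE   NOT EXISTS ( SELECT 1 FROM dbo.dimCustomer dc WHERE dc.CustomerID = s.CustomerID );

--2. Update aggregate info, such as lifetime order value.
UPDATE  dc
SET     dc.LifetimeOrderValue = agg.LifetimeOrderValue
FROM    dbo.dimCustomer dc
        INNER JOIN ( SELECT CustomerKey , SUM(SalePrice) AS LifetimeOrderValue
                     FROM   dbo.factOrderPlaced
                     GROUP BY CustomerKey ) agg ON agg.CustomerKey = dc.CustomerKey;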

I include logging in the load process, but I’ll show that in greater detail in the next post, when I discuss fact tables.
So to show examples of tables for our scenario, I might have the following tables:

CREATE TABLE [dbo].[dimCustomer](
   [CustomerKey] [INT] IDENTITY(1,1) NOT NULL,
   [CustomerID] [DECIMAL](9, 0) NOT NULL,
   [CustomerName] [VARCHAR](100) NULL,
   [CustomerAddress] [VARCHAR](100) NULL,
   [CustomerCity] [VARCHAR](75) NULL,
   [CustomerStateAbbrv] [VARCHAR](3) NULL,
   [CustomerCountry] [VARCHAR](30) NULL,
   [CustomerZIP] [VARCHAR](20) NULL,
   [CustomerEmail] [VARCHAR](255) NULL,
   [LifetimeOrderValue] [DECIMAL](18, 2) NULL,
   [FirstOrderChannel] [VARCHAR](25) NULL,
   [FirstOrderHadFreeShipping] [BIT] NULL,
 CONSTRAINT [PK_dimCustomer] PRIMARY KEY CLUSTERED 
(
   [CustomerKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]


CREATE TABLE [dbo].[dimProduct](
   [ProductKey] [int] IDENTITY(1,1) NOT NULL,
   [Style] [varchar](20) NULL,
   [StyleDescription] [varchar](40) NULL,
   [SkuCode] [varchar](14) NULL,
   [SkuDescription] [varchar](40) NULL,
   [ColorCode] [varchar](4) NULL,
   [ColorName] [varchar](40) NULL,
   [LongSkuDivision] [varchar](30) NULL,
   [LongSkuDepartment] [varchar](30) NULL,
   [CountryOfOrigin] [varchar](30) NULL,
 CONSTRAINT [PK_dimProduct] PRIMARY KEY CLUSTERED 
(
   [ProductKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

CREATE TABLE [dbo].[dimOrderChannel](
   [OrdChanKey] [INT] IDENTITY(1,1) NOT NULL,
   [OrderChannel] [VARCHAR](50) NULL,
 CONSTRAINT [PK_dimOrderChannel] PRIMARY KEY CLUSTERED 
(
   [OrdChanKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

The date table requires a bit more explanation. This is the only dimension table where using a logical primary key, rather than a simple identity integer, makes sense. I prefer creating the primary key as an integer in yyyymmdd format.
Virtually every model will need a date table, because almost every process being reported on occurred at or during a specific time. The date table can be built in advance, can be used across multiple models, and can even be used multiple times within a single model. For example, Amazon may want to report on orders by both the purchase date/time and the ship date/time; both roles can use a single date dimension table. It is helpful to put tons of descriptors into your date table. It is not uncommon to see an array of info including day of week, day of year, fiscal day of year, holiday indicator, etc. Below is the code I used to create my date dimension table.

CREATE TABLE [dbo].[dimDate](
   [DateKey] [INT] NOT NULL,
   [Date] [DATETIME2](0) NULL,
   [Year] [SMALLINT] NULL,
   [Quarter] [TINYINT] NULL,
   [Month] [TINYINT] NULL,
   [MonthName] [VARCHAR](20) NULL,
   [WeekOfMonth] [TINYINT] NULL,
   [WeekOfYear] [TINYINT] NULL,
   [DayOfWeek] [TINYINT] NULL,
   [DayOfWeekName] [VARCHAR](14) NULL,
   [DayOfMonth] [TINYINT] NULL,
   [DayOfYear] [SMALLINT] NULL,
 CONSTRAINT [PK_dimDate] PRIMARY KEY CLUSTERED 
(
   [DateKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

SELECT TOP 10000
        IDENTITY( INT,1,1 ) AS N
INTO    #tally
FROM    MASTER.dbo.syscolumns sc1
        CROSS JOIN MASTER.dbo.syscolumns sc2;

WITH    cte
          AS ( SELECT   DATEADD(DAY, N - 1, '2000-01-01') AS Date
               FROM     #tally
             )
    INSERT  INTO dbo.dimDate
    SELECT  YEAR(cte.Date) * 10000 + MONTH(cte.Date) * 100 + DAY(cte.Date) AS DateKey ,
            cte.Date ,
            YEAR(cte.Date) AS [Year] ,
            DATEPART(QUARTER, cte.Date) AS [Quarter] ,
            MONTH(cte.Date) AS [Month] ,
            DATENAME(MONTH, cte.Date) AS [MonthName] ,
            DATEPART(ww, cte.Date) + 1 - DATEPART(ww, CAST(DATEPART(mm, cte.Date) AS VARCHAR) + '/1/' + CAST(DATEPART(yy, cte.Date) AS VARCHAR)) AS WeekOfMonth ,
            DATEPART(wk, cte.Date) AS WeekOfYear ,
            DATEPART(dw, cte.Date) AS [DayOfWeek] ,
            DATENAME(dw, cte.Date) AS DayOfWeekName ,
            DAY(cte.Date) AS DayOfMonth ,
            DATEPART(DAYOFYEAR, cte.Date) AS DayOfYear
    FROM    cte
    UNION
    SELECT  -1 ,
            NULL ,
            NULL ,
            NULL ,
            NULL ,
            'Unknown' ,
            NULL ,
            NULL ,
            NULL ,
            'Unknown' ,
            NULL ,
            NULL;

DROP TABLE #tally;

That's a high-level overview of creating the dimension tables in a data warehouse. Stay tuned for part three, where I'll look at the fact tables.

Data Warehouse Creation

Spring fishing for walleye on the Wolf River can be really hot.  When the walleye are running up-river to spawn in the marshes, they can be extremely thick.  Catching them can be fairly easy.  The one bad thing about this is that almost every angler knows it.  As you can see in the picture above, boats can get stacked right on top of each other.  I was hoping to head up one day to try to get a limit of eaters, but I haven’t been in the mood to fight the crowds lately.

I’ve recently implemented a data warehouse at work.  A data warehouse is a storehouse for information collected from a wide range of sources across an organization.  This storehouse must make it easy for users to access the information they need in a timely manner, it must be consistent, it must be adaptable to change, and most importantly it must be trustworthy.  This is the first time I’ve ever set up a data warehouse.  I’m going to spend the next couple posts explaining the steps I followed in setting it up.

I started by studying the Ralph Kimball method for dimensional modeling.  I used The Data Warehouse Toolkit, 3rd Edition.  I feel it’s very important to spend time researching and planning in advance, because poor design can be very difficult and onerous to fix.

The Kimball method proposes a four step dimensional design process:

  1. Select the business process
  2. Declare the grain
  3. Identify the dimensions
  4. Identify the facts

We chose retail orders as the business process we wanted to report on.  It’s a good idea to choose a fairly simple process to start with.

I’m going to save the dimension and fact creation for later blog posts, but I will discuss the grain here.  The grain is basically the detail level of the fact table.  The Kimball method suggests starting at the atomic grain, or the lowest level at which data is captured by a given business process.  For my purposes, since I began with retail orders, the lowest level is the order line.  Other grains that I could have considered would have been at the order level or even the daily, weekly, or yearly order level.  Every time you go up a level you lose details about the order.  For example, at the order line level I can see a separate product for each line.  But if I look at the order level, I can no longer see the individual products within the order.  If I go up another level and look at all orders taken on a day, I lose the different customers that placed orders.

The only advantage of using a higher level is that you will be dealing with less data since it has already been aggregated, which makes processing faster.  To offset the performance disadvantage of the lower grains, analysis cubes can be used.  These cubes pre-aggregate various cross sections of the data so analysis can be performed quickly at the aggregate level while the underlying detail is still preserved.

Stay tuned for my next post where I will define and describe dimension table creation.

Credentials in Powershell

I had the opportunity to attend the Madison Fishing Expo a few weekends ago.  It was a great way to stay excited for the upcoming fishing year during these cold winter months.  I didn’t get any new information, but I did let my son pick out a few cheap lures to add to his tackle box.

Choosing fishing lures

The warm weather has completely melted the ice off our area lakes (nice and early!), but we, along with almost the entire rest of the country, got a round of winter weather this week, so we’re back to almost a foot of snow on the ground.  It’ll be at least a few more weeks before I launch the boat for the first time this year.

The company I work for has been in the process of strengthening its security posture for the last few years.  Recently, they took the step of creating separate administrator accounts to use when we are doing things that require administrative permissions.  Up until now, I only had one account – an administrator-level account.  I expected at least a few things to break once they turned off my higher privileges, and those expectations were met.  The thing I’m going to touch on today is Powershell privileges.

I use a Powershell script that runs daily to collect various health statistics about my SQL databases and servers.  The script is run from Windows Task Scheduler on my laptop under my Windows AD account.  Once that account lost its admin privileges, a few of the collection methods failed.  In order to get them working again, I needed to plug in my admin account for those specific calls.  I found a neat way to do that using Powershell’s Credential object.

First, I stored the account password in a text file.  The password is encrypted and placed as a file on the disk by using the following Powershell command:

(Screenshot: the Powershell command that creates the encrypted password file.)
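
In rough form, the command looks like this (the prompt text is arbitrary; the file path matches the one referenced below):

Read-Host -Prompt "Enter the admin account password" -AsSecureString | ConvertFrom-SecureString | Out-File "C:\Temp\Password.txt"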

Opening the new file shows the following:

(Screenshot: the contents of Password.txt, a single long encrypted string.)

So you can see that the password is definitely encrypted.

Now I can reference that file whenever I need to enter credentials.

#Create a credential for connecting to the server
$user = "Domain\adminuser"
$pw = cat "C:\Temp\Password.txt" | convertto-securestring
$cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $user, $pw

#Access the Disk Info using my new credential
$disks = Get-WmiObject -ComputerName $instance -Class Win32_LogicalDisk -Filter "DriveType = 3" -Credential $cred;

Using this method, you can pass credentials to your Powershell script without having to store them in plain text on your computer. The only downside in my case is that I will have to update my encrypted password file whenever my admin account's password changes.

Cross Database Certificates – Trouble with Triggers

The weather has been awesome here for the last few days.  Sixty-plus-degree temperatures have made it feel more like May than February.  It isn’t supposed to last much longer, but I have enjoyed it.  I took the boat in for an engine tune-up this weekend, which means I should get it back just in time for most of the ice to be coming off the lakes.  I’m hoping to take a couple more shots at the Wolf River walleye run this spring.  Last year didn’t provide good results.

I took my sons to a park on the edge of a lake this past weekend and happened to be watching while an unfortunate ice fisherman’s ATV fell through the ice.  I’m not sure how these ice fishermen know what ice is good versus what ice is bad, but you can see from the main picture above that not all of them know either.  Fortunately, only the front tires went through and another ATV came over and pulled him out.

I ran into an issue with cross database certificates recently.  I have blogged about how to set these certificates up here – they are a handy way to enable permissions across databases.  However, I ran into a problem where the permission chain failed due to a trigger on the original table that updated a separate table.  Here is the SQL  to replicate the issue:

CREATE LOGIN [GuggTest] WITH PASSWORD=N'abcd', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF

CREATE DATABASE A;
CREATE DATABASE B;

USE A;

CREATE TABLE dbo.SPtoUpdate
    (
      ID INT
    , ILoveFishing VARCHAR(255)
    );
INSERT INTO dbo.SPtoUpdate
        ( ID , ILoveFishing )
VALUES  ( 1,'Musky'),( 2,'Pike'),( 3,'Yellow Perch');
CREATE TABLE dbo.TriggerToInsert
    (
      ID INT
    , ILoveFishing VARCHAR(255)
    , ChangeDate DATETIME2
    );
GO

CREATE TRIGGER dbo.SPtoUpdateTrigger ON dbo.SPtoUpdate
    FOR UPDATE
AS
    DECLARE @datetime DATETIME2;
    SELECT  @datetime = GETDATE()

    INSERT  INTO dbo.TriggerToInsert
            ( ID , ILoveFishing , ChangeDate )
    VALUES  ( 1 , 'Yes' , @datetime );
GO

CREATE CERTIFICATE BExecutor
   ENCRYPTION BY PASSWORD = 'Obfuscated'
   WITH SUBJECT = 'Execute sp from B to A',
   START_DATE = '20140101', EXPIRY_DATE = '20300101'
GO

BACKUP CERTIFICATE BExecutor TO FILE = 'C:\temp\crossdbcert.cer'
WITH PRIVATE KEY (FILE = 'C:\temp\crossdbcert.pvk' ,
                  ENCRYPTION BY PASSWORD = 'Obfuscated',
                  DECRYPTION BY PASSWORD = 'Obfuscated')
GO

CREATE USER BExecutor FROM CERTIFICATE BExecutor

GRANT UPDATE ON dbo.SPtoUpdate TO BExecutor
GRANT SELECT ON dbo.SPtoUpdate TO BExecutor
--Also give insert on dbo.TriggerToInsert
GRANT INSERT ON dbo.TriggerToInsert TO BExecutor

USE B
GO

CREATE USER [GuggTest] FOR LOGIN [GuggTest];
EXEC sp_addrolemember N'db_owner', N'GuggTest'
GO

CREATE PROCEDURE dbo.UpdateTableInA
AS
    BEGIN
        UPDATE  A.dbo.SPtoUpdate
        SET     ILoveFishing = 'Walleye'
        WHERE   ID = 2;
    END

GO


CREATE CERTIFICATE BExecutor FROM FILE = 'C:\temp\crossdbcert.cer'
WITH PRIVATE KEY (FILE = 'C:\temp\crossdbcert.pvk' ,
                  ENCRYPTION BY PASSWORD = 'Obfuscated',
                  DECRYPTION BY PASSWORD = 'Obfuscated')
GO

EXEC MASTER..xp_cmdshell 'DEL C:\temp\crossdbcert.*', 'no_output'
GO

ADD SIGNATURE TO dbo.UpdateTableInA BY CERTIFICATE BExecutor
    WITH PASSWORD = 'Obfuscated'
GO

--Log In or Change execution context to GuggTest, then EXEC dbo.UpdateTableInA

It turns out you can counter sign a trigger with the certificate, and this will allow the permission chain to succeed. By doing this, you don’t even need to grant the certificate user permission to the second table. Here is the syntax to do that:

ADD COUNTER SIGNATURE TO dbo.SPtoUpdateTrigger
BY CERTIFICATE BExecutor
WITH PASSWORD = 'Obfuscated';

Use this technique to work with cross database permissions that have to access tables with triggers.

Recursive Common Table Expressions

Wind can be an ally or an enemy of the fisherman.  Both in terms of comfort and in changing the mood and location of the fish, wind is something that can’t be ignored.  As it relates to the fish, wind can often turn fish on.  The term “muskie chop” refers to medium-sized waves that can help create good conditions for fishing.  The wind does a couple things: it restricts the light by creating waves that break up the sun, and it also creates a current that can move fish to specific locations that can be targeted.  The other factor to consider related to wind is fisherman comfort.  I love fishing the colder months, but you’d better make sure you’re dressed for the weather.  There is no indoors in a fishing boat, so if it’s going to be windy and cold, bundle up.  At the same time, on those hot, sunny, humid July days, you may not want to even be out unless there is some wind to cool you down.  Keeping all these factors in mind, it’s important to remember that wind is strongest when it has a large open space to build up its force.  If you want to avoid the wind, head to the upwind side of the lake.  If you want to embrace the wind, head to the downwind side.

In SQL Server, a recursive common table expression (CTE) could be compared to wind building up power as it moves over the lake.  A recursive CTE calls itself, and in doing so uses the previous results to build up a final result set.
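
As a generic illustration of the shape (not tied to any particular table), a recursive CTE pairs an anchor query with a recursive member that references the CTE itself:

WITH Numbers
          AS ( SELECT   1 AS n            --anchor member
               UNION ALL
               SELECT   n + 1             --recursive member references the CTE
               FROM     Numbers
               WHERE    n < 10            --termination condition prevents runaway recursion
             )
    SELECT  n
    FROM    Numbers;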

I recently had a perfect use case for this concept.  I had to take sales dollars given to me at a monthly level and distribute them to each day within the month.  Using a recursive CTE, I told SQL Server to give me the monthly total divided by the number of days in the month, once for each day in the month.  Below is an example of how I set it up:

CREATE TABLE #SalesTotalsByMonth
    (
      FirstOfMonth DATE
    , Channel VARCHAR(10)
    , SalesTotal DECIMAL(10 , 2)
    );
INSERT  INTO #SalesTotalsByMonth
        ( FirstOfMonth , Channel , SalesTotal )
VALUES  ( '2016-01-01' , 'Web' , 165473.99 ),
        ( '2016-01-01' , 'In-store' , 56998.45 ),
        ( '2016-01-01' , 'Mail' , 4645.85 )
,       ( '2016-02-01' , 'Web' , 27463.56 ),
        ( '2016-02-01' , 'In-store' , 61423.78 ),
        ( '2016-02-01' , 'Mail' , 5341.56 )
,       ( '2016-03-01' , 'Web' , 487356.67 ),
        ( '2016-03-01' , 'In-store' , 15734.56 ),
        ( '2016-03-01' , 'Mail' , 3104.85 )
,       ( '2016-04-01' , 'Web' , 478236.78 ),
        ( '2016-04-01' , 'In-store' , 24675.67 ),
        ( '2016-04-01' , 'Mail' , 1024.56 )
,       ( '2016-05-01' , 'Web' , 167524.89 ),
        ( '2016-05-01' , 'In-store' , 31672.78 ),
        ( '2016-05-01' , 'Mail' , 1798.67 )
,       ( '2016-06-01' , 'Web' , 347652.19 ),
        ( '2016-06-01' , 'In-store' , 41675.19 ),
        ( '2016-06-01' , 'Mail' , 801.78 )
,       ( '2016-07-01' , 'Web' , 247653.02 ),
        ( '2016-07-01' , 'In-store' , 59713.02 ),
        ( '2016-07-01' , 'Mail' , 2097.19 )
,       ( '2016-08-01' , 'Web' , 891642.23 ),
        ( '2016-08-01' , 'In-store' , 67134.23 ),
        ( '2016-08-01' , 'Mail' , 3752.02 )
,       ( '2016-09-01' , 'Web' , 342591.24 ),
        ( '2016-09-01' , 'In-store' , 77123.24 ),
        ( '2016-09-01' , 'Mail' , 2406.23 )
,       ( '2016-10-01' , 'Web' , 246758.25 ),
        ( '2016-10-01' , 'In-store' , 81214.24 ),
        ( '2016-10-01' , 'Mail' , 3012.24 )
,       ( '2016-11-01' , 'Web' , 267423.26 ),
        ( '2016-11-01' , 'In-store' , 91023.26 ),
        ( '2016-11-01' , 'Mail' , 2034.24 )
,       ( '2016-12-01' , 'Web' , 265219.56 ),
        ( '2016-12-01' , 'In-store' , 34167.02 ),
        ( '2016-12-01' , 'Mail' , 1010.26 );

WITH    recurse
          AS ( SELECT   stbm.Channel
                      , stbm.SalesTotal / DATEDIFF(DAY , stbm.FirstOfMonth , DATEADD(MONTH , 1 , stbm.FirstOfMonth)) AS Revenue
                      , DATEDIFF(DAY , stbm.FirstOfMonth , DATEADD(MONTH , 1 , stbm.FirstOfMonth)) AS daysleft
                      , stbm.FirstOfMonth AS [Sales Day]
               FROM     #SalesTotalsByMonth stbm
               UNION ALL
               SELECT   recurse.Channel
                      , recurse.Revenue
                      , recurse.daysleft - 1
                      , DATEADD(DAY , 1 , recurse.[Sales Day])
               FROM     recurse
               WHERE    recurse.daysleft > 1
             )
    SELECT  recurse.[Sales Day]
          , recurse.Channel
          , SUM(recurse.Revenue) AS Revenue
    FROM    recurse
    GROUP BY recurse.Channel
          , recurse.[Sales Day];

DROP TABLE #SalesTotalsByMonth;

The important thing to note here is the general pattern for a recursive CTE: an anchor query, followed by a UNION ALL and a recursive member that references the CTE itself.  Be sure to put a termination condition in the WHERE clause of the recursive member to avoid infinite recursion.

My final results gave me the total per day.