Moving data between Oracle Instances with a Database Link

We are officially one week away from the open of the general fishing season here in southeast Wisconsin. I am incredibly excited to get back on the water and try to catch some big fish. I haven’t yet decided where I’m headed for the opener, but I had great luck on the east side of Pewaukee Lake last year, so I’m thinking about heading back there.
In the meantime, there are a few opportunities to get fishing before the general opener. Last weekend I went up to the Wolf River near Fremont to try to catch some walleyes. The entire Winnebago system, which includes the Wolf and Fox rivers and several lakes including Winnebago, Butte des Morts, Partridge, and Poygan, is open all year round. Additionally, the walleye and white bass from the lake system head up and down the rivers to spawn around this time of year, and the fishing can be nothing short of amazing. Unfortunately, we didn’t happen to be there at the right time and ended up catching no fish. It was still cool to try it once, though since the fishing is legendary, it gets rather crowded. We fished about a half-mile stretch of the Wolf River just north of Partridge Lake, and we must have been among at least 40 boats. I’m personally not a big fan of crowded fishing, but it wasn’t as bad as I thought it would be.

There are a few ways to move data between different instances of Oracle database. In this post I will demonstrate how to set up a database link and use that to transport data from one environment to the other. The reason I did this recently was to update a small subset of our Test database with data from Prod. We didn’t want to update an entire database, tablespace, or even table. We simply wanted to update a single column on one table in the Test environment with data from Prod. The first thing I did was to create a database link from the Test environment to Production.

    -- myProdHost, myUser, and myPassword are placeholders; substitute your own values
    CREATE DATABASE LINK myProdDB
      CONNECT TO myUser IDENTIFIED BY myPassword
      USING '(DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = myProdHost)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = myProdDB)
          (UR = A)
        )
      )';

A list of database links can be found by querying the DBA_DB_LINKS view:
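For example, DBA_DB_LINKS exposes the owner, link name, connected user, and target host:

SELECT owner, db_link, username, host
FROM   dba_db_links;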


Next, to physically move the data from Prod to the Test instance, I create a new table using the data from Prod:

CREATE TABLE prod_Copied_Table AS
SELECT  *
FROM    mySchema.CopiedTable@myProdDB;

Immediately I drop my database link – I don’t need it anymore and don’t want anyone else to use it.
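The drop statement is simple – the link name is the one created above:

DROP DATABASE LINK myProdDB;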


Now I have all the data under my user’s schema in Test. The last step is simply to update the data I want changed:

UPDATE SCHEMA.TestTable test
SET test.myDate = (SELECT prod.myDate
                     FROM prod_Copied_Table prod
                    WHERE test.myCOMPANY_ID = prod.myCOMPANY_ID
                      AND test.myVENDOR_ID = prod.myVENDOR_ID
                      AND test.mySKU = prod.mySKU
                      AND test.myVENDOR_PART_ID = prod.myVENDOR_PART_ID)
WHERE test.myDate = TO_DATE('2016-04-29','yyyy-mm-dd');

One thing to watch out for: any row that matches the outer WHERE clause but has no match in prod_Copied_Table will have its date set to NULL by the subquery, so make sure the copied table covers every row you are updating.

Now my data migration is complete. I drop the unneeded table from my user and I’m finished.

DROP TABLE prod_Copied_Table;

I hope you have found this example of moving data from one instance to another helpful.


Use Oracle Flashback Query to restore data fast

Winter has dragged into Spring here in Southern Wisconsin.  We were teased with some 70 degree weather in early March, but since then we’ve been hovering around 40 degrees.  I was expecting the fish to get active a little earlier this year, but now I’m not so sure.

One activity that I enjoy doing with my kids while waiting for fishing to start up is making a trip to Cabela’s.  We are blessed to have a Cabela’s about 10 minutes away from our house, so making a trip down is easy and a good way to enjoy the feeling of the outdoors without actually venturing out into the chilly weather.  For those who don’t know, Cabela’s is a large retailer that specializes in outdoor activities.  They have huge stores stocked with fishing and hunting gear, camping equipment, boats, guns, and outdoor apparel.  They also have some very impressive displays that include mounts of just about every kind of game animal, and three large aquariums that hold all kinds of Wisconsin fish: trout, sturgeon, walleye, bass, catfish, gar, and several kinds of panfish.  A few years ago they had an old, fairly large, sickly-looking muskie, but the last several times we went we couldn’t find it.  I assume it had died.  During our recent trip, we saw two new muskies – small ones, probably about two years old.  The kids love seeing the fish swimming around, and I get to browse through the great fishing gear that I can’t afford.  A trip to Cabela’s is a great way to spend a few hours on a cold, blustery spring day.

Spending a few hours on a data recovery can have big implications for a DBA.  Let’s face it – the expression “time is money” may have been invented with our profession in mind.  Taking a database offline so we can recover consistent data can cost our companies enormous sums.  Imagine Amazon going offline during Black Friday or Cyber Monday: the money lost could be measured in the millions.  With these kinds of pressures confronting us on a daily basis, it is important to use every tool in our arsenal to do our job effectively.

I recently discovered that Oracle has a very clever way to recover data quickly without using RMAN.  This trick has been available since version 9i and uses the Undo tablespace to view old versions of data.  As a reminder, the Undo tablespace stores prior versions of data while transactions are in progress.  So if a transaction is performed and then rolled back, the previous version of the data is grabbed from Undo and put back into the active data in the table.  Once the transaction has been either committed or rolled back, the information is no longer needed and will eventually be overwritten.  However, using the UNDO_RETENTION initialization parameter, we can ask Oracle to keep that old information as long as possible so it can be used for other purposes, such as the flashback query I am going to demonstrate.  As a further clarification, the flashback query can be used in any edition of Oracle database, while the similar but different flashback table is only available in Enterprise Edition.
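To see how long your instance will try to keep old undo data around, check the UNDO_RETENTION parameter (the value is in seconds):

SELECT value
FROM   v$parameter
WHERE  name = 'undo_retention';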

The flashback query is a perfect way to recover accidentally deleted data.  Let’s look at an example.  First, I’ll show that the Enterprise-only Flashback Database feature is off, since this technique doesn’t rely on it:

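The check is a single query against V$DATABASE:

SELECT flashback_on
FROM   v$database;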

Next I’ll create a table with about 100 rows of dummy data.

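A setup along these lines produces a similar table – fbq_test is the name used later in this example, while sourcing the rows from the ALL_OBJECTS dictionary view is my own stand-in:

CREATE TABLE fbq_test AS
SELECT object_id AS id, object_name AS name
FROM   all_objects
WHERE  ROWNUM <= 100;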

This created a table with 98 rows:

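A quick count confirms it:

SELECT COUNT(*)
FROM   fbq_test;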

Now I’m going to delete half of the rows:

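A delete of roughly half the rows might look like this – the exact predicate isn’t recoverable here, so filtering on even IDs is a stand-in:

DELETE FROM fbq_test
WHERE  MOD(id, 2) = 0;

COMMIT;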

The next step is to grab a copy of the table from before the delete.  In this case I deleted the data after 1:50 PM, so I’m going to go back to that time.

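The flashback query itself is just an ordinary SELECT with an AS OF TIMESTAMP clause, captured into a work table – use the moment just before your delete:

CREATE TABLE fbq_test_recovery AS
SELECT *
FROM   fbq_test
       AS OF TIMESTAMP TO_TIMESTAMP('2016-04-04 13:50:00', 'YYYY-MM-DD HH24:MI:SS');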

I now double-check that I have all the data I need by checking the row count.

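Again, a simple count:

SELECT COUNT(*)
FROM   fbq_test_recovery;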

The last step is to add the missing rows back into the original table.  This can be done with a simple left join.

INSERT INTO fbq_test
SELECT r.*
FROM   fbq_test_recovery r
       LEFT OUTER JOIN fbq_test o
         ON r.id = o.id  -- "id" stands in for the table's key column
WHERE  o.id IS NULL;

The thing to keep in mind when using this method is to capture the “old” data as soon as possible – once the Undo tablespace needs more room, the old versions will be overwritten.  Use this method to recover from accidental deletes and updates without having to bother with RMAN.