I would like to present a solution to a problem that I've been dealing with for a while.
The scenario is five tables that are related to each other (but have no other dependencies) and that I want to copy to other schemas.
These are the following:
TTEI_ENUM_ITEMS (FK to TTEN)
TTEL_ENUM_LABEL (FK to TTEI)
TTEG_ENUM_GROUP (FK to TTEN)
TTEI_ENUM_GROUP_ENUM_ITEM (FK to TTEG and TTEI or TTEN).
We use these tables to create files for the frontend of a Java application (and for some other purposes).
The basic problem is that if I create a single compare script for all five tables, it will always fail the first time: the DELETEs on TTEN fail because of the FK from TTEI, and if TTEI is run first, the INSERTs fail because of the missing records in TTEN.
Even if there are only INSERTs, there is no way to define the order in which the tables are processed if everything is done in a single compare.
I will show the steps to create a workflow that works from the top down to insert the records and then from the bottom up to do the deletes.
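To make the top-down/bottom-up idea concrete, here is a small Python sketch of the ordering (this is only an illustration; I'm assuming the parent table referenced by the FKs above is simply named TTEN):

```python
# Parent-to-child order for the INSERTs (top down).
# TTEN is the parent enum table that the FKs above point to (assumption).
insert_order = [
    "TTEN",                       # parent table
    "TTEI_ENUM_ITEMS",            # FK to TTEN
    "TTEL_ENUM_LABEL",            # FK to TTEI
    "TTEG_ENUM_GROUP",            # FK to TTEN
    "TTEI_ENUM_GROUP_ENUM_ITEM",  # FK to TTEG and TTEI
]

# DELETEs must run child-first, i.e. the same list bottom up.
delete_order = list(reversed(insert_order))
```

The same list drives both passes; only the direction changes.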
If you need this for various scenarios, you may use standard file names such as compare_script_1, compare_script_2, etc.
The first step is to overwrite all the existing SQL scripts, so that if the compare fails to write a script, we won't run any stale inserts, updates, or deletes.
The way to do this is to add a find and replace node that copies a dummy script over all the compare result files.
This dummy script must contain a valid SQL statement, so that Toad doesn't hang; a simple SELECT will do.
I will try to add screenshots later.
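Outside of Toad, the same reset step can be sketched in Python (the file names and the SYSDUMMY1 dummy statement are my assumptions, not part of the original workflow):

```python
from pathlib import Path

# One valid statement so Toad doesn't hang if a compare writes no output.
DUMMY_SQL = "SELECT 1 FROM SYSIBM.SYSDUMMY1;\n"

def reset_compare_scripts(directory: str, count: int) -> None:
    """Overwrite compare_script_1.sql .. compare_script_<count>.sql
    with the dummy script, so no stale DML can ever run."""
    for i in range(1, count + 1):
        Path(directory, f"compare_script_{i}.sql").write_text(DUMMY_SQL)
```

Running this before the compares guarantees that every result file exists and contains nothing dangerous.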
The next step is to create simple compare scripts for the tables and have the results written to compare_script_1, compare_script_2, etc.
The third step is "fixing" the resulting files so that they don't delete anything.
This is done by adding a find and replace node that replaces DELETE with -- DELETE.
I also had to replace the first line of the script, which is SET SQLID.
Now you can run all the inserts in the order necessary.
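What the find and replace node does to each result file can be sketched in Python like this (the exact matching rules are my assumptions; Toad's node works on plain string patterns):

```python
from pathlib import Path

def disable_deletes(path: str) -> None:
    """Comment out every DELETE statement and drop the leading
    SET SQLID line from a compare result file."""
    out = []
    for line in Path(path).read_text().splitlines():
        stripped = line.lstrip().upper()
        if stripped.startswith("SET SQLID"):
            continue  # first line of the compare output; must not run as-is
        if stripped.startswith("DELETE"):
            line = "-- " + line  # keep the statement visible but inert
        out.append(line)
    Path(path).write_text("\n".join(out) + "\n")
```

After this pass the scripts only insert and update, so they can be run top down without FK errors.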
The next step is to run the compares again, this time from bottom to top, and either execute the results directly or save them to other files and run them in a later step.
As I need to replace the SET SQLID string, I chose to have the resulting SQL run in a later step.
P.S. As I had to do this for eight different schemas, I copied all the compare scripts into different directories and then used Notepad++ to replace the schema.
This needs careful planning and a directory structure that reflects the names of SERVER and SCHEMA, so that the replace changes both the content of the DCP files and the paths of the compare files in the TAS file.
Directly editing TAS and DCP files is not recommended by Quest, but it did the job.
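The Notepad++ bulk replace can be sketched in Python as well, with the same caveat: this assumes the TAS and DCP files are plain text (which worked for me, but is not supported by Quest), and the file extensions are taken from my setup:

```python
from pathlib import Path

def replace_schema(root: str, old: str, new: str) -> None:
    """Replace the SERVER/SCHEMA string in every copied DCP file and in
    the compare-file paths stored in the TAS file - a plain-text
    replace, just like doing it by hand in Notepad++."""
    for path in Path(root).rglob("*"):
        if path.suffix.lower() in (".dcp", ".tas"):
            path.write_text(path.read_text().replace(old, new))
```

Because the directory names also contain the schema name, one replace fixes both the DCP content and the paths inside the TAS file.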