Best method to do a schema refresh of size 450GB other than TTS

Hi All,

Does anyone have a better way to do a schema refresh, where the schema is mapped to its own tablespace, other than the Transportable Tablespace method?

Please let me know.

Thanks in advance.

The only other suggestion is Data Pump export/import with compression (if you are licensed to use it, that is). It will not be anywhere near as fast as a simple TTS export, but it would be the next logical choice.

Thank you Bert.

Is there any way to use TSPITR to restore it under a new tablespace name on the target?

That way we could use our backup tapes for the refresh, and the source would also stay in read-write mode.

Let us know your view.

I don’t know the answer to that one - good question - maybe someone better with RMAN will reply …

Evening Vasanth,

Does anyone have a better way to do a schema refresh, where the schema is mapped to its own tablespace, other than the Transportable Tablespace method?
Transportable tablespaces are "fun", sometimes. If the database is pre-11.2.0.3 and you have spatial indexes, then avoid TT like the plague.
Another foible of TTs is that you have to put the source tablespace(s) into read-only mode before doing the expdp of the metadata, and keep them that way while you copy the actual data files to "a safe place". That can affect the users of the system if they need to write anything.
Also, on the receiving database, the tablespace(s) must not exist, but the users who own the objects coming across must exist.
Your 450 GB is not really a huge database, so expdp (or exp) and impdp (imp) will be usable. You can even cut out the middleman and do without a dump file, provided the destination database can see the source database over a database link: you simply run impdp at the destination end and specify a database link, created on the destination database and pointing back at the source database, as the source of the data.
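As a rough sketch of that link-based import (the link name SRC_LINK, the schema SCOTT, and the connect strings are placeholders of mine, not from the thread) - no dump file is written at all; the rows flow straight across the link, and DIRECTORY is only needed for the log file:

    -- On the destination database: a link pointing back at the source.
    CREATE DATABASE LINK src_link
      CONNECT TO system IDENTIFIED BY "secret"
      USING 'SOURCE_TNS';

    $ impdp system@DEST NETWORK_LINK=src_link SCHEMAS=scott \
        DIRECTORY=DATA_PUMP_DIR LOGFILE=scott_refresh.log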
Using RMAN, depending on the version, might help.
If you need to go TT anyway, and the database is 10.2.0.5 or higher, then you can do cross-platform, cross-endian RMAN transports; in fact, you can take the whole database across if necessary.
If the database is below 10.2.0.5, then cross-endian transport is not permitted.
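For what it's worth, the source-side conversion step of a cross-endian transport looks roughly like this (tablespace name and staging path are invented; the tablespace must already be read-only):

    RMAN> CONVERT TABLESPACE users
            TO PLATFORM 'Linux x86 64-bit'
            FORMAT '/staging/%U';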
You might be able to clone the source database onto the destination server as a standby, then disconnect it from the source (ahem, however you do that!) and open it as a primary. Again, depending on the version, that might be possible without needing a database backup in the middle, with DUPLICATE ... FROM ACTIVE DATABASE being your command of choice.
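A bare-bones sketch of an active duplication, assuming the auxiliary instance on the destination is already started NOMOUNT (instance names and connect strings are placeholders):

    RMAN> CONNECT TARGET sys@SOURCE
    RMAN> CONNECT AUXILIARY sys@DESTDB
    RMAN> DUPLICATE TARGET DATABASE TO destdb
            FROM ACTIVE DATABASE
            NOFILENAMECHECK;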
I don't think a TSPITR will help you here, though. What RMAN tends to do, if I remember correctly, is clone the database to a working location, drop (or not create) the tablespaces it is not interested in, then pretend that the recovered tablespace is a backup of the current tablespace and swap the current one for the "recovered" one. As I said, I'm not certain you can do that in your scenario, as you are trying to get the tablespace into another database, which won't have the same DBID. You could try it; SET DBID source_dbid; might work in RMAN, but it's not an option for data files as far as I recall.
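For reference, a TSPITR within a single database is just the following (timestamp and auxiliary path are placeholders); the fact that it recovers into the same database, with the same DBID, is exactly why it doesn't map onto a cross-database refresh:

    RMAN> RECOVER TABLESPACE users
            UNTIL TIME "TO_DATE('2015-06-01 12:00','YYYY-MM-DD HH24:MI')"
            AUXILIARY DESTINATION '/u01/aux';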
Sorry I haven't been of much help. :(
I would go with these, in the following order myself:

  • Impdp directly to the destination database over a database link to the source.
  • Expdp (with PARALLEL set) and impdp (with the same parallelism) - see the sketch after this list.
  • Expdp/impdp with transportable tablespaces, if at all possible.
    The latter is the quickest, provided you can get your data files transferred to the destination server easily. Last time I did it, I had to courier the tapes from one location to the other because there was an "air gap" and the production (source) databases could never be seen from the test/qa/pre-production databases (destination) servers.
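For the second option, a rough parallel expdp/impdp sketch (schema, degree and file size are invented; the %U wildcard is what lets the parallel workers write to, and read from, multiple dump files):

    $ expdp system@SOURCE SCHEMAS=scott DIRECTORY=DATA_PUMP_DIR \
        DUMPFILE=scott_%U.dmp PARALLEL=4 FILESIZE=10G LOGFILE=scott_exp.log

    $ impdp system@DEST SCHEMAS=scott DIRECTORY=DATA_PUMP_DIR \
        DUMPFILE=scott_%U.dmp PARALLEL=4 LOGFILE=scott_imp.log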
Good luck.
Have fun.
-- Cheers,
Norm. [TeamT]

+1 to the DataPump import over a dblink, if possible. Having just done this for Phase 1 of a major project, in our case there was much less downtime and elapsed time required compared to the TTS method. Why? Due to data archiving, there was ~75% whitespace in the tablespaces, so there was that much less data (and indexes!) to push over the network.

There are still caveats though. Parallelism could reduce the overall time substantially, although the Enterprise Edition of Oracle is required. Even then, with “large” tables, it’s generally much faster to import the data only, then rebuild indexes and constraints later. Of course, even that isn’t easy, as Oracle will refuse to use parallelism when creating PKs with “VALIDATE”. They must be created “NOVALIDATE”, then validated with a separate “ALTER TABLE … MODIFY CONSTRAINT … VALIDATE” in order to run in parallel (true up to Oracle v11.2 - not sure about v12).
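A sketch of that NOVALIDATE trick, with an invented table, column and degree:

    -- Build the supporting index in parallel first.
    CREATE UNIQUE INDEX orders_pk ON orders (order_id) PARALLEL 8;

    -- Add the PK against that index without validating existing rows.
    ALTER TABLE orders ADD CONSTRAINT orders_pk
      PRIMARY KEY (order_id) USING INDEX orders_pk ENABLE NOVALIDATE;

    -- Validate separately; this check can use parallel query.
    ALTER TABLE orders MODIFY CONSTRAINT orders_pk VALIDATE;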

Since you’re on separate hardware, RMAN’s “DUPLICATE” command could also be useful, but I haven’t used it outside of Data Guard.

BTW, Toad’s DataPump interface is very useful for major data copies like this! It’s also nice to be able to watch the progress in a separate window in Toad, which allows one to go about other Toad business while it’s running.

GL!

Rich

Still on the matter of TTs and foibles: as I mentioned previously, beware when importing a TT from an Enterprise Edition database into a Standard Edition database, if the EE database has deferred segment creation turned on - which is the default.
What will happen is that the import will barf on the SE database, with the error message that your tablespace is READ ONLY. Well, of course it is, in fact, it was Oracle that created it as part of the import so if anyone made a mistake, it was the database! (Other opinions are available.) (They are wrong!)
The problem is that the EE database has tables with no rows present, and because of deferred segment creation, no actual segment has been allocated for those empty tables. Deferred segment creation is an EE-only option, so SE can’t cope: every object in an SE database must have space allocated, even an empty table.
The first part of the solution is not to bother raising a bug with Oracle: they closed mine down as “executing as expected” and didn’t want to correct the docs or the misleading error message.
The second is just to let the entire TT import run, note the broken tables listed, then recreate them with an expdp/impdp (or exp/imp) from the source database.
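If it helps, you can spot the segment-less tables on the EE source before transporting (SEGMENT_CREATED appears in DBA_TABLES from 11gR2; the owner here is a placeholder). Allocating an extent is one workaround I’ve seen for forcing the segment into existence, but do test it on your version first:

    -- Generate a fix-up statement for each empty, segment-less table.
    SELECT 'ALTER TABLE "' || owner || '"."' || table_name
           || '" ALLOCATE EXTENT;'
      FROM dba_tables
     WHERE owner = 'SCOTT'
       AND segment_created = 'NO';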
HTH
– Cheers,
Norm. [TeamT]

450 GB is a huge amount of data.

Placing the tablespace in read-only mode might work best, provided the rest of the process is quick enough not to break UNDO for other transactions.

If that 450 GB is raw data, the tablespace itself is probably more than 600 GB.

Really, a “refresh” of that volume depends on many things (network, I/O, space, volume load, etc.), and you have to think separately about each resource that might get hit at any point in the process.