Rebuild a BIG table in smaller parts

Hi !

Toad knows how to rebuild tables, preserving all their data/indexes/etc. It is very useful sometimes.

But if my table is really BIG and the temp/undo tablespace is too SMALL, it will not work, because Toad copies the data in one single “insert into … select …” statement.

My advice is to make Toad check the temp/undo space (or ask the user) and copy the data in smaller parts, committing after each part to free undo space.

Thanx!

Insert /*+ APPEND */ into … select * from …

should be your first choice, because it is the best you can do at the moment.
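
A minimal sketch of what that statement looks like (the names big_table and big_table_copy are made up for illustration; the copy must already exist with a matching structure):

-- direct path load: writes above the high water mark, bypasses the buffer cache
INSERT /*+ APPEND */ INTO big_table_copy
SELECT * FROM big_table;

-- must commit before this session can read big_table_copy again
COMMIT;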

append does a direct path load (if it can - it is not a promise; you are requesting it, and we may or may not do it for you, silently)

if you direct path load, the transaction that did the direct path load CANNOT query that segment - but other transactions can, they just cannot see the newly loaded data.
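
You can see that restriction for yourself; a sketch, using the same hypothetical tables as above:

INSERT /*+ APPEND */ INTO big_table_copy SELECT * FROM big_table;

SELECT COUNT(*) FROM big_table_copy;
-- ORA-12838: cannot read/modify an object after modifying it in parallel

COMMIT;

SELECT COUNT(*) FROM big_table_copy;   -- fine after the commit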

if you direct path load, you never use any existing free space, it always writes above the high water mark.
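
One way to watch that happen, with the same hypothetical table: delete all its rows (which leaves free space below the high water mark), run the direct path load again, and compare user_segments before and after - the segment only ever grows.

SELECT blocks, extents
FROM user_segments
WHERE segment_name = 'BIG_TABLE_COPY';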

if you direct path load, we bypass UNDO for the table modifications - only the table, not its indexes

if you direct path load, you’ll maintain indexes - we build mini indexes on the newly loaded data and merge them into the ‘real’ indexes in bulk. A direct path load of large amounts of data will maintain indexes very efficiently.

if you direct path load you can bypass redo on the TABLE in archivelog mode, if the database is set up to allow nologging and you have the segment set to nologging
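
Something like this, assuming you are allowed to set it (it has no effect if the DBA put the database in FORCE LOGGING mode, and a nologging load cannot be recovered from the archived redo, so back the segment up afterwards):

ALTER TABLE big_table_copy NOLOGGING;

SELECT logging FROM user_tables WHERE table_name = 'BIG_TABLE_COPY';
-- LOGGING = NO

INSERT /*+ APPEND */ INTO big_table_copy SELECT * FROM big_table;
COMMIT;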

direct path loading bypasses the buffer cache, you write directly to the datafiles.

direct path loading is only appropriate for the first load of a segment, or an incremental load of lots of data, or an incremental load into a table that never has any deletes (so there is no free space to consider)

- transactional systems: you probably won't use it.

- warehouses: a tool you'll use a lot.

So that solves UNDO; now for the TEMP tablespace …
How much TEMP the operation needs depends directly on your select statement - how it uses TEMP and how much - and if the plan is like that, there is nothing you can do about it on the insert side…
What you can do is optimize the select statement itself to reduce TEMP usage. Try Toad's explain plan and enable the TEMP usage column to see this value.
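
If you prefer to watch the real TEMP consumption while the statement runs, one option is v$tempseg_usage (a sketch; it needs access to the v$ views, and the 8192 below is an assumed block size - check your db_block_size):

SELECT username, tablespace, segtype,
       blocks * 8192 / 1024 / 1024 AS used_mb
FROM v$tempseg_usage;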

Hope this helps

Now another approach came to my mind, one that uses partial commits. The idea is to use bulk collection with the LIMIT clause, which retrieves a set of records at a time. The following statements fill the table “new_data” with the content of the user_tables view, 3 records at a time.

create table new_data as select * from user_tables where 1=2;   -- structure only, no rows

set serveroutput on size unlimited;

DECLARE
  -- collection type matching the target table's rows
  TYPE t_new_data_tab IS TABLE OF new_data%ROWTYPE;

  l_tab t_new_data_tab;

  CURSOR c_new_table IS
    SELECT * FROM user_tables;
BEGIN
  OPEN c_new_table;
  LOOP
    -- fetch the next batch of at most 3 rows
    FETCH c_new_table
      BULK COLLECT INTO l_tab LIMIT 3;

    EXIT WHEN l_tab.COUNT = 0;

    -- insert the whole batch in a single bulk operation
    FORALL i IN 1 .. l_tab.COUNT
      INSERT INTO new_data VALUES l_tab(i);

    dbms_output.put_line('Inserted '||l_tab.COUNT||' records');

    -- partial commit: releases the undo generated by this batch
    COMMIT;
  END LOOP;
  CLOSE c_new_table;
END;
/

Inserted 3 records
Inserted 3 records
Inserted 3 records
Inserted 3 records
Inserted 3 records
Inserted 3 records
Inserted 3 records
Inserted 3 records
Inserted 3 records
Inserted 3 records
Inserted 3 records
Inserted 3 records
Inserted 1 records
PL/SQL procedure successfully completed.

select * from new_data;

to verify that it holds the same content as the user_tables view.
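
Two caveats with this approach. First, committing inside the loop while the cursor is still open is a fetch across commit, so on a busy source table the cursor can eventually fail with ORA-01555 (snapshot too old). Second, a plain FORALL insert is a conventional insert, so it still generates full undo and redo for each batch. On 11.2 and later you may be able to combine the batching with a direct path write using the APPEND_VALUES hint (a sketch, same declarations as above):

FORALL i IN 1 .. l_tab.COUNT
  INSERT /*+ APPEND_VALUES */ INTO new_data VALUES l_tab(i);

Since each batch is committed right away, the “cannot query after a direct path load” restriction mentioned earlier does not get in the way here.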

Hope this helps.

Cheers.