Toad World® Forums

Slow "Export Dataset" process


I use Toad for Oracle 10.1 on my computer.

When I select “Export Dataset” to an Excel file or delimited text (onto my local drive) for 18,000 records in the data grid, it takes over 40 minutes to complete.

Does anyone have suggestions about what could be causing the “Export Dataset” process to take so long in Toad for Oracle 10.1, and possible solutions?

Thank you.

I have the same issue.

Yes - in older versions of Toad, the Export Dataset dialog had two options: export using a new cursor not tied to the data grid (so no memory is spent copying data into the grid and back out to Excel), or export using the data grid itself. Choose the separate (new) cursor option and it will run much faster.

Make sure that “Display all results in grid” is UNCHECKED. This will make a huge difference.

There has to be a better answer… With the version I’m using, sometimes the Export Dataset runs in the blink of an eye and sometimes it just totally locks up at 500 rows, regardless of whether I check “Display all results in grid” or not. (I understand that by unchecking that box, the query runs again, right?)

Assume I have a query that returns 50,000 rows. What could make the difference between exporting those records quickly with no hesitation and exporting 1,500 rows at a time with a 10-minute delay between “batches”?

Very frustrating, especially when I have to export over 5 million records. I have to change the WHERE clause parameters so that I limit the result to about 20,000 records at a time (do the math 🙂).
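A sketch of that kind of slicing, for the record (`big_table` and `id` are made-up names, and the ranges assume the key column is dense enough to be worth ordering by):

```sql
-- Hypothetical batching sketch: big_table and id are placeholder names.
-- ROW_NUMBER() gives each row a stable position, so each export pass
-- can grab a fixed 20,000-row slice of the same ordered result.
SELECT *
FROM (
    SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.id) AS rn
    FROM big_table t
)
WHERE rn BETWEEN 1 AND 20000;   -- next pass: BETWEEN 20001 AND 40000, etc.
```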

When “display all results in grid” is checked, the query runs in a bi-directional mode. We use this mode for grids, so you can scroll forwards and backwards. The disadvantage is that records have to be held in memory to allow for the bidirectional scrolling. When that option is unchecked, we run the query again, but in a uni-directional mode. This means that as soon as we scroll off of a record, we know that we will never need to return to it, so we can release that memory. So if your data won’t easily fit into memory, it’s always best to have this option unchecked.

It sounds like when you hit the 500- or 1,500-row mark, that’s when we have to do a fetch (the number of rows per fetch depends on the width of your rows). Sometimes, especially with views or complex queries, you’ll see a pause like that between fetches while Oracle does some work to retrieve the next set of rows. Ten minutes is pretty long, though - I don’t know whether the delay is Oracle processing time, a slow fetch of data across the network, or something else.

Exporting to text files is the fastest - that format requires the least processing of the data as it is written to the file. But it sounds like your problem is somewhere on the Oracle side. On the queries that export slowly, do you see the same lag when you scroll the grid past the 500- or 1,500-row mark?
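As an aside, if Toad itself stays slow, the same data can be dumped to a delimited file outside Toad with SQL*Plus spooling. A minimal sketch (the file path, column names, and table name are placeholders):

```sql
-- SQL*Plus sketch: write query output straight to a delimited file.
-- The SPOOL path and the SELECT below are placeholders.
SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET TRIMSPOOL ON
SET LINESIZE 32767
SPOOL c:\exports\mydata.txt
SELECT col1 || ',' || col2 || ',' || col3 FROM some_table;
SPOOL OFF
```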

When trying to export lots of data, SQL*Net client settings such as SDU and TDU can play a big part in speeding up network transfers. So can using jumbo frames on your NICs and switches - if they support it. But I can export 50,000 rows in just over a minute even with my database being remote (i.e. accessed over the network). So it is most likely either Oracle and/or network config settings on your end. Sorry - not the answer you want to hear ☹️

John, yes, when I scroll in the grid it takes just as long. I wouldn’t doubt that the problem is on the Oracle side.

Thanks Bert and John. You’re right - not the answer I wanted, but it’s what I suspected, and probably nothing I can get corrected. 🙂


If it’s a complex query or view that you are exporting from, you might try "create table blah as select …".

Then export table blah, and drop it when you are done.

It might take a while to create the table, but I suspect it will save you from having to export 20,000 records at a time.

John, I don’t know very much about anything. I was wondering the other day if I could just use SQL to write directly to a file, but no one I talked to seemed to know what I was talking about, or didn’t want to know. No one suggested creating a table (it is a complex query).

I understand what you are telling me to do, but I’m just a dumb accountant trying to pull data out of a system that hates me 🙂 and I’m afraid of making an error that will alter data in our production system (I want to keep my job).

I don’t know if I have the permissions to create tables, and I don’t know the correct syntax. I’ll go digging in the help files to try to figure it out, but would you mind showing me a quick example please? I understand “create table”, just don’t know exactly what to put in “blah” LOL.

Thank you very much for your help!


“Blah” is just a table name. Based on your reply, I’m guessing that you don’t have permission to create tables. You might want to go to your DBA with the query and show him how slowly it runs. Maybe he can do something about that.

Anyway, a real statement (with made up table names and an incomplete query) would look like this:

Create table mytable
as select *
from some_other_table t1, yet_another_table t2
where t1.key = t2.key
and …;

That would end up creating a table called “mytable” with the structure and data of whatever query you provide. Since querying from this table would no longer be a complex query, it should go a lot faster.
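And once the export is done, the scratch table can be dropped. On Oracle 10g and later, adding PURGE skips the recycle bin so the space is freed immediately:

```sql
-- Clean up the scratch table when the export is finished.
DROP TABLE mytable PURGE;
```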

Never mind… I found an example:…/6854.create-table-examples.aspx

Thanks John!