Does setting the block size actually do what it indicates? By default it's set to 2,000. If we increase it to 10,000, for example, a bulk load of 3.9M records should drop from thousands of query executions to hundreds. The progress counter shown during the load does advance in 10K increments, so the setting appears to take effect, yet we're still seeing the spool capacity pushed past the limits we're trying to stay within.
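For context on the arithmetic (this is the batching concept, not a claim about Toad Data Point's actual internals): if the block size controls how many rows are sent per batched INSERT, then 3.9M rows at 10,000 per block is roughly 390 executions, versus about 1,950 at the default 2,000. A minimal sketch of that pattern in Python with pyodbc, where the DSN, table, and column names are hypothetical:

```python
import pyodbc

BLOCK_SIZE = 10_000  # analogous to the import "block size" setting

conn = pyodbc.connect("DSN=teradata_dsn")  # hypothetical DSN
cur = conn.cursor()

INSERT_SQL = "INSERT INTO target_table (col_a, col_b) VALUES (?, ?)"

def load_in_blocks(rows):
    """Send rows in batches of BLOCK_SIZE instead of one at a time.

    At 3.9M rows, BLOCK_SIZE=10_000 means ~390 executemany calls;
    the default of 2_000 would mean ~1_950.
    """
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == BLOCK_SIZE:
            cur.executemany(INSERT_SQL, batch)
            conn.commit()
            batch.clear()
    if batch:  # flush any remaining partial block
        cur.executemany(INSERT_SQL, batch)
        conn.commit()
```

Note that fewer, larger batches reduce round trips but make each individual statement heavier, which is consistent with the spool still being stressed even when the block size is honored.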