This is a general question that I have not been able to resolve with a review of other threads or a Google search…
I have played with the Limit Grid Fetch and OCI Array settings to see if they would affect an ‘Out of Memory’ error I am receiving when submitting a general select for all columns of a table. If I request a single column from the table, it works, but not if I select *.
Does this ring a bell for anyone?
Not offhand, but does it happen for any column datatype on the single-column select (e.g.
"CLOB")?
Also, version numbers of Toad, Oracle Database, Oracle Client would help
troubleshoot the issue.
Rich -- [TeamT]
Disclaimer: I have never locked anyone in an electrical closet.
If it’s like 30,000 columns and 10 rows, I’d see that taxing memory
pretty wicked, both to construct the grid and to hold the actual data.
The OCI Array size wouldn’t matter, because it’s based on the number of rows, not
columns.
Do you have a DDL script we can see for the underlying object(s) used in the
SELECT * query? If it’s a table, you can get the script from the Schema
Browser, scripts tab.
The number of columns is not outrageous - around 100 or so - and the row count is limited to 500 (out of about 6K overall), but it errors out even before that.
So, we need more information. Of the hundred columns, are any of
those datatypes CLOB/BLOB/LONG RAW, etc.? The amount of data coming back for
those might be overkill.
You might want to try the ‘Deferred LOB’ read and especially the
‘Stop data fetch when available memory becomes less than X’ options.
You have your low memory threshold set to 50 MB. How much physical memory do
you have available when you get the error?
You said you have around 100 columns. Can I see a ‘create table’
statement for your table? I’m guessing that you have some
‘exotic’ datatypes in there….
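
If pulling the script from the Schema Browser is inconvenient, one other way to get a ‘create table’ statement is DBMS_METADATA (a sketch; the table name is taken from later in this thread):

```sql
-- Generate the CREATE TABLE script for the table in question
-- (GL_INTERFACE is assumed here; substitute your own table name).
SELECT DBMS_METADATA.GET_DDL('TABLE', 'GL_INTERFACE') FROM dual;
```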
When I click ‘Execute as script’, I can see in Task Manager that during execution of this query the amount of used memory (PF Usage) rises by an extra 300 (!) MB and stays there even after the query completes. The only way to reset it is to restart TOAD.
When the same query is executed using F9 or the ‘Execute Statement’ button, it runs much faster and with no change in memory usage, even after I scroll to the bottom of the result grid.
Toad 9.5, WinXP
If I don’t use ‘where rownum’ and execute the statement as a script, it takes all available memory (4 GB) and then hangs with the alert message ‘Out of memory’.
This only happens when the table has at least 50-60 columns and about 50-60k records.
No CLOBs, etc. - just VARCHAR2 and NUMBER columns.
I use GL_INTERFACE here as a common example.
Any ideas how this can be fixed?
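
For anyone trying to reproduce this, the shape of the statement is roughly (a sketch based on the details above; the exact row limit is the one mentioned later in the thread):

```sql
-- Run with F5 ('Execute as script') to see the memory growth;
-- the same query via F9 behaves normally.
SELECT * FROM gl_interface WHERE ROWNUM < 500;
```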
Remember that when running a script (F5), ALL the information is pulled back and
put both in the grid and in the script output. In addition, we have to allocate
memory for the full declared size of each column, so if you have a column defined as
VARCHAR2(4000), Toad will have to allocate that much memory for the column to
set the proper spacing for the script output. One way you can minimize the
memory is to turn off the history tab and grid output. If you need the grid
output, your best bet is to run it via F9 instead, since it will use
significantly less memory.
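
A back-of-envelope check (my arithmetic, not from the thread; the per-column width is a worst-case assumption) shows how quickly that allocation strategy adds up:

```sql
-- Worst-case bytes the script engine might allocate for one copy of the
-- data, if every one of ~100 columns were declared VARCHAR2(4000):
SELECT 100 /* columns */ * 4000 /* bytes each */ * 500 /* rows */
       / 1024 / 1024 AS approx_mb
FROM dual;
-- ~191 MB for one copy; with grids on, a second copy roughly doubles it.
```

With narrower declared columns (e.g. the VARCHAR2(240) maximum mentioned below for GL_INTERFACE) the figure shrinks proportionally, which is why the declared widths matter as much as the row count.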
Thanks for your quick reply.
I am aware that running as a script fetches all records by default (but this can be limited through Execute/Compile - Script Output - Limit Results to).
As you may see, I limited the number of fetched records with ‘rownum<500’; also, when I run it as a statement and scroll to the bottom of the result grid, it fetches all 500 records too, right? But there is no memory change at all in that case. Also, as you can see, gl_interface contains only VARCHAR2 columns with a max length of 240 bytes.
So the issue I see is that when TOAD fetches the same number of records in ‘Script’ mode, it somehow wastes a huge amount of memory and, what is really bad, does not release it after the query is done or even after the session is closed. If you run the same script again, it adds another 300 MB, and so on until it crashes. I don’t think this is normal.
I unchecked ‘Show script grids’, so it does not use any memory for grid display.
Also, I just verified that the script output in my case comes to 2 MB, and when I clear it the amount of used memory decreases by exactly 2 MB… but what about the other 298 MB??
That makes a good point, but I don’t think we can upgrade to v.10 without buying a new license or something… and, actually, could anybody verify that this particular issue is fixed in v.10?
You need to be current on support/maint, no new license required.
We don’t have a scenario to test with yet. Can you provide specifics
using a common schema like SCOTT/HR/SH?
You can trust Greg’s assertions about the script behavior; he’s intimately
familiar with that code. I think they’re even considered common-law
spouses in Georgia.
I have to say this was NOT a good point.
I just ran the same query as a script under the Toad 10.1 Trial;
500 records increased PF usage by 1 GB.
Nice.
So, as I understand your replies, it looks like this issue has not been reported/addressed and has little chance of getting fixed, right?
Oh well…
Actually, here is why I tried to run this as a script:
the idea was to estimate query performance by adding ‘set timing on’/‘set timing off’.
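
If timing is the only goal, the script can stay minimal; a sketch using the directives and the query shape from earlier in the thread:

```sql
-- Wrap just the statement of interest; elapsed time is printed
-- to the script output after execution.
SET TIMING ON
SELECT * FROM gl_interface WHERE ROWNUM < 500;
SET TIMING OFF
```

Keeping the script this small (with grids and history off, as suggested below) limits how much data the script engine has to buffer while still producing the timing figure.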
In 9.5, one of the biggest memory hogs was the history tab. Turning that off
should help. Also, if you don’t need grid output, turn off the grids. Having the
grids on makes a second copy of the data, so it will use twice the memory.
Greg
On Mar 18, 2010, at 4:57 PM, “Jeff Smith” wrote: