Out of memory on SELECT query - Toad for Oracle

I am using Toad 11.0.0.116 on Oracle 10g. When I run a SELECT * on hr_pump_batch_lines (one of the tables in Oracle R12), I get an "Out of Memory" error. There are no other messages in the error window. I can continue working normally afterwards; it happens only when I query this table.

I can see the data if I use F4 and go to the Data tab. The table has only 50 rows in it. The error happens even if I add a WHERE clause. I can query the table from SQL Developer.

Why is this happening, and is there a solution? I really don't want to jump back and forth between tools, and I really like Toad.

I'm not familiar with the structure of that table, but I am guessing it has a lot of columns, maybe a lot of varchar2(4000) columns.

When you do a select from a table in Toad, we have to reserve memory for a number of rows before we do a fetch. It doesn't matter if the table has fewer than "that number" of rows, because we don't know how many rows there are until after the fetch. You can control "that number" by going to Options | Oracle | General and adjusting "OCI Array Buffer Size". The default is 500, and most of the time that's a good value. But if you are selecting from tables like this, you might want to bump it down. The default for SQL*Plus is 25.
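As a rough illustration (my own sketch, not a Toad feature), you can estimate the buffer that has to be reserved per fetch by multiplying the table's declared row width by the array size. The query below uses the standard all_tab_columns dictionary view; the HR owner is an assumption, so adjust it to whichever schema holds the table:

-- Declared row width times array size approximates the per-fetch buffer.
-- Owner 'HR' is assumed here; change it to the table's actual schema.
SELECT SUM(data_length)       AS bytes_per_row,
       SUM(data_length) * 500 AS bytes_per_fetch_at_500
FROM   all_tab_columns
WHERE  owner      = 'HR'
AND    table_name = 'HR_PUMP_BATCH_LINES';

On a table with hundreds of wide varchar2 columns, that product can run into the hundreds of megabytes, which is why lowering the setting makes the error go away.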

If you stop selecting that table from the Editor and use the Schema Browser's Tables - Data tab instead, you don't need to adjust the option. This is because in the Schema Browser we know the structure of the table before we select from it, and we adjust that setting on the query for the Data tab.

Hi jdorlon,
Thanks for the suggestion.

I tried changing the option you mentioned. Up to OCI Array Buffer Size = 2331, the error is still Out of Memory. From 2332 upwards, I get a different error:
Access violation at address 007F054C in module 'Toad.exe'. Write of address 00000000
This keeps happening up to 9000. I am not able to run the query.

I need to be able to run SELECT queries from the Editor, since I need to use complex conditions.

Thanks
Manu

OK, but what happens when you set it to a small value, like 25?

There is very little benefit in setting it any higher than 500. Increasing OCI_ARRAY_BUFFER_SIZE means fewer fetches to retrieve a table, but it also means that more memory is required for each fetch. Once you get past 500, the law of diminishing returns really kicks in. My advice is to leave it at 500 unless you are having ‘out of memory’ errors, in which case you should lower the setting until the errors go away.
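For comparison, here is the equivalent knob in SQL*Plus, as a sketch; a small array size just means more round trips, each with a much smaller buffer:

-- SQL*Plus equivalent: fetch 25 rows per round trip
-- (the 'hr' schema prefix is an assumption).
SET ARRAYSIZE 25
SELECT * FROM hr.hr_pump_batch_lines;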

I just had another thought. If you are using F5 to run the query in the editor, stop doing that. Use F9 instead.
F5 retrieves all rows. F9 retrieves only some of them, unless you scroll down to see the rest.

Setting the OCI Array Buffer Size to 25 worked! Yippee! Thanks!

Ahhhhh… yes, F9. Solved the issue for me, again (I was also running out of memory…)

Bazinga,

Bill

Perhaps you should use “commit” if you are creating a new table.

This happens to me :)

CREATE TABLE TEMPO_DB.TEMP_06_TBL AS
SELECT /*+ PARALLEL */  -- note: the '+' must follow '/*' directly, or Oracle treats the hint as a plain comment
       CDR.ACCOUNT_NO,
       NVL(SUM(NVL(CDR.NET_TOTAL_AMT / 100, 0)), 0) AS UNBILLED_USAGES
FROM   schema1.cdr_threshdata CDR
WHERE  CDR.ACCOUNT_NO IN (SELECT DISTINCT BILL_ACCT_ID FROM TEMPO_DB.TEMP_DATA_TBL)
AND    CDR.bill_ref_no = 0
AND    profile_id NOT IN (30, 36)
GROUP  BY CDR.ACCOUNT_NO
ORDER  BY account_no ASC;

Thanks John, it worked.