Evening Shiva,
> But could you help me in understanding these IO stats and calculating
> how BAD it is running.
I can't tell you how bad it is, but what the figures show is this:
SID 69 has retrieved 275,589,521 blocks in this session so far. This
covers EVERY statement executed by SID 69 since it logged in.
Of those blocks read, 1,290,911 came directly from the disc devices -
they were not found in the buffer cache. The remainder were read from
the buffer cache (275,589,521 minus 1,290,911), so that's quite a good
fetch-from-cache result.
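To put a number on it: 275,589,521 minus 1,290,911 is 274,298,610
blocks found in the cache, which works out at roughly a 99.5% cache
hit rate for this session.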
Session 69 has also performed 2,366,599,375 consistent gets, which
means it read blocks in the cache that many times in consistent mode,
recovering data values as they were at the start of the select, for
example.
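For reference, assuming your figures came from V$SESS_IO, you can pull
them yourself with something like this:

    select sid, block_gets, consistent_gets, physical_reads
    from   v$sess_io
    where  sid = 69;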
So, if you start a query at 13:00 and it runs for 20 seconds, ALL the
results you get back will be consistent as of 13:00, when your
statement started.
If, during the run of your statement, my session updates rows you have
not yet read, then when you come to read them, Oracle will read undo
blocks from the buffer cache to re-create those rows as they looked
before my changes.
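A contrived sketch of what I mean, with a hypothetical BIG_TABLE:

    -- Your session, at 13:00:
    select * from big_table;   -- starts reading, runs for 20 seconds

    -- My session, at 13:00:05, while yours is still running:
    update big_table set amount = 0;
    commit;

    -- Your select still returns the pre-update AMOUNT values for
    -- every row, rebuilt from undo as Oracle reads the changed blocks.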
If your query is returning millions of rows, I have two questions:
- Why? Who will be reading all these rows?
- This really better be a batch system. It is batch, isn't it?
If this is an OLTP system, then reading millions of rows is a
non-starter, I'm afraid.
To find out how the query is performing, running a trace may prove a
wee bit problematical due to the extremely high numbers of rows/blocks
being read. I'd be looking at an explain plan first myself, in case it
shows that an index is not being used, for example.
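Something along these lines would do it - assuming DBMS_XPLAN is
available on your version, and the query itself is made up:

    explain plan for
    select order_id, amount
    from   orders
    where  status = 'OPEN';

    select * from table(dbms_xplan.display);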
It may simply be that the query has to read so many millions of rows
that a full table scan is all that the CBO can do to efficiently read
the data you have requested.
If it comes up with a full scan, and you are filtering some (many ?)
rows out by way of the WHERE clause, then maybe you need to index those
column(s) to reduce the amount of I/O taking place.
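Sticking with the made-up example above, if the full scan were down to
filtering on ORDERS.STATUS, an index like this might let the CBO avoid
it:

    create index orders_status_ix on orders (status);

Whether the CBO actually uses it depends on the selectivity of the
column, of course.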
In addition, just because the vast majority of your blocks were read
from cache doesn't mean that the query is good. It's far, far better
not to read blocks at all than to read them from cache. You need to
perform your filtering before the data gets retrieved back.
I know some applications, Access via ODBC for example, will read EVERY
row in all the tables - sometimes Cartesian joined - and filter out
what Access doesn't want when it gets the data. This is wrong - leave
the data you don't want behind, and only fetch what you really need.
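As a sketch of the difference, using the same hypothetical ORDERS
table:

    -- Wasteful: drag every row back and filter in the application.
    select * from orders;

    -- Better: let the database throw away the rows you don't want.
    select order_id, order_date, amount
    from   orders
    where  status = 'OPEN';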
Make sure your application is not doing something similar - Java
developers, bless them, have a nasty habit of doing all sorts of
"clever" stuff that kills performance.
Good luck and Happy Christmas!
Cheers,
Norm. [TeamT]