Toad World® Forums

Interpreting IO stats by TOAD


#1

Hi … I am working on a long running SQL.

Most probably it is going to return 5-10 Million records.

I am trying to monitor and evaluate its bottleneck via the Toad Schema Browser.

Good news is that this SQL keeps running and the IO stats keep changing.

But could you help me in understanding these IO stats and calculating how BAD it
is running.

Thanks,

Shiva



#2

You should trace the session and look at the trace file with Toad’s Trace
File Browser (assuming you have the DBA module). Then you’ll be able to
see exactly what the query is doing. If you don’t have the DBA module, you
could at least run your trace file through tkprof.
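For example, assuming the DBA-module route, tracing can be switched on with Oracle’s DBMS_MONITOR package (the SID and serial# below are placeholders - look them up in V$SESSION first):

```sql
-- Sketch only: enable extended SQL trace (with wait events) for one session.
-- Requires execute privilege on DBMS_MONITOR; 69/1234 are example values.
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => 69,
    serial_num => 1234,
    waits      => TRUE,
    binds      => FALSE);
END;
/

-- ...let the query run for a while, then switch tracing off:
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 69, serial_num => 1234);
END;
/
```

Without the DBA module, the resulting trace file can then be summarised from the command line with something like `tkprof <tracefile>.trc report.txt sys=no sort=prsela,exeela,fchela`.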


#3

Hi Shiva,

Hi … I am working on a long running SQL.
Most probably it is going to return 5-10 Million records.

I am trying to monitor and evaluate its bottleneck via the Toad Schema
Browser.
Good news is that this SQL keeps running and the IO stats keep changing.

But could you help me in understanding these IO stats and calculating
how BAD it is running.

“BAD” is subjective. You say it’s going to return millions of rows. If
that’s what’s required, then I’m not sure what anyone can offer you.

But I can offer you this: Go right now and buy a copy of “Optimizing Oracle
Performance” by Cary Millsap with Jeff Holt. There’s even a preview on
Google Books. You can skip the middle section of the book if you’re not
interested in things like the math behind Queuing Theory (and how it can
help you!), but it is by far the most-used Oracle book I have. And the
general performance tuning advice it has is the key to tuning ANYTHING,
Oracle, SQL Server, or airport baggage claims. Really. Read it. Learn it.
Know it. Love it.

Also, buy any and all books by Tom Kyte. Make sure to read through his
advice on anything Oracle at http://asktom.oracle.com. One of the more
poignant points that Tom makes throughout his advice is that when asking a
question, you need to give us enough information to help you. He uses an
analogy of asking “I was driving down the road at 55mph when my car slowed
down to 45mph. What caused it?” There simply isn’t enough information
there for anyone to be able to accurately answer it.

What you’re really after isn’t so much Toad help, but performance tuning
help. You’ll find that the folks here, while some are bona fide Oracle
experts, concentrate on helping with Toad issues.

Performance tuning is an art based in science. Once you get yourself
grounded using the materials I mentioned, resources like the folks at the
Oracle-L mailing list at http://freelists.org may also be able to help you.

Best of luck!

Rich – [TeamT]

Disclaimer: I’m not Norm, but I sometimes try to play Norm at work.


#4

You could also send the SQL to Toad’s SQL Optimizer and let it find possible rewrites and/or missing indexes that would fix the problem - that’s part of the Toad Xpert edition.


#5

Evening Shiva,

But could you help me in understanding these IO stats and calculating
how BAD it is running.

I can’t tell you how bad it is, but here is what the figures show:

SID 69 has retrieved 275,589,521 blocks in this session so far. This
covers EVERY statement executed by SID 69 since SID 69 logged in.

Of those blocks read, 1,290,911 came directly from the disc devices -
they were not found in the buffer cache. The remainder were read from
the buffer cache (275,589,521 minus 1,290,911) so that’s quite a good
fetch from cache result.
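That arithmetic can be sketched in a few lines (figures copied from the screenshot; as noted later, a high cache percentage on its own doesn’t prove the query is efficient):

```python
# Cumulative I/O figures for SID 69, as quoted above.
total_block_reads = 275_589_521  # all blocks retrieved by the session
physical_reads = 1_290_911       # blocks that had to come from disc

from_cache = total_block_reads - physical_reads
cache_fraction = from_cache / total_block_reads

print(f"{from_cache:,} blocks came from the buffer cache "
      f"({cache_fraction:.2%} of all block reads)")
```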

Session 69 has also performed 2,366,599,375 consistent gets which means
that it had to do that many reads of blocks in the cache to recover data
values that were present at the start of the select, for example.
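For reference, these counters come from Oracle’s V$SESS_IO view (presumably what Toad is displaying); you can query it directly:

```sql
-- Per-session cumulative I/O statistics; SID 69 as in the screenshot.
SELECT sid,
       block_gets,        -- current-mode gets from the buffer cache
       consistent_gets,   -- consistent-mode gets (read consistency)
       physical_reads     -- blocks read from disc
  FROM v$sess_io
 WHERE sid = 69;
```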

So, if you started a transaction at 13:00 and it runs for 20 seconds,
ALL the results you get back will be consistent as of 13:00 when your
statement started.

If, during the run of your statement, my session updates all the rows
you have not yet read, when you come to read them, Oracle will read undo
blocks from the buffer cache to re-create the data that I changed since
your query started.

If your query is returning millions of rows, I have two questions:

  1. Why? Who will be reading all these rows?
  2. This really better be a batch system. It is batch, isn’t it?

If this is an OLTP system, then reading millions of rows is a
non-starter I’m afraid.

In order to find out if the query is performing, running a trace may
prove a wee bit problematical due to the extremely high numbers of
rows/blocks being read. I’d be looking at an explain plan first myself -
in case it’s possible to find out if an index is not being used, for
example.
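A minimal sketch of that first look, using Oracle’s EXPLAIN PLAN and DBMS_XPLAN (the table and column names here are invented for illustration):

```sql
-- Generate the plan without executing the statement...
EXPLAIN PLAN FOR
SELECT o.order_id, o.order_date, c.customer_name
  FROM orders o
  JOIN customers c ON c.customer_id = o.customer_id
 WHERE o.order_date >= DATE '2014-01-01';

-- ...then display it from PLAN_TABLE:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```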

It may simply be that the query has to read so many millions of rows
that a full table scan is all that the CBO can do to efficiently read
the data you have requested.

If it comes up with a full scan, and you are filtering some (many ?)
rows out by way of the WHERE clause, then maybe you need to index those
column(s) to reduce the amount of I/O taking place.
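As a purely hypothetical example (column and index names invented), if the WHERE clause filters on a status column and a date range, an index like this could let the CBO avoid touching most of the table’s blocks:

```sql
-- Hypothetical: index the filtered column(s) so a full scan can be avoided.
CREATE INDEX orders_status_date_ix
    ON orders (status, order_date);
```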

In addition, just because the vast majority of your blocks were read
from cache doesn’t mean that the query is good. It’s far, far better
not to read blocks at all than to read them from cache. You need to
perform filtering before the data gets retrieved back.

I know some applications, Access via ODBC for example, will read EVERY
row in all the tables - sometimes Cartesian joined - and filter out what
Access doesn’t want when it gets the data. This is wrong - leave the
data you don’t want behind, and only fetch what you really need.

Make sure your application is not doing something similar - Java
developers, bless them, have a nasty habit of doing all sorts of
“clever” stuff that kills performance.

Good luck and Happy Christmas!

Cheers,
Norm. [TeamT]


#6

Thanks Norm … as usual your explanations are very lucid.

This is what I was looking for.

( If you feel comfortable please send me your personal e-mail/contact number at
shiva_de_novo@yahoo.com )

Most of the time we (designers) are called upon to make a bad idea do miracles.

This was one of them :frowning:

Someone wanted to run a complex SQL query (a fairly well-tuned query) in Toad,
get approximately 5 million records, and save them in Excel on a daily basis.

Now I can show my burn marks and convince them to run it via batch against
backup copy of Production DB.

Nonetheless, I am going to enhance my DBA skills.

Happy Holidays and Merry Christmas to all !!!

Thanks,

Shiva


#7

Evening Shiva,

Thanks Norm … as usual your explanations are very lucid.
Welcome.

( If you feel comfortable please send me your personal e-mail/contact
number )
Sorry, anything I have to say on matters Oracle and/or Toad is to be
found here on these lists or on my blog (Rants & Raves on matters Oracle
and other stuff) at http://qdosmsq.dunbar-it.co.uk/blog.

That way, other people get to “benefit” as well, plus, when I get things
wrong, we all get corrections!

Anyone who contacts me directly is usually opening themselves up for an
extortionately charged contract! :wink:

Most of the time we ( Designers ) are called to make a bad idea do miracles.
I find it’s usually the DBAs who have to sort out the problems caused by
the designers, the architects, the Java developers (and the other ones
too) and it’s always the database that is the problem, never anything
else. Hmmm!

Someone wanted to run a complex SQL query (a fairly well-tuned query) in
Toad, get approximately 5 million records, and save them in Excel on a daily basis.
An Excel worksheet is limited to 1,048,576 rows (65,536 in the older
.xls format), so 5 million rows will never fit on a single sheet. If I’m
not mistaken, Toad will split the results into ‘many’ worksheets to get
all the rows into the Excel file and avoid those row limits.

Now I can show my burn marks and convince them to run it via batch
against backup copy of Production DB.
Good luck.

Nonetheless,I am going to enhance my DBA skills.
Read books by Tom Kyte, Jonathan Lewis, and James Morle - you can find an
online copy of his Scaling Oracle book at
http://www.scaleabilities.co.uk/index.php/View-details/Books/13-scalingOracle8i.pdf
(don’t worry about the 8i bit, it’s still relevant).

And don’t forget, books by Bert Scalzo as well!

Happy Holidays and Merry Christmas to all !!!
Indeed.

Cheers,
Norm. [TeamT]