Slow Speed - Statistics?

We have a new Oracle 11g server that seems to be a bit slow at times.

Prior to this, we played DBAs and didn’t really have a lot of problems.

I was told that our new server is slower because as we load a table, statistics
need to be run.

Has anyone heard of this?

Is there a way that, if that needs to happen, it runs automatically?

Yes – starting with Oracle 10g the DBA needs to collect stats and/or
histograms on their objects – because all queries default to the cost-based
optimizer – which requires them.

While there are automated jobs under both 10g and 11g – you really should
do this manually such that you know exactly what you’re working with.
Without the stats and/or histograms you cannot get the best explain plans. You
also might want to copy your production stats to dev/test machines so that they
behave just like prod. :-)
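
For the copy-to-dev part, a minimal sketch using DBMS_STATS (the schema and stat-table names here are made up, and the stat table still has to be moved to the test database, e.g. by export/import or a database link):

BEGIN
  -- create a holding table for the statistics, then export the schema stats into it
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP_OWNER', stattab => 'PROD_STATS');
  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APP_OWNER', stattab => 'PROD_STATS');
END;
/

-- ...copy the PROD_STATS table to the dev/test database, then on that side:
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APP_OWNER', stattab => 'PROD_STATS');
END;
/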

Thanks Bert.

Since we load data into tables, run PL/SQL, then export the tables and repeat
with different data, do we need to do this each time after a load?

Is it time intensive?

Collecting stats on a small percentage of the data (e.g. 5%) usually runs in just
a few minutes or less - and can make all SQL statements faster. Sounds to me like
you should be doing this as one of your steps.
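
As a rough sketch of what that step could look like (the owner and table names are hypothetical):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_OWNER',    -- hypothetical schema
    tabname          => 'LOAD_TABLE',   -- hypothetical table you just loaded
    estimate_percent => 5,              -- the small sample mentioned above
    cascade          => TRUE);          -- gather stats on its indexes too
END;
/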

Collecting stats on a small percentage of the data (e.g. 5%) usually runs in just
a few minutes or less

Depending on the nature of the data, this may or may not be useful.

We have a table with a code as well as a unique key. Both are indexed. When
you’re trying to find all “active” records, going with the
code index gives the best performance.

We ran into a situation where the optimizer (the setting was CHOOSE) was deciding
to go with the code index when you provided both a code and a unique key while
looking for an inactive record. In examining the data, we concluded that redoing
the stats on the data utilizing histograms would get the optimizer performing
properly.

I should note that the “active” records amount to 0.09% of the total
records in the table. A second of the four code values is about the same. The
other two each amount to a substantially greater piece of the pie, at 42.8% and
57%.
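
With skew like that, a histogram on the code column is the sort of thing that helps (a sketch only – the owner, table and bucket count are made up):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'APP_OWNER',
    tabname    => 'CODED_TABLE',
    method_opt => 'FOR COLUMNS code SIZE 4');  -- one bucket per distinct code value
END;
/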

So… depending on the nature of the data and what appears in the sample that
is selected, a small sample may or may not be sufficient for well-running SQL.

Roger S.

Running stats can actually make things WORSE. Having no stats at all is
generally never a good thing though.

Having no stats at all is generally never a good thing though.

True… generally :wink:

The original poster indicated they load the data, run SQL (one report? many
reports? simple query? complex query? tends to utilize a low percentage of the
data? or a high percentage? includes links to other tables?), dump the data, then
repeat with a new set.

If a simple report which utilizes most of the data and only does a single full
table scan is run, then it may actually be in the OP’s interest not to run
stats.

The main point I was trying to make – without actually saying it –
would be that the individual with knowledge of the data in question would be in
the best position to decide what kind of stats to collect.

In short, while people can make suggestions, it would be best to outline various
situations those suggestions would work within.

I know it’s not really what the OP wants to hear, but it really is the OP
who is in the best position to answer his own question. That can only be done
by examining the data that is loaded for each load. After all… if the
datasets deviate significantly between loads, different strategies for the stats
may be required for each set.

Roger S.

This is why Oracle has the feature of SAVING STATS that work and then fixing the
plan on it. I forget the terminology. There is also the possibility of OUTLINES
so as to tell ORACLE to use a certain plan no matter what.
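
A rough sketch of the OUTLINES part (the outline name and the statement are made up for illustration):

-- store a plan for one specific statement, then tell Oracle to use stored outlines
CREATE OR REPLACE OUTLINE active_by_code
FOR CATEGORY default
ON SELECT * FROM coded_table WHERE code = 'A';

ALTER SYSTEM SET use_stored_outlines = TRUE;  -- not a normal init parameter, so it
                                              -- has to be re-set after each restart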

I think the statement should have been “Running stats can actually, in a
very few specific cases, make things WORSE. Having no stats at all is
generally (especially in 10g and up) never a good thing though.”

Unfortunately, tuning can’t be done by the numbers. You need a touchy-feely
type who works by both detailed knowledge and instinct. And if you have one,
never let him go.

Yep, when I was going through Oracle training back in the late 90’s I had
already been working with SQL for 15+ years, but I was looking forward to the
SQL tuning class so I could learn the “scientific method” rather
than the trial and error method I had been successfully but laboriously employing
up to that point.

The first thing the teacher said is “SQL statement tuning isn’t a
science, it’s an art” and I thought “aww sh*t”. He was
right, and it’s still true. Unfortunately.

Like everything else in life, it’s a little bit of theory and a lot of
practice.

Even the Performance Tuning book is confused and confusing on the subject.
Consider A rather than B, but the next paragraph says consider B rather than A.
Even with a background in how the optimizer works and the precedence of
operations table, it’s still mostly trial and error.

Even when you use a tuning tool like Quest’s Tuning Lab, it will suggest and try
numerous variations to see what actually works better, because it’s not
black and white.

Precisely!

If the original poster has been paying attention, sufficient information has
been provided that they will understand they need to actually put some effort
into finding the answer for their specific situation(s) and that all we can do
is offer various – what appear to be conflicting but are all true –
opinions/solutions/suggestions on optimization.

Roger S.

The fact that things like tuning are 50% intuitive and 50% following the book
is good. Because if everything worked as smoothly as the salesmen claim it does,
most of us would be saying “would you like fries with that”.

So any time you see an Oracle bug or an Oracle stupidity or bad documentation, do
not complain but instead go KACHING, YES, JOB SECURITY.

Our sets are identical except that they vary in volume (# of records).

We run a “standard” PL/SQL process against the same tables each time
and it saves to the same tables each time.

We report using Access ODBC (sorry – it’s really straightforward
and easy) and then dump the input tables, keeping a one-line general summary in
standard tables.

So let me see if I understand:

Stats are required to assist Oracle in determining the best way to handle
specific tables taking into account the indexes, triggers, table sizes, etc.

If we run stats nightly and load/unload these same tables with the only
difference being the size, the optimizer may not tweak it to perfection, but it
is not a totally inefficient run.
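
One quick way we could sanity-check that (table name hypothetical) would be to compare what the dictionary thinks the table looks like with an actual count right after a load:

SELECT table_name, num_rows, last_analyzed, stale_stats
FROM   user_tab_statistics
WHERE  table_name = 'LOAD_TABLE';   -- what the optimizer is working from

SELECT COUNT(*) FROM load_table;    -- what is actually in there right now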

Is that correct?

Hi Gerry,

do I remember correctly that you said you had upgraded the database
to a new version, and now the report is taking forever where it used to
run quickly on the previous version?

If so, the Access/ODBC part might be the cause. Regardless of how simple
it all is, Access/ODBC might be butchering your query as per something I
discovered years ago with Access 97. (It may still apply!)

  • Basically a single table SELECT passed to Oracle would be fine.

  • Join two tables and it might get passed to Oracle correctly as a
    joined pair of tables query, or, it might get passed to Oracle as a
    couple of queries as follows:

    select stuff from table_one;
    for (each row in the above query) loop
      select more_stuff from table_two
      where table_two.something = :bind_variable;
    end loop;

The “:bind_variable” in the second bit is the join column(s) from
table_one. The columns that it should have just joined on. So the
following SQL:

select emp.ename, dept.dname
from emp,dept
where emp.dept_id = dept.dept_id;

Would hit the database as:

select emp.ename, emp.dept_id
from emp;

Followed by a lot of:

select dept.dname
from dept
where dept.dept_id = :dept_id;

And the “:dept_id” bind would be every single one from the list of
employees in the first query. Obviously the problem is that while there
are possibly hundreds (or millions!) of employees, there are only 10
departments, but we need to fetch the department name hundreds (or millions) of
times. Not good for performance.

It does even worse things at times as well. But I won’t go there!

I’m wondering if your query is being abused in a manner similar to the
above? What do you see on the database in database->monitor->session
browser (do you have the DBA option?) when Access is doing its thing?

Cheers,
Norm. [TeamT]

Another possibility is the ODBC drivers. We went from 10g to 11g, and an application that worked fine with the Microsoft ODBC for Oracle driver failed; it only worked once we used Oracle’s client. I presume it is a client because the name of the driver had ORACLE and HOME and a version number in it.

But maybe I am wrong because in our case the application failed. In your case it is painfully slow.

The first thing the teacher said is "SQL statement tuning isn't a science,
it's an art" and I thought "aww sh*t". He was right, and it's still true.
Unfortunately.

I am really struggling with this one. My belief is that it is a science,
but one that few of us (myself very, very much included) fully understand.
The "art" is the filler for the gaps in my understanding of the science.

There are a LOT of variables that go into the CBO, which can or should be
one avenue to investigate when perf tuning SQL. Wolfgang Breitling seems to
have mastered (or at least knows waaay more than I do) the subject. Google
his "A Look Under the Hood of CBO" paper about using the 10053 event to
investigate exactly why CBO does what it does.
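
A minimal sketch of firing that event for one session – the query is just the emp/dept example from earlier in the thread, and the trace ends up in the user dump / diagnostic destination:

ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

SELECT emp.ename, dept.dname
FROM   emp, dept
WHERE  emp.dept_id = dept.dept_id;   -- must be hard parsed for the trace to appear

ALTER SESSION SET EVENTS '10053 trace name context off';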

Sitting here at home with a sick kid (and another really bored one) and
thinking about this. Perhaps the "art" is knowing the limitations of the
science -- the (lack of) accuracy of statistics comes to mind. Histograms
are good, unless the limitations of how they are stored mean they do not
accurately reflect the data being described. Data skew. Bad or uncollected system
stats. Incorrect data model. Poor indexing. And perhaps the ultimate:
poor schema design. Not sure if this makes sense -- I blame a lack of
sleep.
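
On the system stats point at least, the gather itself is a one-liner (the mode shown is just one option, and whether it helps depends on the workload):

-- no-workload system statistics: I/O and CPU characteristics for the CBO cost model
EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'NOWORKLOAD');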

Enough random thoughts for now. There's lunch upstairs!

Rich -- [TeamT]

Disclaimer: I wanna be the Jack Bauer of DBAs.

Well, okay, I guess that’s true. There is a certain amount of
predictability in that the optimizer works a certain way for certain reasons.
But, since the only rule that has no exception is that there is an exception to
every rule, the exceptions that override or cause the optimizer to ignore those
rules cause the unpredictability or “art” side of the equation.

All of your points below are valid, and I’m sure if we tried we could
easily triple or quadruple that list, and that’s the problem. After 20+
years of tuning SQL I’m still occasionally amazed and baffled. Adding an
index should, in theory, improve performance of query X, but in reality exactly
the opposite happens. Given enough time to ponder and examine, I may or may not
find an explanation.

On a recent project I found myself baffled at the behavior of a certain query
and resorted to using the tuning lab to look for suggestions. It suggested 40+
alternatives. After testing them all it found no significant improvement from
any of the proposed changes. Granted the tuning lab is having to make some
fairly static suggestions based on the known rules and laws of probability, but
the sheer number of suggestions, the variations of what it was trying and the
fact that it couldn’t successfully predict the outcome without first
“trying” each suggestion would seem to kind of prove the point.
Sometimes it really is trial and error, even when the guesses are educated.

I’ll agree that there’s science, up until the point that random
variability in design, or lack thereof, and common sense, or lack thereof,
overcomes the science with unpredictability.

I’m still scouring the documentation and startup parameters for the
tuning_whimsy_factor setting. If set too high, things just go nutty bonkers.

The tuning whimsy factor that always seems to work for me, and I use it
sparingly, is to use database hints to force full table scans and parallelize
them. I’ve found that the extra I/O to hit an index kills performance when the
query returns 60+% of the data in a particular table.
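
Something along these lines (the table, alias, filter and parallel degree are all made up):

SELECT /*+ FULL(t) PARALLEL(t, 4) */ t.*
FROM   big_table t
WHERE  t.status <> 'CLOSED';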

Best regards,
Mike

Michael S. Zarzycki, MTS | Manager of Engineering IT | Sensata Technologies,
Inc. | voice: 508-236-1015 | fax: 508-236-3701 | www.sensata.com | The World
Depends on Sensors and Controls

Thanks for the help, folks. I think I’ve got this under control (sure).

Norm – we did previously upgrade from 9i to 11g. Everything was great.

Now we were purchased by another company and they created a brand new Oracle
server with 11g on it, but alas, we are not DBAs anymore – go figure.

The first time we loaded and ran our PL/SQL processes, etc., it was extremely slow.

The DBA said “Oh, you HAVE to run stats after each time you load.”

We didn’t even know about stats (probably a good idea that we aren’t
DBAs anymore, I suspect), as they were automatically run every night.

They ran stats and Voila! – faster processes.

Now we truncate the tables and start with a different group of data (into the
same tables as above), run the processes and it runs great.

We really don’t see the need to run the stats after each load (sorry to
all those DBAs out there) and the nightly auto run should “sort of”
be ok.

The only thing that is really changing is the size of the table (and the data
inside). The stats are probably accurate to a high enough percentage that running
a new stats process would only add to the time…

Thanks everyone…really appreciate the help.

That’s exactly what I meant!

Some people tell me they have jobs that run to collect stats on a
weekly/daily/whatever basis. If you’re happy with the plans/performance
you have now, you put that at risk every time the stats get updated.
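
If you do have a set of stats you’re happy with, one hedged way to protect it (names hypothetical) is to lock the stats on those tables – the automatic stats job will then skip them until you unlock:

EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'LOAD_TABLE');
-- and when you deliberately want to refresh them:
EXEC DBMS_STATS.UNLOCK_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'LOAD_TABLE');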