Use of nvl on a literal with the UID function

I frequently use the TOAD SQL Optimizer and was wondering if anyone knew why, in some of the generated alternative joins, a reference to a literal is changed from something like where col_a = 'XXX' to where col_a = nvl('XXX', uid).

It is obvious that 'XXX' is never going to be null, so what effect does this have on the optimizer? The UID function returns the ID number of the session user, but what relevance is that here?
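For concreteness, the two predicate forms are (table and column names abbreviated as in the tool's output):

```sql
-- Hand-written predicate:
WHERE col_a = 'XXX'

-- TOAD SQL Optimizer's alternative:
WHERE col_a = NVL('XXX', UID)

-- Since the literal 'XXX' is never null, NVL('XXX', UID) always
-- evaluates to 'XXX', so the two predicates return the same rows.
```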

If col_a is indexed, I believe using the NVL() function will make sure that
its index will not be used.

Dan Brevetti

Rewording the question as I understand it:

why does the TOAD SQL Optimizer generate alternative statements

from "col_a = 'XXX'" to "col_a = nvl('XXX', uid)"

In my experience with Oracle optimization, I can't think of a single good
reason to wrap NVL around a hard-coded literal like that. The downside is the
extra cost of evaluating the NVL function (which is probably pretty
negligible). In batch processing of a few tens of millions of rows, you'll
probably be able to measure the effect. In online transaction processing,
you'd never notice.

If the reference is to a variable instead, then the wrapper has benefits
depending on whether or not the variable can ever be null. Let's say, for
example, you populate the variable through a table lookup and that lookup can
return a null value. In Oracle you cannot test equality against a null value.
In other words, if you use "col_a = something_null" the predicate never
evaluates to true, and you're rolling the dice on whether the query gives you
the correct results. "col_a is null" is the appropriate test.
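A quick sketch of that null-comparison behavior (demo_t is a made-up table; assume its col_a holds one row with 'A' and one row with NULL):

```sql
-- "= NULL" is never true in Oracle's three-valued logic:
SELECT COUNT(*) FROM demo_t WHERE col_a = NULL;      -- counts 0 rows

-- IS NULL is the correct way to find the null row:
SELECT COUNT(*) FROM demo_t WHERE col_a IS NULL;     -- counts the NULL row

-- NVL substitutes a non-null default, so an equality test can still match:
SELECT COUNT(*) FROM demo_t WHERE col_a = NVL(NULL, 'A');  -- counts the 'A' row
```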

So… with those points in mind, my speculation as to why the optimizer
rewrites the code like that is that it's a form of "code correction" to
ensure null values are handled properly. This would be important from the
perspective of analyzing the data results returned. A "proper query"
may very well be radically different if the result sets are drastically
different from one another (like one form returning 15 rows and the other
returning 10,000 rows from a 12,000-row table).

Roger S.

The developer does talk about the method behind the optimizer's madness on
TW; here's an example:

Hello everyone,

I have just posted a video on how to import your Toad settings. Hopefully this
will be helpful when people ask on the forums how to copy over their settings.


Thanks Jeff!

That’s fairly informative (although not on-topic for the problem
identified). Reading a few more of the articles by Richard To shows
there’s plenty of insight into how to optimize one’s code.

I didn't notice anything along the lines of a "newbie's introduction to
optimization" section, specifically covering such factors as "know your
data". Such articles would be quite beneficial to those new to performance
optimization.

For example, the rules "know your data, know its use" are still very much
requirements for the developer working to optimize code.

In the linked article – near the bottom – there's a rewrite that changes an
"exists" to an "in" statement. The result is a runtime improvement from 2:21
(minutes:seconds) to 0:07.5.
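That kind of rewrite might look like this (hypothetical tables for illustration; the article's actual SQL may differ):

```sql
-- Original form, using a correlated EXISTS:
SELECT o.order_id
FROM   orders o
WHERE  EXISTS (SELECT 1
               FROM   customers c
               WHERE  c.customer_id = o.customer_id
               AND    c.region = 'WEST');

-- Rewritten form, using IN against an uncorrelated subquery:
SELECT o.order_id
FROM   orders o
WHERE  o.customer_id IN (SELECT c.customer_id
                         FROM   customers c
                         WHERE  c.region = 'WEST');
```

The two forms return the same rows; which one runs faster depends on the optimizer's plan and, as argued below, on how much of the dataset the query actually pulls.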

It’s actually a bad example for me to make my point with, but I’ll
try :wink:

Let’s say a report is run once a month using the original SQL and the
report commonly pulls 0.25% of the records in the total dataset. On the other
hand, daily reports are run with the same SQL that end up pulling > 75% of
the total dataset. In this scenario, rewriting the “exists” to an
“in” could actually end up being detrimental. The change could lead
to a performance improvement for the single monthly report while having the
opposite effect on the daily reports. The 2 minutes you saved on that one report
could amount to a cost of hours the rest of the month.

As Richard said in another of his articles: "However, for something that
needs instinct or innovation, a machine definitely cannot replace humans at
this moment."

I would certainly recommend his articles for anyone interested in performance
with the caveat of keeping the basics in mind:

A good developer ‘knows’ the data, a good dba ‘protects’
the data.

A good analyst knows how to get it into Excel :wink:



Aye, matey! That’s what Toad is for! :slight_smile:

Daniel B Madvig
Computer Technologies

Northwestern College & Northwestern Media
3003 Snelling Ave.
St. Paul, MN 55113