Toad World® Forums

Code tester with API packages


#1

We have a Java application that interfaces to our Oracle database via PL/SQL packaged procedures. The application only ever calls a single API package that sits in front of all our other underlying packages, and it connects to Oracle via a single Oracle user that just has execute privileges on that one API package.

The API generally just calls the sub-package/procedure that the app wants to run directly, although sometimes there is a small amount of logic in the API package itself, and it is also usually the place where I tend to do the input parameter validation.

My question is this:

I have started using Code Tester to create unit tests for all the underlying procedures and functions in all my packages. In order to satisfy the unit tests I have written in Code Tester, my procedures have grown slightly; e.g. to pass all my Code Tester unit tests I have written extra code to validate all their input parameters correctly, which has expanded the original procedures.

So now I am wondering: should I use Code Tester to write unit tests for all the API packages that call these underlying procedures? The fact that the API sometimes has a bit of logic in it, and that I tend to do some validation in the API package, would imply YES, I should really write unit tests for them. However, this would mean I am validating my input parameters twice: once in the API and again in the underlying procedure that the API calls.

This doesn’t seem like a good idea in terms of performance. I can see that you would want to validate all your inputs when you write modular functions and procedures that can be re-used, but many of these procedures are very application-specific and would not be re-used elsewhere.

What is your “best practice” methodology for writing unit tests? Generally speaking, would you ALWAYS validate every input parameter for every discrete procedure, even when you knew the inputs were already validated higher up in a parent procedure? Or would you pick and choose, e.g. do ALL the input parameter validation in the API procedure?

And how would Code Tester fit into these methods? Strictly speaking, in order to pass thorough unit tests of your procedures, should you validate parameters in every procedure that you write, whether it is an API-type procedure or not?


#2

Thanks for taking the time to post this interesting question. Before I dive into my answer, I have a question: what do you mean by “validating inputs”? Can you give us a specific example?


#3

Hi Steven

By “validating my input parameters” I mean that, for example, in an “insert”-type procedure we usually load the parameters into a rowtype variable and then insert that row into a table. Before doing that I would do a bit of validation, e.g.:

Run a function to check for nulls on all the mandatory fields.
For all the VARCHAR-type columns, run a function to check the max length.

(I know that there are other ways to do this type of validation.)
But there might also be other validation that occurs…
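To make the main pattern concrete, here is a stripped-down sketch of the shape of it (the table, column and helper names are all invented just for illustration; check_not_null and check_max_length stand in for our real validation functions):

PROCEDURE ins_employee (
   name_in    IN employees.emp_name%TYPE,
   email_in   IN employees.email%TYPE )
IS
   l_row   employees%ROWTYPE;
BEGIN
   -- run a function to check for nulls on the mandatory fields
   check_not_null (name_in, 'EMP_NAME');
   check_not_null (email_in, 'EMAIL');

   -- for the VARCHAR2 columns, check against the max length
   check_max_length (name_in, 'EMP_NAME');
   check_max_length (email_in, 'EMAIL');

   -- load the parameters into a rowtype variable and insert that row
   l_row.emp_name := name_in;
   l_row.email    := email_in;
   INSERT INTO employees VALUES l_row;
END ins_employee;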

In one of your introductory demos for Code Tester, when you are testing the BetweenStr function, one of your tests was to see how it deals with a “Zero starting value”, which involved you adding a bit of validation code to the function to deal with that condition. Now suppose that wasn’t just a simple function but a complex procedure: you might want to do that sort of validation in the API procedure that sits in front of it, so that something simple like a bad input parameter would be caught quickly, before propagating through to the complex sub-procedure. But for unit testing purposes you would still want to validate the parameter in that sub-procedure.

Hope this makes sense?!


#4

Thanks, that is very helpful, Duncan. First of all, the logic I added to support the “Zero start” requirement was not validation. It was the logic needed to implement the requirement.

Validating inputs is a very good idea, but I think that is a separate process from functionality testing, which is what Code Tester specializes in. In other words, my program is supposed to do X (and Y and Z). Does it do these things, does it implement its required functionality?

So in general I use Code Tester to:

** Verify that all requirements are satisfied.
** Test boundary conditions to ensure proper behavior with outlying and outlandish inputs.

I would not use Code Tester to validate inputs, per se. Let me give you an example and then step back to the bigger picture.

Suppose I am building a program that returns the department name for an ID. A department ID can never be null. If the caller passes in a null, that means something is WRONG with the way the program is being used. The best way to address this situation is to include an assertion check at the beginning of the program along these lines:

BEGIN
   assert.isnotnull (department_id_in, 'Department ID cannot be null!');

and the assertion routine (which, by the way, you can obtain from my demo.zip file on the Trainings page of www.ToadWorld.com/SF) raises an exception, let’s say -20001, if the department ID is null.
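If you don’t want to grab the file, a minimal version of such a routine might look like this (just a sketch, not the exact code in demo.zip; the real thing would overload for other datatypes):

CREATE OR REPLACE PACKAGE assert
IS
   PROCEDURE isnotnull (value_in IN VARCHAR2, message_in IN VARCHAR2);
END assert;
/
CREATE OR REPLACE PACKAGE BODY assert
IS
   PROCEDURE isnotnull (value_in IN VARCHAR2, message_in IN VARCHAR2)
   IS
   BEGIN
      -- raise a distinct, testable exception when the assertion fails
      IF value_in IS NULL
      THEN
         raise_application_error (-20001, message_in);
      END IF;
   END isnotnull;
END assert;
/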

So your test case for validation of this input would be one in which you pass in a null and test for this exception.
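Stripped of the Code Tester harness, the essence of that test case is simply this (department_name_for_id is a stand-in name for whatever your program is actually called):

DECLARE
   l_name   VARCHAR2 (100);
BEGIN
   l_name := department_name_for_id (NULL);
   DBMS_OUTPUT.put_line ('FAILED - no exception raised');
EXCEPTION
   WHEN OTHERS
   THEN
      IF SQLCODE = -20001
      THEN
         DBMS_OUTPUT.put_line ('PASSED - got expected -20001');
      ELSE
         RAISE;
      END IF;
END;
/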

This is a fine and important test, but it requires that you have written your input validation inside this procedure/function.

[By the way, another way to validate inputs is with “Design by Contract”. Check out http://technology.amis.nl/blog/1736/contract-oriented-plsql-coding for more details.]

In other words, an “invalid input” is not the same thing as an unimplemented or wrongly implemented FEATURE and it should be treated differently.

Once you have built in this validation code and created a test case for it for the API program, I see no reason to repeat the test for higher level programs that call the API program.

And more generally, one critical idea behind relying on an API is that you can test/verify logic at that level (the API), and then you can TRUST that code at higher levels in the stack. So with a higher-level program, you would only add new test cases that exercise the specific “add on” requirements of that program.

I hope this makes sense. I will stop here for feedback and then we can continue the conversation.

Regards, Steven


#5

Thanks for your swift response.

You are correct…that example I gave from your BetweenStr demo was a bad one…my mistake.

I think I mostly understand what you are getting at, but a few points are still a bit grey for me. I think I am confusing myself more than anything.

One of the points you made in your reply was:

"So in general I use Code Tester to:

** Verify that all requirements are satisfied.
** Test boundary conditions to ensure proper behavior with outlying and outlandish inputs."

I think that by “validating inputs” I am mainly talking about your second point above: testing outlandish boundary conditions.

Also, one of your statements confused me a bit. You said:

“Once you have built in this validation code and created a test case for it for the API program, I see no reason to repeat the test for higher level programs that call the API program.”

When I talked about an API procedure, I meant one that the Java app code calls, and this API then calls ALL the other packages/procedures. This API is NOT called by any other Oracle procedure in my code; it IS the highest level as far as Oracle goes. Whether or not the Java app validates the input parameters itself is another issue as far as I can see. Maybe it should, maybe it shouldn’t, but either way I still want to validate my parameters in the Oracle code, since this data and its database interface may persist long after the Java app is dead and buried.
So that is why your statement above, about repeating the tests in higher-level programs, confused me somewhat!

It may help if I apply your department example to our package/procedure structure…

So we are building a program that returns the department name for an ID. A department ID can never be null.

I would have this procedure, p1, in package pkg1. This procedure would do everything: all the logic, exception handling, etc.

I would then build an API package, api_1, with a procedure p_1 that simply calls pkg1.p1. I grant the Java application access to api_1 only.
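Roughly, in code, the shape would be something like this (specs omitted, and the signatures are made up just for illustration):

PACKAGE BODY pkg1
IS
   PROCEDURE p1 (dept_id_in IN NUMBER, dept_name_out OUT VARCHAR2)
   IS
   BEGIN
      -- all the logic and validation lives here
      -- (exception handling omitted for brevity)
      assert.isnotnull (dept_id_in, 'Department ID cannot be null!');

      SELECT department_name
        INTO dept_name_out
        FROM departments
       WHERE department_id = dept_id_in;
   END p1;
END pkg1;

PACKAGE BODY api_1
IS
   PROCEDURE p_1 (dept_id_in IN NUMBER, dept_name_out OUT VARCHAR2)
   IS
   BEGIN
      -- thin pass-through: just delegate to the underlying package
      pkg1.p1 (dept_id_in, dept_name_out);
   END p_1;
END api_1;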

So I now build unit tests for pkg1.p1 using Code Tester, which test for this null ID and ensure that it raises the appropriate exception.

Now I do the same for the API and unit test the api_1.p_1 procedure…and it will pass the null ID test, because pkg1.p1 was unit tested successfully and passed.

So this is all fine.

However, I could write code in the api_1.p_1 procedure that would catch this null ID before calling the pkg1.p1 procedure, thereby improving performance by not even calling pkg1.p1. Admittedly this is not a huge gain, but this is a simple example: if you scale it up so that pkg1.p1 is much more complex, then you may gain an advantage by doing that null ID validation in the API.
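Something like this sketch (same invented signatures as above):

PROCEDURE p_1 (dept_id_in IN NUMBER, dept_name_out OUT VARCHAR2)
IS
BEGIN
   -- fail fast here, before the (potentially expensive) call below
   assert.isnotnull (dept_id_in, 'Department ID cannot be null!');

   pkg1.p1 (dept_id_in, dept_name_out);
END p_1;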

So which is better?

  1. Do all this type of validation in the API, then let all the lower-level procedures trust that the API has done its job and that all their inputs are clean.
    OR
  2. Do the validation in the lower-level procedure and not in the API, so that it may take longer, performance-wise, to recognise that we have a problem with an input parameter.
    OR
  3. Do the validation in both the API AND the lower-level procedure? This seems wrong, since the lower-level procedure would never see bad parameters anyway.

It sounded to me like you were advising option 1 above? That is fine, but it means that all the lower-level procedures that the API calls will not pass all the unit tests by themselves, without the API calling them. I guess you could argue that these are not modular, re-usable procedures, so they only make sense with an API calling them anyway.

Sorry for going over the same ground…I just wanted to make sure I am clear on this topic.

Many thanks


#6

I think that the validation and testing belong closest to where the values/inputs are actually being used. If it is just a pass-through to a lower-level program, then test it in the lower-level program. Otherwise things could get very confusing.

I find it hard to believe that you would ever experience significant performance issues by possibly calling a lower-level routine instead of halting processing at a higher level with a failed input.

But if you found that this actually was the case, then you could make an exception (and document it as such) by repeating the check/assertion in the higher level.

That’s how I would approach it…

  1. Set clear rules, making it easy to remember what you do where.
  2. Make exceptions to the rules only when needed and document them.

Hope that helps, SF


#7

Yes, that is fine.
Thanks very much for your time.