
Dependency of test cases


#1

Hello, maybe I haven’t found this feature yet, but what about dependencies between test cases? I mean the ability to have one test case (possibly from a different program) run before or after another.

Suppose I have a package with procedures that do an upsert and a delete. It would be nice if I could use a valid delete test case to tear down the environment after the upsert test cases.
What do you think?


#2

I think I understand what you are looking for.

A true unit test should only be testing for a single outcome - but what if you have an outcome that is dependent on a previous scenario/test?

In the test editor, check out the Customizations section. You can insert your own ‘custom’ pl/sql to set up scenario environments before and after the test.

For example, I have a unit that deletes rows out of a table. Code Tester is very polite in that it will re-create my duplicate table for row checking before every test, but I wanted to look at the table after the test and before it got destroyed. I was able to comment out the ‘DROP TABLE’ in the Customization section.

You could use these sections to do your teardown - either before the test runs, or after.
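
For instance, a minimal teardown sketch along these lines could go in the after-test Customization section (the table, column, and marker value here are invented for illustration):

    -- Hypothetical teardown: remove the rows the upsert tests created,
    -- so the next run starts from a clean slate.
    BEGIN
       DELETE FROM employees_test
        WHERE created_by = 'UPSERT_TEST';
       COMMIT;
    END;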


#3

Zdenek,

We do not currently support any concept of setting up a sequence or dependency chain between test cases. I will add this to our Futures ideas, but I must say that I consider it somewhat low on the priority list, because a very important and, I think, sensible rule for unit testing is that each test case is independent. You generally DON’T want dependencies between them, because they make your tests harder to maintain and run.

Having said that, testing in the DB world is very challenging due to the need to set up tables, etc. So perhaps this is something we need to add, so that you can take advantage of those times when it would be enormously convenient to use another test case’s output.


#4

The trouble is that before long you have a complex hierarchy of dependencies, and when it fails, where do you start looking?
Do your utmost to keep all tests independent. In the long run you will make life easier for yourself.
Otherwise a functionality change suddenly breaks big chunks of your test suite.
It is extremely useful to be able to run just one test in isolation at times, and if there are dependencies that becomes difficult.


#5

Steven,

I think that ruling out “test case dependencies” is a little like saying “life offers no surprises”. I believe that any application involving more than 2+2=5 is bound to include some sort of embedded dependency. Also, I’ve witnessed enough solutions incorporating some implementation of schema duplication, and for that I see two major obstacles:

  1. Testing two (or more) schemas against one another requires a fair amount of pl/sql! How do you test the validity of that code, in relation to the code that you really want to test?
  2. How do you make a generic program, like Code Tester, handle duplication of entire schemas, let alone the subsequent comparisons?

I’ve spent several years developing pl/sql that, upon input of data, generates more pl/sql and, supported by metadata, creates resulting sets of data. Testing that hasn’t always been easy :wink: but I’ve found that the built-in feature of database triggers gives you everything you would ever need to test any kind of transactional process. And, in the end, isn’t any action against the database a transaction?

No matter what you wish to test for or against, a database trigger is easy to write and, by definition, cannot be fooled or circumvented. The only important issue is: what do you want to happen when data is inserted, updated, or deleted?
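
To illustrate the idea, here is a minimal sketch of such a trigger (the table and log names are invented for the example); it records every DML action against a table so a test can verify afterwards what actually happened:

    CREATE OR REPLACE TRIGGER orders_test_trg
       AFTER INSERT OR UPDATE OR DELETE ON orders
       FOR EACH ROW
    BEGIN
       -- Record which transactional action fired, for later verification.
       INSERT INTO test_audit_log (table_name, action, logged_at)
       VALUES ('ORDERS',
               CASE
                  WHEN INSERTING THEN 'INSERT'
                  WHEN UPDATING  THEN 'UPDATE'
                  ELSE                'DELETE'
               END,
               SYSTIMESTAMP);
    END;
    /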

By the way, I’m looking forward to meeting you in Munich next week :))

Mike


#6

Mike,

I am not into taking dogmatic positions about things (technical issues, anyway). I am happy to allow users to establish dependencies between tests; I just can’t prioritize it for the first commercial release.

As for your comments regarding triggers, are you saying that you put your test logic into triggers? I would be very interested to see an example or more thorough explanation.

And I look forward to meeting you in Munich as well!

SF


#7

Ouchh… Apologies… I didn’t mean to be nasty.

I guess that, instead of talking about dependencies, prerequisites is a better word. I fully agree with you that the result of one test case shouldn’t be the input for the next one. If that were the case, then someone managed to write some pretty interesting requirements :wink: (This assumes the standard view of a test case as a requirement turned into a question.)

On the other hand (prerequisites), I know of many procedures that act differently depending not only on the applied parameters but also on other information available within the session. Actually, I think QuteROX provided the answer on how to handle that: the possibility of setting up the appropriate environment before executing the actual test case. That is what I understand by test case dependencies and, if QuteROX is right, you have already implemented such a feature. (Double-ouchh, I still haven’t tried out Code Tester.)

(Bring a warm coat. Bavaria is getting pretty cold)

Mike


#8

No worries, I didn’t think you were being nasty.

And, yes, you can put setup logic into the tool and it will be automatically inserted into the generated test code.

Thanks for the warning on the weather!

SF


#9

I attended Steve’s presentation of the code tester and have a fair idea of the direction Quest is going with it, and I like what I see so far. Our group has developed a large body of unit tests using utPLSQL, and this looks like a great extension of a good idea.

We have written a code generator that generates utPLSQL test packages and have added functionality that we find very useful. I would love to describe these features in some detail in the hope that they could be included in your code tester, so that I can look toward desupporting our in-house generator and using yours.

I will post my suggestions in another ‘enhancement request’ thread (unless you can suggest a better way to share these ideas), but would like to reply here to the current topic, the dependency of test cases.

I would add my vote to the request to allow prerequisites to be linked between unit tests. I help a large team of developers with their unit testing, and I ‘own’ the core unit tests. I understand the idea that tests should be independent, but our applications are so complex that it’s a huge waste of time making every developer write 90 setup cases so they can test their small dependent procedure or function when there already is a complete, tested setup available in another unit test.

I already know that if I break my core applications, all the dependent applications are broken. That’s daily life with a complex business application. It’s the same thing with the unit tests. Not only do I not care that there is a dependency between unit tests, I want that. If I break my core unit tests and everyone else’s breaks, that’s a good thing. It makes me make sure my core tests are perfect. It also means that I don’t have some developer who doesn’t know the core app writing an inappropriate or incomplete setup that makes his test cases succeed when they would otherwise fail given the correct setup. We have already convinced management of the necessity of allocating time for every single piece of code to have a unit test, but if they knew that 10 developers were spending 50% of their time duplicating setup cases manually, they would start to ask tough questions, and rightly so.


#10

Sorry for the immediate follow-up, but I want to clarify my previous remarks. I spoke above about dependencies between separate unit tests, particularly dependencies on setup cases.

Correct me if I’m wrong, but I think I understand from the thread that the question is mainly about one unit test step being dependent on the success of a previous individual unit test step. I think this is also very important to support. I have cases where I test an insert and then a specific series of updates, sometimes as many as five, to test a specific condition, and I want a single update test to explicitly execute n other test steps. In our system, we wrap each test step in a named savepoint and roll the operation back at the end of each step. In the cases where we need dependent steps to be executed, we just list them in our test definition, and the code is generated to re-run the required prerequisite tests followed by the particular test we need. This may already be supported, but I wasn’t sure how far you were going with the notion that each test should be separate. Yes, I agree they should be, but with named savepoints you can have both.
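
A minimal sketch of the generated pattern, with hypothetical procedure names, might look like this:

    BEGIN
       -- Name the step, re-run the prerequisite steps listed in the
       -- test definition, then run the step actually under test...
       SAVEPOINT step_update_final;
       test_insert_row;       -- hypothetical prerequisite step
       test_update_status;    -- hypothetical prerequisite step
       test_update_final;     -- hypothetical step under test
       -- ...and undo everything so the next step starts clean.
       ROLLBACK TO SAVEPOINT step_update_final;
    END;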


#11

Thanks for your insights and sharing of experience on this topic.

We will definitely look into adding dependencies between test cases. It should not be terribly difficult.

As for avoiding writing the same setup code over and over again: that really isn’t necessary, even now.

Simply write the setup code as reusable procedures, place those procedures in the Test Definition Customization section (at the top of the Test Editor browser), and then call those programs as needed throughout the test definition.

If you need to share them among different program unit tests, then create the code in a separate package, outside the test code.
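
For example (a sketch with invented table and procedure names), the shared setup could live in its own package and be called from any test definition:

    CREATE OR REPLACE PACKAGE test_setup IS
       -- Shared setup used by several program unit tests.
       PROCEDURE create_standard_employees;
    END test_setup;
    /

    CREATE OR REPLACE PACKAGE BODY test_setup IS
       PROCEDURE create_standard_employees IS
       BEGIN
          -- Hypothetical seed rows; adjust to the real schema.
          INSERT INTO employees_test (employee_id, last_name)
          VALUES (9001, 'Test One');
          INSERT INTO employees_test (employee_id, last_name)
          VALUES (9002, 'Test Two');
       END create_standard_employees;
    END test_setup;
    /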

Hope this helps…SF