Toad World® Forums

What can Code Tester / TDD mean for the majority of us?


#1

Hi,

I have been doing some tests with the Code Tester Beta, redoing a group of tests I already had as SQL scripts, and I am rather satisfied with how it works. It helps me make my tests more structured, it helps me find missing tests, and it makes me repeat whole testsets where I would otherwise only have rerun the tests that “I guessed might be influenced”, and so would have missed avoidable errors.

On the other hand, I have a sinking feeling that it will be very hard to use this tool to integrate Test Driven Development into my daily working practice.

It is often said that 80% of all development effort is maintenance; in my work it is probably closer to 95%, and that on complicated logistic and financial software.

I am sure the implementation of TDD would mean a major improvement in software quality. It would, however, have to start with a development effort that I guesstimate at no less than 30% of the original development costs if it is done through the Code Tester GUI. That much investment is needed to define a sufficiently complete test base before one can fully reap the rewards of TDD when not starting from scratch.

What is painfully missing in Code Tester is a facility for introducing it into existing software environments. What is needed is a generator that defines basic testsuites for already existing PL/SQL code. The GUI is great when you are building new things using TDD; when you want to introduce TDD into an existing environment, however, you cannot use it. What you need then is a batch process that scans the current software, accepts all the current outcomes as “right enough for now”, and registers those as a basic testsuite for Code Tester to build upon.
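
Roughly, I picture something like this; all table, procedure and function names below are invented just to illustrate the idea:

    -- Hypothetical sketch: accept what an existing, already accepted
    -- function returns today as the expected result for future tests.
    CREATE TABLE baseline_tests (
      program_name   VARCHAR2 (128),
      input_value    NUMBER,
      expected_value NUMBER
    );

    CREATE OR REPLACE PROCEDURE capture_baseline (p_input IN NUMBER)
    IS
      l_outcome NUMBER;
    BEGIN
      -- calc_tax stands in for any existing, production-accepted function.
      l_outcome := calc_tax (p_input);

      INSERT INTO baseline_tests (program_name, input_value, expected_value)
      VALUES ('CALC_TAX', p_input, l_outcome);
    END;
    /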

Although I have the feeling that TDD and Code Tester could mean a great deal for software quality improvement, I am sure acceptance in our daily practice will be minimal without such a generator. It is the generator that would make it practical to use TDD for maintenance, not just for completely new development.

Are there any plans for such an addition? Without it, TDD will crawl instead of flying.

Greetings,
Hendrik


#2

Hendrik -

I am glad to hear of your overall positive experience with Code Tester to date.

It has been very interesting to build a new kind of tool for Oracle developers, breaking new ground in functionality and in workflow. It has also been very challenging.

You make an excellent point about Code Tester, and not just regarding defining tests on existing programs. Even for building new tests, we have recognized that Code Tester needs to get much, much smarter about analyzing your program headers and generating, right from the start, a large number of useful test cases.

The Code Tester development team is meeting this week to map out plans for 1.7, and our priority is to make it easier to use by (a) smoothing out workflow rough edges and (b) making it smarter.

So we are thinking right along the same lines. Now I do have a question for you…

You wrote:

“What you need then is a batch process that scans the current software, accepts all the current outcomes as “right enough for now”, and registers those as a basic testsuite for Code Tester to build upon.”

Can you explain in a bit more detail what you mean? How do I scan the current software? What do you mean by “accept all current outcomes”? And so on.

Thanks, SF


#3

Hi Steven,

The situation I am nearly always working in is that we have a great mass of already accepted and tested software, where relatively minor changes have to be made to achieve slightly modified functionality while keeping all other functionality intact; bugfixing, for short.

What I meant is that for bugfixing TDD can be as valuable as for new development, but you do not have the possibility of defining testsuites independently of the implementation, as there is usually no one around who knows what all that software should do in its entirety. So for starters you must accept that the software, as it is now, is good enough, since it has already been used in production environments for some time.

What I meant by the “batch process” is that you need a kind of simplified parser that can scan a piece of software and use its parse tree to define a sufficiently complete set of testing inputs. That set of testing inputs should then be applied to the program in a representative test environment, and the outcomes produced should be accepted as being right and stored to be used in future tests.
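
Once those outcomes are stored, every later test run is simply a comparison against them. A minimal sketch of the replay side, again with invented names:

    -- Hypothetical sketch: rerun the stored inputs and flag any deviation
    -- from the outcomes that were accepted as "right enough for now".
    DECLARE
      l_actual NUMBER;
    BEGIN
      FOR t IN (SELECT input_value, expected_value
                  FROM baseline_tests
                 WHERE program_name = 'CALC_TAX')
      LOOP
        l_actual := calc_tax (t.input_value);

        -- NULL-safe comparison: a NULL on only one side is also a regression.
        IF l_actual != t.expected_value
           OR (l_actual IS NULL AND t.expected_value IS NOT NULL)
           OR (l_actual IS NOT NULL AND t.expected_value IS NULL)
        THEN
          DBMS_OUTPUT.put_line ('Regression for input ' || t.input_value);
        END IF;
      END LOOP;
    END;
    /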

Of course this way of working goes against the TDD theory that all test outcomes should be defined independently of the implementation, preferably before building even starts, but in the production situations I know, that ideal is not achievable anyway.

I regularly hear the slogan: Build nothing the user does not ask for. This is a variation on that: the user has accepted this software, so it is good enough to use its current outcomes for future testing.

The slogan then should become: change nothing the user did not ask for (or have your arguments ready, of course).

The challenge is to build that parser and to work out how to scan that parse tree so that you get no more testsets than you really need. The rest can be built as a variation on the Oracle Workspaces idea.


#4

Thanks, Hendrik. That is very helpful. You write:

“What I meant by the “batch process” is that you need a kind of simplified parser that can scan a piece of software and use its parse tree to define a sufficiently complete set of testing inputs. That set of testing inputs should then be applied to the program in a representative test environment, and the outcomes produced should be accepted as being right and stored to be used in future tests.”

Now, I must admit that I think what you describe above is actually VERY difficult. What does the parse tree have to do with figuring out reasonable inputs that actually test user functionality? How are we supposed to predict outcomes?

I believe we can (and will) implement automatic generation of boundary tests (as in: “If string is NULL, then return value is NULL, no matter what the other argument values are.”), but beyond that, frankly, I see a solution requiring orders of magnitude more sophistication and resources!
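
For instance, something along these lines could be generated straight from a program’s header; betwnstr here is just a made-up stand-in for any function taking a string and two numeric arguments:

    -- Auto-generated boundary test idea: NULL string in, NULL out,
    -- regardless of the other argument values. betwnstr is hypothetical.
    DECLARE
      l_result VARCHAR2 (100);
    BEGIN
      l_result := betwnstr (NULL, 2, 5);

      IF l_result IS NULL THEN
        DBMS_OUTPUT.put_line ('PASS: NULL string returns NULL');
      ELSE
        DBMS_OUTPUT.put_line ('FAIL: expected NULL, got ' || l_result);
      END IF;
    END;
    /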

Do you see things differently?

Thanks, SF


#5

Hi Steven,

It sure is difficult, otherwise I would have built it already!

What the parse tree should do for you is split the amorphous mass of code into three kinds of manageable parts: straight statement sequences, branches and loops.
The basic idea is to apply recursion to the testset analysis and work from the bottom to the top. You would need a library of testsets for each possible PL/SQL and SQL statement, but you could make a good start by covering only the 50% most-used statements. The most complicated would of course be the analysis of SELECT and DML statements; dynamic SQL had better be left aside at the start.
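
Such a testset library could start out as simply as a table keyed by statement type; this is purely illustrative:

    -- Hypothetical testset library: one row per statement type per boundary case.
    CREATE TABLE stmt_testsets (
      stmt_type     VARCHAR2 (30),   -- e.g. 'ASSIGNMENT', 'IF', 'SELECT_INTO'
      case_name     VARCHAR2 (100),  -- e.g. 'no rows', 'NULL input'
      input_pattern VARCHAR2 (4000)  -- description of, or generator for, the input
    );

    INSERT INTO stmt_testsets VALUES ('SELECT_INTO', 'no rows found',     'predicate matches nothing');
    INSERT INTO stmt_testsets VALUES ('SELECT_INTO', 'too many rows',     'predicate matches more than one row');
    INSERT INTO stmt_testsets VALUES ('ASSIGNMENT',  'NULL source value', 'right-hand side is NULL');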

You start at each leaf of the parse tree, each individual statement, and look up the predefined testset for that statement. That gives a hell of a lot of possible input and output combinations at first, but after that it only gets better.

The next step is to look up each straight sequence in the code, leaving all branching and looping out of consideration. You analyse the connections between the individual statements: if a variable is an output of statement A and an input of statement B, that specific parameter of B can only take values that A can actually produce. Often not all values that give a relevant testcase for B can be produced by A, so you reduce the testset of B accordingly. After all connections have been verified, you have a testset for the sequence that is nearly always smaller than the combined testsets of the individual statements.
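
With the candidate testsets stored relationally, that reduction step itself could be plain set logic; the tables here are invented:

    -- Hypothetical reduction: drop candidate inputs for statement B
    -- that statement A can never actually produce. The IS NOT NULL guard
    -- keeps NOT IN from matching nothing when testset_a contains NULL outputs.
    DELETE FROM testset_b
     WHERE input_value NOT IN (SELECT output_value
                                 FROM testset_a
                                WHERE output_value IS NOT NULL);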

For a branching statement you apply the condition of each branch to the testsets of its fragments and join the leftover testsets.

Loops are the most difficult part. You will have to design three algorithms: the first must link the testset of the first traversal to the testset of the initializing code, the second must be able to link any two consecutive loop traversals, and the third must link the last traversal to the testset of the code that follows.

This whole process will generate an immense mass of intermediate “potential” testcases, but Oracle is good at storing things, and most statements will reduce the potential testset of the following statements.

The beauty of this is: if we succeed in implementing the algorithm above, we do not really need to care about, or define, the user functionality. The quality of the final testset will be as good as the quality of the underlying testset library. If all boundary cases for the individual statements were included in the original testsets, we can say for sure that all possible boundary cases for the program unit will be touched by the leftover testset, as we only removed testcases that were filtered out by other components in the module.

A consequence of this is also that we do not have to predict testcase outcomes. The outcomes we get when actually running the tests are “good enough”. I am talking here about building testsets for maintenance situations only; the fact that the modules we submit to this kind of analysis have already been through several test phases of the more conventional kind is a precondition for this approach.

For example, if a tax code module calculates a correct outcome for 400 dollars, there is no need to test the outcome for 500 dollars unless some boundary condition in the code is crossed between 400 and 500 dollars. We should be able to turn up all boundary conditions this way and so deliver the minimum set of testcases for the current functionality.
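
To make that concrete, here is an invented fragment with a single boundary at 450 dollars:

    -- Invented example: calc_tax has exactly one boundary condition, at 450.
    CREATE OR REPLACE FUNCTION calc_tax (p_amount IN NUMBER)
      RETURN NUMBER
    IS
    BEGIN
      IF p_amount > 450 THEN
        RETURN p_amount * 0.21;  -- high rate above the boundary
      ELSE
        RETURN p_amount * 0.06;  -- low rate at or below the boundary
      END IF;
    END;
    /
    -- Testing 400 and 500 only matters because the branch condition lies
    -- between them; the parse tree tells us that 450 and 451 are the
    -- inputs that really matter.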

Once such a testset has been produced, all further development should follow the normal TDD process: first define all new and changed testcases, then start developing the modification.

Kind regards,
Hendrik