Toad World® Forums

BenchMark Factory - Understand Results


#1

Hey Guys!

I would appreciate your help. I am doing a case study comparing databases, and I am using Benchmark Factory to run a TPC-C benchmark test. My biggest difficulty is understanding the results generated by the test. Could you give me a brief explanation of each of them? And which of them should I include in my documentation?

Throughput and counts:

| Userload | Test Phase | TPS | BPS | Rows | Bytes | Deadlocks | Rollbacks | Errors |
|---|---|---|---|---|---|---|---|---|
| 10 | 1 | 0.56 | 223.26 | 517 | 40230 | 0 | 0 | 0 |
| 50 | 1 | 2.65 | 1188.06 | 2552 | 216676 | 0 | 0 | 0 |
| 100 | 1 | 5.35 | 2465.85 | 5272 | 440095 | 0 | 0 | 0 |

Transaction and response times (seconds):

| Userload | Avg Time | Min Time | Max Time | 90th Time | Avg Response Time | Min Response Time | Max Response Time | 90th Response Time |
|---|---|---|---|---|---|---|---|---|
| 10 | 0.142 | 0.006 | 5.991 | 0.239 | 0.142 | 0.006 | 5.991 | 0.239 |
| 50 | 0.054 | 0.003 | 0.933 | 0.154 | 0.054 | 0.002 | 0.933 | 0.154 |
| 100 | 0.036 | 0.002 | 0.475 | 0.108 | 0.036 | 0.002 | 0.475 | 0.108 |

Think and keying times (seconds):

| Userload | Avg Think Time | Min Think Time | Max Think Time | 90th Think Time | Avg Keying Time |
|---|---|---|---|---|---|
| 10 | 8.601 | 0.000 | 23.995 | 16.386 | 10.606 |
| 50 | 9.478 | 0.000 | 24.022 | 18.104 | 9.571 |
| 100 | 9.211 | 0.000 | 23.970 | 18.479 | 9.618 |
Sorry for my English!!!


#2

There is a good definition of the different statistics in the help file under “Benchmark Factory Testing Results Terminology”. As far as which ones to consider for your documentation, that really depends on what you are testing for. Transactions per Second (TPS) is the metric to use when trying to determine the capacity of the database server, in other words, how much load the database server can handle. When determining how the server is performing, use Avg. Response Time.
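For reference, here is a minimal sketch (plain Python, not Benchmark Factory internals; the names are mine) of how those two headline metrics are typically derived from raw per-transaction timings:

```python
# Illustrative only: deriving TPS, Avg Response Time, and the 90th
# percentile from a list of per-transaction response times.
def summarize(response_times, elapsed_seconds):
    count = len(response_times)
    tps = count / elapsed_seconds                 # capacity: load handled
    avg_response = sum(response_times) / count    # performance
    ranked = sorted(response_times)
    p90 = ranked[int(0.9 * (count - 1))]          # 90th percentile time
    return tps, avg_response, p90

# e.g. four transactions completed during one second of measured time:
print(summarize([0.03, 0.04, 0.02, 0.05], elapsed_seconds=1.0))
# -> (4.0, 0.035, 0.04)
```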

Looking at your results, I would say that if you are determining capacity, you will need to run a higher user load, since your maximum TPS occurred at the last user load executed in the test.


#3

Hi,

you advised the original user to run a higher user load, but he is running at 100 users. Is this not the maximum that is allowed without buying extra licenses?

I am having the same problem in that I can’t get any meaningful results at 100 users. Have I got this wrong? And if not, can you say why we are limited to only 100 users?

Regards,

Jim.


#4

Yes, 100 virtual users is the default number of virtual users which comes with the base license. And in most cases 100 virtual users can apply a good load against a database server, depending on what the selected workload is and the size of the database server. If you are running the TPC-C workload there are, per TPC specification, default latencies on the transactions in order to simulate a “real world” user. These latencies average several seconds in delay and, therefore, greatly affect the TPS number. You can reduce these latencies in order to apply more load on the server and thus higher TPS, but be aware that doing this can also cause other errors to occur, since you are outside spec. There are other benchmarks, like the TPC-E, which do not have latency requirements per specification. There is also the TPC-H, a data warehouse benchmark, which does not require many users to stress a system. In fact, in most cases two or three users in the Stream Test can sufficiently stress a system.
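As a back-of-the-envelope check on how much those spec latencies dominate TPS, here is a quick calculation in Python using the numbers from the 100-user row in the first post (variable names are mine):

```python
# Rough per-user cycle time for a TPC-C virtual user: spec keying delay
# + server response time + spec think delay (100-user row above).
avg_keying = 9.618    # s, mandated keying delay
avg_response = 0.036  # s, time the server actually spends
avg_think = 9.211     # s, mandated think delay

cycle = avg_keying + avg_response + avg_think   # ~18.87 s per transaction
users = 100
print(users / cycle)  # ~5.3 TPS ceiling; the observed value was 5.35

# Nearly all of each cycle is mandated delay, which is why reducing the
# latencies (or raising the user load) is what moves TPS, not the server.
```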

So do you have a requirement on the type of workload that you are required to run? The type of workload is usually selected depending on what the testing goal is (capacity or performance). And depending on the workload and the size of the system you are testing, there are cases where more than 100 virtual users will be required.


#5

Thanks for the reply Kevin.

I’m new to benchmarking, so I might be going about this the wrong way.

We have a document management system running on Oracle in a Windows environment with about 150 concurrent users. We also have an IBM zSeries on which we have a number of virtual machines running Oracle on SUSE Linux. I want to do some tests to confirm that we can move the database over to the IBM machine without any loss in performance.

I have been using TPC-C but am getting the message

“Since the maximum TPS was found at the last userload of 100, it is possible that a higher throughput may be attainable at a higher userload. Consider re-executing test with a higher userload.”

Should I be altering the latency or trying TPC-E ?

Thanks,

Jim


#6

If you think that an OLTP workload best simulates your document management system, then I would stay with the TPC-C and just reduce the latency by a factor of ten.


#7

Thanks Kevin,

I’m trying that now.

I am getting an error if I use a scale factor of anything but 1, although the product says that I need a User Load to Scale Factor Ratio of 1 to 10.

The error is:

Incorrect version of Benchmark Factory Agent, Please install the correct version.

any ideas ?

Thanks,

Jim.


#8

Hi Kevin,

sorry to be a pain, but I’ve been trying to change the latencies. I’ve run with the default 1000, then 100, then 10, then 1, but the TPS is almost the same each time. I have created a new job every time I changed it, but it seems to have no effect.

Regards,

Jim.


#9

OK, I’ve just seen that the latencies are set in the Transactions tab. I was using Edit -> Settings to amend the latencies.


#10

For the most part, the settings in the Edit - Settings page are defaults used when you first create a job. The TPC-C is the exception, since its default latencies are those set by the specification.

Did you get past the Incorrect Version of Benchmark Factory Agent message? That error message shows up when an Agent connects to the console but its version is not compatible with that of the console.


#11

I still get the Incorrect Version of Benchmark Factory Agent message. I tried uninstalling Benchmark Factory, as I had a trial version installed as well, and reinstalled just the current version, but I still get the error.


#12

I’m still having a problem with the scale factor. Tests are now aborting due to deadlocks.

Is there anything I can do to fix this?


#13

As mentioned above, when you reduce the default latencies for the TPC-C, errors can occur. Deadlock errors are the most common. There are a few things you can do to fix or avoid them:

  • Tune your database to reduce the deadlock conflicts.

  • Increase the scale of the TPC-C data set. Using a higher scale factor reduces the likelihood of data conflicts, since there is more data spread across the virtual users.

  • Increase the deadlock timeout of the database. This gives the locking user more time to complete its transaction and free up the lock before a deadlock timeout occurs.

  • And since it’s the TPC-C load which is causing the deadlocks, you can reduce this load by either increasing the latency values so that deadlocks don’t occur or reducing the number of virtual users.

Some of the TPC-C transactions have the virtual users, each of which is assigned a specific warehouse and district, access or update data outside their assigned data set a small percentage of the time. Also, since each warehouse has 10 districts, a deadlock may occur when any of those virtual users needs to update warehouse totals. To get more information about the TPC-C and its transactions, visit TPC.ORG.
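To illustrate the scale-factor point with a rough model (this is a simplification of my own, not the exact TPC-C access pattern), the chance that concurrent transactions collide on the same warehouse row drops quickly as the number of warehouses grows:

```python
# Toy model: if N in-flight transactions each touch one randomly chosen
# warehouse row, the chance that at least two pick the same warehouse
# falls as the scale factor (warehouse count) grows (a birthday-problem
# effect). Real TPC-C assignment is deterministic, so this is only a
# rough intuition for why a bigger scale spreads out the contention.
def collision_probability(in_flight, warehouses):
    if in_flight > warehouses:
        return 1.0  # pigeonhole: a collision is certain
    p_distinct = 1.0
    for i in range(in_flight):
        p_distinct *= (warehouses - i) / warehouses
    return 1.0 - p_distinct

for scale in (1, 5, 10, 50):
    print(scale, round(collision_probability(5, scale), 3))
# 1 -> 1.0, 5 -> 0.962, 10 -> 0.698, 50 -> 0.186
```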

I hope this helps.


#14

Thanks for your reply.

I have been reducing the latencies to try to stress the database. Using the default latencies with 100 users gave maximum TPS, so I need to give the database more work to do.

The advice from the system was to use a scale factor of 1 for every 10 users, so for 50 users I should be using a scale factor of 5. This, however, gives me the error:

"Customer with last name ‘PRESESEEING’ could not be found.

Payment : Statement ByID OCIStmtExecute

Transaction “Payment Transaction” run failed by virtual user 5 in agent 1 of IT-SN988135

Customer with last name ‘CALLYANTIPRI’ could not be found."

There are many entries in the customer table with these names, so I am not sure why I am getting the error.

Thanks,

Jim


#15

It looks like the loaded data set scale factor doesn’t match what you have specified. Are you getting the following warning in the Messages tab when you run the test?

"“The loaded scale factor(1) doesn’t match the specified scale factor (1) for the TPC-C benchmark.”

If you are, you changed the scale factor but didn’t tell BMF to recreate the data set. The easiest way to do this is to drop the C_ tables; alternatively, you can edit the Create Objects step in the job, go to the Options tab, and select the option to recreate objects every execution.

(screenshot: Recreate Objects Every.jpeg)

For the Payment transaction, it searches not only by name but also by warehouse and district IDs.
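To make that concrete, here is a sketch of the Payment lookup. The column names follow the TPC-C specification, but the table name and the exact SQL Benchmark Factory generates are assumptions on my part:

```python
# Sketch of the TPC-C "Payment by customer last name" lookup. Column
# names follow the TPC-C spec (C_W_ID, C_D_ID, C_LAST); the table name
# (BMF prefixes its TPC-C tables with C_) and the SQL BMF actually
# generates may differ.
LOOKUP_BY_LAST_NAME = """
    SELECT c_id, c_first, c_balance
      FROM c_customer
     WHERE c_w_id = :w_id       -- warehouse chosen by the virtual user
       AND c_d_id = :d_id       -- district chosen by the virtual user
       AND c_last = :last_name  -- name generated by the driver
     ORDER BY c_first
"""
# If the driver is configured for scale factor 5 but the loaded data
# set was built at scale 1, the driver will generate warehouse IDs that
# have no rows. The last name then exists elsewhere in the table, but
# not under the warehouse/district pair the query is scoped to, which
# produces "Customer with last name '...' could not be found."
```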