Does the TPS (transactions per second) reported by the Database Scalability test refer to transactions per second per user when running with multiple user loads? For example, with a 100-user load I get a result of 5.25 TPS. It doesn't seem to make sense if 5.25 is the combined TPS for all 100 virtual users, does it?
TPS is one of the least informative statistics to measure or make decisions on. Read some stuff I wrote about why average response time is much more relevant …
The Database Scalability test is made up of data-warehouse-type transactions and OLTP-type transactions taken from the TPC-C benchmark. The OLTP transactions are designed to simulate real users and therefore include large latency values (think-time and keying-time delays). These latencies lower TPS because BMF is not submitting transactions to the server for execution as quickly. So the TPS value you are getting is plausible. You could lower the latencies to increase the rate of transaction submission and thus the TPS. Or you can, as suggested, use response time as your measurement, since response time is not as affected by changes in latency as TPS is.
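To see why think/keying delays dominate the TPS number, here is a minimal sketch of the interactive response time law for a closed workload (this is standard queueing arithmetic, not BMF's internal calculation; the response-time and think-time figures are illustrative assumptions, not measurements from the test):

```python
# Closed workload of N virtual users: each user's cycle is
# (response time + think/keying delay), so combined throughput is
# X = N / (R + Z)  -- the interactive response time law.

def throughput_tps(n_users: int, response_time_s: float, think_time_s: float) -> float:
    """Transactions per second for all users combined."""
    return n_users / (response_time_s + think_time_s)

# Illustrative (assumed) numbers: 100 users, 0.5 s response time,
# ~18.5 s of combined think + keying delay per transaction.
print(round(throughput_tps(100, 0.5, 18.5), 2))  # ~5.26 TPS
```

With numbers like these, a figure in the neighborhood of 5 TPS for 100 users is exactly what you would expect: the users spend almost all of their cycle "thinking," not waiting on the server.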
I tried lowering the latencies by 50%, and this increased the TPS value to 31.13. I also tried choosing No Delay, but that only increased the TPS value to 11.27. Shouldn't choosing No Delay increase TPS to a value greater than what I got by lowering the latencies by only 50%?
Also, does the TPS measurement refer to the transactions per second for all 100 users or for each user? My boss wants a report of both average response time and throughput (TPS), and the values I am getting (even with low or no latencies) still seem low.
Let me start with the easy question first. The TPS value reported is for all 100 virtual users.
Now for the lengthier answer. As the workload increases, throughput also increases until the server's threshold is reached; as you continue to increase the workload beyond that point, throughput decreases and then plateaus. This is the typical throughput curve. By decreasing the latency you increased the workload, and when you decreased it further (No Delay) you passed the point of maximum throughput and so got a smaller TPS value. That is not to say the server's throughput cannot be increased further; it just means that with the current configuration, both hardware and software, you have reached and surpassed the maximum throughput.
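The rise-then-fall behavior you saw (31 TPS at half latency, 11 TPS at No Delay) can be sketched with a toy saturation model. Everything here is assumed for illustration (the capacity and decay numbers are invented, and a real server would settle into a degraded plateau rather than keep falling); the point is only the shape of the curve:

```python
# Toy model, NOT BMF's internals: below the server's capacity, completed
# TPS equals offered TPS; past the knee, contention (locks, queueing)
# erodes useful work, so completed TPS drops as offered load keeps rising.

def throughput(n_users: int, think_s: float,
               service_s: float = 0.05,      # assumed per-txn service time
               capacity_tps: float = 40.0,   # assumed server saturation point
               decay: float = 0.03) -> float:  # assumed contention penalty
    offered = n_users / (think_s + service_s)
    if offered <= capacity_tps:
        return offered  # below saturation: everything offered completes
    # past saturation: completed work falls off with the excess load
    return capacity_tps / (1 + decay * (offered - capacity_tps))

# Full delay, half delay, and No Delay for 100 users:
for think in (18.5, 9.25, 0.0):
    print(f"think={think:5.2f}s -> {throughput(100, think):6.2f} TPS")
```

Halving the delay lands the offered load near the peak of the curve, while No Delay overshoots it, which is why No Delay can report a *lower* TPS than the halved-latency run.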
Some tuning of the database will also help throughput. One thing to verify: the Database Scalability test uses the TPC-C transactions, and per the TPC-C spec there needs to be 1 warehouse (a scale factor of 1 in BMF) for every 10 users. Since you are running with 100 users, if you do not have the scale factor set to at least 10, the likelihood of deadlocks and resource waits increases. Please refer to the links given by bscalzo.
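The warehouse sizing rule above is simple arithmetic; a small helper makes it explicit (the 1-per-10 ratio comes from the post, the function name is mine):

```python
import math

# Minimum BMF scale factor (warehouses) for a given user load,
# using the 1-warehouse-per-10-users ratio cited from the TPC-C spec.
def min_scale_factor(n_users: int, users_per_warehouse: int = 10) -> int:
    return math.ceil(n_users / users_per_warehouse)

print(min_scale_factor(100))  # 100 users -> scale factor of at least 10
```

Undersizing the warehouse count concentrates all users on the same rows, which is what drives up the deadlocks and resource waits mentioned above.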