I ran the TPC-C benchmark against an Oracle 11g database. I got TPS and kBPS values of 0.00 in most of the test results, and under 5 TPS in the rest. Which database system parameters and Benchmark Factory configuration settings should be considered to get more transactions per second? Can anyone help me?
By default the TPC-C transactions have large keying and think-time latencies. These values, typically around 30 seconds in total, are per the TPC-C specification, and because of them the TPC-C benchmark will have low throughput. So if you want to lower the latency values, that will increase the load on the server and get more transactions executed. But be aware that, given the way the transactions are designed, lowering the latencies will also increase the likelihood of data contention/deadlocks.
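To see why the keying/think times dominate, here is a rough back-of-the-envelope model (a sketch based on Little's Law for a closed workload; the specific delay values below are illustrative, not taken from the TPC-C spec or from Benchmark Factory defaults):

```python
# Rough throughput model for a closed-loop benchmark like TPC-C.
# Each of N virtual users cycles through keying time + think time +
# server response time, so throughput is bounded by:
#     TPS <= N / (keying + think + response)
# The delay values below are illustrative placeholders.

def max_tps(users, keying_s, think_s, response_s):
    """Upper bound on transactions per second for a closed workload."""
    return users / (keying_s + think_s + response_s)

# 800 users with ~30 s of combined keying/think time per transaction:
print(max_tps(800, keying_s=18.0, think_s=12.0, response_s=0.5))   # ~26 TPS

# The same 800 users with the delays zeroed out; now only the server's
# response time limits throughput:
print(max_tps(800, keying_s=0.0, think_s=0.0, response_s=0.5))     # 1600.0
```

The point of the model: with spec-compliant delays, even a perfectly tuned server cannot exceed a few dozen TPS at this user count, so low TPS by itself is not evidence of a database problem.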
Far too vague a question to answer. What hardware: CPUs, memory, disk subsystems, spindles, etc.? That alone can mean zillions of different Oracle config parameter combinations. There is NO simple golden rule you can set to get maximum throughput.
Then, assuming all of that is known and set up right: how many concurrent users and what benchmark scale size are you choosing, and why? And which BMF settings have you adjusted or modified (e.g. latencies), and why?
If you want to take this offline and provide more details - email me at firstname.lastname@example.org and I’ll see what advice I can give …
I used SAN storage for Oracle database.
DG1 diskgroup (5 RAID 10 LUNs) for Oracle database files
FRA diskgroup (2 RAID 10 LUNs) for the Flash Recovery Area
Moved the online redo logs to three separate file system disks (SSD or HDD).
We have to compare Oracle database performance on SSD disks versus HDD disks by placing the online redo logs on each in turn and executing the TPC-C benchmark with a user load of 800.
Which parameters should we consider when comparing the test results in the BF reports and the AWR report?
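For a redo-log SSD-vs-HDD comparison, the headline numbers are BMF's TPS and average response time, plus the redo-related wait events in AWR (e.g. log file sync, log file parallel write). A minimal sketch of tabulating the deltas between two runs; the metric names are illustrative and every value below is a placeholder to be replaced with figures from your own BMF and AWR reports:

```python
# Tabulate deltas between two benchmark runs (redo logs on HDD vs. SSD).
# All numeric values are placeholders, NOT real results; fill them in
# from the Benchmark Factory summary and the AWR wait-event sections.

def pct_change(baseline, candidate):
    """Percent change of candidate relative to baseline."""
    return (candidate - baseline) / baseline * 100.0

hdd = {"tps": 20.0, "avg_response_s": 1.8, "log_file_sync_ms": 15.0}
ssd = {"tps": 55.0, "avg_response_s": 0.6, "log_file_sync_ms": 1.2}

for metric in hdd:
    delta = pct_change(hdd[metric], ssd[metric])
    print(f"{metric:18s} HDD={hdd[metric]:8.2f} SSD={ssd[metric]:8.2f} {delta:+7.1f}%")
```

Comparing the same small set of metrics run-over-run keeps the exercise honest: if TPS rises but log-file-sync latency does not fall, the improvement probably did not come from the redo log placement.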
Got it. You’re going to need either to hire a domain expert (consultant) like Mike Ault - he’s written books on this very topic and works for an SSD company. Or you’re going to need to buy Mike’s books and tear them apart. Because the answer is that “there is no simple cookbook per se for what you’re doing” - other than experience on what to tweak along the way. Because what you’d do under Oracle 10gR2 vs. 11g vs. 11gR2 may be different. It may seem like a simple test - do it one way with disks and then again with SSD. But that would be like saying do the test with 8 GB RAM and then add 24GB more to the server - you know you’ll need to adjust various parameters to take advantage of that. The same is true here.
You mention 5 TPS in your email, and that's just abysmal. Something is very wrong! My simple quad-core server with 8 GB memory, a simple RAID card, and 8 SATA disks delivers much better performance than that. When I add SSD, it goes an order of magnitude higher. My DELL 2950 with cacheless RAID but 15K drives goes higher still. So you may have a more fundamental problem that needs to be solved before doing the relative comparison.
Here are the books Mike has written that might help:
Mike works at Texas Memory Systems - a leading SSD vendor.
If Mike is unavailable or too expensive, then you could try Quest Professional Services, or maybe even see about my assisting. But you need much more than just informal, long-distance email support. Try the reading materials first and see if they can get you over the hump …