Toad World® Forums

Problems with Large Oracle Trace File


#1

Is there a limit to the number of lines that Benchmark Factory can handle for an Oracle trace file? I'm loading in just under 2 million lines. BMF loads it into XML, which is fine, but when I try to replay the file I get an error saying it aborted due to an error. It would also be nice to know whereabouts in the trace file it failed, i.e. which SQL statement is failing and with what error. Any help would be appreciated, thanks.


#2

This is not really an answer, but when I asked my sales rep at Quest, I was told there is no limit to the size of an Oracle trace file.

I am trying to load some large Oracle trace files (around 500 MB). I first tried to load them into a trial version of Benchmark Factory and it resulted in an error. I am now trying to load them into a registered version and am still waiting for the files to load.

I will update you with my progress.


#3

I have worked out that my large trace files will not load. I have had to break them up into smaller trace files and load them separately; this got them into BMF.
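For anyone else stuck on the same thing, the splitting itself can be scripted. This is only a rough sketch, assuming the usual 10046-style trace layout where each statement starts with a "PARSING IN CURSOR" line; the 100 MB threshold and the file names are just placeholders, not anything BMF requires:

```python
# Rough sketch: split a large Oracle trace file into chunks of roughly
# MAX_CHUNK_BYTES, cutting only at "PARSING IN CURSOR" lines so a
# statement's parse/bind/exec records stay in the same chunk.
MAX_CHUNK_BYTES = 100 * 1024 * 1024  # example threshold, adjust as needed

def split_trace(path, prefix="chunk"):
    part, written = 0, 0
    out = open(f"{prefix}_{part:03d}.trc", "w")
    with open(path, errors="replace") as src:
        for line in src:
            # Only start a new chunk at a statement boundary.
            if written >= MAX_CHUNK_BYTES and line.startswith("PARSING IN CURSOR"):
                out.close()
                part += 1
                written = 0
                out = open(f"{prefix}_{part:03d}.trc", "w")
            out.write(line)
            written += len(line)
    out.close()

split_trace("large_trace.trc")  # placeholder file name
```

Cutting only at those boundaries should keep each statement's parse, bind and exec records together in one chunk.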

I found that there is a limit of roughly 200 MB or so; the files of 400 MB+ that I tried all failed, with BMF either running out of memory or pegging the CPU for hours.

I am now waiting for the DBA (I am not a DBA, but a performance test expert) to load the backups into the test environment.


#4

What version of BMF are you using? Older versions would get tripped up by really, really big files, or when you tried to import, say, 30,000 files.

The latest version will try to parse the file up to 1,000 statements. After that, the trace file is imported into BMF as an XML file, which is much more efficient than trying to load every single statement from the trace into memory and display it inside BMF.

So, if you are using the latest and greatest version, and still having problems, I would say you have found a bug.


#5

Hello, I am using version 5.8, build 565, which I think is the latest version.
I have managed to load my large trace file, which goes into XML, and after hours of pruning the SQL I have got the trace file working for 1 virtual user. When pruning the SQL I found that BMF sometimes uses $BFRand(1,100) for bind variables. This nearly always results in an error; when I look at the original trace file and find the SQL statement, the bind value is an actual number, e.g. 1478915, so I don't understand why BMF is using $BFRand(1,100). I then spend hours searching for $BFRand(1,100) within the XML file and replacing each occurrence with the correct value from the trace file - I usually use float and then the real value.
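To save some of that manual searching, this is roughly what I do, as a rough sketch only: it just lists every $BFRand(1,100) in the exported XML with its line number and a little surrounding text, so the real bind value can be looked up in the original trace file. The file name workload.xml is just a placeholder for whatever BMF exports; the actual replacement I still do by hand.

```python
import re

# Rough sketch: report every $BFRand(1,100) placeholder in the XML that
# BMF exported, with its line number and some preceding context, so the
# correct bind value can be found in the original trace file.
def find_bfrand(xml_path, context=80):
    text = open(xml_path, errors="replace").read()
    for match in re.finditer(re.escape("$BFRand(1,100)"), text):
        line_no = text.count("\n", 0, match.start()) + 1
        snippet = text[max(0, match.start() - context):match.end()]
        print(f"line {line_no}: ...{snippet}")

find_bfrand("workload.xml")  # placeholder file name
```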
What I ideally need to do is run the trace file with 100 virtual users, and this is where I am hitting big problems: my PC runs out of memory and the CPU is constantly at 100%. I have now started to break the trace file down into 5 parts and run a mixed workload job with weighting percentages against each trace file. I have still not managed to run 100 virtual users, but I have managed 20, and I am now playing around with the latency options. Also, how important are the timing options (Pre-sampling, Iteration, etc.) and the user startup options?
Thanks for your help.


#6

Could you post the section of the trace file for which BMF is generating the $BFRand(1,100)? Thanks.


#7

Hi,

I am also facing the same problem…
I have a license for 100 virtual users.
I tried loading a trace file of 500,000 SQL statements into BMF 5.8.1.
The execution did not complete even after 30 hours.
Any comments on this?

-Alok