While using the Beta log reader, I've run into several problems that prevent me from reading logs at all.
After a script error inadvertently changed 900k records, leaving a 60 GB log (oops), I detached the database and copied the .mdf and .ldf files to dbname_copy.
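The detach-and-copy step was roughly as follows (database name and file paths here are placeholders, not my actual ones):

```sql
-- Detach the database so its files can be copied at the OS level
USE master;
EXEC sp_detach_db @dbname = N'dbname';

-- Then copy the files outside SQL Server, e.g. from a command prompt:
--   copy D:\Data\dbname.mdf D:\Data\dbname_copy.mdf
--   copy D:\Data\dbname.ldf D:\Data\dbname_copy.ldf
```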
When I ran the log reader in offline mode, it defaulted to showing the original log file plus a second, nonexistent log file that it inserted on its own. When I pressed Next, it threw an error saying that second file didn't exist, and there was no way to remove it from the list. I worked around this by pointing both entries at the same ldf_copy name, but it then failed, saying it could not associate the copy name with the database.
I then changed the database name, attached the copied .mdf and .ldf files, started SQL Server, and opened the tables to confirm everything was working; then I detached the database and tried again. The log reader still would not recognize the log files and still wanted to insert a ghost log file. The second log file Toad was showing did not appear in the database's file properties, so it's not clear where that log name came from.
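For clarity, attaching the copied files under the new name looked roughly like this (again with placeholder paths):

```sql
-- Attach the copied files as a separately named database
CREATE DATABASE dbname_copy
ON (FILENAME = N'D:\Data\dbname_copy.mdf'),
   (FILENAME = N'D:\Data\dbname_copy.ldf')
FOR ATTACH;
```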
I then installed the required files to read the live logs, re-attached the database, and the reader started progressing. At various points before finishing, it threw an out-of-memory exception. The first time, SQL Server was allocated 36 GB of memory and physical memory usage hit 98%, so I reduced SQL Server to 30 GB and reran it; it threw an out-of-memory exception again. I then reduced SQL Server's memory to 10 GB and ran Toad again. It ran out of memory once more, but this time physical memory was at 50% and Toad itself was at 673,556 K. Does Toad have an internal limit on the size of log file it will read and process? If so, what is that size? Or are there settings that would let me read and process this large file in sections? Thanks.
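In case it matters, I was capping SQL Server's memory via sp_configure between runs; the 30 GB step looked roughly like this (the value is in MB):

```sql
-- Cap SQL Server's buffer pool; 30 GB = 30720 MB
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 30720;
RECONFIGURE;
```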