Good Day Everyone,
I have a production server that has close to 80 Toad automation tasks scheduled with different triggers. Since this week I have noticed a problem where the tasks get stuck at "Running" in the Windows Task Scheduler; after digging, I found out that Toad gets stuck in the shutting-down process every time.
I am using Toad Data Point 3.8. As big as this server is, management is actually paranoid about updating the software.
Could you give me a few pointers on what could be causing this issue?
Here is a little cropped image of how Toad shows as stuck in shutting down; there is an icon for every task:
http://imgur.com/a/K1SYG
hi jesusblanco,
Unfortunately, there are no effective workarounds for this problem on TDP v3.8 - our team made quite a few software changes to address this problem explicitly in later versions of TDP. I'd suggest you persuade your management of how essential it is to upgrade to the latest 4.1, try the latest 4.2 beta, or wait a couple more weeks for 4.2, which will be released soon.
Sorry for the problems,
-Martin
Hey WhiteJesus,
I had a similar problem on an older version of Toad Data Point where no notification was sent for jobs that did not start or never finished. So I created a job table to track scheduled jobs and made generic table-update routines that update the job table when a job starts and ends. The table had the job name, job owner, expected duration, start time, frequency, status flag, and last run time. Then I had a job that runs every hour, goes through the table to see if any jobs did not run on schedule or are running past their expected completion time, and sends an email if it finds one.

You can reuse a standard block at the beginning and end of each job if you use the same variable name in each automation job to hold the job name that matches the name in your job table. You can store those two blocks in the Templates section at the bottom of the toolbox in Automation so you can just drag and drop them into each job. This does not stop the problem, but at least you get notified when there is one.
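In case it helps anyone, here is a rough sketch of that watchdog idea in Python with SQLite. The table layout, column names, and thresholds are just assumptions for illustration - they are not TDP's own schema, and in TDP you would do the equivalent with SQL blocks against your real database:

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical job-tracking table; column names are made up for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE job_log (
        job_name         TEXT PRIMARY KEY,
        job_owner        TEXT,
        expected_minutes INTEGER,   -- how long the job should take
        frequency_hours  INTEGER,   -- how often it should run
        status           TEXT,      -- 'RUNNING' or 'DONE'
        start_time       TEXT,
        last_run_time    TEXT
    )
""")

def job_start(job_name):
    """Generic 'start of job' block: flag the job as running."""
    conn.execute(
        "UPDATE job_log SET status = 'RUNNING', start_time = ? WHERE job_name = ?",
        (datetime.now().isoformat(), job_name),
    )

def job_end(job_name):
    """Generic 'end of job' block: flag done and stamp the last run time."""
    conn.execute(
        "UPDATE job_log SET status = 'DONE', last_run_time = ? WHERE job_name = ?",
        (datetime.now().isoformat(), job_name),
    )

def overdue_jobs(now=None):
    """Hourly watchdog: returns jobs still RUNNING past their expected
    duration, or DONE jobs whose last run is older than their frequency."""
    now = now or datetime.now()
    late = []
    rows = conn.execute(
        "SELECT job_name, expected_minutes, frequency_hours,"
        "       status, start_time, last_run_time FROM job_log"
    )
    for name, minutes, hours, status, start, last in rows:
        if status == "RUNNING" and start and \
                now - datetime.fromisoformat(start) > timedelta(minutes=minutes):
            late.append(name)   # stuck: running past its expected duration
        elif status == "DONE" and last and \
                now - datetime.fromisoformat(last) > timedelta(hours=hours):
            late.append(name)   # missed: has not run on schedule
    return late  # in the real setup, send an email for each entry here
```

For example, a job registered with a 30-minute expected duration that is still flagged RUNNING an hour later would show up in `overdue_jobs()`, which is exactly the "stuck in shutting down" case the watchdog is meant to catch.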