TDP Giving Up on Query Connections with No Error in Automation

I hope I can explain this well.

This is a problem we’ve had for some time, but I wanted to make sure it wasn’t an issue on my end before submitting it to TDP; I now believe it may be on the TDP product end. We use TDP 4.0, though I’ve seen the problem on earlier versions as well. All of our queries connect through an I-Series ODBC connection to an AS400 using DB2 SQL.

Quite frequently, long-running queries (those that take an hour or more) have issues specifically when running in automation: the connection to the query is essentially given up, yet the task keeps running on the machine. When I come in in the morning, the little Toad icon is still up and the automation task is still running, so I have to forcibly end the task and then attempt to rerun it. (Rerunning usually works, and the issue isn’t present every time the query runs, just on and off, most notably when CPU is higher and query times are running even longer than normal.)

I have the ability to check the query running on the AS400, and when I do this for the automation jobs that give up yet keep running, I see that the query is no longer running on the AS400. So I suspect the query either times out or is killed somehow on the TDP automation end, but no error is thrown back. The reason I believe this is that I can also terminate connections on our AS400, meaning I can stop a query from the AS400 side. When I do that, or when a connection is lost, the automation always throws back an error that the connection has been lost, which tells me that lost connections on the AS400 end do produce errors. That’s why I assume it’s something with the TDP product.

I’d like to see if the wonderful TDP support folks have seen this issue before and, if not, what information I can provide to help get it researched. I’d also love to hear if anyone else has experienced anything similar.
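(For reference, a minimal sketch of one way to make that check from the SQL side; it assumes IBM i 7.2 or later, where the QSYS2.ACTIVE_JOB_INFO service is available, and the user-profile name below is only a placeholder. On older releases, WRKACTJOB from a green-screen session shows the same information.)

-- List the ODBC database server jobs (QZDASOINIT) and their status,
-- filtered to the profile the automation connects with (placeholder name).
SELECT JOB_NAME, AUTHORIZATION_NAME, JOB_STATUS
  FROM TABLE (QSYS2.ACTIVE_JOB_INFO(JOB_NAME_FILTER => 'QZDASOINIT')) AS JOBS
 WHERE AUTHORIZATION_NAME = 'AUTOUSER';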

Hope to hear from you soon.

Chad

Hello Chad,

Could you please provide me with the following items / answers:

  • Could it be related to the problem you mentioned in the www.toadworld.com/…/62380 topic? If so, you might try the one-off build that attempts to address that problem.
  • Could you please send me the ExceptionTrace.log from Help > About > click on the “Application Data Directory” link?
  • Could you please note the timestamp when the automation job starts and the timestamp when you kill the task? This would help narrow down the relevant region of the log file.
  • Also, you might check the Windows event logs for any Toad-related clues within that time frame.
  • Have you experienced this problem on connections other than I-Series ODBC?
  • Maybe a hint - I’m not sure whether those ‘long running queries’ are single-statement SELECT queries only, but when I ran into a similar scenario in the past it helped me a lot to add some minor logging as part of the query - e.g. a simple INSERT INTO statement into a log table recording a timestamped “step in progress” (see the sketch below this list). With this we would know whether it ever did anything, became stuck in the middle, or finished everything but got stuck upon finishing…
    For any attachments please send them to my email: Martin.Holkovic@quest.com
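
A rough sketch of the kind of step logging I mean (just an illustration - the library, table, and step names below are placeholders, so adapt them to your environment):

-- One-time setup: a small progress-log table (MYLIB is a placeholder library)
CREATE TABLE MYLIB.STEP_LOG (
  LOGGED_AT TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
  JOB_LABEL VARCHAR(50),
  STEP_NOTE VARCHAR(200)
);

-- In the automation, immediately before the long-running statement:
INSERT INTO MYLIB.STEP_LOG (JOB_LABEL, STEP_NOTE)
VALUES ('NIGHTLY_EXTRACT', 'about to run main SELECT');

-- ... the long-running SELECT / Select to File activity runs here ...

-- And immediately after it:
INSERT INTO MYLIB.STEP_LOG (JOB_LABEL, STEP_NOTE)
VALUES ('NIGHTLY_EXTRACT', 'main SELECT finished');

If the ‘about to run’ row shows up but the ‘finished’ row never does, we know the long statement itself is what never completes, rather than something before or after it.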

Thank you,

Martin

Martin, responses are below. I’ve also emailed you the information requested, plus additional information that I hope may help. Please let me know if there’s anything else you need from me.

Thanks!

    • Could it be related to the problem you mentioned in the www.toadworld.com/…/62380 topic? If so, you might try the one-off build that attempts to address that problem. I do not believe this has anything to do with the previous discussion regarding the “TDP must shut down” message. That was specific to 4.0, and this problem has existed at least since 3.8. Plus, it’s almost the opposite: instead of giving a message that TDP must shut down, it just continues to try to run even though the query isn’t processing on the AS400.
    • Could you please send me the ExceptionTrace.log from Help > About > click on the “Application Data Directory” link? I’ve emailed as much information as I think you may need. Please let me know if there’s any other information I can provide.
    • Could you please note the timestamp when the automation job starts and the timestamp when you kill the task? This would help narrow down the relevant region of the log file. The process starts at exactly 7pm and should finish by around 7:30-8pm. Unfortunately it stays locked up until I arrive the next morning to force it to shut down, so I’d say it was around 8-8:30am when I shut it down.
    • Also, you might check the Windows event logs for any Toad-related clues within that time frame. If you are referring to the logs created and placed in the automation folder, unfortunately the log is still stuck on the query portion and never goes further than that. At least at this time, I cannot see anything out of the ordinary. This was also emailed, though.
    • Have you experienced this problem on connections other than I-Series ODBC? I actually ONLY use I-Series ODBC, so I can’t speak to whether the issue exists on other platforms.
    • Maybe a hint - I’m not sure whether those ‘long running queries’ are single-statement SELECT queries only, but when I ran into a similar scenario in the past it helped me a lot to add some minor logging as part of the query, e.g. a simple INSERT INTO statement into a log table recording a timestamped “step in progress”; with this we would know whether it ever did anything, became stuck in the middle, or finished everything but got stuck upon finishing… I have not tried this, but I possibly could. I will say that the query runs through a “Select to File” activity, no output is ever produced when this issue occurs, and the SQL that’s running is the only thing that remains in the log, so I’m not sure what this would catch. I know the query starts; it just does not complete, and somehow the connection and query end (they are no longer running on the AS400 side) while being stopped on the TDP side with no error and no shutdown, resulting in the task just hanging and staying running until I kill it.

Hey Chad,

Thanks for the files.

I was able to find two different errors within the timeframes concerned:

New exception: 8/29/2016 7:00:44 AM

System.IO.IOException: The process cannot access the file ‘C:\Users\S_SDLCORPWRUM02CCK\AppData\Roaming\Quest Software\Toad Data Point 4.0\Connections.xml’ because it is being used by another process.

And this one:

New exception: 8/28/2016 8:00:16 PM

Sybase.Data.AseClient.AseException: Could not load C:\Users\S_SDLCORPWRUM02CCK\AppData\Local\Temp\Sybase.AdoNet4.AseClient.32bits.16.0.2.0\msvcr110.dll

My comments:

  • The first one is 100% the same error I mentioned in the separate email thread – I’ve made a fix for it. Can you please confirm whether you use the same TDP version for this job as well? Did you try it in the build with my fix?
    
  • The second one is about a missing binary library for the ASE (Sybase) connection – I’m not sure how that relates to you, since you don’t use it, but Toad obviously tried to load the assembly… From the support bundle I understand that you are using Windows Server 2008. After a little googling I found users complaining that this assembly is missing there, as well as in the Windows 8 update. Anyway, in an attempt to diagnose the problem further, could you please install the [VC++ redistributable package](https://www.microsoft.com/en-us/download/details.aspx?id=30679)? Click Download and select the x86 version.
    
  • The execution log confirms that the query was about to be executed: it shows the last activity to be executed was the Select to File, and the point where it ended is before the execute. When I compare the log from your script with the log from a sample export script I created, I see that the last entry of your log file comes before the “begin / execute” sections… which might support the idea of the bug with Connections.xml being blocked…
    

Your log:

01/09/2016 14:38:57: Export Template: <Quest.Toad.ImportExport.ExportEngine>

<…cut lines off…>

 <Fields>

 </Fields>

</Quest.Toad.ImportExport.ExportEngine>

My log:

– 8/30/2016 10:53:43 AM: Digital_Spread.log: Export Template: <Quest.Toad.ImportExport.ExportEngine>

<…cut lines off…>

 <Fields>

 </Fields>

</Quest.Toad.ImportExport.ExportEngine>

01/09/2016 14:38:58: 14:38:57 Thread (3) Adding Export Notification Subscriber {0}

01/09/2016 14:38:58: 14:38:57 Thread (3) Export Started [01/09/2016 14:38:57]

01/09/2016 14:38:58: 14:38:57 Thread (3) ** START **

01/09/2016 14:38:58: 14:38:57 Thread (3) Export using connection: DSG4C94662\SQLEXPRESS (PROD\mholkovi), dev

01/09/2016 14:38:58: 14:38:57 Thread (3) Export using connection: DSG4C94662\SQLEXPRESS (PROD\mholkovi), dev

01/09/2016 14:38:58: 14:38:57 Thread (3) Export Wizard: Building object list

01/09/2016 14:38:58: 14:38:57 Thread (3) Build List done.

01/09/2016 14:38:58: 14:38:57 Thread (3) Exporting Data (1 of 1)

01/09/2016 14:38:58: 14:38:57 Thread (3) BEGIN Query :

01/09/2016 14:38:58: 14:38:57 Thread (3) Object SQL Query started.

01/09/2016 14:38:58: 14:38:57 Thread (72) InternalReadBackground - after ExecuteReader - command select 1

01/09/2016 14:38:58: 14:38:57 Thread (92) Excel version = 15

01/09/2016 14:38:58: 14:38:58 Thread (89) DoneReadTreadWriter

01/09/2016 14:38:58: 14:38:58 Thread (89) Writing rows 0 - 1

01/09/2016 14:38:58: 14:38:58 Thread (89) Writing rows Finalize

01/09/2016 14:38:58: 14:38:58 Thread (89) DoneReadThreadWriter finally - adapter.RowsRead 1

01/09/2016 14:38:58: 14:38:58 Thread (3) END Query :

01/09/2016 14:38:58: 14:38:58 Thread (3) Object SQL Query finished.

01/09/2016 14:38:58: 14:38:58 Thread (3) Export Finished [01/09/2016 14:38:58]

Martin,

I’m going to install that Visual C++ and give that a whirl and let you know how it goes.

Regarding the version with the bug fix for the issue with TDP shutting down, it IS running on that version, which is 4.0.1.topic-ToadWorldPivotAndConnFix

This is the version I’ve been using, and I actually still have this same problem even with this version. The trace information and everything else I provided came from the 4.0.1 version.

Hope this helps further. I’d be happy to provide any more information you may need in the meantime. I’ll continue to test with that Visual C++ installed to see if it makes any impact.

Thanks!

Thanks for confirming #1.

FYI - while I wait for the C++ redist to be installed on your side, I will try to find clues as to what might hang between the following two lines:

</Quest.Toad.ImportExport.ExportEngine>

01/09/2016 14:38:58: 14:38:57 Thread (3) Adding Export Notification Subscriber {0}

Because you don’t see the last line in your log, which means it never reached it… that should narrow down the problem even further…

Hallelujah!! We are having similar problems, but with a native Oracle connection. The log often just says (writing to local storage), but the query appears to have simply stopped running, and the automation has come to a standstill too. We also get no errors, and the verbose log shows no problems. If we can help get to the bottom of this problem, then count us in :)

Hello spligosh,

Could you please also send me your automation script file (.tas) along with the log? I’d like to compare whether it hangs at the same spot or somewhere else.

Thanks - Martin

Hi, here is the log - I believe the job is still running, but apparently doing nothing. We have a central automation (Extract_All_MonthlyData) that runs lots of embedded automations. As you can see, it seems to stop while writing to local storage. Please note that this only appears to happen with Oracle scripts, and it will happen at various times while writing to various tables.

– 01/09/2016 15:54:12: Extract_All_MonthlyData_temp.log: Starting Automation Script
– 01/09/2016 15:54:12: Extract_All_MonthlyData_temp.log: Setting up environment
– 01/09/2016 15:54:12: Extract_All_MonthlyData_temp.log: Script run by Toad Data Point 3.8.0.575 as admshassa1 on ST1W1611
– 01/09/2016 15:54:12: Extract_All_MonthlyData_temp.log: Build started
– 01/09/2016 15:54:12: Extract_All_MonthlyData_temp.log: Compiling script
– 01/09/2016 15:54:12: Extract_All_MonthlyData_temp.log: Start CompileWorkflow
– 01/09/2016 15:54:12: Extract_All_MonthlyData_temp.log: Before Compile
– 01/09/2016 15:54:14: Extract_All_MonthlyData_temp.log: Return from CompileWorkflow
– 01/09/2016 15:54:14: Extract_All_MonthlyData_temp.log: Workflow Compiled without Errors
– 01/09/2016 15:54:14: Extract_All_MonthlyData_temp.log: Build completed
– 01/09/2016 15:54:14: Extract_All_MonthlyData_temp.log: Create Workflow instance
– 01/09/2016 15:54:14: Extract_All_MonthlyData_temp.log: Started workflow instance
– 01/09/2016 15:54:14: Extract_All_MonthlyData_temp.log: Begin execution script activities
– 01/09/2016 15:54:14: Extract_All_MonthlyData_temp.log: Executing Automation Script: D:\CapPlan_Data\BMC_Processing\BMC_MISC_Update.tas
– 01/09/2016 15:54:14: Extract_All_MonthlyData_temp.log: Setting up environment
– 01/09/2016 15:54:14: Extract_All_MonthlyData_temp.log: Script run by Toad Data Point 3.8.0.575 as admshassa1 on ST1W1611
– 01/09/2016 15:54:14: Extract_All_MonthlyData_temp.log: Build started
– 01/09/2016 15:54:14: Extract_All_MonthlyData_temp.log: Compiling script
– 01/09/2016 15:54:14: Extract_All_MonthlyData_temp.log: Start CompileWorkflow
– 01/09/2016 15:54:14: Extract_All_MonthlyData_temp.log: Before Compile
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Return from CompileWorkflow
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Workflow Compiled without Errors
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Build completed
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Create Workflow instance
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Started workflow instance
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Begin execution script activities
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Locally defined data set variable ‘Loop_data_1_SQL’ will be used in the ‘Loop_data_1’ activity
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Connecting to root@localhost
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Sql executed and 15 rows returned
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Sql executed and 15 rows returned
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Begin loop dataset
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Executing activity Loop_row_1 using row 1
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Locally defined data set variable ‘Loop_data_2_SQL’ will be used in the ‘Loop_data_2’ activity
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Connecting to root@localhost
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Sql executed and 9 rows returned
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Sql executed and 9 rows returned
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Begin loop dataset
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Executing activity Loop_row_2 using row 1
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Connection description = BMC_Portal_Client
– 01/09/2016 15:54:16: Extract_All_MonthlyData_temp.log: Connection node found
– 01/09/2016 15:54:28: Extract_All_MonthlyData_temp.log: Connection to: BMC_Portal_Client
– 01/09/2016 15:54:28: Extract_All_MonthlyData_temp.log: Connection successful
– 01/09/2016 15:54:28: Extract_All_MonthlyData_temp.log: Connection logon: PE
– 01/09/2016 15:54:28: Extract_All_MonthlyData_temp.log: Last schema: PE
– 01/09/2016 15:54:28: Extract_All_MonthlyData_temp.log: Default schema: PE
– 01/09/2016 15:54:28: Extract_All_MonthlyData_temp.log: Auto Commit: True
– 01/09/2016 15:54:28: Extract_All_MonthlyData_temp.log: Executing script: SELECT VW_NAMESPACE.ELEMENT_NAME,
VW_NAMESPACE.ELEMENTSTATUS,
VW_NAMESPACE.APPLICATIONNAME,
VW_NAMESPACE.INSTANCENAME,
VW_NAMESPACE.PARAMETER_NAME,
VW_NAMESPACE.PARAMETER_UNIT,
PARAM_SUMMARIZATION_DATA_POINT.VALUESUM
/ PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT
AS VALUEAVG,
PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT,
TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP / 24 / 60 / 60 / 1000,
‘dd/mm/yy’)
AS TIMECODE,
VW_NAMESPACE.WARNINGENABLE,
VW_NAMESPACE.WARNINGMINVALUE
FROM PE.VW_NAMESPACE VW_NAMESPACE
INNER JOIN
PE.PARAM_SUMMARIZATION_DATA_POINT PARAM_SUMMARIZATION_DATA_POINT
ON (VW_NAMESPACE.PARAMETER_INSTANCE_GUID =
PARAM_SUMMARIZATION_DATA_POINT.“PARAMETER”)
WHERE ( ( VW_NAMESPACE.PARAMETER_NAME = ‘CPU Usage’
AND VW_NAMESPACE.ELEMENT_NAME =
‘Sample Manager / Strands / Payroll - STWH20’)
AND TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP
/ 24
/ 60
/ 60
/ 1000,
‘dd/mm/yy’) = ‘’ /*last month*/)
– 01/09/2016 15:54:33: Extract_All_MonthlyData_temp.log: Executed: SELECT VW_NAMESPACE.ELEMENT_NAME,
VW_NAMESPACE.ELEMENTSTATUS,
VW_NAMESPACE.APPLICATIONNAME,
VW_NAMESPACE.INSTANCENAME,
VW_NAMESPACE.PARAMETER_NAME,
VW_NAMESPACE.PARAMETER_UNIT,
PARAM_SUMMARIZATION_DATA_POINT.VALUESUM
/ PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT
AS VALUEAVG,
PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT,
TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP / 24 / 60 / 60 / 1000,
‘dd/mm/yy’)
AS TIMECODE,
VW_NAMESPACE.WARNINGENABLE,
VW_NAMESPACE.WARNINGMINVALUE
FROM PE.VW_NAMESPACE VW_NAMESPACE
INNER JOIN
PE.PARAM_SUMMARIZATION_DATA_POINT PARAM_SUMMARIZATION_DATA_POINT
ON (VW_NAMESPACE.PARAMETER_INSTANCE_GUID =
PARAM_SUMMARIZATION_DATA_POINT.“PARAMETER”)
WHERE ( ( VW_NAMESPACE.PARAMETER_NAME = ‘CPU Usage’
AND VW_NAMESPACE.ELEMENT_NAME =
‘Sample Manager / Strands / Payroll - STWH20’)
AND TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP
/ 24
/ 60
/ 60
/ 1000,
‘dd/mm/yy’) = ‘’ /*last month*/)
– 01/09/2016 15:54:33: Extract_All_MonthlyData_temp.log: Query returned 0 rows
– 01/09/2016 15:54:33: Extract_All_MonthlyData_temp.log: Start saving to Local Storage (‘TDP.STWH20’).
– 01/09/2016 15:54:33: Extract_All_MonthlyData_temp.log: Appending to an existing table ‘TDP.STWH20’.
– 01/09/2016 15:54:33: Extract_All_MonthlyData_temp.log: Saving to Local Storage is done.
– 01/09/2016 15:54:33: Extract_All_MonthlyData_temp.log: Script execution completed.
– 01/09/2016 15:54:34: Extract_All_MonthlyData_temp.log: Executing activity Loop_row_2 using row 2
– 01/09/2016 15:54:34: Extract_All_MonthlyData_temp.log: Connection description = BMC_Portal_Client
– 01/09/2016 15:54:34: Extract_All_MonthlyData_temp.log: Connection node found
– 01/09/2016 15:54:34: Extract_All_MonthlyData_temp.log: Connection to: BMC_Portal_Client
– 01/09/2016 15:54:34: Extract_All_MonthlyData_temp.log: Connection successful
– 01/09/2016 15:54:34: Extract_All_MonthlyData_temp.log: Connection logon: PE
– 01/09/2016 15:54:34: Extract_All_MonthlyData_temp.log: Last schema: PE
– 01/09/2016 15:54:34: Extract_All_MonthlyData_temp.log: Default schema: PE
– 01/09/2016 15:54:34: Extract_All_MonthlyData_temp.log: Auto Commit: True
– 01/09/2016 15:54:34: Extract_All_MonthlyData_temp.log: Executing script: SELECT VW_NAMESPACE.ELEMENT_NAME,
VW_NAMESPACE.ELEMENTSTATUS,
VW_NAMESPACE.APPLICATIONNAME,
VW_NAMESPACE.INSTANCENAME,
VW_NAMESPACE.PARAMETER_NAME,
VW_NAMESPACE.PARAMETER_UNIT,
PARAM_SUMMARIZATION_DATA_POINT.VALUESUM
/ PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT
AS VALUEAVG,
PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT,
TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP / 24 / 60 / 60 / 1000,
‘dd/mm/yy’)
AS TIMECODE,
VW_NAMESPACE.WARNINGENABLE,
VW_NAMESPACE.WARNINGMINVALUE
FROM PE.VW_NAMESPACE VW_NAMESPACE
INNER JOIN
PE.PARAM_SUMMARIZATION_DATA_POINT PARAM_SUMMARIZATION_DATA_POINT
ON (VW_NAMESPACE.PARAMETER_INSTANCE_GUID =
PARAM_SUMMARIZATION_DATA_POINT.“PARAMETER”)
WHERE ( ( VW_NAMESPACE.PARAMETER_NAME = ‘Processor Count’
AND VW_NAMESPACE.ELEMENT_NAME =
‘Sample Manager / Strands / Payroll - STWH20’)
AND TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP
/ 24
/ 60
/ 60
/ 1000,
‘dd/mm/yy’) = ‘’ /*last month*/)
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Executed: SELECT VW_NAMESPACE.ELEMENT_NAME,
VW_NAMESPACE.ELEMENTSTATUS,
VW_NAMESPACE.APPLICATIONNAME,
VW_NAMESPACE.INSTANCENAME,
VW_NAMESPACE.PARAMETER_NAME,
VW_NAMESPACE.PARAMETER_UNIT,
PARAM_SUMMARIZATION_DATA_POINT.VALUESUM
/ PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT
AS VALUEAVG,
PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT,
TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP / 24 / 60 / 60 / 1000,
‘dd/mm/yy’)
AS TIMECODE,
VW_NAMESPACE.WARNINGENABLE,
VW_NAMESPACE.WARNINGMINVALUE
FROM PE.VW_NAMESPACE VW_NAMESPACE
INNER JOIN
PE.PARAM_SUMMARIZATION_DATA_POINT PARAM_SUMMARIZATION_DATA_POINT
ON (VW_NAMESPACE.PARAMETER_INSTANCE_GUID =
PARAM_SUMMARIZATION_DATA_POINT.“PARAMETER”)
WHERE ( ( VW_NAMESPACE.PARAMETER_NAME = ‘Processor Count’
AND VW_NAMESPACE.ELEMENT_NAME =
‘Sample Manager / Strands / Payroll - STWH20’)
AND TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP
/ 24
/ 60
/ 60
/ 1000,
‘dd/mm/yy’) = ‘’ /*last month*/)
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Query returned 0 rows
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Start saving to Local Storage (‘TDP.STWH20’).
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Appending to an existing table ‘TDP.STWH20’.
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Saving to Local Storage is done.
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Script execution completed.
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Executing activity Loop_row_2 using row 3
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Connection description = BMC_Portal_Client
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Connection node found
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Connection to: BMC_Portal_Client
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Connection successful
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Connection logon: PE
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Last schema: PE
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Default schema: PE
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Auto Commit: True
– 01/09/2016 15:54:36: Extract_All_MonthlyData_temp.log: Executing script: SELECT VW_NAMESPACE.ELEMENT_NAME,
VW_NAMESPACE.ELEMENTSTATUS,
VW_NAMESPACE.APPLICATIONNAME,
VW_NAMESPACE.INSTANCENAME,
VW_NAMESPACE.PARAMETER_NAME,
VW_NAMESPACE.PARAMETER_UNIT,
PARAM_SUMMARIZATION_DATA_POINT.VALUESUM
/ PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT
AS VALUEAVG,
PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT,
TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP / 24 / 60 / 60 / 1000,
‘dd/mm/yy’)
AS TIMECODE,
VW_NAMESPACE.WARNINGENABLE,
VW_NAMESPACE.WARNINGMINVALUE
FROM PE.VW_NAMESPACE VW_NAMESPACE
INNER JOIN
PE.PARAM_SUMMARIZATION_DATA_POINT PARAM_SUMMARIZATION_DATA_POINT
ON (VW_NAMESPACE.PARAMETER_INSTANCE_GUID =
PARAM_SUMMARIZATION_DATA_POINT.“PARAMETER”)
WHERE ( ( VW_NAMESPACE.PARAMETER_NAME = ‘File System Capacity’
AND VW_NAMESPACE.ELEMENT_NAME =
‘Sample Manager / Strands / Payroll - STWH20’)
AND TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP
/ 24
/ 60
/ 60
/ 1000,
‘dd/mm/yy’) = ‘’ /*last month*/)
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Executed: SELECT VW_NAMESPACE.ELEMENT_NAME,
VW_NAMESPACE.ELEMENTSTATUS,
VW_NAMESPACE.APPLICATIONNAME,
VW_NAMESPACE.INSTANCENAME,
VW_NAMESPACE.PARAMETER_NAME,
VW_NAMESPACE.PARAMETER_UNIT,
PARAM_SUMMARIZATION_DATA_POINT.VALUESUM
/ PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT
AS VALUEAVG,
PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT,
TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP / 24 / 60 / 60 / 1000,
‘dd/mm/yy’)
AS TIMECODE,
VW_NAMESPACE.WARNINGENABLE,
VW_NAMESPACE.WARNINGMINVALUE
FROM PE.VW_NAMESPACE VW_NAMESPACE
INNER JOIN
PE.PARAM_SUMMARIZATION_DATA_POINT PARAM_SUMMARIZATION_DATA_POINT
ON (VW_NAMESPACE.PARAMETER_INSTANCE_GUID =
PARAM_SUMMARIZATION_DATA_POINT.“PARAMETER”)
WHERE ( ( VW_NAMESPACE.PARAMETER_NAME = ‘File System Capacity’
AND VW_NAMESPACE.ELEMENT_NAME =
‘Sample Manager / Strands / Payroll - STWH20’)
AND TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP
/ 24
/ 60
/ 60
/ 1000,
‘dd/mm/yy’) = ‘’ /*last month*/)
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Query returned 0 rows
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Start saving to Local Storage (‘TDP.STWH20’).
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Appending to an existing table ‘TDP.STWH20’.
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Saving to Local Storage is done.
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Script execution completed.
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Executing activity Loop_row_2 using row 4
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Connection description = BMC_Portal_Client
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Connection node found
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Connection to: BMC_Portal_Client
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Connection successful
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Connection logon: PE
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Last schema: PE
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Default schema: PE
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Auto Commit: True
– 01/09/2016 15:54:39: Extract_All_MonthlyData_temp.log: Executing script: SELECT VW_NAMESPACE.ELEMENT_NAME,
VW_NAMESPACE.ELEMENTSTATUS,
VW_NAMESPACE.APPLICATIONNAME,
VW_NAMESPACE.INSTANCENAME,
VW_NAMESPACE.PARAMETER_NAME,
VW_NAMESPACE.PARAMETER_UNIT,
PARAM_SUMMARIZATION_DATA_POINT.VALUESUM
/ PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT
AS VALUEAVG,
PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT,
TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP / 24 / 60 / 60 / 1000,
‘dd/mm/yy’)
AS TIMECODE,
VW_NAMESPACE.WARNINGENABLE,
VW_NAMESPACE.WARNINGMINVALUE
FROM PE.VW_NAMESPACE VW_NAMESPACE
INNER JOIN
PE.PARAM_SUMMARIZATION_DATA_POINT PARAM_SUMMARIZATION_DATA_POINT
ON (VW_NAMESPACE.PARAMETER_INSTANCE_GUID =
PARAM_SUMMARIZATION_DATA_POINT.“PARAMETER”)
WHERE ( ( VW_NAMESPACE.PARAMETER_NAME = ‘File System Space Available’
AND VW_NAMESPACE.ELEMENT_NAME =
‘Sample Manager / Strands / Payroll - STWH20’)
AND TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP
/ 24
/ 60
/ 60
/ 1000,
‘dd/mm/yy’) = ‘’ /*last month*/)
– 01/09/2016 15:54:40: Extract_All_MonthlyData_temp.log: Executed: SELECT VW_NAMESPACE.ELEMENT_NAME,
VW_NAMESPACE.ELEMENTSTATUS,
VW_NAMESPACE.APPLICATIONNAME,
VW_NAMESPACE.INSTANCENAME,
VW_NAMESPACE.PARAMETER_NAME,
VW_NAMESPACE.PARAMETER_UNIT,
PARAM_SUMMARIZATION_DATA_POINT.VALUESUM
/ PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT
AS VALUEAVG,
PARAM_SUMMARIZATION_DATA_POINT.VALUECOUNT,
TO_DATE (
TO_DATE (‘01-JAN-1970’, ‘dd-mon-yyyy’)
+ PARAM_SUMMARIZATION_DATA_POINT.TIMESTAMP / 24 / 60 / 60 / 1000,

Can you check whether you are low on disk space, and create some space if you are?

Also, clean out your temp dir. For any result set over 2500 rows we serialize to the temp dir. If we run out of space or have some conversion error, we might have silent issues.

One more thing: go to Help | About and access your App Data Dir. Empty the contents of the KnownExceptions.xml so it looks like this:

<?xml version="1.0" encoding="UTF-8" standalone="true"?>

Chad - did that VC++ redist help?

Guys, could you please try what Debbie suggested and then run the job again? After the error occurs, please send me the latest version of the ExceptionTrace.log and clarify the date and time at which the job started executing before it failed. It makes no sense to me that the blocked Connections.xml error is still present when I’ve applied an ignore-and-log fix for it… But if this error still occurs during execution (maybe on a different file?), it is definitely the root cause…

In the meantime I reviewed the code between the export template XML string pasted into the log and the “Adding Export Notification Subscriber” line, and I can say it is only a few lines of code that initialize the export engine with the logging option… That particular log entry is present only if you have Verbose logging enabled, which I believe you do, because you can see the export template XML string… The conclusion is that there is no command / line of code being executed between these two log entries… I’d say Toad has some background thread running which freezes it - could you please confirm whether you have any jobs running at the same time that overlap these jobs? I am not sure it is even possible to run multiple jobs within a single Toad process - if it is, and another job gets stuck within the same Toad process, that might be the root cause.

Spligosh: I see that you are using TDP 3.8 - have you tried 4.0? There is also this “special” one-off build that tries to address one specific error that hangs the job…

Debbie and Martin,

I did install the C++ redist and I have not noticed a change. I still have jobs that are hanging up. The next time this happens, I will work on timing and get you revised logs and info, etc…

Debbie, regarding the disk space, my server has plenty, so I don’t believe that to be an issue. Also, the temporary directory (which I’m assuming is where the .log files post) is pretty clean. I did delete some older stuff just in case, but it is pretty current. I’m not sure whether I need to set up a task to periodically remove all the .log files, but it appears that each day the .log files are just overwritten as the tasks run. Lastly, I did empty the contents of the KnownExceptions.xml file exactly as you instructed. I’ll watch for and monitor any changes since updating that xml.

Martin,

I will try again, as suggested, after making the changes Debbie suggested, and will get you further information if/when it happens again. Regarding your question about jobs running at the same time and overlapping, the answer is an absolute yes. Being in the IT world, a lot of our reporting/analytics has to run during certain windows, which means a lot of jobs will at least overlap. We try not to have many run at exactly the same time, but there is certainly overlap, as we have no choice. TDP definitely allows it, and that was one huge perk of getting this product: we needed something that could run jobs together, or overlapping, if needed.

I hope I can continue to provide y’all with more information as it comes. I very much appreciate your help trying to identify the root cause and hopefully a fix for the issues we are having. Please let me know if you need anything else in the meantime.

Chad

No, we haven’t tried 4.0 yet. This has been an ongoing issue for some time, but it has been very hard to prove because it hangs during a very long series of automations that begins running just after midnight on the 1st of each month. All our other connections appear to run fine, but it often seems to happen on our one Oracle connection, and often on large table extracts (almost like something times out!).

I have checked disk space and we have about 2GB free on the system drive (all our result sets are written to a separate drive) - is this enough?

No issues with the temp folder that I can see.

All our extracts run sequentially and hangs have happened without any other TDP processes running.

Martin,

FYI, I did email you the latest .log file for the issue, which happened again over the weekend. It is essentially the same lockup problem: I had to end the task this morning to shut it down and restart it. As mentioned in the email, the ExceptionTrace.log file has not had any updates since 9/2, so I don’t think it will provide any further insight. I’m not sure whether updating KnownExceptions.xml has anything to do with that, and I’m not sure there’s other information I can provide, but I do know the problem still appears to be present. Debbie and Martin, please let me know if there’s further information I can provide. Thanks…Chad

Thanks Chad,

Could you please do the following:

  1. open Help > About > Application Data Directory (i.e. c:\Users\xxx\AppData\Roaming\Quest Software\Toad Data Point 4.0)
  2. navigate to the ImportExport folder
  3. zip all log/text files in here and send them to me.
    Those files are log files that the export engine appends to during execution. Their contents are inserted into the automation log only after the engine finishes, and at the very end the files are deleted. That means if the export engine is in a deadlocked (hung) state, it never reaches the “delete file” step, which in this case means we can retrieve additional log entries and approximately determine where it gets stuck.

Previously you sent me the support bundle for the iSeries issue you mentioned before. It contains the contents of the ImportExport folder, which has 167 files dating from 18th December 2013 - that means you have had 167 deadlock (hang) states (or interruptions) between that date and 30th August 2016 (the day you sent it), just to mention it.

Next time it hangs, go to Task Manager, right-click the Toad process, and generate a dump file, then send it to me. It is a really ugly file, but it would help if the issue is a thread lock.

Debbie/Martin,

I have uploaded the .dmp file in this reply. Hope this helps some more. As mentioned in the email, this is a good example of a query that runs especially long and appears to simply lose its connection on the TDP side at some point (again, the connection can no longer be found on the AS400 side), but no error is thrown back, leaving the task running indefinitely until I have to forcibly end it. Let me know if you need anything else from me.

Thanks

Chad

One more thing to note, and at least remind y’all about, is that this task WILL run just fine when run interactively with TDP open. Even if it takes 2 hours to return the results of the query, it will still not give up the connection and will run successfully. This ONLY happens when running specifically through the task scheduler, which runs the automation task in batch mode. Just wanted to make sure I mentioned that. Thanks…

I apologize; I stand corrected. I’m actually noticing the automation task hanging and continuing to run even while running directly in Toad Data Point (not in batch). Thanks again.

Thank you, I was able to download the zip file