I’m running an automation in TDP 3.6 that connects to a Teradata server. The script creates some volatile tables in the Teradata database and finally executes a select statement that pulls the data into a CSV file. All the code is included in a single “Select to File” task. The code itself runs fine, but when I schedule the automation, it runs and finishes a few seconds later, reported as successful by Windows Task Scheduler, yet the code was never actually executed.
I tried removing the volatile table code and running only a select statement, and the scheduled task works fine, so I guess there must be some kind of problem with the volatile tables. Is there a way to make it work?
Sorry that I wasn’t very clear. If I execute the automation script manually, it works. If I schedule the automation, it doesn’t run at all: it finishes a couple of seconds after it started, showing ‘finished successfully’ in Windows Task Scheduler. The script is quite big, so to simplify I’m pasting here a very simple example that doesn’t work for me:
CREATE VOLATILE TABLE table1 AS
(
SELECT
ORDER_NUM
,SUM(TXN_SIGN_NBR * REVN_AS_SLD_DISC_USD) AS Rev
from XXX
GROUP BY 1
)
WITH DATA NO PRIMARY INDEX
ON COMMIT PRESERVE ROWS;
CREATE VOLATILE TABLE table2 AS
(
select
ORDER_NUM
,SUM(TRU_QTy) AS TRU_QTy
from t2
GROUP BY 1
)
WITH DATA NO PRIMARY INDEX
ON COMMIT PRESERVE ROWS;
select
a.Order_Num
,a.Rev
,b.TRU_QTy
from table1 a
inner join table2 b
on a.order_num = b.order_num;
If, instead of volatile tables, I use only the select with subqueries, then the scheduled task works correctly.
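For reference, the subquery version looks roughly like this; it is just a sketch built from the simplified example above (same placeholder names XXX and t2):

-- Same result as the volatile-table version, but in a single statement
-- using derived tables instead of volatile tables
select
a.Order_Num
,a.Rev
,b.TRU_QTy
from
(
SELECT
ORDER_NUM
,SUM(TXN_SIGN_NBR * REVN_AS_SLD_DISC_USD) AS Rev
from XXX
GROUP BY 1
) a
inner join
(
select
ORDER_NUM
,SUM(TRU_QTy) AS TRU_QTy
from t2
GROUP BY 1
) b
on a.Order_Num = b.Order_Num;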
I inspected the script you sent to Aleksey and I was able to reproduce the issue. It is not related to Teradata. I’ll be creating a QAT to address this problem shortly. For now, I’m suggesting a workaround.
Please add an additional Execute Script activity and move all SQL statements from your Select to File activity into it, except the final select statement that actually exports the data to CSV. It worked for me this way; a sketch of the split is below.
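Applied to the simplified example above, the split would look roughly like this (note that volatile tables are session-scoped, so this relies on both activities reusing the same connection):

-- Execute Script activity: create the volatile tables
CREATE VOLATILE TABLE table1 AS
(
SELECT ORDER_NUM, SUM(TXN_SIGN_NBR * REVN_AS_SLD_DISC_USD) AS Rev
FROM XXX
GROUP BY 1
)
WITH DATA NO PRIMARY INDEX ON COMMIT PRESERVE ROWS;

CREATE VOLATILE TABLE table2 AS
(
SELECT ORDER_NUM, SUM(TRU_QTy) AS TRU_QTy
FROM t2
GROUP BY 1
)
WITH DATA NO PRIMARY INDEX ON COMMIT PRESERVE ROWS;

-- Select to File activity: only the final export select
SELECT a.Order_Num, a.Rev, b.TRU_QTy
FROM table1 a
INNER JOIN table2 b
ON a.Order_Num = b.Order_Num;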
Please let us know if the suggested workaround works for you as well.
As Aleksey suggested, I tried saving the file on the local computer, and it didn’t work for me.
I also tried separating it into two different steps: one Execute Script with the SQL statements and one Select to File with the final select, but I get an error in the select saying the tables I created in step 1 don’t exist. I guess that’s because they are volatile tables, so they are deleted when Toad disconnects and reconnects. So I used a Connection item and put the two tasks inside it to keep the same connection, but I get the same error saying the tables don’t exist.
Does your script not run at all? Toad has an annoying feature of checking your script before running, and if objects that are needed don’t exist at compile time, it won’t run. Some of that has been fixed; I don’t know about Teradata. Also, what version of Toad are you using?
Greg, the script runs fine when I click the ‘run’ button with the automation open. The problem I have is that when I schedule it to run automatically, I always get an error saying the file I want to export the select statement to is being used by another application (even if the file doesn’t exist). I’m using Toad 3.6.
Igor, what you suggested works only if I use the same editor window for both parts, as each window opens a different session in Teradata. Anyway, as I mentioned before, the script runs fine when I run it manually, and the problem only appears when I schedule it. So it looks like a small bug somewhere.
Aleksey told me that they are going to have a look, as it seems to be an issue on their side.
Appreciate your help and interest in solving this!
I played around and figured out that it is happening because Teradata opens many sessions to create the temp tables, and the scheduler is not able to talk to the session that created the temp table.
So what I did is convert the final statement so that it writes the data into a permanent table, and then export from this final table. This works without any issues. Try it and let us know.
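Something like this, as a sketch (MyDB.export_stage is a placeholder name; the column list matches the simplified example above):

-- Last statement of the Execute Script activity: land the join result
-- in a permanent table instead of exporting it directly
CREATE TABLE MyDB.export_stage AS
(
SELECT a.Order_Num, a.Rev, b.TRU_QTy
FROM table1 a
INNER JOIN table2 b
ON a.Order_Num = b.Order_Num
) WITH DATA;

-- Select to File activity: a permanent table is visible to any session
SELECT * FROM MyDB.export_stage;

-- Final Execute Script activity: clean up for the next run
DROP TABLE MyDB.export_stage;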
Yes, that solution is the one I’m using currently. I work with permanent tables, but as they are very big, I have to create and drop them on each execution, so I need to add more steps to the code. That’s why I would prefer doing it with volatile tables, which are created on the spot and deleted when the user disconnects. I hope it will be possible to do it.
Yes, that was one of the solutions I tried. The example I attached will work with your solution; however, the real case is much more complex and needs to be divided into separate statements.