Variable types and how to use them

What is the difference between using the “Execute Script” activity with “Save result set in variable” versus using “Set Variable” with the same query on an “SQL” variable type? Does either option allow the variable to represent a list of values? For example, could you gather a list of zip codes, set them to a variable, and then use the variable in a later query?

Is there any documentation on Toad Data Point Version 4.3.0.718 that goes into this level of detail?

For the variable, you can see the help documentation below.


To use variables in a later query, we have the Loop Data Set activity; please see the help documentation below.
You can also watch this video: https://blog.toadworld.com/using-the-loop-dataset-in-automation-in-toad-data-point
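For reference, this is roughly how a query inside the Loop Data Set activity refers to the current row, using the #ResultSetName.ColumnName# pattern; the result set name, table, and column here are only placeholders:

    -- Driving query of the Loop Data Set activity; result set named, say, Loop_ZIPS
    SELECT zip_code FROM customer_zips;

    -- Child query inside the loop: #Loop_ZIPS.zip_code# is replaced with the
    -- value from the current row on each iteration
    SELECT *
    FROM   orders
    WHERE  zip_code = '#Loop_ZIPS.zip_code#'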

If you have any problems using it, please let me know and give me the details.

Hi Sandy,

Thank you for the reply. I have reviewed the documents you noted above and do not find the answer to my question within. All the examples that I can find show a variable, whether a bind or otherwise, with a single value.

Is it not possible to set a variable to the results of a query and then use that variable in the where clause of another query using the “in” operator?

Thank you,
Brandon

An SQL variable is substituted with the value of the first field of the first row of the result set, in its default text representation.
For your case, you can define a string variable as below.


Use this variable in another query as below.
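In case the screenshots do not come through, here is a rough textual sketch of the same idea; the variable, table, and column names are only examples:

    -- Set Variable activity: string variable ZIP_LIST with a value such as
    --   '30301','30302','30303'

    -- Later query: Toad substitutes #ZIP_LIST# as plain text before the query runs
    SELECT *
    FROM   customers
    WHERE  zip_code IN (#ZIP_LIST#)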

Perhaps there is a different way to achieve my desired goal and I am going about it all wrong. Here is the functionality that I am looking for that will help me automate a great deal of work done by my team and others.

I need to be able to run a query against one database (PostgreSQL, SAP ASE) and save the results so that they can be used in a query against a separate database (Oracle). In my example, it does not work to loop through all the results, for several reasons. First, my result sets are very large (10,000+ rows) and I don’t want the second query to have to run 10,000+ times. Second, I often use the original data set in the ‘where’ clause of the second query (using the ‘IN’ operator), or I need access to the full data set rather than a single entry within it.

Thank you for your time and assistance!

You can use an Xquery (cross-connection query) to get the results and then use the Xquery in Automation.

Here is the example for your case.

  1. Go to Tools | Query Builder | Cross-Connection Query Builder and build the Xquery. My example uses Oracle and SQL Server.
  2. Go to Automation and define a string variable whose value is the subquery from the Xquery. My example is as below.
  3. Add the Select to File activity and input the Xquery with the defined variable (see the sketch below).
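In the sketch below, the variable name, connection aliases, and table and column names are only placeholders; the Cross-Connection Query Builder generates the actual qualified names when you build the Xquery.

    -- Step 2: string variable, e.g. ZIP_SUBQUERY, whose value is the subquery text:
    --   SELECT zip_code FROM POSTGRES_CONN.public.customer_zips

    -- Step 3: Xquery used in the Select to File activity; #ZIP_SUBQUERY# is
    -- substituted into the text before the cross-connection query runs
    SELECT o.*
    FROM   ORACLE_CONN.SALES.orders o
    WHERE  o.zip_code IN (#ZIP_SUBQUERY#)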

I hope this solution will help you.

Cross-connection queries are great if your data sets are not too large, but they tend to be very slow, so I prefer to go with your method. You will need a loop and a couple of variables. One variable will be a string used to concatenate the results inside the loop (you need to build your IN statement); you must create this variable outside of the loop. Inside the loop, concatenate a comma and each row’s result value onto the variable itself. The loop variable will be #ResultSetName.FieldNameFromYourQuery#. If you can group the query to get all the values needed with LISTAGG (Oracle) or STUFF (SQL Server), you might be able to do this without the loop, which is faster.

Once you have built the dynamic IN statement, the second database query can use WHERE ZipCodeField IN (#VariableName#). You will probably have to put the entire SQL query into a string variable, using #VariableName# as part of it, and then execute that SQL query string variable.
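A rough sketch of that setup; the variable, result set, and column names are made up, and the exact expression you type into the Set Variable Value activity may differ in your version:

    -- Before the loop: Set Variable activity, string variable IN_LIST,
    -- initialized to a harmless placeholder such as '00000' so there is no
    -- leading comma to trim later

    -- Inside the Loop Data Set (driving result set named, say, ZipResults):
    -- Set Variable Value activity appends the current row's value:
    --   #IN_LIST#,'#ZipResults.zip_code#'

    -- After the loop: query against the second database
    SELECT *
    FROM   orders
    WHERE  zip_code IN (#IN_LIST#)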

ListAgg example
https://docs.oracle.com/cd/E11882_01/server.112/e41084/functions089.htm#SQLRF30030
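Roughly what that looks like if the source side is Oracle; the table and column names are examples:

    -- Builds one comma-separated string of quoted zip codes
    SELECT LISTAGG('''' || zip_code || '''', ',')
             WITHIN GROUP (ORDER BY zip_code) AS zip_list
    FROM   customer_zips;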

STUFF with XML PATH example
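In case the link does not come through, the usual SQL Server pattern is roughly this; the table and column names are examples:

    -- Classic SQL Server string aggregation using STUFF + FOR XML PATH
    SELECT STUFF(
             (SELECT ',' + '''' + zip_code + ''''
              FROM   customer_zips
              FOR XML PATH('')),
             1, 1, '') AS zip_list;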

One gotcha: IN statements have limits (2,000 arguments, I think; it could be less depending on the system). You may have to write the results out to a file or import them into a temporary table for large result sets. Writing out to a file or table and then using that in the next query may still be faster than the cross-connection query.