I have to pass thousands of values in the WHERE condition of a normal SQL query. How can I pass the values from Excel or Access into the query?
Is it a one-time task or a repeating task?
Currently it is a one-time task, but it may later become a quarterly repeating task. I tried running the query below but get a "command not properly ended" error each time.
OPENROWSET(INNER JOIN BULK 'C:\Users\rvijay\Desktop\Item.txt',
FORMATFILE = 'c:\myfileformat.txt'
) B ON E.item = B.item
Unfortunately no. I need some way for the WHERE clause values to be read from a .txt/.xls file or similar, as there are about 1 million item numbers in it.
So I want something like:
SELECT data FROM table WHERE item IN ('all values read from here, from an xls/txt file')
You don't say why you need this in a query like you mentioned, but with that much data it seems you would just import it into a table first, and then you can easily query it however you want. In Toad you can import data from Excel, Access, etc. using Import Table Data (Database > Import > Import Table Data). If you have to do it more than once, you could use the Automation Designer Import Table Data action to run it whenever you wanted (there is lots of documentation on Toad World about how to use Automation Designer, including in the help).
I knew you would ask this. The table I am querying is in the Production database, and I don't have rights to create a temp table there, so I had to take this route.
If you don't mind doing it "by hand", you can select your rows from your source database and turn that into an IN list ready to paste into SQL.
To do this, set the Export Dataset format to "Delimited Text", and check "Export as In list for SQL".
If you really want to do it as a bind variable, there are some packages you can find on the internet that can help you process an IN list as a bind variable.
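As a rough sketch of that bind-variable idea (this comma-splitting query is my own illustration, not one of those packages, and the table/column names are assumed from the thread):

```sql
-- Hedged sketch: split a comma-separated bind variable :csv into rows
-- and use the rows as an IN subquery (a common Oracle CONNECT BY
-- string-splitting idiom).
SELECT *
FROM item_supplier
WHERE item IN (
  SELECT REGEXP_SUBSTR(:csv, '[^,]+', 1, LEVEL)
  FROM dual
  CONNECT BY LEVEL <= REGEXP_COUNT(:csv, ',') + 1
);
```

Note that a single string bind has tight size limits, so this cannot scale anywhere near a million values either.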
I have created a test_data.txt file with 10 values (from 1-10). They are used in FileIterator1, which I called PARAMS:
Then I added a child ExecuteScript action, in which I put test_file.sql, with the content:
SELECT &1 FROM DUAL WHERE &1 > 4;
Then I defined its parameters with the %PARAMS% iteration list. The result is put in a separate file for each value (I use %PARAMS% as part of the file name).
When you run PARAMS, it will run 10 times (once for each line in the file) and pass that value to the SQL as a parameter.
Result will be 10 generated files:
Volume in drive C has no label.
Directory of c:\temp
06.06.2018 15:03 500 test_results_1.txt
06.06.2018 15:03 511 test_results_10.txt
06.06.2018 15:03 500 test_results_2.txt
06.06.2018 15:03 500 test_results_3.txt
06.06.2018 15:03 500 test_results_4.txt
06.06.2018 15:03 510 test_results_5.txt
06.06.2018 15:03 510 test_results_6.txt
06.06.2018 15:03 510 test_results_7.txt
06.06.2018 15:03 510 test_results_8.txt
06.06.2018 15:03 510 test_results_9.txt
10 File(s), 5,061 bytes
0 Dir(s), 49,391,239,168 bytes free
So you have your results in those files.
If you do not need file output, then just send the output to the clipboard.
I hope this helps
Damir, he wants all values from the list to be in one WHERE clause, not to iterate through them.
And I just noticed that there are over 1 million values in his Excel file.
Rahulvijay, your best option is probably to first import your Excel file into the database that you want to export from. Then change your query so that it joins ITEM_SUPPLIER with the newly imported table.
Even if you got the bind variable trick to work, that query would probably perform very badly because there are so many values. Remember, you can only have 1,000 items in an IN list, so you'd need to do this:
where item_no in (list of 1000 values)
or item_no in (list of another 1000 values)
…and so on, 1000 times, to get to 1 million.
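As an aside (a workaround I have seen elsewhere, not something from this thread): the 1000-literal limit (ORA-01795) applies to single-column IN lists, and wrapping each value in a tuple is sometimes used to sidestep it:

```sql
-- Hedged sketch: pairing each value with a dummy constant makes this a
-- multi-column IN list, which is not subject to the 1000-literal limit.
-- A million-literal statement would still parse and perform badly,
-- so this does not really solve the problem here.
WHERE (item_no, 0) IN ((101, 0), (102, 0), (103, 0) /* ... */)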
btw… OPENROWSET is a SQL Server command, and I don't think (I may be wrong here) that Oracle has an equivalent. Another possible way around this would be to have your DBA create a DB link to another Oracle database on which you do have rights to create tables. Then you could create a temp table as needed, or even set up an external table (the easiest way to read from a file directly). Just a thought.
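To illustrate the external table idea (the directory object, file layout, and names here are assumptions based on the file mentioned earlier; the DBA would need to create the directory and grant access):

```sql
-- Hedged sketch of an external table over the item file.
CREATE OR REPLACE DIRECTORY item_dir AS 'C:\Users\rvijay\Desktop';

CREATE TABLE item_ext (
  item NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY item_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('Item.txt')
);

-- The file can then be joined instead of pasted into an IN list:
SELECT e.*
FROM item_supplier e
JOIN item_ext b ON e.item = b.item;
```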
You are right, and for 1 million of anything Toad (Automation Designer) is not a good tool.
So my best proposal:
Import the values into one table, ordered by that value:
CREATE TABLE LIST_TABLE (col1 NUMBER) NOLOGGING PCTFREE 0 STORAGE (BUFFER_POOL KEEP) PARALLEL 8;
ALTER TABLE LIST_TABLE PARALLEL 1;
Create a unique index on that column:
CREATE UNIQUE INDEX LIST_TABLE_UQX ON LIST_TABLE (col1) STORAGE (BUFFER_POOL KEEP);
Gather statistics on that table along these lines:
BEGIN
  dbms_stats.gather_table_stats(
    OwnName          => 'XXXX'
   ,TabName          => 'LIST_TABLE'
   ,Estimate_Percent => 100
   ,Block_sample     => TRUE
   ,Method_Opt       => 'FOR ALL INDEXED COLUMNS SIZE 254'
   ,Degree           => 4
   ,Cascade          => TRUE
   ,No_Invalidate    => FALSE
   ,Force            => TRUE);
END;
Then query your data table in this way:
select /*+ USE_HASH(a) PARALLEL(a 8) */ *
from TABLE_DATA a
where EXISTS (select null from LIST_TABLE cb WHERE a.col1 = cb.col1);
I think this is the fastest way (better than using the IN operator).
I get it, but I am in a situation where the data I am querying is on the Production server, and I don't have rights to create a temp table, import the data, and use a join to make that happen.
So I have no option apart from passing the parameters from a file/csv into the query and running it.
I hope you can suggest a way, if that is possible and logical!
Of course, TABLE_DATA should have an index on col1… and range partitioning would be perfect.
How about sending a copy of both tables to another database?
Or…create a read-only database link on a test database that points to your two databases.
Then you can join the two tables without having to copy them.
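Concretely (a sketch with assumed names: a read-only link called prod_link back to production, and the imported list sitting in LIST_TABLE on the test database), the join over the link might look like:

```sql
-- Hedged sketch: join the production table, reached over a read-only
-- DB link, to the locally imported list of item numbers.
SELECT e.*
FROM item_supplier@prod_link e
JOIN list_table b
  ON e.item = b.col1;
```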