Hi,
I have a text file with more than 3000 records. I am using the import wizard to import the text file. The wizard completes with a message stating 2154 rows were read and 2154 were imported. I am using TDA 2.5.1. I hope you can help me identify why not all records are being uploaded. Note: I loaded the text file into Excel and Access to confirm there was no problem with the file; both imported all records.
OPENORDR.TXT (2.08 MB)
I got the same result as you did at first, but if I go back and change my import settings it reads the file properly. The settings need to be:
Tab Delimited
String Quote Char: "
Uncheck Column names as header row
First data row: 1
See attached screenshot.
I think the main one is the string quote char. There are certain rows that are quoted with double quotes.
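If you want to double-check the parsing outside of the wizard, a quick sanity check is to read the file with the same settings. Here is a rough Python sketch (the encoding is an assumption; adjust it if your export uses something else):

import csv

# Mirror the wizard settings: tab delimited, string quote char ",
# no header row, first data row 1.
with open("OPENORDR.TXT", newline="", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter="\t", quotechar='"')
    rows = list(reader)

print(len(rows), "rows parsed")
print(sorted({len(r) for r in rows}), "distinct column count(s)")

If the row count here matches what Excel and Access report, the problem is only in the wizard settings.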
Debbie

Thanks a lot Debbie for your help. At first it did not work because I had not changed the processing row count to 500. Once I changed the processing row count it worked. Thanks again.
Hi,
I tried to import a text file with almost 5K rows into my Oracle connection, but not all the rows are imported. I am also selecting the option to create a file for the rows that are not imported, but the file is not created. In addition, the import error message says X out of X were imported; I believe it should say X out of Y were imported, where X is the number of imported rows and Y is the number of rows found in the text file. Am I doing something wrong?
Here is the log:
3:13:10 PM Thread (14) Import Started [11/03/2010 3:13:10 PM]
3:13:10 PM Thread (14) Processing “OPENORDR.TXT” into a new table OPS$SQLTIME.ORDERS
3:13:11 PM Thread (14) Droping Table
3:13:11 PM Thread (14) Creating table “OPS$SQLTIME”.ORDERS
3:13:11 PM Thread (14) CREATE TABLE “OPS$SQLTIME”.ORDERS (“Field1” NVARCHAR2(50),
“Field2” NVARCHAR2(50),
“Field3” NVARCHAR2(50),
“Field4” NVARCHAR2(50),
“Field5” NVARCHAR2(50),
“Field20” NVARCHAR2(50))
3:13:12 PM Thread (14) The rows failed importing will be written to file : S:\COMMON\Q2C Downloads\OPENORDR_rows_with_error.csv
3:13:12 PM Thread (14) Importing 1 out of 1 files
3:13:12 PM Thread (14) Reading from file OPENORDR.TXT
3:13:12 PM Thread (14) Inserting row 100 into table OPS$SQLTIME.ORDERS
3:13:13 PM Thread (14) Inserting row 200 into table OPS$SQLTIME.ORDERS
3:13:13 PM Thread (14) Inserting row 300 into table OPS$SQLTIME.ORDERS
3:13:13 PM Thread (14) Inserting row 400 into table OPS$SQLTIME.ORDERS
3:13:13 PM Thread (14) Inserting row 500 into table OPS$SQLTIME.ORDERS
3:13:13 PM Thread (14) Inserting row 600 into table OPS$SQLTIME.ORDERS
3:13:13 PM Thread (14) Inserting row 700 into table OPS$SQLTIME.ORDERS
3:13:13 PM Thread (14) Inserting row 800 into table OPS$SQLTIME.ORDERS
3:13:13 PM Thread (14) Inserting row 900 into table OPS$SQLTIME.ORDERS
3:13:14 PM Thread (14) Inserting row 1000 into table OPS$SQLTIME.ORDERS
3:13:14 PM Thread (14) Inserting row 1100 into table OPS$SQLTIME.ORDERS
3:13:14 PM Thread (14) Inserting row 1200 into table OPS$SQLTIME.ORDERS
3:13:14 PM Thread (14) Inserting row 1300 into table OPS$SQLTIME.ORDERS
3:13:14 PM Thread (14) Inserting row 1400 into table OPS$SQLTIME.ORDERS
3:13:14 PM Thread (14) Inserting row 1500 into table OPS$SQLTIME.ORDERS
3:13:14 PM Thread (14) Inserting row 1600 into table OPS$SQLTIME.ORDERS
3:13:14 PM Thread (14) Inserting row 1700 into table OPS$SQLTIME.ORDERS
3:13:15 PM Thread (14) Inserting row 1800 into table OPS$SQLTIME.ORDERS
3:13:15 PM Thread (14) Inserting row 1900 into table OPS$SQLTIME.ORDERS
3:13:15 PM Thread (14) Inserting row 2000 into table OPS$SQLTIME.ORDERS
3:13:15 PM Thread (14) Inserting row 2100 into table OPS$SQLTIME.ORDERS
3:13:15 PM Thread (14) Inserting row 2200 into table OPS$SQLTIME.ORDERS
3:13:15 PM Thread (14) Inserting row 2300 into table OPS$SQLTIME.ORDERS
3:13:15 PM Thread (14) Inserting row 2400 into table OPS$SQLTIME.ORDERS
3:13:15 PM Thread (14) Inserting row 2500 into table OPS$SQLTIME.ORDERS
3:13:15 PM Thread (14) Inserting row 2600 into table OPS$SQLTIME.ORDERS
3:13:16 PM Thread (14) Inserting row 2700 into table OPS$SQLTIME.ORDERS
3:13:16 PM Thread (14) Inserting row 2800 into table OPS$SQLTIME.ORDERS
3:13:16 PM Thread (14) Inserting row 2900 into table OPS$SQLTIME.ORDERS
3:13:16 PM Thread (14) Inserting row 3000 into table OPS$SQLTIME.ORDERS
3:13:16 PM Thread (14) Inserting row 3100 into table OPS$SQLTIME.ORDERS
3:13:16 PM Thread (14) Inserting row 3129 into table OPS$SQLTIME.ORDERS
3:13:16 PM Thread (14) Failed to parse row #30. Actual columns is 12 while 29 is expected.
Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
3:13:16 PM Thread (14) Nothing to discard, delete S:\COMMON\Q2C Downloads\OPENORDR_rows_with_error.csv
3:13:22 PM Thread (14)
Error(s) occured, 3129 out of 3129 rows were imported and have been committed.
3:13:23 PM Thread (14) Error importing data, please check file format options: Failed to parse row #30. Actual columns is 12 while 29 is expected.
Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
Your observations on the row count are correct.
What it looks like is that the import wizard either does not have the proper parsing settings, or the exported file has some errors in it.
If possible, the best thing to do is to send me the txt file so I can have a developer look under the hood. Send it directly to dpeabody@quest.com.
Otherwise, open up the file and confirm what the delimiter is and whether any fields are quoted. Then open the import wizard and confirm the settings match; if not, change them.
The errors below show that when we parsed the file there was not the correct number of columns. It says row 30, but that seems out of sequence.
Debbie
Okay, we have looked at the file. When parsing the file using the other delimiter "|", line 3131 has some errors in it. There are several ||||||| which are being read as empty columns, and this pushes the data over.
If you open the file in Excel or Access and go to line 3131 you can see this. You do need to scroll all the way over to the left.
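For anyone who hits the same thing and wants to find the bad line(s) before importing, a rough sketch along these lines will print any row whose column count is off. The expected count of 29 and the "|" delimiter come from the error above; adjust them for your own file:

import csv

EXPECTED_COLS = 29   # from the parse error above
DELIMITER = "|"      # delimiter used for this particular file

with open("OPENORDR.TXT", newline="", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter=DELIMITER, quotechar='"')
    for line_no, row in enumerate(reader, start=1):
        if len(row) != EXPECTED_COLS:
            print("line", line_no, "has", len(row), "columns")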
I entered CR79,567 to fix the following issues:
- Export log should use the correct row number for errors.
- Dump log should contain parsing row error data.
- Bad row data should be kept when possible.
I am not sure how easy item 3 is. I realize Access and Excel just take the bad row data and don't tell you it is not like the rest, but without doing that the file cannot be processed. We happen to be working on these types of items now, so we will come up with something. Thanks for bringing this to our attention.
Debbie
Hello Debbie,
I have a similar issue with TOAD DP 3.8. I am trying to import 41367 rows into a Teradata table. When the processing row count on the template is 10000, the total imported rows = 40000. When I decrease the row count to 500, the total imported rows are 41367 - 500 = 40867. How can I overcome this issue and import all rows with the maximum possible processing row count (block size)? I have tried with a Generic ODBC connection and with the Toad native TD connection; no difference. Any assistance will be appreciated.
When importing there are a couple of factors to keep in mind:
- We have a pre-scan that checks for simple failures and prevents them, e.g. a string value that is longer than the length of the varchar column, or text being inserted into a number column. These values are extracted and not imported.
- Depending on the type of provider, we try to insert in chunks and use whatever bulk processing the database has. Often a database will take a chunk of N rows, but if there is an insert error within the chunk of rows the database rejects all rows in that chunk (see the sketch after this list).
- The format of the input file is very important, especially when it comes to csv, etc. The data in the file needs to be parsable based on the parsing symbols. Above is an example where the parsing symbols could not be used.
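To make the second point concrete, here is a minimal sketch of that chunk-then-fallback pattern using a generic Python DB-API connection. The table name, column list, parameter style, and chunk size are placeholders for illustration, not TDP's actual implementation:

def import_rows(conn, rows, chunk_size=500):
    """Insert rows in chunks; if the database rejects a chunk, retry that
    chunk row by row so a single bad row does not discard the whole chunk."""
    # Placeholder statement; adjust table, columns, and paramstyle for your driver.
    sql = "INSERT INTO ORDERS (FIELD1, FIELD2, FIELD3) VALUES (?, ?, ?)"
    cur = conn.cursor()
    imported, rejected = 0, []
    for start in range(0, len(rows), chunk_size):
        chunk = rows[start:start + chunk_size]
        try:
            cur.executemany(sql, chunk)
            imported += len(chunk)
        except Exception:
            conn.rollback()  # the whole chunk was rejected
            for row in chunk:
                try:
                    cur.execute(sql, row)
                    imported += 1
                except Exception:
                    rejected.append(row)  # keep bad rows for an error file
        conn.commit()
    return imported, rejected

With a large chunk size, one bad row near the end can cost you the whole final chunk, which matches the counts you are seeing.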
Can I get a copy of your import files and your TDP Import Wizard template? You can send them to me through email. Include the type and version of the database you are connected to.