Importing a .csv into Local Storage: it reads the file, processes it, imports the rows, then cancels and rolls back.
Very simple, four-column import file: columns 1 and 2 are text, column 3 is an integer, column 4 is a percentage (in decimal form).
Update: Even though the process indicates that it didn't import anything... it turns out that it did import 59K rows.
59,000 rows (out of 750k)... is it the first 59k of the 750k?
Can you tell where (at what row) the import was cut off?
Yes, it does appear to be the first 59,755 rows of data. The next row in the import file appears to be normal...no strange characters or duplicates.
EDIT: Further checking reveals that the next record did have a duplicate value in the column that is the primary key. Thanks, Gary... back to step 1 to better cleanse the data source. This is what happens when you trust that another person did that! LOL
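For anyone hitting the same wall: a quick pre-import scan can pinpoint the exact row where the key column first repeats, which matches the cutoff behavior described above. This is a minimal sketch in plain Python (not a TDP feature); the function name and the assumption that the key is the first column are mine.

```python
import csv

def first_duplicate_key(path, key_col=0):
    """Scan a CSV and report the first repeated value in the key column.

    Assumes the primary key is column 0; pass key_col to override.
    Returns (key, first_line, duplicate_line) or None if keys are unique.
    Line numbers count the header as line 1, matching most editors.
    """
    seen = {}
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for line_no, row in enumerate(reader, start=2):
            key = row[key_col]
            if key in seen:
                return key, seen[key], line_no
            seen[key] = line_no
    return None
```

Running this before the import would have flagged the duplicate at row ~59,756 without waiting for the load to roll back.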
Glad we're getting farther along...
You may want to use the Data Profiling engine in TDP to identify other anomalies that may be present. For example, the Profiler would have picked up straight away that you had duplicate keys.
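If you want a lightweight sanity check outside of TDP, the same idea can be sketched in stdlib Python against the file layout described earlier (text, text, integer, decimal percentage). The function name and the 0.0-1.0 percentage range are assumptions for illustration, not anything the Profiler itself reports.

```python
import csv
from collections import Counter

def profile_csv(path):
    """Lightweight profile of the 4-column import file from this thread:
    cols 1-2 text, col 3 integer, col 4 percentage in decimal form.

    Returns (duplicate_keys, bad_int_lines, bad_pct_lines), where line
    numbers count the header as line 1.
    """
    key_counts = Counter()
    bad_ints, bad_pcts = [], []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for line_no, row in enumerate(reader, start=2):
            key_counts[row[0]] += 1
            try:
                int(row[2])
            except ValueError:
                bad_ints.append(line_no)
            try:
                # assumed valid range for a decimal percentage: 0.0-1.0
                if not 0.0 <= float(row[3]) <= 1.0:
                    bad_pcts.append(line_no)
            except ValueError:
                bad_pcts.append(line_no)
    dupes = {k: n for k, n in key_counts.items() if n > 1}
    return dupes, bad_ints, bad_pcts
```

Anything this flags (duplicate keys, non-numeric values, out-of-range percentages) is worth fixing at the source before re-running the import.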