Hello group (first time post here!),
Bought a TDP 3.6 license this morning to convert a few SQL Server 2008 tables to an Oracle 11g environment. These tables have ntext columns in SQL Server that I am converting to CLOB columns in Oracle. While comparing a table and then syncing the target to match the source, TDP throws the following error when generating the script. Has anyone seen this type of error? Any help would be appreciated.
Exception of type ‘System.OutOfMemoryException’ was thrown.
at System.String.InternalSubString(Int32 startIndex, Int32 length, Boolean fAlwaysCopy)
at System.String.InternalSubStringWithChecks(Int32 startIndex, Int32 length, Boolean fAlwaysCopy)
at Quest.DataCompare.PLOracle.OracleObjectWithColumns.generateBlob(SyncContext context, Object value, SynchOptions syncOptions, OracleColumn column, ScriptKeeper sk, Boolean& hasErrors, StringBuilder errors)
at Quest.DataCompare.PLOracle.OracleObjectWithColumns.ScriptLOBS(SyncContext context, SynchOptions syncOptions, ScriptKeeper sk, Boolean& hasErrors, StringBuilder errors)
at Quest.DataCompare.PL.ObjectWithColumns.ScriptInsert(SyncContext context, SynchOptions syncOptions, ScriptKeeper sk)
at Quest.DataCompare.PL.OperationScripter.Script(SynchronizationState state, Int32& scriptedTotal)
at Quest.DataCompare.PL.PLFasade.GenerateScript(ScriptKeeper script, Boolean synchForward, List`1 tablesToSync, RowsLimitType scriptingType, Int32 scriptingLimit, SynchOptions synchOptions, ICommandProgress progress)
at Quest.DataCompare.UI.SyncWizard.SyncGenerationScriptPage.GenerateScriptAndNotify(Object state)
I have seen out-of-memory errors when I have too many applications open and large result sets at the same time. Try it with Excel, Access, and any other large memory hogs closed. If that does not work, it could be that the structure of the job or query is beyond your PC’s ability. You may have to break it down into smaller chunks. Posting your job and SQL will help others evaluate what you can do.
Please try selecting "Take no action" on the synchronization wizard; on the next page, it will tell you where the sync script is generated. Copy that path and open the file after the synchronization wizard finishes. Please take Greg's suggestion as well and minimize memory usage from other applications.
Thanks Kiki & Greg for your input. I have over a million rows of data in these tables with ntext values, so doing that created a 900 MB SQL file, and then Toad itself had a hard time opening that file, not to mention it got stuck while running. Is there any other workaround? I appreciate it.
Can you break it up into smaller pieces with multiple loads? Like by date (a month at a time), or by some ID/sequence number (1000 at a time). You will need to play and see how much you can do at once without it failing.
Once you know how much you can move at a time you can create a job with a loop and a variable to process the whole thing and just run the job.
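Greg's loop-with-a-variable idea can be sketched in a few lines. This is just an illustration of the chunk math, not TDP's actual automation feature; the total row count and chunk size below are placeholders:

```python
def chunk_ranges(total_rows, chunk_size):
    """Yield inclusive (start, end) row ranges covering total_rows."""
    start = 1
    while start <= total_rows:
        end = min(start + chunk_size - 1, total_rows)
        yield (start, end)
        start = end + 1

# Each range would drive one load iteration of the automation job,
# e.g. via a filter like WHERE id BETWEEN start AND end.
for start, end in chunk_ranges(1_000_000, 200_000):
    print(f"load rows {start}..{end}")
```

In a TDP automation job, the loop variable would take the place of the hard-coded range, so the whole table gets processed one chunk at a time.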
Hi Greg, sorry for the delay in responding (I was off yesterday). Thanks for your suggestion; I was able to convert 200 rows at a time, but can't go beyond 200. You mentioned something about creating a job and defining a variable. Does TDP provide these features? Please let me know.
Thanks once again!
Yes. You need to look into the automation features of Toad (for me it is the single best feature of Toad). They have online tutorials, and Debbie Peabody (Toad Goddess) has a blog with an automation series that I highly recommend. Go here and start with Series #1:
Oh boy, it’s a whole different ballgame, but I will start looking into her blog posts. Thanks Greg.
Hey Greg, thanks for the kudos. Another place to access automation videos is from this link. It is a bit better organized. This link is also available in the product from the launch page.
I went back and read this whole thread. Can I ask a question? You need to move data from SQL Server to Oracle. I think I would have used the Cross-Platform import for initial movement. Then used data compare to check results. There shouldn’t be so many rows that the update script is running out of memory. I think that is the point to address.
See this Blog for importing from SQL Server to Oracle.
If you would like you can open up a support case on this and we can give more detailed support on this issue.
Have you tried using Toad’s Import Wizard? Start with your Oracle connection, then launch the Import Wizard, select the query option, and then at the top change the source from the Oracle connection to the SQL Server connection. Follow the rest to map the query results into an existing or new Oracle table. I always save my import template even if it is a one-time run, so if something is not quite right you can modify it instead of starting from the beginning again.
Thanks Debbie & Greg for your input, I will look into that blog and import wizard and will keep you posted.
The Import Wizard takes the data from one connection directly into the table on the other connection, so there is no large file being generated, read, and then loaded. So you may get around your size problem.
BTW Debbie, I have already opened Service Request # 2704501 with Dell support on this issue. I uploaded a few files, and the engineer was able to replicate the issue in his environment and forwarded it to the development team; not sure if you are aware of it.
Yes, I saw the support ticket today. Being able to handle large text files has never been something we do well. I would rather look at the task you want to accomplish. I am having my engineer build the Oracle and SQL tables and look at it from the importing aspect. I don’t think Data Compare should be used for that.
Hi Greg/Debbie, I would like to import a few large SQL Server tables into Oracle 11g databases. The SQL Server tables have ntext/varchar(max) datatypes that I am converting to CLOB in Oracle. I changed my approach, and instead of doing data synchronization, I started doing an import in TDP and am getting the following errors:
Error # 1:
Do you want to cancel the import? Click No to ignore the current error(s) and any additional errors and continue importing. Toad will not prompt you again if another error occurs.: Failed inserting from row #0 to #399 without detailed row number info : ORA-24381: error(s) in array DML (Remembered answer: “OK”. Enable)
Error # 2:
Do you want to cancel the import? Click No to ignore the current error(s) and any additional errors and continue importing. Toad will not prompt you again if another error occurs.: Failed inserting from row #0 to #136 without detailed row number info : Value length exceeds the parameter size (Remembered answer: “OK”. Enable)
Is there a workaround for this? Please also let me know if I can update the same SR.
Sounds like you have a data truncation error. The data in one of your source fields is bigger than the destination field.
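One quick way to confirm a truncation problem is to scan the source values for anything longer than the destination column before loading. A minimal sketch, assuming the rows are available as dicts; the column name `notes` and the 4000-character limit are illustrative assumptions, not from this thread:

```python
def find_oversized(rows, column, max_len=4000):
    """Return (row_index, length) for each value longer than the
    destination column can hold (hypothetical 4000-char limit)."""
    oversized = []
    for i, row in enumerate(rows):
        value = row.get(column) or ""
        if len(value) > max_len:
            oversized.append((i, len(value)))
    return oversized

sample = [{"notes": "short"}, {"notes": "x" * 5000}]
print(find_oversized(sample, "notes"))  # flags the second row
```

Any rows it flags would need a wider destination type (e.g. CLOB instead of a limited varchar) or must be trimmed before import.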
We looked at the scripts you sent support today and recreated a SQL Server table with varchar(max), then added a couple of rows with the large text you sent. You can do the cross-platform import by using Oracle's largest varchar size (varchar(4000)). I did not see any text coming from SQL Server in your scripts that was larger than that. I prefer this method.
If there is data larger than that, you can use the Data Compare sync script that you started out with, but put a WHERE condition on your Data Compare. That way you can do the syncing in smaller chunks.
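The WHERE conditions for each Data Compare pass could be generated ahead of time. A sketch under assumptions: `ID` is a hypothetical sequential key column on the table, and the range bounds are placeholders:

```python
def where_clauses(min_id, max_id, chunk_size):
    """Build one WHERE predicate per chunk of the key range,
    for use as successive Data Compare filters."""
    clauses = []
    start = min_id
    while start <= max_id:
        end = min(start + chunk_size - 1, max_id)
        clauses.append(f"ID BETWEEN {start} AND {end}")
        start = end + 1
    return clauses

for clause in where_clauses(1, 600, 200):
    print(clause)
```

Each generated predicate would be pasted into the Data Compare WHERE condition for one sync pass, keeping every script small enough to avoid the out-of-memory failure.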
Debbie, that’s what I started doing to convert the table. However, I am getting an issue with the column datatype; I hope you can suggest something. I appreciate it.
Failed importing 0 rows on step of
Failed inserting from row #0 to #399 without detailed row number info : ORA-24381: error(s) in array DML
9:22:45 PM Thread (43) Import canceled and inserted rows were rolled back.