Script generated with old version of the procedure

Hello,

To correct a bug, I made a change to a procedure stored in a package. I compiled it successfully and committed. Afterwards, I generated the script of the package to execute it on another database, but my script contains the old version of my procedure; my changes are not there.

I made these changes two days ago. When I view the body of the procedure, the changes are there, but not when I generate the script. It's very strange.

Has anyone else had this problem? How can I fix it?

Thanks for your help.

I’m not familiar with such an issue, but I can imagine how it might occur.

Second caveat: I'm not familiar with Team Coding, so I don't know
how it functions at the detailed level. As a result, the following question
could be totally out to lunch… perhaps even through the looking glass :-)

Do you perchance have team coding enabled so the source for the script is
actually external to the database?
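Either way, one quick sanity check is to query the data dictionary directly and see what source the database itself actually holds. This is just a sketch — the schema and package names below are placeholders for your own:

```sql
-- Show the stored source for the package body, straight from the dictionary.
-- Replace MY_SCHEMA / MY_PACKAGE with your actual owner and package name.
select text
from   all_source
where  owner = 'MY_SCHEMA'
and    name  = 'MY_PACKAGE'
and    type  = 'PACKAGE BODY'
order  by line;
```

If that shows your new code, the database is fine and the stale text is coming from wherever Toad builds the script.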

Roger S.

On a more intelligent note than my last response (I am still pretty asleep at
the moment, so the brain is pretty slow in functioning except when on
auto-response)

Or maybe this suggestion isn't any more intelligent :-)

What's the possibility that the owner of the package where you made the change
is different from the schema you are trying to generate the code from – a
schema that has an old copy of the code in it…
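A quick way to check for stray copies across schemas — again a sketch, with a placeholder package name:

```sql
-- List every schema that owns an object with this name,
-- along with when each copy was last changed.
select owner, object_name, object_type, status, last_ddl_time
from   all_objects
where  object_name = 'MY_PACKAGE'
and    object_type in ('PACKAGE', 'PACKAGE BODY')
order  by owner, object_type;
```

Two rows with different LAST_DDL_TIME values would point straight at an old copy sitting in another schema.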

Ok… I’ll hush up now and go back to sleep…

Roger S.

Hello,

I am creating a scheduled job to delete some old records in a very large table.
I need to delete records older than a given date, but also want to delete a max
number of records per scheduled job. Plus, I want to commit every 1000 records
deleted.

I am rather new to some of this, so please be kind about my lack of
knowledge. I have this package, which is called by the scheduled job, but I need
to alter it to accept a maximum number of rows plus a commit every 1000. Any help
appreciated.

CREATE OR REPLACE PACKAGE OCS_ADMIN."UB_VEHICLE_POSITION_CLEAN" is

  procedure VEHICLE_POSITION_CLEAN (numRows IN integer);

end UB_VEHICLE_POSITION_CLEAN;
/
CREATE OR REPLACE package body OCS_ADMIN.UB_VEHICLE_POSITION_CLEAN is

  procedure VEHICLE_POSITION_CLEAN(numRows IN integer) is
  begin
    delete from OCS_ADMIN.TP_VEHICLE_POSITION
    where gpstime
    commit;
  end VEHICLE_POSITION_CLEAN;

end UB_VEHICLE_POSITION_CLEAN;

Best Regards

Timothy Bryan

We are what we repeatedly do. Excellence, therefore, is not an act but a habit.

Aristotle

Message from: raks

Hello Roger,

Thanks for your answer. I don't use Team Coding in my Toad setup. I have
only one schema open when I generate the script. I have always done it this way
and never had any problem.

When I updated the other database to correct the bug, I used the
compare function of Toad, and of course Toad didn't detect any differences. So
when my customer tested the program, the bug was not corrected ^^

@Timothy: why did you ask this in this topic instead of creating a new one?
Your package body is not complete: in your where clause, you must make a
comparison with gpstime. For the job, you can create it with the graphical
interface of Toad and specify your procedure as the argument.
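For instance, the missing comparison might look something like this — the 30-day cutoff here is purely an assumption; the actual retention period is yours to choose:

```sql
delete from OCS_ADMIN.TP_VEHICLE_POSITION
where  gpstime < sysdate - 30;  -- keep only the last 30 days (assumed cutoff)
commit;
```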





Afternoon Timothy,

I assume that the table is huge and you cannot simply delete all rows
older than the given date without blowing away the UNDO tablespace? If
so, that's a problem for the DBAs to resolve, and resorting to "delete
1000 then commit" isn't really helpful or efficient.

Given also that you probably cannot do anything about the above and are
stuck with doing it as explained, the following might help:

Your code should do the following:

  1. Create a cursor that does something like "select ROWID from the table
     where some_date is older than your cutoff date", and fetch from it
     with a BULK COLLECT ... LIMIT 1000 clause.

That will pull the first 1000 rows (well, their ROWIDs) back into your
collection.

  2. Use a FORALL to "delete where rowid = the rowid from the collection"
     for those 1000 rows, then COMMIT. The cursor's ROWNUM predicate keeps
     the total at or below your max limit.

I have not tested the following:

create or replace procedure whatever(pMax in number, pDate in date) as

  type rowid_table is table of rowid;
  myRowids rowid_table;

  -- The cursor caps the total rows at pMax and only picks rows
  -- older than pDate.
  cursor c1(cpMax in number, cpDate in date) is
    select rowid
    from   some_table
    where  some_date < cpDate
    and    rownum <= cpMax;

begin
  open c1(pMax, pDate);

  loop
    fetch c1 bulk collect into myRowids limit 1000;
    exit when myRowids.count = 0;

    forall x in myRowids.first .. myRowids.last
      delete from some_table
      where rowid = myRowids(x);

    commit;
  end loop;

  close c1;
end;

It should work and the code will only ever delete as many rows as you
tell it to in batches of 1000.
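If you want to schedule it without the Toad GUI, a DBMS_SCHEDULER job along these lines should do — the job name, the nightly schedule, and the 10000-row cap are all assumptions to adjust:

```sql
begin
  dbms_scheduler.create_job(
    job_name        => 'PURGE_OLD_POSITIONS',                 -- assumed name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin whatever(10000, sysdate - 30); end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',                -- nightly at 02:00
    enabled         => true);
end;
/
```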


Cheers,
Norm. [TeamT]


Unfortunately I no longer have access to Toad, so I can't step through the
possibilities anymore. As a result, here are some possibilities:

Evening Timothy,

Ok, I have tested it and it works fine.

Cheers,
Norm. [TeamT]

On 24/02/12 14:06, Norman Dunbar wrote:
