After Insert Trigger, Synchronous or Async
Connor McDonald

Thanks for the question, Jimson.

Asked: October 10, 2011 - 8:43 am UTC

Last updated: May 12, 2025 - 3:10 am UTC

Version: 11.2.0.2


You Asked

Hi Tom,

What I need to do is monitor a table for changes. For this, a trigger needs to be set up.

But my doubt is: if we create an after insert or update trigger on my table, for each row, and the trigger does some time-consuming task like DBMS_LOCK.SLEEP(60*1), will this block subsequent inserts?
I mean, will the next insert be able to run only after the sleep?

In short, is an after insert trigger blocking or not?
Will the cost of the trigger add to the cost of the insert statement?
I know a before statement trigger will. But it is really confusing for after insert.

Thanks in advance,
Jimson

and Tom said...

A trigger is not necessary - it is not the only approach. See Database Change Notification:

http://docs.oracle.com/cd/E11882_01/java.112/e16548/dbchgnf.htm#JJDBC28815



Why would you use dbms_lock or do anything time consuming in the trigger?

The after insert trigger has to finish for the session that triggered it to be able to continue. It will block the session that fired the trigger (but not other sessions).

If you want to do something long running - after a row has been committed (it doesn't make sense to burn a lot of time in a user transaction for something that might roll back anyway) - I would suggest using dbms_job. The trigger would simply queue up a job that would run right after the transaction commits. That way - you put off the long running thing (don't make people wait) and it (the long running, resource consuming thing) only runs if the transaction actually commits.
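A minimal sketch of that idea, assuming a hypothetical ORDERS table and a PROCESS_ORDER procedure (both names are illustrative, not from the original): the row trigger does nothing slow itself, it only submits a DBMS_JOB. Because DBMS_JOB is transactional, the job becomes runnable only after the insert's transaction commits, and it disappears if the transaction rolls back.

```sql
create or replace trigger orders_after_ins
after insert on orders
for each row
declare
  l_job binary_integer;
begin
  -- queue the long-running work; it runs in a background
  -- session shortly after this transaction commits
  dbms_job.submit(
    job  => l_job,
    what => 'process_order(' || :new.order_id || ');' );
end;
/
```

The insert returns immediately; the expensive work happens in the job queue, and only for transactions that actually commit.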


Comments

A reader, May 11, 2025 - 7:22 pm UTC

Hi, can you please advise if it's recommended to create jobs from triggers? Does this cause any performance issues in the case of bulk inserts?
Connor McDonald
May 12, 2025 - 3:10 am UTC

I would say "it depends" - a few things to consider:

1) If you use DBMS_JOB, then this is transactional, i.e., if the statement that fired the trigger fails, the jobs disappear without ever starting.

2) If you use DBMS_SCHEDULER, then it's not, which means even if the statement fails, those jobs are still there. That might be good ... it might be bad.

3) For bulk inserts, it strikes me as inefficient to launch a job for every row impacted. It might be a better option to assign a single sequence number to all rows touched and use an after-statement trigger to submit a single job to process all rows with that single sequence value.
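A sketch of that batching approach using a compound trigger, assuming a hypothetical ORDERS table with a BATCH_ID column, a BATCH_SEQ sequence, and a PROCESS_BATCH procedure (all names are illustrative): every row touched by one statement gets the same sequence value, and a single job is submitted per statement rather than per row.

```sql
create or replace trigger orders_batch_trg
for insert on orders
compound trigger
  l_batch number;

  before statement is
  begin
    l_batch := batch_seq.nextval;   -- one value for the whole statement
  end before statement;

  before each row is
  begin
    :new.batch_id := l_batch;       -- tag each inserted row with the batch id
  end before each row;

  after statement is
    l_job binary_integer;
  begin
    -- one job per statement, processing every row in this batch
    dbms_job.submit(
      job  => l_job,
      what => 'process_batch(' || l_batch || ');' );
  end after statement;
end;
/
```

PROCESS_BATCH would then select the rows WHERE BATCH_ID = the passed value, so a 10,000-row insert costs one job submission instead of 10,000.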
