ORA-04091: table [blah] is mutating, trigger/function may not see it

I recently started working on a large, complex application, and I just hit this error:

ORA-04091: table SCMA.TBL1 is mutating, trigger/function may not see it
ORA-06512: at "SCMA.TRG_T1_TBL1_COL1", line 4
ORA-04088: error during execution of trigger 'SCMA.TRG_T1_TBL1_COL1'

The trigger in question looks like

  create or replace TRIGGER TRG_T1_TBL1_COL1
  BEFORE INSERT OR UPDATE OF t1_appnt_evnt_id ON TBL1
  FOR EACH ROW
  WHEN (NEW.t1_prnt_t1_pk is not null)
  DECLARE
    v_reassign_count number(20);
  BEGIN
    select count(t1_pk) INTO v_reassign_count from TBL1
      where t1_appnt_evnt_id = :new.t1_appnt_evnt_id
        and t1_prnt_t1_pk is not null;
    IF (v_reassign_count > 0) THEN
      RAISE_APPLICATION_ERROR(-20013, 'Multiple reassignments not allowed');
    END IF;
  END;

The table has a primary key, t1_pk , a "destination event id" column, t1_appnt_evnt_id , and another column, t1_prnt_t1_pk , which may or may not contain the t1_pk of another row.

As far as I can tell, the trigger is trying to make sure that no two rows with the same t1_appnt_evnt_id both refer to another row, i.e. that when this row points at another row, no other row with the same event id already does.

The DBA's comment on the error report says "Remove the trigger and do the check in code", but unfortunately they have a home-grown code generation framework sitting on top of Hibernate, so I can't even find where the write actually happens. So I'm hoping there is a way to make this trigger work. Is there?

oracle triggers hibernate ora-04091
4 answers

I think I disagree with your description of what the trigger is trying to do. It looks to me like it is designed to enforce this business rule: for a given value of t1_appnt_event, only one row can have a non-NULL value of t1_prnt_t1_pk at a time. (Whether they have the same value in the second column or not doesn't matter.)

Interestingly, it is defined for UPDATE OF t1_appnt_evnt_id but not for the other column, so I think someone could break the rule by updating the second column, unless there is a separate trigger for that column.

Perhaps you could create a function-based index that enforces this rule, which would let you get rid of the trigger entirely. I came up with one way, but it requires some assumptions:

  • The table has a numeric primary key
  • The primary key and t1_prnt_t1_pk are both always positive numbers

If these assumptions are true, you can create a function like this:

 create or replace function f( a number, b number ) return number deterministic as
 begin
   if a is null then return 0-b; else return a; end if;
 end;

and an index like this:

 CREATE UNIQUE INDEX my_index ON my_table ( t1_appnt_event, f( t1_prnt_t1_pk, primary_key_column) ); 

This way, rows where the PRNT column is NULL will appear in the index with the negated primary key as the second value, so they never conflict with each other. Rows where it is not NULL will use the actual (positive) value of the column. The only way you could get a constraint violation would be for two rows to have the same non-NULL values in both columns.

It may be overly clever, but it might solve your problem.
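To make the key mapping concrete, here is a small Python sketch (Python rather than SQL purely for illustration; the row values are made up) that mirrors the deterministic function and shows which combinations of rows would collide in the unique index:

```python
def f(a, b):
    """Mirror of the SQL function: NULL parent maps to the negated PK,
    a non-NULL parent maps to itself."""
    return -b if a is None else a

# Hypothetical rows: (pk, appnt_event, prnt_pk); prnt_pk may be None.
rows = [
    (1, 100, None),  # index key (100, -1)
    (2, 100, None),  # index key (100, -2) - NULL rows never clash
    (3, 100, 7),     # index key (100, 7)
]
keys = [(evt, f(prnt, pk)) for pk, evt, prnt in rows]
assert len(keys) == len(set(keys))  # all keys unique so far

# A second row with the same event and the same non-NULL parent collides:
rows.append((4, 100, 7))            # index key (100, 7) again
keys = [(evt, f(prnt, pk)) for pk, evt, prnt in rows]
assert len(set(keys)) < len(keys)   # duplicate key: the unique index rejects it
```

Note that under this mapping two rows only collide when both the event and the non-NULL parent value match, which is exactly the violation described above.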

Update from Paul Tomblin: I went with a variant of this idea that igor suggested in the comments:

  CREATE UNIQUE INDEX cappec_ccip_uniq_idx ON tbl1 (
    t1_appnt_event,
    CASE WHEN t1_prnt_t1_pk IS NOT NULL THEN 1 ELSE t1_pk END
  );

I agree with Dave that the desired result can probably, and should, be achieved with declarative constraints such as unique indexes (or unique constraints).

If you really need to work around the mutating table error, the usual way is to create a package holding a package-scoped variable that is a table of something that identifies the changed rows (I think ROWID is possible, otherwise you have to use the PK; I don't have Oracle at hand at the moment, so I can't verify it). A BEFORE statement trigger initializes the variable, the FOR EACH ROW trigger fills it with every row modified by the statement, and an AFTER statement trigger then reads the rows back and checks them.

Something like this (the syntax is probably not exact; I haven't worked with Oracle for several years):

 CREATE OR REPLACE PACKAGE trigger_pkg AS
   PROCEDURE before_stmt_trigger;
   PROCEDURE for_each_row_trigger(row_id IN ROWID);
   PROCEDURE after_stmt_trigger;
 END trigger_pkg;

 CREATE OR REPLACE PACKAGE BODY trigger_pkg AS
   TYPE rowid_tbl IS TABLE OF ROWID;
   modified_rows rowid_tbl;

   PROCEDURE before_stmt_trigger IS
   BEGIN
     modified_rows := rowid_tbl();
   END before_stmt_trigger;

   PROCEDURE for_each_row_trigger(row_id IN ROWID) IS
   BEGIN
     modified_rows.EXTEND;
     modified_rows(modified_rows.COUNT) := row_id;
   END for_each_row_trigger;

   PROCEDURE after_stmt_trigger IS
   BEGIN
     FOR i IN 1 .. modified_rows.COUNT LOOP
       SELECT ... INTO ... FROM the_table WHERE rowid = modified_rows(i);
       -- do whatever checks you want to
     END LOOP;
   END after_stmt_trigger;
 END trigger_pkg;

 CREATE OR REPLACE TRIGGER before_stmt_trigger
 BEFORE INSERT OR UPDATE ON mytable
 BEGIN
   trigger_pkg.before_stmt_trigger;
 END;

 CREATE OR REPLACE TRIGGER after_stmt_trigger
 AFTER INSERT OR UPDATE ON mytable
 BEGIN
   trigger_pkg.after_stmt_trigger;
 END;

 CREATE OR REPLACE TRIGGER for_each_row_trigger
 BEFORE INSERT OR UPDATE ON mytable
 FOR EACH ROW
 WHEN (new.mycolumn IS NOT NULL)
 BEGIN
   trigger_pkg.for_each_row_trigger(:new.rowid);
 END;

Any trigger-based (or application-code-based) solution needs locking to prevent data corruption in a multi-user environment. Even if your trigger worked, or was rewritten to avoid the mutating table problem, it would not prevent two users from concurrently updating t1_appnt_evnt_id to the same value on rows where t1_prnt_t1_pk is not null. Suppose there are currently no rows where t1_appnt_evnt_id = 123 and t1_prnt_t1_pk is not null:

 Session 1> update tbl1 set t1_appnt_evnt_id=123 where t1_prnt_t1_pk=456;
            /* OK, trigger sees count of 0 */
 Session 2> update tbl1 set t1_appnt_evnt_id=123 where t1_prnt_t1_pk=789;
            /* OK, trigger sees count of 0 because session 1 hasn't committed yet */
 Session 1> commit;
 Session 2> commit;

You now have corrupted data!
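The race can be sketched outside the database. This toy Python model (all names invented, a single list standing in for committed rows) shows why both sessions' checks pass under read consistency even though the combined result violates the rule:

```python
# Rows visible to every session: (pk, appnt_evnt_id, prnt_pk).
committed = []

def trigger_check(snapshot, new_evt):
    """Mimics the trigger's SELECT COUNT(*): passes if no visible row
    already has this event id with a non-NULL parent."""
    return sum(1 for _, evt, prnt in snapshot
               if evt == new_evt and prnt is not None) == 0

# Both sessions take their read-consistent snapshot before either commits.
snap1 = list(committed)
snap2 = list(committed)

assert trigger_check(snap1, 123)  # session 1: count is 0, allowed
assert trigger_check(snap2, 123)  # session 2: also 0 - session 1 not committed

committed.append((456, 123, 1))   # session 1 commits
committed.append((789, 123, 2))   # session 2 commits

# The business rule is now violated even though both checks passed.
violations = sum(1 for _, evt, prnt in committed
                 if evt == 123 and prnt is not None)
assert violations == 2
```

A unique index does not have this problem, because the index enforcement itself serializes the conflicting writes.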

The way to prevent this (in a trigger or in application code) is to lock the parent row referenced by t1_appnt_evnt_id = 123 before performing the check:

 select appe_id into v_app_id from parent_table where appe_id = :new.t1_appnt_evnt_id for update; 

Session 2's trigger would then have to wait for session 1 to commit or roll back before performing its check.

It would be much simpler and safer to implement Dave Costa's index!

Finally, I'm glad nobody suggested adding PRAGMA AUTONOMOUS_TRANSACTION to your trigger: this is often suggested on forums and appears to work around the mutating table problem, but it degrades data integrity even further! So just don't...


I had a similar error with Hibernate, and flushing the session with

 getHibernateTemplate().saveOrUpdate(o);
 getHibernateTemplate().flush();

solved the problem for me. (I'm not posting my code, since I was sure it was written correctly and should have worked, but it didn't until I added the flush() call.) Maybe this helps someone.

