Creating a DB2 history table trigger

I want to create a history table to track field changes in multiple tables in DB2.

I know that a history table is usually implemented by copying the entire structure of the source table and giving it a suffix (for example, user → user_history). A fairly simple trigger can then copy the old record into the history table on UPDATE.

However, for my application this would use too much space. It doesn't seem wise (at least to me) to copy an entire record into another table every time a single field changes. So I thought I could instead have one common history table that tracks changes to individual fields:

    CREATE TABLE history (
        history_id   BIGINT GENERATED ALWAYS AS IDENTITY,
        record_id    INTEGER      NOT NULL,
        table_name   VARCHAR(32)  NOT NULL,
        field_name   VARCHAR(64)  NOT NULL,
        field_value  VARCHAR(1024),
        change_time  TIMESTAMP,
        PRIMARY KEY (history_id)
    );

OK, so every table that I want to track has a single auto-generated integer id as its primary key, which goes into the record_id field. The maximum VARCHAR size in my tables is 1024. Obviously, if a field of some other type is changed, it must be cast to VARCHAR before the record is inserted into the history table.

Now, this may be a completely wrong-headed way to do it (hey, tell me why if it is), but I think it's a good way to track changes that are rarely looked at but need to be kept for a considerable amount of time.

In any case, I need help writing the trigger that adds entries to the history table on update. Let's take, for example, a hypothetical user table:

    CREATE TABLE user (
        user_id        INTEGER      GENERATED ALWAYS AS IDENTITY,
        username       VARCHAR(32)  NOT NULL,
        first_name     VARCHAR(64)  NOT NULL,
        last_name      VARCHAR(64)  NOT NULL,
        email_address  VARCHAR(256) NOT NULL,
        PRIMARY KEY (user_id)
    );

So, can someone help me with an update trigger on the user table that inserts the changes into the history table? I imagine some procedural SQL would loop through the fields of the old record, compare them with the fields of the new record, and insert a row into the history table for each mismatch.

Ideally the same trigger code would work for every table, regardless of its fields, if that's possible.
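To make it concrete, something like this is what I imagine the result looking like for the user table. This is just my sketch, hand-written with one comparison per column, since I don't know of a way to iterate over columns generically inside a DB2 trigger:

```sql
-- Sketch only: per-column change logging into the generic history table.
CREATE TRIGGER user_history_upd
    AFTER UPDATE ON user
    REFERENCING OLD AS o NEW AS n
    FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
    IF o.username <> n.username THEN
        INSERT INTO history (record_id, table_name, field_name, field_value, change_time)
        VALUES (o.user_id, 'user', 'username', o.username, CURRENT TIMESTAMP);
    END IF;
    IF o.first_name <> n.first_name THEN
        INSERT INTO history (record_id, table_name, field_name, field_value, change_time)
        VALUES (o.user_id, 'user', 'first_name', o.first_name, CURRENT TIMESTAMP);
    END IF;
    -- ...repeat for the remaining columns; nullable columns would need
    -- COALESCE around both sides, and non-VARCHAR columns an explicit
    -- cast, e.g. VARCHAR(o.some_int_column)
END
```

That per-column boilerplate is exactly what I was hoping to avoid, hence the question.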

Thanks!

4 answers

I don't think this is a good idea, since you generate even more overhead for a wide table where more than one value changes per update. But it depends on your application.

Also, you should consider the practical value of such a history table. You have to collect a lot of rows just to see a changed value in context, and it requires writing another application that exists only to present this complex history logic to the end user. And for the DB admin it would be cumbersome to restore values from the history.

That may seem a little harsh, but it isn't meant to be. An experienced programmer at our shop had the same idea for a table. He got it up and running, but it ate disk space like there was no tomorrow.

Think about what the history table really needs to do.


Have you considered doing this as a two-step process? Implement a simple trigger that records the original and modified versions of the entire row. Then write a separate program that runs once a day and extracts the changed fields as you describe above.

This makes the trigger simpler, safer, faster, and you have more options for the post-processing phase.
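The step-1 trigger could be as small as this sketch. It assumes a user_audit table mirroring the user table plus a timestamp; the table and column names here are illustrative, not something the poster defined:

```sql
-- Sketch: dumb full-row copy of the pre-update state; the nightly job
-- later diffs consecutive rows to extract per-field changes.
CREATE TRIGGER user_audit_upd
    AFTER UPDATE ON user
    REFERENCING OLD AS o
    FOR EACH ROW MODE DB2SQL
    INSERT INTO user_audit (user_id, username, first_name, last_name,
                            email_address, audit_time)
    VALUES (o.user_id, o.username, o.first_name, o.last_name,
            o.email_address, CURRENT TIMESTAMP);
```

Because the trigger does no comparisons at all, there is almost nothing in it that can go wrong or slow down the update path.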


We do something similar in our SQL Server database, but with a separate audit table for each audited table (one central table would be huge, since our database is many gigabytes).

One thing you need to do is make sure you also record who made the change. You should also store the old and new values together (it makes restoring the data easier if you ever need to), and the type of change (insert, update, delete). You don't mention deleting records from a table, but we find deletes are what we most often use the audit tables for.
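An audit table along these lines might look like the following sketch. The names are illustrative only, not our actual schema:

```sql
-- Illustrative per-table audit structure: who, what, old and new value,
-- and the kind of change.
CREATE TABLE user_audit (
    audit_id     BIGINT GENERATED ALWAYS AS IDENTITY,
    user_id      INTEGER       NOT NULL,
    field_name   VARCHAR(64)   NOT NULL,
    old_value    VARCHAR(1024),
    new_value    VARCHAR(1024),
    change_type  CHAR(1)       NOT NULL,  -- 'I', 'U' or 'D'
    changed_by   VARCHAR(128)  NOT NULL,  -- e.g. populated from CURRENT USER
    change_time  TIMESTAMP     NOT NULL,
    PRIMARY KEY (audit_id)
);
```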

We use dynamic SQL to generate the code that creates the audit tables (driven by a table where the system information is stored), and all audit tables have the same structure (which simplifies getting the data back out).
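Our generator is SQL Server specific, but on DB2 the system information it reads from would presumably be the SYSCAT catalog views; a generator could start from something like this sketch:

```sql
-- Sketch: list the columns of the table to be audited, as input for
-- generating the audit DDL and trigger text.
SELECT colname, typename, length, colno
FROM syscat.columns
WHERE tabschema = 'COMMON'
  AND tabname   = 'TB_MAINTENANCE'
ORDER BY colno;
```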

When you write the code that stores data in your history table, also write the code to restore the data when needed. That will save you a lot of time when you have to restore something and senior management is pressuring you to do it right now.
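As a sketch of what such restore code can look like, here is one way to re-insert the most recently deleted row for a given user, assuming a full-row style audit table (user_audit, change_type and change_time are illustrative names):

```sql
-- Illustrative restore of a deleted row. OVERRIDING SYSTEM VALUE lets us
-- keep the original identity value of user_id.
INSERT INTO user (user_id, username, first_name, last_name, email_address)
OVERRIDING SYSTEM VALUE
SELECT a.user_id, a.username, a.first_name, a.last_name, a.email_address
FROM user_audit a
WHERE a.user_id = ?
  AND a.change_type = 'D'
  AND a.change_time = (SELECT MAX(b.change_time)
                       FROM user_audit b
                       WHERE b.user_id = a.user_id
                         AND b.change_type = 'D');
```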

Now, I don't know whether you plan to restore data from your history table, but once you have one, I can guarantee that management will want it used that way.

    CREATE TABLE HIST.TB_HISTORY (
        HIST_ID          BIGINT GENERATED ALWAYS AS IDENTITY
                         (START WITH 0, INCREMENT BY 1, NO CACHE) NOT NULL,
        HIST_COLUMNNAME  VARCHAR(128) NOT NULL,
        HIST_OLDVALUE    VARCHAR(255),
        HIST_NEWVALUE    VARCHAR(255),
        HIST_CHANGEDDATE TIMESTAMP NOT NULL,
        PRIMARY KEY (HIST_ID)
    )
    GO

    CREATE TRIGGER COMMON.TG_BANKCODE
        AFTER UPDATE OF FRD_BANKCODE ON COMMON.TB_MAINTENANCE
        REFERENCING OLD AS oldcol NEW AS newcol
        FOR EACH ROW MODE DB2SQL
        WHEN (COALESCE(newcol.FRD_BANKCODE, '#null#')
              <> COALESCE(oldcol.FRD_BANKCODE, '#null#'))
    BEGIN ATOMIC
        CALL FB_CHECKING.SP_FRAUDHISTORY_ON_DATACHANGED(
            newcol.FRD_FRAUDID,
            'FRD_BANKCODE',
            oldcol.FRD_BANKCODE,
            newcol.FRD_BANKCODE,
            newcol.FRD_UPDATEDBY
        );
        -- INSERT INTO FB_CHECKING.TB_FRAUDMAINHISTORY (
        --     HIST_COLUMNNAME, HIST_OLDVALUE, HIST_NEWVALUE, HIST_CHANGEDDATE
    END
    GO
