Detecting database tampering: is this possible?

Long-time listener, first-time caller.

Say you have a database table that is responsible for logging user actions. The integrity of this log is important, so you want to know if anyone has changed the data in the table. To make things more interesting, also consider that the system is administered by an evil SQL admin who has full control over this miserable system.

How do you protect your data?

How do you detect that someone has tampered with your data?

You have unlimited tools at your disposal (e.g. hashing, encryption, etc.).

+66
database tampering
Nov 05 '09 at 20:40
25 answers

If you really need to detect that tampering has occurred, add a checksum field to the table. The checksum for each new row must incorporate the checksum of the previous row. Then, to verify the contents, walk through the dataset recalculating the checksums as you go. If a calculated checksum does not match the value stored in the table, then some value has been changed.

-Mike
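
A minimal sketch of this chained-checksum idea in Python (the column names, the seed value, and the in-memory row list are my own illustrations; in practice you would also key the hash with a secret the admin cannot read):

    import hashlib

    def row_checksum(prev_checksum: str, row: dict) -> str:
        """Checksum of a row, chained to the previous row's checksum."""
        payload = prev_checksum + "|" + "|".join(f"{k}={row[k]}" for k in sorted(row))
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    def verify_log(rows: list[dict], seed: str = "") -> bool:
        """Walk the table in insertion order, recomputing each chained checksum."""
        prev = seed
        for row in rows:
            data = {k: v for k, v in row.items() if k != "checksum"}
            if row_checksum(prev, data) != row["checksum"]:
                return False  # this row, or an earlier one, was altered or removed
            prev = row["checksum"]
        return True

Because each checksum folds in its predecessor, deleting or editing an old row breaks every checksum after it, not just one.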

+28
Nov 05 '09 at 20:46

If the "evil admin" does not have access to the application that populates the database, an additional column on each table holding a cryptographic signature over the rest of the columns will do the job. The "no access" condition is necessary so that they cannot simply retrieve your private key and sign their own fake data.

Edit: Ah, as the commenters note, I did not consider the administrator simply deleting a row. To cover that, you will also need one additional row holding a cryptographically signed row count, which you update on every write (or a signed hash of the rest of the table contents, or the last access time, or whatever indicator you choose).
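
A hedged sketch of the signature column plus the signed row count, using Ed25519 from the Python cryptography package (the column set and helper names are invented; the private key would live with the application, ideally in an HSM, and never on the database host):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()   # in practice, load from an HSM or keystore
    verify_key = signing_key.public_key()

    def sign_row(row_id: int, action: str, user: str) -> bytes:
        """Signature over the row's columns, stored in an extra 'signature' column."""
        return signing_key.sign(f"{row_id}|{action}|{user}".encode())

    def sign_row_count(count: int) -> bytes:
        """Signed row count, updated on every insert, so deleted rows are detectable."""
        return signing_key.sign(f"rowcount={count}".encode())

    # Verification: raises cryptography.exceptions.InvalidSignature if anything was altered.
    sig = sign_row(1, "login", "alice")
    verify_key.verify(sig, b"1|login|alice")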

+13
Nov 05 '09 at 20:44

If you really want to be safe, use Write Once, Read Many (WORM) media for this table.

+5
Nov 05 '09 at 20:46

Create a shadow table that stores hashes computed with a key/salt that only you and the application know. If you want to check for data tampering, re-hash the user table and compare it with the shadow table.
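
A minimal sketch of the shadow-table check using Python's standard hmac module (the table layouts and the way the key is loaded are assumptions; the key must never be stored in the database itself):

    import hashlib
    import hmac

    APP_SECRET = b"key known only to you and the application"  # hypothetical; load from app config

    def shadow_hash(row: dict) -> str:
        """Keyed hash of a row, stored in the shadow table."""
        msg = "|".join(f"{k}={row[k]}" for k in sorted(row))
        return hmac.new(APP_SECRET, msg.encode(), hashlib.sha256).hexdigest()

    def audit(user_rows: list[dict], shadow_rows: dict[int, str]) -> list[int]:
        """Re-hash the user table and return the ids whose hashes no longer match."""
        return [r["id"] for r in user_rows
                if not hmac.compare_digest(shadow_hash(r), shadow_rows.get(r["id"], ""))]

Without the key, the evil admin cannot forge matching shadow entries for rows he has changed.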

+4
Nov 05 '09 at 20:45

Just keep a paper journal with transaction IDs, and keep the printer in a room with only one key. Work with financial systems and you will find that many of them still rely on their paper copies. It's pretty hard to "hack" a paper journal unnoticed... This is why people keep insisting on paper trails in voting machines.

Many people say, "Just add another database," and although I have actually practiced that kind of logging myself, I don't believe in it. An attacker could defeat that safeguard in a dozen ways.

Everything we are doing here is trying to find a way to make it obvious that something happened. You will lose your logs. You cannot trust them: if I came across a system with a robust logging setup, I would either fill it with garbage data or simply destroy it outright. Do not fall into the Maginot Line mentality.

But if you have prepared enough overlapping safeguards, you can narrow the sabotage down to an internal source. You need to log all around the database: keep extensive system logs, monitor IP traffic, put a camera in the server room, leave a keylogger on the console, etc., etc. Even the best will slip up somewhere, and if you have enough mousetraps, you may just catch them.

+4
Nov 05 '09 at 21:25

Let's be clear: if you assume an "Evil Sysadmin", no cryptographic solution will prevent them from changing data in the system in an inconspicuous way. There are solutions that will prevent them from decrypting information, but nothing that can prevent them from writing new information in whatever form is convenient for them.

This situation leaves the following possibilities:

  • That the system not be entirely self-contained. If you can add another system that the Evil Sysadmin does not have access to, such as a logging host (for example, a syslog server), then suddenly the problem becomes a trivial case of shipping logs or hashes off-box on a regular basis.

  • That the system include some write-once component. The simplest, as others have suggested, are things like a printer, but you could also use write-once CDs or similar one-shot media. These become harder, though not impossible, to defeat if the Evil Sysadmin has physical access to the machine.

  • That you accept statistical likelihood rather than certainty. If #1 and #2 are impossible, the only remaining solution is tripwires: hidden traps designed to catch tampering, provided the Evil Sysadmin is not aware of the trap.

The secret to making #3 effective is tactical surprise. The goal is to give the attacker the impression that they know everything about the countermeasures, while in fact there is more they don't know about. In general, this requires at least two layers of cover: you need at least one layer of protection that you fully expect the Evil Sysadmin to compromise, because they will look for it, and if they don't find it, they will get suspicious and dig deeper until they do.

The important point is that this cover layer has to be convincing enough that, once the Evil Sysadmin finds it, they feel no need to keep looking. The second layer then detects the tampering by other means and raises an appropriate alarm. Various of the suggestions in this thread (checksums, triggers, shipped logs, etc.) could be used in either role. The lower the level your solution operates at, the more likely it is to succeed (i.e., patching the database source code is much less noticeable than a standard process that connects and runs queries; patching the kernel is less noticeable again, as is modifying the firmware).

It is important to emphasize that this is not a perfect solution. No matter how elaborate your setup is, someone may already have figured out or compromised enough of it to implement countermeasures. That caveat does not apply to #1 and #2 (done correctly). However, if the value of the information you are protecting is low enough that people with the necessary skills are not interested in going after it, it should provide workable protection.

+3
Nov 06 '09 at 0:37

You can use triggers to audit inserts, updates, and deletes. Now, if the "evil SQL administrator" disables the triggers, you have even bigger problems. I would not allow an evil administrator to have full control over the system if I wanted to protect my data.

+2
Nov 05 '09 at 20:43

I think this is a great question! But your scenario goes against the principles of database design.

Row checksums, triggers, exports to other databases: whatever you do can be compromised!

I can only offer something outside the box: would it help to apply some kind of standard, for example PCI compliance?

Failing that, I would suggest looking for another job! There is enough work in our industry where you do not have to work with people like this...

+2
Nov 05 '09 at 21:17

Consider creating automated off-site backups of your data. S3 is so cheap these days that you could set up a process like mysqldump to push your entire data store to a transatlantic backup. Do it every so often; how often depends on how evil your database administrator is.

To make this possible, simply find or stand up a machine on your network that the evil administrator does not know about, or would not think to look at even if she suspected something. The simplicity and elegance of a plug computer cannot be overstated here.

A note on the actual export mechanism: without knowing anything about your particular system, I suggested mysqldump or Oracle exp as the simplest, quick-and-dirty solution. If your application has a way of exporting data to its own format (for example XML, JSON, or even protocol buffers; in other words, any format that parts of the application, say in an SOA setup, use to communicate with each other), then that format can be used as the format of your dump.
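
As a rough illustration, a cron-driven sketch in Python, assuming MySQL, the boto3 library, and a bucket name of my own invention; the AWS credentials would live on the off-network box, out of the DBA's reach:

    import subprocess
    from datetime import datetime, timezone

    import boto3  # assumes AWS credentials are configured outside the evil admin's reach

    def dump_and_ship(db_name: str, bucket: str = "my-offsite-audit-bucket") -> None:
        """Dump the database with mysqldump and push the dump to an off-site S3 bucket."""
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        dump_file = f"/tmp/{db_name}-{stamp}.sql"
        with open(dump_file, "wb") as out:
            subprocess.run(["mysqldump", "--single-transaction", db_name],
                           stdout=out, check=True)
        boto3.client("s3").upload_file(dump_file, bucket, f"{db_name}/{stamp}.sql")

    # Run from cron on a machine the DBA does not control, e.g. every three hours.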

I have applied this approach to my gitosis setup. Every three hours, its contents are dumped to an S3 bucket in Europe. It is a poor man's backup of one VCS by another.

+2
Nov 06 '09 at 0:34

Configure the system to write its logging data to a remote system that the evil SQL administrator has no control over. This will not prevent said administrator from disabling or subverting your logging in the first place, but it will not allow him to change the logs after the fact.
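
A minimal sketch using Python's standard library syslog handler (the hostname and logger name are placeholders; the remote box must be one the admin cannot reach):

    import logging
    import logging.handlers

    # Hypothetical remote host; it must be a box the evil SQL admin cannot touch.
    remote = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
    audit_log = logging.getLogger("user-actions")
    audit_log.setLevel(logging.INFO)
    audit_log.addHandler(remote)

    def log_action(user: str, action: str) -> None:
        """Ship each user action off-box as it happens; there is nothing local to alter later."""
        audit_log.info("user=%s action=%s", user, action)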

+1
Nov 05 '09 at 20:43

This is a common data security issue. The simple answer: if you are in a situation where one "evil SQL administrator" has access to your entire environment, you have no way to protect your data.

Common practice for critical data is to log it to multiple protected locations and to ensure that no single person has permissions to all of them.

+1
Nov 05 '09 at 20:44

If your application is always running, you could open a transaction in the database and not release it until your application shuts down... that way nothing but your application can even look at the table...

Also, yes, encrypt all text/string data that goes into and out of your program, if you have time for it...

I also like BobbyShaftoe's answer... take it a little further and see if you can make a trigger that sleeps or something, so that after a few minutes all the records revert to what they were... our evil admin believes he has made changes, but they just come back.

+1
Nov 05 '09 at 20:46

First, be very careful about who you hire to administer your system.

Next, audit tables populated by triggers. Even if he bypasses the triggers for his own changes, you can at least see the data as it was before he changed it (especially from your backups).

Third, automated backups stored off-site. That way, even if the bad guy dumps the database and deletes the on-site backup, you still have a fallback. Make sure the off-site backup is not accessible to the database administrator; only someone without production rights on the database server should have rights to it.

Next, no direct rights to the tables for anyone but the admin. This means using stored procedures and no dynamic SQL. This at least prevents everyone else from tampering with the data. It is now harder for other accounts to commit fraud.

No production admin rights for anyone except the administrator, plus one other person as a backup. That way, if you find that a trigger has been changed, you know who did it. Now when things go wrong, you have only two suspects.

SQL Server 2008 has DDL triggers that tell you who made structural changes. Again, if a trigger did not record the change, it was by default made by the administrator.

Encrypt backups and sensitive personal data, making them harder to steal. Now it is more difficult for the person who transports the off-site copies to steal your data.

Fire any administrator who turns out to be untrustworthy, even if what he was untrustworthy about was not data. If he fakes a timesheet or steals office supplies, he will steal data. If he is arrested for some serious crime (and not just a traffic violation), you can suspend him while you wait to see whether the charge is proven.

When an administrator decides to move to another job, cut off his access to your systems from the moment he tells you he is leaving. If you fire him, this is especially important.

+1
Nov 05 '09 at 21:37

I found this article interesting; it may be a possible solution, although I have not found the time to try it or to think through possible exploits.




Off the top of my head, I could picture two different databases, with the "evil" sysadmin having access to only one of them.

One database would provide one-time pads to the other database, and would log which pad was requested and when. That pad, along with the current time and the row data, can then be hashed.

Thus, if the evil sysadmin changes something, the hash will not verify, and if he tries to re-hash, you have a log of what time things were supposed to have happened.
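
A toy sketch of that scheme in Python (the helper names, the in-memory pad log, and the exact digest layout are all assumptions):

    import hashlib
    import secrets
    import time

    def issue_pad(pad_log: list) -> tuple[bytes, float]:
        """Runs in the second database, which the evil sysadmin cannot reach."""
        pad, issued_at = secrets.token_bytes(32), time.time()
        pad_log.append((pad, issued_at))   # record which pad was requested and when
        return pad, issued_at

    def row_digest(pad: bytes, issued_at: float, row: str) -> str:
        """Hash of pad + time + row data, stored alongside the row in the main database."""
        return hashlib.sha256(pad + repr(issued_at).encode() + row.encode()).hexdigest()

    # To audit, the pad database replays its log and recomputes each digest; a mismatch,
    # or a digest whose timestamp does not match the pad log, indicates tampering.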

If the sysadmin can get at the times and the one-time pads, then this whole scheme falls apart.

This is a deceptively difficult problem; I'm not sure any protocol will actually work, but adding physical security and an audit trail would be a good idea.

+1
Nov 05 '09 at 23:40

If you want an automated approach, you first need to know which actions and contexts are valid for each type of user. This is quite difficult because, in the right context, a deletion is acceptable, but it is not for an everyday user.

I like the idea of backing up on paper; however, the amount of information generated can quickly get out of hand with a large user base and heavy database usage.

+1
Nov 06 '09 at 0:29

Every few hours, compute a hash of the contents of the table. Also record the start and end rows. For the second and subsequent hashes, hash both the contents of the entire table and the rows covered by the previous hash (a hash of a hash). If the previous hash and the verification hash do not match, the database table has been modified. Ideally these hashes would be emailed to you, so you can check whether the rogue administrator has gone through and regenerated all of them. I understand there is still a gap, but I do not think much more can be done (without removing their access) beyond this or what has already been mentioned.
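
A bare-bones sketch of the periodic digest plus email in Python (the SMTP relay, the addresses, and the chaining format are placeholders):

    import hashlib
    import smtplib
    from email.message import EmailMessage

    def table_digest(rows: list[str], previous_digest: str) -> str:
        """Hash of the whole table contents, chained with the previous digest (a hash of a hash)."""
        h = hashlib.sha256(previous_digest.encode())
        for row in rows:
            h.update(row.encode())
        return h.hexdigest()

    def mail_digest(digest: str, to_addr: str = "you@example.com") -> None:
        """Email the digest off-box so a regenerated chain can be spotted later."""
        msg = EmailMessage()
        msg["Subject"], msg["From"], msg["To"] = "table digest", "audit@example.com", to_addr
        msg.set_content(digest)
        with smtplib.SMTP("localhost") as smtp:   # hypothetical local relay
            smtp.send_message(msg)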

+1
Nov 06 '09 at 1:07

I like MikeMontana's solution, but I thought it was worth adding to it. Unfortunately, I cannot leave comments, so I am posting this as a new answer. Below is the original:

If you really need to detect that tampering has occurred, add a checksum field to the table. The checksum for each new row should include the checksum of the previous row. Then, to check the contents, walk through the dataset recalculating the checksums as you go. If a calculated checksum does not match the value in the table, then some value has been tampered with.

-Mike

Several people have rightly noted that the sysadmin could simply recompute the checksums (an even bigger problem if the code has to run on a server he controls), to which I add the following improvement:

When data is inserted into the table, it is encrypted with a public key, so anyone can add it to the database (useful if you have several people using it). Periodically, you decrypt the data using the private key and compute the checksum. If it differs, the database has been modified (which is what you wanted to detect). You then recompute the checksum and insert it into the table (encrypted with the public key, of course).

If an evil sysadmin tries to recompute a new checksum, he has to do it over encrypted data.

In addition, if you access this data remotely, this approach is not susceptible to man-in-the-middle attacks, since decryption and checksum calculation happen on a local machine. Intercepted data remains encrypted and therefore unusable.
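
A rough sketch of the encrypt-then-checksum idea with RSA-OAEP from the Python cryptography package (key management and the checksum storage format are glossed over; note that raw RSA-OAEP limits the per-row payload size, so a real system would use hybrid encryption):

    import hashlib

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def insert_encrypted(row: bytes) -> bytes:
        """Anyone holding the public key can add rows; only the key holder can read them."""
        return public_key.encrypt(row, oaep)

    def audit(ciphertexts: list[bytes], stored_checksum: str) -> bool:
        """Periodically decrypt with the private key and recompute the running checksum."""
        digest = hashlib.sha256()
        for ct in ciphertexts:
            digest.update(private_key.decrypt(ct, oaep))
        return digest.hexdigest() == stored_checksum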

The only drawback of this system is that tampering is not detected at the time of each individual transaction. You could address that by wrapping every insert as:

  • check checksum
  • insert data
  • recalculate checksum

but this removes the advantage of being able to insert data without having the private key available.

You could also tackle this problem in a different way, for which I would recommend:

Addressing the Trust Asymmetry Problem in Grid Computing with Encrypted Computation

by Peter Dinda

http://portal.acm.org/citation.cfm?id=1066656

but the implementation details get rather involved.

+1
Nov 06 '09 at 1:38

While there are some very good suggestions here, they will all bite the dust.

If you have an "untrusted" actor, the evil admin, as the custodian of your data, you cannot protect yourself. There are various schemes in network protocols and in the real world for protecting your data from an untrusted channel or courier. But as far as I know, there is nothing that can protect you from an untrusted custodian, as in "Hello, I am Mr. Madoff. I used to be chairman of the NASDAQ stock exchange, so you can trust me...".

+1
Nov 06 '09 at 2:23

Separation of duties / dual control.

I like the ideas that have been presented so far. I wanted to add my own 2 cents.

In the financial industry, separation of duties has been key to preventing any one person from being able to do complete evil. Our main processing system is the responsibility of our accounting department (bless their hearts), so we programmers really do not get much access to our live data.

Additionally, a third party records interactions with key parts of our system.

In general, no single person has enough control to subvert all of the checks and balances, which makes the payoff so low that it is (I hope) not worth the coordination required.

+1
Nov 06 '09 at 3:01

There are two interesting papers on this topic. One of them suggests using an HMAC algorithm. The other suggests using the Condensed-RSA scheme and the BGLS signature scheme.

Authentication and Integrity in Outsourced Databases

http://www.isoc.org/isoc/conferences/ndss/04/proceedings/Papers/Mykletun.pdf

A Generic Distortion Free Watermarking Technique for Relational Databases

http://www.dsi.unive.it/~cortesi/paperi/iciss09.pdf

I feel both approaches are debatable, depending on how much risk you are willing to accept. --Kiran.Kumar

+1
Mar 03 2018-11-11T00:

In addition to triggers for auditing, checksums, etc., you could look at database replication to a slave database that no one can perform any actions on directly.

You still have the risk that someone messes with the replication triggers, etc., but that would be very noticeable, so you could find out at what point the replication was tampered with.

0
Nov 05 '09 at 20:48

You can add a trigger that sends a copy of the data, as it is entered, to a non-production database to which the villainous administrator does not have access. The administrator can stop the trigger from firing, but the question was how to detect tampering, not how to prevent it.

0
Nov 05 '09 at 21:11

Since your evil admin has full control over the server, you probably need an external audit solution designed to monitor the activity of privileged SQL Server users.

Guardium make a network appliance that can record all query activity against the database or server, and it does so at the network level (including local connections), so nothing you do at the SQL Server level can prevent it.

This does not stop your evil administrator from modifying the table, but since it is a locked-down appliance, the evil administrator cannot modify the table and then convince the appliance to say that he did not.

0
Nov 08 '09 at 0:28

I found this thread while exploring how to implement just such a solution. One (very theoretical) solution I was thinking about is very similar to a perfect-forward-secrecy key system.

What I figured is that if you have a private/public key pair (call them K_pr and K_pb) and a pair of algorithms (A and B) such that:

A(K_pr) = K'_pr

B(K_pb) = K'_pb

(where K'_pr and K'_pb are a valid private/public key pair different from K_pr and K_pb)

Using this, you could sign each row in the database, discarding each private key after use but keeping the public key alongside the signature. You would then store the first public key somewhere the evil administrator could not plausibly change it (i.e. send it to everyone you know, print it in a newspaper, tattoo it on your face, all of the above).

It would then not be possible to re-sign any entry, since the private key no longer exists, and you can check that all of the public keys follow on from one another. There are only two drawbacks I could think of:

  • If the evil admin gets a copy of the current private key, he will be able to change any record from that point on. This can be mitigated by using a hardware module to create the signatures, so that the private key is never accessible from software.
  • The evil administrator will still be able to append data to your table.

The problem is that I do not know of a set of algorithms like the one I described. However, I am not a cryptographer, so such a thing may well exist.

EDIT:

After some more thought, I may have figured out a way to make this possible using existing tools. If you include the public key for the nth record in the (n-1)th record and its signature (which you can, because at the time of writing a record you have access to the next private key), then each record protects the previous one. Once a private key is deleted, its signature cannot be recreated, so as long as you have the first public key you can always verify the entire table. This also eliminates the need for "sequential" private keys; you can simply generate a new key pair for each row (although that would be rather expensive). The same drawbacks still apply.
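
A hedged sketch of this per-row key chain using Ed25519 from the Python cryptography package (the record layout and helper names are invented; durable storage and secure key destruction are left out):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def append_record(records: list, data: bytes,
                      current_key: Ed25519PrivateKey) -> Ed25519PrivateKey:
        """Sign (data + next public key) with the current key, then discard the current key."""
        next_key = Ed25519PrivateKey.generate()
        next_pub = next_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        records.append((data, next_pub, current_key.sign(data + next_pub)))
        return next_key   # the old private key simply goes out of scope for good

    def verify_chain(records: list, first_public_key: Ed25519PublicKey) -> bool:
        """Given only the widely published first public key, verify every record in order."""
        pub = first_public_key
        for data, next_pub, signature in records:
            try:
                pub.verify(signature, data + next_pub)
            except InvalidSignature:
                return False
            pub = Ed25519PublicKey.from_public_bytes(next_pub)
        return True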

0
Oct 22 '15 at 10:44

The 2016 answer to this question would be to use a blockchain. According to Wikipedia:

A blockchain resists tampering primarily by timestamping hashes of batches of recent valid transactions into "blocks", proving that the data must have existed at the time. Each block includes the prior timestamp, forming a chain of blocks, with each additional timestamp reinforcing the ones before it.
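
For illustration only, a toy hash chain in Python showing the mechanism the quote describes (this is not a real blockchain: there is no distribution or consensus, which is where much of the actual tamper resistance comes from):

    import hashlib
    import json
    import time

    def make_block(prev_block_hash: str, transactions: list[str]) -> dict:
        """Bundle recent transactions with a timestamp and the previous block's hash."""
        block = {
            "timestamp": time.time(),
            "prev_hash": prev_block_hash,
            "tx_hashes": [hashlib.sha256(tx.encode()).hexdigest() for tx in transactions],
        }
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    def verify_chain(blocks: list[dict]) -> bool:
        """Recompute every block hash; editing an old block breaks all later links."""
        for i, block in enumerate(blocks):
            body = {k: v for k, v in block.items() if k != "hash"}
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
                return False
            if i > 0 and block["prev_hash"] != blocks[i - 1]["hash"]:
                return False
        return True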

0
Jan 17 '16 at 6:18


