Protecting a system deployed in a "hostile" environment

In my company, we are developing a large system consisting of several servers. The system consists of approximately 5 logical components. Data is stored in XML, MS SQL and SQLite. It is (mostly) a .NET system; components interact using WCF and some custom UDP. Clients access the system mainly through custom UDP or over the web (ASP.NET and Silverlight).

Communication security is simple: some SSL, some WCF security, and we are done.

The main problem we are facing is that the system must be deployed on a client site — a client we do not necessarily trust. We need to protect the data on the servers and the software itself from reverse engineering. Both are important to us.

We also need a kill switch: something that will destroy the data and the software on command, or if it cannot call home for a certain period of time.

The direction I was thinking of is using a TPM or something similar — some kind of hardware-based encryption we could use to encrypt all the software and data on the servers, with the key delivered securely from our own site and, possibly, memory curtaining via the TPM.

How do you propose solving such a problem?


UPDATE 04/02 I am looking for practical suggestions or advice about products that can help, so I'm starting a bounty ...

Look, guys, we basically put our machine on the client site (for business and practical reasons). The client gets everything he pays for out of it, can spend hours alone with it, and can do whatever he wants with it. But the algorithms running on that machine and some of the data stored on it are our trade secrets, and those we want to protect. Ideally, I would like the machine not to work at all — not even boot — unless I say it's OK, and without my OK everything on the machine stays encrypted. Memory curtaining also looks like a good way to protect the machine at runtime.

Ideally, I would like the HDs and storage on all the machines to explode the moment someone approaches them with a screwdriver ... :-), but I guess that would be going too far ...


UPDATE 10/02 OK, after some research I think we will try something along the lines of the PS3 encryption scheme, except that the keys to decrypt the software and data will be supplied from our servers. That way we can decide, on our side, whether we trust the server requesting the keys, and we get the kill switch essentially for free: just let the machine be restarted without handing out keys. It will probably be based on a TPM or something similar, maybe Intel TXT ... I'm also very interested in memory curtaining as an important security feature ...
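Very roughly, the boot sequence I have in mind looks like this — just a sketch in C#: the key endpoint, file name and plain AES are placeholders, and the real thing would have our server check some TPM-backed attestation of the machine before answering:

    // Sketch: on boot, call home for the data key; if our server refuses or is
    // unreachable, the machine never gets the key and the payload stays encrypted.
    // The endpoint and file names below are made up for illustration.
    using System;
    using System.IO;
    using System.Net;
    using System.Security.Cryptography;

    static class BootUnlock
    {
        static byte[] FetchKey(string machineId)
        {
            using (var client = new WebClient())
            {
                // Our server decides whether it trusts this machine before answering.
                return client.DownloadData(
                    "https://keys.example.com/unlock?machine=" + machineId);
            }
        }

        static byte[] Decrypt(byte[] blob, byte[] key)
        {
            using (var aes = Aes.Create())
            {
                var iv = new byte[16];                    // first 16 bytes hold the IV
                Array.Copy(blob, iv, iv.Length);
                aes.Key = key;
                aes.IV = iv;
                using (var dec = aes.CreateDecryptor())
                    return dec.TransformFinalBlock(blob, iv.Length, blob.Length - iv.Length);
            }
        }

        static void Main()
        {
            byte[] key;
            try { key = FetchKey(Environment.MachineName); }
            catch (WebException) { return; }              // cannot call home: refuse to start

            byte[] payload = Decrypt(File.ReadAllBytes("payload.bin"), key);
            // ... hand the decrypted payload to the rest of the system, in memory only ...
            Array.Clear(key, 0, key.Length);              // drop the key as soon as possible
        }
    }

Restarting the machine (or us simply refusing to answer) then acts as the kill switch, since nothing readable is left on disk.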

By the way, we cannot solve this by moving the valuable parts of the system to our own site, both because of business requirements and because it is not technologically feasible — we would need huge bandwidth ....

+8
security encryption obfuscation tpm
7 answers

What you are asking for is essentially the holy grail. This is roughly equivalent to what is done for game consoles, where you have a trusted platform that works in an untrusted environment.

Consider whether you can treat the machine as compromised from day one. If you can work under that assumption things become much easier for you, but it does not sound terribly viable here.

In terms of securing this, there are several problems:

  • You must encrypt the file system and use hardware decryption
  • You must isolate your applications from each other, so that a security problem in one does not compromise the others.
  • You must plan for security breaches, which means mitigation strategies such as a secure hypervisor.

I know these points are rather vague, but this is essentially the story of game-console protection over the last several years — if you are interested in how it has been solved (and broken) over and over again, look at the console manufacturers.

It has never been completely successful, but you can raise the cost of entry significantly.

+10

... Honestly, it sounds like you are asking how to build a virus into your application, which makes me think your client probably has more reason not to trust you than the other way around.

That said, this is a terrible idea for a number of reasons:

  • What happens if their internet connection goes down, or they move offices and the machine is off the network for a while?
  • What if you make a mistake and it misfires, deleting the data even though the client was using it correctly?
  • I can only assume from this request that your application offers no backup options. Am I right? That sounds exactly like a product I would not buy.
  • How valuable is the data the application manages? If it gets wiped, what financial loss does that cause the customer? Has your legal department verified that you cannot be held liable?
+4

This question gets asked on SO two or three times a week, and the answer is always the same — anything you hand over to the user no longer belongs to you.

You can make it harder for the user to get at the data, but you cannot stop him completely. You can encrypt the data, you can keep the decryption key on a USB cryptotoken (which never reveals its secret key), but if your code can ask the cryptotoken to decrypt a piece of data, then a hacker can (in theory) duplicate your code and make the cryptotoken decrypt all of the data.
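To make that concrete, here is a sketch — the ICryptoToken interface below is hypothetical, standing in for whatever PKCS#11 or vendor API a real token exposes:

    // The token never reveals its private key, but Decrypt() can be driven by
    // whoever physically holds the token and can reach this code path.
    public interface ICryptoToken
    {
        byte[] Decrypt(byte[] ciphertext);   // performed inside the token hardware
    }

    public class DataStore
    {
        private readonly ICryptoToken token;
        public DataStore(ICryptoToken token) { this.token = token; }

        // Your application calls this to read one record at a time...
        public byte[] ReadRecord(byte[] encryptedRecord)
        {
            return token.Decrypt(encryptedRecord);
        }

        // ...but nothing stops an attacker who controls the machine from calling
        // the very same method (or replaying the same USB traffic) in a loop over
        // every encrypted record he has copied off the disk.
    }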

In practice, the task can be made complex enough that getting at the data becomes impractical. At that point you should weigh how valuable the decrypted data really is to the user.

About the kill switch: it does not work. Ever. If necessary, the user can copy everything and restore it from backup. He can change the computer clock. He can probably even slow the computer down (if the data is valuable enough to justify investing in custom emulation hardware).

About critical data: sometimes it turns out that your valuable asset is actually of little value to anyone else [and the same may hold for other parts of your solution]. Example: we ship the source code of our drivers. It is our most valuable asset, but users pay not for lines of code but for support, updates, and other benefits. A user could not make effective use of the [stolen] source code without investing an amount comparable to the cost of our license.

About obfuscation: virtualization of code fragments (for example, the VMProtect product) seems quite effective, but it can also be circumvented with some effort.

In general, I can imagine some special hardware with a custom operating system, sealed like an ATM (so the client cannot get inside without breaking the seal), with regular inspections, and so on. That might work. But then the task is not just technical — it is mostly organizational: you will need to inspect the machine regularly, etc.

To summarize: if the data is valuable, keep it on your servers and offer only an Internet connection. Otherwise, you can only minimize risks, and not completely avoid them.

+4

As everyone else has said, there is no magic bullet. The user can power the machine down, mount the HD as a slave in another machine, back everything up, reverse engineer your code, and then crack it. Once the user has physical access to the executable, it is potentially compromised, and nothing will stop him 100% of the time.

The best you can do is make a potential cracker's job as hellish as possible, but no matter what you do it will not be uncrackable.

Self-destructing when something looks wrong can be countered by an attacker who has already backed everything up.

Keeping the key on a USB dongle makes the cracker's life harder, but a competent, determined cracker can ultimately defeat it: the code that decrypts things cannot itself stay encrypted (including the part that receives the key), so that is a big weakness. Patching that part of the code to save the key somewhere else captures the key.

If the software authenticates against a remote server, that can be handled by attacking the client side and bypassing the authentication. If it receives the key from the server, the network can be sniffed to intercept the server traffic containing the key. If that traffic is encrypted, the cracker can still get at it by analyzing the software that decrypts it and catching the decrypted data.

In particular, all of this becomes much easier for an attacker who runs your software inside an emulator that can take memory snapshots (including a decrypted version of the algorithm). Easier still if he can manipulate and dump memory on the fly while your software is running.

If you do not expect your untrusted client to be very determined, you can simply make things harder and hope they never muster the energy and skill to break them.

The best solution, in my opinion, is to keep all the sensitive software on your trusted server and have their machine simply ask your server to do the work, so your algorithms stay on your side. This is far safer and simpler than anything else, since it removes the main problem: the user no longer has physical access to the algorithm. You should really think hard about doing this and eliminating the need to keep that code on the client. Even this is not unbreakable, though: a hacker can work out what the algorithm does by analyzing the outputs it produces for given inputs. In most scenarios (though it does not seem to be yours), the algorithm is not the most important thing in the system anyway — the data is.
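A minimal sketch of that shape (the endpoint and payload format are invented; in your WCF-based stack this would more naturally be a WCF service call): the box at the client only forwards inputs to your trusted server and receives results, so the algorithm itself never lands on the untrusted machine.

    using System.Net;

    // Runs on the untrusted machine: it contains no algorithm at all, it only
    // sends inputs to a service you host and hands back whatever comes out.
    static class RemoteAlgorithmClient
    {
        public static string Compute(string inputJson)
        {
            using (var client = new WebClient())
            {
                client.Headers[HttpRequestHeader.ContentType] = "application/json";
                // Hypothetical endpoint on the vendor's own trusted server.
                return client.UploadString("https://compute.example.com/api/run", inputJson);
            }
        }
    }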

So, if you really cannot avoid running the algorithm on the untrusted side, you cannot do much more than what you have already described: encrypt everything (preferably in hardware), authenticate and verify everything, and destroy the important data — before anyone thinks of backing it up — as soon as you suspect something is wrong and someone is trying to crack it.


BUT IF YOU REALLY WANT SOME IDEAS AND REALLY WANT TO DO THIS, HERE GOES:

I would suggest making your program a mutant. I.e.: whenever you decrypt your code, re-encrypt it with a different key and throw the old key away. Get the new key from your server, and make sure the key exchange is designed so that it is very hard to spoof the server with something that hands out compromised keys. Make sure every key is unique and never reused. Again, this is not unbreakable (and the first thing a cracker would do is attack exactly this feature).
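A rough sketch of that mutation step (C#; the file name and plain AES are placeholders, and obtaining the fresh key from your server is left as a parameter):

    using System;
    using System.IO;
    using System.Security.Cryptography;

    static class MutatingPayload
    {
        // Decrypt the payload with the current key, then immediately re-encrypt it
        // under a brand-new key and forget the old one, so yesterday's key (and a
        // backup of yesterday's encrypted file) is useless tomorrow.
        public static byte[] LoadAndMutate(byte[] currentKey, byte[] nextKey)
        {
            byte[] payload = DecryptFile("payload.bin", currentKey);
            EncryptFile("payload.bin", payload, nextKey);
            Array.Clear(currentKey, 0, currentKey.Length);   // discard the old key
            return payload;                                  // use it in memory only
        }

        static byte[] DecryptFile(string path, byte[] key)
        {
            byte[] blob = File.ReadAllBytes(path);
            using (var aes = Aes.Create())
            {
                var iv = new byte[16];                       // leading 16 bytes are the IV
                Array.Copy(blob, iv, iv.Length);
                aes.Key = key;
                aes.IV = iv;
                using (var dec = aes.CreateDecryptor())
                    return dec.TransformFinalBlock(blob, iv.Length, blob.Length - iv.Length);
            }
        }

        static void EncryptFile(string path, byte[] plain, byte[] key)
        {
            using (var aes = Aes.Create())
            {
                aes.Key = key;
                aes.GenerateIV();
                using (var enc = aes.CreateEncryptor())
                {
                    byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
                    byte[] blob = new byte[16 + cipher.Length];
                    Array.Copy(aes.IV, blob, 16);
                    Array.Copy(cipher, 0, blob, 16, cipher.Length);
                    File.WriteAllBytes(path, blob);
                }
            }
        }
    }

The nextKey is what you would fetch from your server on each run, which is also where the kill switch comes from: stop handing out keys and the next restart leaves the payload locked.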

One more thing: put in a lot of non-obvious red herrings — nonsensical, weird sanity checks, non-functional dummy versions of your algorithm, lots of convoluted code that effectively does nothing yet looks like it works just like the real thing. Make the real code do some things that look weird and nonsensical too. This makes debugging and reverse engineering even harder, because the cracker has to spend a lot of effort separating what is useful from what is junk.

EDIT: And obviously, make the junk parts of the code look more promising than the real ones, so the cracker looks there first and wastes time and patience. It goes without saying that everything should be obfuscated, so that even if the cracker gets hold of plain decrypted code to start from, it still looks confusing and very strange.

+2

I know others will probably pick holes in this solution — and feel free to do so, I do this sort of thing for a living and welcome the challenge! — but why not do the following:

  • Since you are explicitly on Windows, enable disk-lock protection on the hard drive with the maximum security settings. As I understand it, this helps mitigate people cloning the disk — if I am wrong, say so! — since its contents are encrypted and tied to that system's hardware.

  • Enable TPM on the hardware and configure it correctly for your software. This will help stop hardware sniffing.

  • Disable every account you do not use, and lock down the system accounts and groups to only what you need. Bonus points for setting up Active Directory and a secure VPN so you can reach the system remotely through a back door and check on it without an official on-site visit.

  • To raise the technical bar, write the software in C++ or some other natively compiled language instead of the way it is written now, since MSIL bytecode decompiles straight back to source with freely available tools, whereas decompiling native assembly takes more skill (even if it is still very doable with the right tools). Be sure to target every CPU instruction-set extension of the hardware you will be using, to complicate things further.

  • Have the software check the hardware profile (unique hardware ID) of the deployed system every so often. If it does not match — as in, the hardware has changed — have it destroy itself (a rough sketch of such a check follows this list).

  • After the hardware check, load your software from an encrypted binary image into an encrypted RAM disk, and only ever decrypt it in memory. Do not let it page out and do not keep it at a fixed memory address; both are bad ideas.

  • Be very careful that, once decryption is done, the keys are wiped from RAM, as some compilers will helpfully optimize away "pointless" bzero/memset calls and leave your key sitting in memory (a sketch of defensive key handling in .NET also follows this list).

  • Remember that cryptographic keys can be spotted in memory by how random they look compared with other memory blocks. To mitigate this, keep several dummy keys around which, if ever used, trigger the intrusion-detection-and-destruct routine. Since the memory holding the keys should never be paged out, this stops people from simply replaying the same dummy keys over and over. Bonus points if you can generate all the dummy keys randomly and the real key is different every run (thanks to the key-rotation point below), so they cannot just look for the one key that never changes .. because they all change.

  • Use polymorphic assembly code. Remember that assembly is just numbers, which can be rewritten on the fly based on the instructions and the state of the stack / what was called earlier. For example, on a plain i386, 0x0F97 (SETA — set byte if above) becomes the opposite instruction (SETB — set byte if below) simply by subtracting 5. Use your keys to seed the stack, and use the CPU L1/L2 cache if you really want to go hardcore.

  • Make sure your system knows the current date/time and checks that it falls within acceptable limits. Starting the window the day before deployment and giving it a 4-year limit lines up with the typical critical-failure window of hard drives under warranty/support, so you get this protection while leaving a comfortable interval between hardware refreshes. If the check fails, have it kill itself.

  • You can help catch users winding the clock back by making sure your pid file is touched with the current time every so often; comparing the last-modified time (both inside the encrypted data and in the file-system attributes) with the current time gives you an early-warning system if someone has been messing with the clock. On detecting a problem, it blows itself up.

  • All data files should be encrypted with a key that is rotated on your command. Set the system to rotate it at least once a week and on every reboot, and tie this into the update-from-your-server feature your software should have anyway.

  • All cryptography should follow the FIPS guidelines: use strong algorithms, use HMACs, and so on. Given your situation you should aim for the FIPS 140-2 Level 4 requirements, but for obvious economic and practical reasons some of them may be out of reach, and FIPS 140-2 Level 2 may be your realistic limit.

  • In every self-destruct case, have it call home first so you know immediately what happened.
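A rough sketch of the hardware-profile check mentioned in the list above (C#; the WMI properties queried and the way the expected fingerprint is stored are only illustrative — a real check would cover more components and hide itself far better):

    using System;
    using System.Linq;
    using System.Management;                 // add a reference to System.Management
    using System.Security.Cryptography;
    using System.Text;

    static class HardwareCheck
    {
        // Build a fingerprint from a few hardware identifiers read via WMI.
        public static string Fingerprint()
        {
            string cpu   = QueryFirst("Win32_Processor", "ProcessorId");
            string board = QueryFirst("Win32_BaseBoard", "SerialNumber");
            string disk  = QueryFirst("Win32_DiskDrive", "SerialNumber");
            using (var sha = SHA256.Create())
            {
                byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(cpu + board + disk));
                return Convert.ToBase64String(hash);
            }
        }

        static string QueryFirst(string wmiClass, string property)
        {
            using (var searcher = new ManagementObjectSearcher(
                       "SELECT " + property + " FROM " + wmiClass))
            {
                return searcher.Get().Cast<ManagementObject>()
                               .Select(o => Convert.ToString(o[property]))
                               .FirstOrDefault() ?? "";
            }
        }

        public static void VerifyOrSelfDestruct(string expectedFingerprint)
        {
            if (Fingerprint() != expectedFingerprint)
            {
                // Wrong hardware: wipe keys / trigger the kill procedure instead of running.
                Environment.FailFast("hardware profile mismatch");
            }
        }
    }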
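And on the key-wiping point: in .NET the bigger worry is the garbage collector relocating the array and leaving stray copies behind, rather than a compiler eliding memset, so a common pattern — a sketch, not a guarantee, since copies made before pinning or inside the crypto APIs are not covered — is to pin the buffer while the key is alive and overwrite it before letting go:

    using System;
    using System.Runtime.InteropServices;

    static class KeyHygiene
    {
        public static void UseKey(byte[] key, Action<byte[]> work)
        {
            // Pin the array so the GC cannot relocate it and leave stray copies behind.
            GCHandle handle = GCHandle.Alloc(key, GCHandleType.Pinned);
            try
            {
                work(key);
            }
            finally
            {
                Array.Clear(key, 0, key.Length);   // overwrite the key material...
                handle.Free();                     // ...before unpinning it
            }
        }
    }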

And finally, some non-software solutions:

  • If it cannot call home .. as the absolute last resort, have a custom hardware device attached to an internal serial/USB port, wired to trip a relay that sets off a thermite charge if it detects any case, hardware, or software tampering. Put it on top of the hard drives and put those on top of the motherboard. You will, however, need to check with your legal department about permits and so on, unless this is a U.S.-approved military situation (I assume you are in the U.S.).

  • To make sure the hardware is not tampered with, see the FIPS physical-security requirements for more on keeping the system physically secure. Bonus points if you can bolt/weld the modern rack gear you actually use inside an old AS/400 case as a disguise, to help deter anyone moving or tampering with the equipment. The younger guys will not know what to make of it and will worry about breaking the "crusty old thing", the older guys will just ask "wtf?", and most of them will leave blood behind on the usually sharp-edged case that you can later use as evidence of tampering if they do get into it — at least in my experience.

  • On an intrusion notification, nuke it from orbit .. it's the only way to be sure .;) Just make sure you have all the legal forms and access agreements filled out, so that legal is happy about the risk and liability ... Or set your notification system up to automatically email/text/phone people the moment you get word that it has gone off.

+2

"The only way to have a completely secure system is to smash it with a hammer."

However, you can make things hard enough for potential crackers that breaking in is more trouble than it is worth. If the machine is a black box they cannot really touch directly, only drive through your software, then physical access is its greatest threat. You can lock the cases and even fit a small, fragile element inside the case that breaks when the case is opened ... make sure your own staff always replace it ... it will tell you if someone has opened the box without permission (yes, it is an old teenager's trick, but it works). As for the box itself, physically disconnect any hardware (USB ports, for example) that you do not need.

If you are dealing with a machine that is not a black box, encrypt the hell out of everything ... 256-bit encryption is practically impossible to crack without the key ... so the whole game becomes getting at the key.

In theory, you could rotate the key periodically (by re-encrypting the data), with the only process able to recover it being one that talks directly to your (secure) servers.

In addition, log everything that happens on the box, especially anything in the software that falls outside normal use. Most of this cannot protect you from someone who is really, really determined ... but it can warn you that your system has been compromised (at which point you can sue whoever broke in).

As for the kill switch ... well, dormant time-bomb code does exist, but as already mentioned it can be tricked or go off by accident. I would suggest that instead of wiping itself clean when a breach is suspected, the system encrypts everything it can with a randomly generated key, sends that key to your servers (so you can undo the damage), and then "shreds" the file that held the key. (There are plenty of file shredders that destroy data thoroughly enough to make recovery (almost) impossible.)
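A sketch of that reversible kill switch (C#; the upload endpoint and file layout are invented, and the single overwrite pass here is the crudest stand-in for what a real file shredder does):

    using System;
    using System.IO;
    using System.Net;
    using System.Security.Cryptography;

    static class ReversibleKillSwitch
    {
        public static void Trigger(string dataDirectory, string localKeyFile)
        {
            // 1. Generate a fresh random key and send it home before doing anything else.
            byte[] panicKey = new byte[32];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(panicKey);
            using (var client = new WebClient())
                client.UploadData("https://home.example.com/panic-key", panicKey);

            // 2. Re-encrypt every data file under the panic key.
            foreach (string path in Directory.GetFiles(dataDirectory))
                EncryptInPlace(path, panicKey);

            // 3. Shred the file that held the old working key (one overwrite pass here;
            //    real shredders do several and also deal with file-system metadata).
            byte[] junk = new byte[new FileInfo(localKeyFile).Length];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(junk);
            File.WriteAllBytes(localKeyFile, junk);
            File.Delete(localKeyFile);

            Array.Clear(panicKey, 0, panicKey.Length);
        }

        static void EncryptInPlace(string path, byte[] key)
        {
            byte[] plain = File.ReadAllBytes(path);
            using (var aes = Aes.Create())
            {
                aes.Key = key;
                aes.GenerateIV();
                using (var enc = aes.CreateEncryptor())
                {
                    byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
                    byte[] blob = new byte[16 + cipher.Length];
                    Array.Copy(aes.IV, blob, 16);
                    Array.Copy(cipher, 0, blob, 16, cipher.Length);
                    File.WriteAllBytes(path, blob);
                }
            }
        }
    }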

+1

To summarize the answers: there is no "completely secure" solution to this problem, because that would require fully homomorphic encryption (which so far exists only as limited prototypes that require a ridiculous amount of computation).

In practice, what you need is a combination of proper security requirements engineering and measured countermeasures: assess the stakeholders, their interests, the valuable assets in the deployed system, the possible attacks, and the loss from each successful attack scenario versus the cost of protecting against it.

After that, you will either see that the protection is not really needed, or you will be able to deploy some reasonable measures and cover the remaining "holes" with legal means, or you will end up restructuring the whole system starting from the business model (unlikely, but possible too).

In general, security is a systems-engineering problem, and you should not limit yourself to purely technical approaches.

+1
