Black-hat knowledge for white-hat programmers

Non-programmers are often skeptical when honest developers study the techniques of malicious hackers. Yet we clearly need to learn many of their tricks in order to keep our own software secure.

To what extent do you think an honest programmer should know the methods of malicious programmers?

+70
security
Apr 21 '09 at 13:29
21 answers

I'm coming to this one late, since I only just heard about it on the podcast. However, I'll offer my opinion as someone who worked on the security team of a software company.

We took developer education very seriously, and we gave as many development teams as possible basic training in secure development. Thinking about security really requires a shift from the normal development mindset, so we tried to get developers thinking about how things break. One of the props we used was one of those home safes with a numeric keypad. We let developers study it inside and out to try to find a way to crack it. (The solution was to hold down the handle while giving the safe a sharp bash from above, which made the bolt bounce off its spring in the solenoid.) Although we wouldn't teach them specific black-hat exploitation techniques, we would talk about the implementation errors that cause those vulnerabilities, especially things they might not have encountered before, such as integer overflows or compilers optimizing away function calls (for example, a memset meant to clear a password). We published a monthly security newsletter that invited developers to spot security-related bugs in small code samples, which certainly showed them how much they would otherwise miss.

We also sought to follow the Microsoft Security Development Lifecycle, which involves getting developers together to discuss their product's architecture, identify its assets, and work out the possible ways of attacking those assets.

As for the security team itself, which was mostly former developers, understanding the black hats was very important to us. One of the things we were responsible for was handling security alerts from third parties, and knowing how difficult it would be for a black hat to exploit a given weakness was an important part of the triage and investigation process. And yes, that sometimes meant stepping through the debugger to calculate the memory offsets of vulnerable routines and patch binary executables.

The real problem is that much of this was beyond the developers' abilities. Any reasonably sized company will have many developers who are good enough at writing code but simply don't have the security mindset. So my answer to your question is this: expecting all developers to have black-hat knowledge would be an unwelcome and fatal burden, but someone in your company should have that knowledge, whether it's a security audit and response team or just the senior developers.

+37
May 04 '09 at 1:32 a.m.

At the end of the day, nothing the "black hats" know is criminal knowledge in itself; it's how the knowledge is applied. A deep understanding of any technology is valuable to a programmer; it's how we get the best out of a system. You can get by without that depth these days, since we have ever more frameworks, libraries, and components that were written using such knowledge precisely to save you from having to know everything, but it's still good to dig around from time to time.

+43
Apr 21 '09 at 13:39

I'll be a little heretical, go out on a limb, and say:

  • You really need to talk to the sysadmins and network admins who protect their machines. These people deal with break-in attempts daily and are always looking out for the exploits that will be used against them. For the most part, ignore the "motivational" aspect of how attackers think, since the days of "hacking for fame" are long gone. Focus on the methodology. A competent admin can demonstrate it easily.

When you write a program, you imagine (hopefully) a seamless, sleek interface to ${whatever-accepts-your-program's-I/O}. That may be an end user, or it may be another process on another machine, but it doesn't matter. ALWAYS assume that the "client" of your application is potentially hostile, whether it's a machine or a person.

Don't believe me? Try writing a small application that takes orders from salespeople, then add a business rule that the application must enforce but that the salespeople constantly try to get around so they can earn more money. This small exercise alone will demonstrate how a motivated attacker (in this case, the intended end user) will actively seek ways to either exploit flaws in the logic or game the system in other ways. And these are trusted end users!

Multiplayer online games are in a constant war against cheaters, because the server software usually trusts the client; and in every case the client can and will be hacked, with players gaming the system as a result. Think about it: these are people who are just enjoying themselves, and they will go to extreme lengths to get the upper hand in an activity that doesn't even involve money.

Now imagine the motivation of a professional bot herder who earns a living this way... writing malware so they can use other people's machines as income generators, selling their botnets to the highest bidder for massive spam floods... yes, this really happens.

Regardless of the motivation, the point remains: your program can, and at some point will, be attacked. It's not enough to protect against buffer overflows, stack smashing, stack execution (data loaded onto the stack as "code", then a return that unwinds the stack and executes it), data execution, cross-site scripting, privilege escalation, race conditions, or the other "software" attacks, although it helps. In addition to your "standard" software defenses, you also need to think about trust, verification, identity, and credentials; in other words, about whatever supplies your program's input and whatever consumes its output. For example, how do you defend against DNS poisoning from a software point of view? And sometimes you can't solve it in code at all: getting your end users not to reveal their passwords to coworkers is one example.

Roll these concepts into a security methodology, not a "technology". Security is a process, not a product. Once you start thinking about the "other side" of your program and the methods you can use to mitigate those problems, it becomes much clearer what can go right and what can go horribly wrong.

+38
Apr 21 '09 at 14:01

To a large extent. You need to think like a criminal, or you're not paranoid enough.

+19
Apr 21 '09 at 13:31

To what extent do you think an honest programmer should know the methods of malicious programmers?

You need to know more than them.

+18
Apr 21 '09 at 13:38

I work as a security guy, not a developer, and based on my experience I can simply say that you can't learn as much as a black hat or a professional white hat unless it becomes your second profession. It takes too much time.

The most important bit, though, is to see some bad guys or professionals in action and understand the possibilities and the impact of insecure code.

Thus, by studying only a few tricks, a developer can end up with a "false sense of security" because he or she cannot crack something, while a more experienced attacker could crack the same thing in a few minutes.

Having said that, as long as you keep this in mind, I think it's good to study some attacks; it's fun and quite educational to learn how to tear things apart.

+13
Apr 21 '09 at 13:35

It pays to be "as innocent as doves and as wise as serpents" and to learn the methods people use for vile purposes. However, such knowledge should be applied carefully. "With great power comes great responsibility."

+9
Apr 21 '09 at 13:37

Definitely explore the dark side. Even if you don't learn the actual techniques, at least make the effort to find out what is possible.

Good resources for learning the tricks of the trade: Reversing: Secrets of Reverse Engineering and Hacking: The Art of Exploitation. They're written for both sides: they can be used to learn how to attack, but they also show ways to prevent such attacks.

+9
Apr 22 '09 at 13:24

One word of caution: Oregon v. Randal Schwartz.

Having played a small part in investigating two separate incidents at our site, I would say that the odds of finding out about an exploit before it is used against you are vanishingly small. Perhaps if you devote your career to white-hat work you can stay on top of all the potential holes in the most popular hardware and software stacks. But the ordinary programmer is likely to be in reaction mode.

You are responsible for knowing how your own software can be hacked, and for keeping third-party software constantly updated. It would also be wise to have a contingency plan for dealing with an attack, especially if you are a high-profile or high-value target. Some places will want to close a hole immediately, but our site usually leaves certain holes open to help law enforcement catch the criminals. The IT security team announces from time to time that it will run port scans so the SAs don't worry about them.

+8
Apr 29 '09 at 23:37

Design for evil. "Evil will always triumph, because good is dumb."

In short, just because you don't think like a criminal doesn't mean criminals won't.

+5
Apr 21 '09 at 13:34

I personally don't see a technical difference. Sure, the motives differ, but the technical game is the same. It's like asking what kinds of weaponry the "good guys" need to know about.

The answer is: all of it, even if they never actively use it.

+5
Apr 21 '09 at 13:34

I believe that "coding defensively" does include knowledge of malicious techniques, but at the same time you don't need to know every technique to defend against them effectively. For example, knowing about buffer overflow attacks isn't the reason you keep your buffers from overflowing. You protect them from overflowing because if they do, it can wreak havoc in your program, whether the cause is a bug or an attack.

If you write carefully tested and well-designed code, malicious attacks shouldn't get through, because a good architecture automatically blocks side effects and unauthorized access.

However, that last paragraph assumes a perfect job where we're given an incredible amount of time to get our code just right. Since no such job exists, knowledge of malicious techniques is a good shortcut: even though your code isn't perfect, you can build in specific blocks for those exploits to make sure they don't get through. But those blocks don't make the code better, and they don't make the application better.

Ultimately, malicious exploits are good to know about, but 95% of them are covered simply by sticking to best practices.

+5
Apr 21 '09 at 13:40

One skill that is often overlooked is social engineering.

Many people simply don't recognize when they're being conned. At a previous company, a VP ran a test with three (female) temps in a conference room calling programmers and sysadmins, working from a script to try to get someone to grant access or reveal passwords. Every temp got access to something within the first hour of calls.

I'd bet that if a similar test were run at any medium-to-large company, it would get the same results.

+5
Apr 23 '09 at 19:29

One technique white hats need to learn is to test for, mitigate, and think in terms of social engineering, because the biggest security risk is people.

White hats may be able to manipulate bits, but black hats often manipulate people.

+3
Apr 21 '09 at 14:44

We white hats and gray hats have to be good at a million things; the black hats and skiddies only have to succeed at one.

+1
Apr 21 '09 at 14:14

I'm going to take a controversial position and say that there are some black-hat skills you don't need in order to be a good white-hat hacker. A doctor doesn't need to know how to genetically engineer a virus in order to treat a disease effectively.

+1
Apr 22 '09 at 22:20

Basically, almost all the security holes exploited by hackers are bugs introduced through poor programming style or discipline. If you write code that guards against bad data and invalid calls, you block most of the vulnerabilities in your code.

If you try to protect your code against every possible hack, abuse, and so on, you'll spend too much time on it. Just build in protection for the basics and move on.

+1
Apr 29 '09 at 21:14

You need to understand what the "bad guys" exploit, so some understanding is mandatory.

For the average developer, I think it's enough to know the basic principles of what they do, in order to avoid building vulnerabilities into their designs.

For someone working in a security-sensitive field (think online banking, or credit card handling in an online store), a deeper understanding is required. These developers should get "under the hood" of how the "bad guy" works and what methods he uses.

0
Apr 21 '09 at 13:40

Up to the point where, in studying their ways, he begins to think like them. And then he must choose which side he wants to belong to.

There is nothing malicious in the techniques themselves... knowledge is neutral... it's how you use it that determines how it should be judged.

0
Apr 21 '09 at 13:51

Two sides of the same coin. Apart from intent, what's the difference? Same skills, different application.

0
Apr 22 '09 at 13:07

When I hear the word blackhat, I think of someone who uses computer knowledge to break into banks and do other mischievous things. A whitehat knows everything the blackhat knows, but simply doesn't do anything malicious with it.

Therefore, you don't need to know blackhat tricks in order to be protected...

Knowing how a blackhat thinks, when you're already his whitehat equivalent, doesn't help squat. It's like knowing: "John wants to break into my house and steal my iPod music." If you really cared about your iPod music, you would have protected it anyway.

0
Nov 23 '09 at 7:14


