The difference between "Segmentation fault" and "Segmentation fault (core dumped)"

Consider the following C code:

int n; scanf("%d", n);

It gives the error "Segmentation fault (core dumped)" when built with GCC on Mandriva Linux,

but the following code

int *p = NULL; *p = 8;

gives only "Segmentation fault". Why is this so?

3 answers

A core dump is a file containing a dump of the state and memory of a program at the time it crashed. Since core dumps can take non-trivial amounts of disk space, there is a configurable limit on how large they can be. You can see it with ulimit -c.
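As a side illustration (my own addition, not part of this answer), the same limit can be queried from C with getrlimit(RLIMIT_CORE, ...); a minimal sketch, assuming a POSIX system:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;

    /* RLIMIT_CORE is the maximum size of a core dump file; it is
       the same limit the shell reports via ulimit -c. */
    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    if (rl.rlim_cur == RLIM_INFINITY)
        printf("core dump size limit: unlimited\n");
    else
        printf("core dump size limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
    return 0;
}

A limit of 0 means no core file is written at all, which is exactly the case where you see only "Segmentation fault" with no "(core dumped)".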

Now, when you get a segmentation fault, the default action is to terminate the process and dump core. Your shell reports what happened: if the process terminated with a segmentation fault signal, it prints "Segmentation fault", and if the process additionally dumped core (which requires that the ulimit setting and the permissions on the directory where the core dump is to be written allow it), it tells you so: "Segmentation fault (core dumped)".
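You can reproduce this reporting yourself. A minimal sketch (my own illustration, assuming Linux with glibc, where the nonstandard WCOREDUMP macro is available): a parent process can observe both the fatal signal and whether a core file was produced, which is precisely the information the shell uses to choose between the two messages:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: force a SIGSEGV by writing through a null pointer.
           volatile keeps the optimizer from removing the store. */
        volatile int *p = NULL;
        *p = 8;
        _exit(0); /* not reached */
    }

    int status;
    waitpid(pid, &status, 0);

    if (WIFSIGNALED(status)) {
        printf("killed by signal %d", WTERMSIG(status));
#ifdef WCOREDUMP
        /* Nonstandard but present on glibc: was a core file written? */
        if (WCOREDUMP(status))
            printf(" (core dumped)");
#endif
        printf("\n");
    }
    return 0;
}

Running it after ulimit -c 0 and again after ulimit -c unlimited should make the "(core dumped)" part disappear and reappear.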


Assuming you are running both of them on the same system with the same ulimit -c setting (which would be my first guess about the difference you are seeing), it is possible the optimizer "noticed" the clearly undefined behavior in the second example and generated its own trap/exit instead of a real store. You can check with objdump -x.
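To see what this answer means (my own illustration, and compiler-version dependent): compile the second snippet on its own and disassemble the result.

/* ub.c - the null-pointer write from the question */
int main(void) {
    int *p = 0;
    /* Undefined behavior: at -O2, GCC may replace this store with a
       trap instruction (e.g. ud2 on x86) rather than a real write,
       which changes how the process dies. */
    *p = 8;
    return 0;
}

Building with gcc -O2 ub.c and looking at objdump -d a.out shows the instructions actually emitted (objdump -x, as suggested above, shows headers and symbols rather than disassembly).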


In the first case, "n" can have any value, and scanf treats that garbage value as an address: that memory may or may not exist, and may or may not be writable, but it could well be a valid location. There is no reason n is necessarily zero.

Writing to NULL is definitely naughty and something the OS will see!
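For completeness (my own addition), the first snippet becomes well defined once the address of n is passed, so scanf has a valid place to write:

#include <stdio.h>

int main(void) {
    int n;
    /* scanf needs a pointer to store into; passing n by value makes
       it treat n's indeterminate contents as an address. */
    if (scanf("%d", &n) == 1)
        printf("read %d\n", n);
    return 0;
}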

