I am reading a book on operating systems, and it gives some C examples that I mostly understand. The example I'm looking at now shows two almost identical pieces of code that run on a hypothetical system:
    int i, j;
    int data[128][128];

    for (j = 0; j < 128; j++)
        for (i = 0; i < 128; i++)
            data[i][j] = 0;
And the second piece of code:
    int i, j;
    int data[128][128];

    for (i = 0; i < 128; i++)
        for (j = 0; j < 128; j++)
            data[i][j] = 0;
On this particular system, the first piece of code results in 16K page faults (128 × 128 = 16,384), while the second results in only 128.
My apologies if this is a stupid question, but in my experience with .NET I have always been pretty much oblivious to memory: I just create a variable and it lives "somewhere", but I don't know where and I don't care.
My question is: how does .NET compare with these C examples on this fictional system? (Pages are 128 words in size, and each row of the array occupies exactly one page. In the first example, we set one int on page 1, then one int on page 2, and so on, while the second example sets all the ints on page 1, then all the ints on page 2, etc.)
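To convince myself that traversal order really matters outside the book's fictional machine, I put together a minimal timing sketch (the array size N and the use of clock() are my own choices, and the exact numbers will obviously depend on cache and page sizes):

    #include <stdio.h>
    #include <time.h>

    #define N 4096   /* my choice: large enough to exceed typical caches */

    static int data[N][N];   /* ~64 MB of ints */

    int main(void)
    {
        clock_t t0, t1;

        /* Column-major: consecutive stores are N * sizeof(int) bytes
           apart, so each one lands in a different row (and, on the
           book's machine, a different page). */
        t0 = clock();
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                data[i][j] = 0;
        t1 = clock();
        printf("column-major: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

        /* Row-major: stores walk memory sequentially, one row at a time. */
        t0 = clock();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                data[i][j] = 0;
        t1 = clock();
        printf("row-major:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

        return 0;
    }

I would expect the column-major loop to be noticeably slower once the array no longer fits in the caches, for the same locality reason the book describes with pages.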
Also, although I think I understand why the two pieces of code cause different amounts of swapping, is there anything useful I can do with that knowledge? Does the page size depend on the operating system? And does this mean that, as a general rule, I should access array memory as sequentially as possible?
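For what it's worth, on the page-size question I found that POSIX systems expose it at run time. This is a minimal sketch assuming a POSIX environment (sysconf and _SC_PAGESIZE are standard there; Windows would need a different API such as GetSystemInfo):

    #include <stdio.h>
    #include <unistd.h>   /* POSIX: sysconf() */

    int main(void)
    {
        /* The page size is a property of the hardware and OS,
           not of the program; 4096 bytes is typical on x86. */
        long page = sysconf(_SC_PAGESIZE);
        printf("page size: %ld bytes\n", page);
        return 0;
    }

So the page size seems to be set by the hardware and operating system rather than by my program, which is part of why I'm asking whether I can rely on any particular value.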