This is not so much an NSMutableData problem as a kernel / OS behavior. When a process requests a (large) chunk of memory, the kernel will usually just say "OK, here you go", but the memory is only really ("physically") allocated once you actually use it. This is normal: if your program really claimed 2 GB up front (as you do here with malloc), it would immediately push other programs out into swap, even though in practice you will often never touch the whole 2 GB at once.
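A minimal C sketch of this lazy allocation (not the poster's code, just an illustration): the malloc call returns almost immediately, and the process's resident size only grows once the pages are actually written to.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Assumes a 64-bit system so 2 GB comfortably fits in the address space. */
    size_t size = 2ULL * 1024 * 1024 * 1024;

    /* The kernel hands out virtual address space here; almost no physical
       memory is used yet (watch the process's RSS in top / Activity Monitor). */
    char *block = malloc(size);
    if (block == NULL) {
        perror("malloc");
        return 1;
    }
    printf("malloc(2 GB) returned %p almost instantly\n", (void *)block);
    getchar();  /* pause: resident size is still tiny */

    /* Writing to every page forces the kernel to actually back the block
       with physical memory (or swap); RSS now grows to roughly 2 GB. */
    memset(block, 0xAB, size);
    printf("all pages touched\n");
    getchar();  /* pause: resident size is now ~2 GB */

    free(block);
    return 0;
}
```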
When you access a memory page that is not actually present in physical memory, the CPU raises a fault that the kernel handles. If the page is supposed to be there (because it lies inside your 2 GB block), it is mapped in on the spot (possibly read back from swap) and you won't even notice. If no page is supposed to be there (because the address is not allocated in your virtual address space), you get a segmentation fault (SIGSEGV, reported as EXC_BAD_ACCESS on macOS / iOS).
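For illustration, dereferencing an address that is not mapped anywhere in the process triggers exactly that second case. This is deliberately invalid code (strictly undefined behaviour), shown only to demonstrate the fault:

```c
#include <stdio.h>

int main(void) {
    /* Address 0 is not mapped in the process's virtual address space, so the
       write below faults; the kernel finds no page that should be there and
       delivers SIGSEGV (shown as EXC_BAD_ACCESS in macOS / iOS crash reports). */
    volatile int *unmapped = (volatile int *)0;
    *unmapped = 42;              /* crashes here */
    printf("never reached\n");
    return 0;
}
```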
A related topic is "overcommit(ment)", where the kernel promises more memory than is actually available. This can cause serious problems if all processes start using the memory they were promised at the same time. How aggressively this is done depends on the OS.
There are many pages on the Internet explaining this better and in more detail; I just wanted to give a short introduction so you have some terms to Google for.
Edit: I just tested this. Linux happily promises me 4 TB of memory, and I can assure you this machine does not even have 1 TB of total disk space. You can imagine that this, if not taken care of, can cause some headaches when building mission-critical systems.
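A sketch of the kind of test described above. Whether the huge allocation is actually granted depends on the kernel's overcommit policy (on Linux controlled by /proc/sys/vm/overcommit_memory), so it may also be refused:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* 4 TB of virtual memory: far more than this machine's RAM, swap, or disk.
       Whether the kernel grants it depends on its overcommit policy. */
    size_t size = 4ULL * 1024 * 1024 * 1024 * 1024;
    void *p = malloc(size);
    if (p != NULL)
        printf("kernel promised %zu bytes it cannot possibly back\n", size);
    else
        printf("allocation refused (stricter overcommit policy kicked in)\n");
    free(p);
    return 0;
}
```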