I have a C# service running under the LocalSystem account that starts numerous other processes as needed. This has been working for months. This week, some of the child processes started crashing. I attached a remote debugger to them and found that they fail to allocate memory (the C++ new operator returns 0x0), which is the indirect cause of the crash.
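For context, here is a minimal C++ sketch (illustration only, not the author's code; the Widget type is made up) of why a failed allocation can show up in the debugger as new returning 0x0 rather than as an exception: the nothrow form of new, and some older MSVC runtimes in their default mode, return a null pointer on failure, so the very next member access dereferences 0x0 and the process crashes.

    // Illustration only: a failed allocation with new(std::nothrow)
    // (or with older MSVC runtimes) returns a null pointer instead of
    // throwing, so the next member access crashes on address 0x0.
    #include <new>
    #include <cstdio>

    struct Widget { int value; };   // stand-in for "one of the first objects"

    int main() {
        Widget* w = new (std::nothrow) Widget;   // null on failure, no throw
        if (w == 0) {
            std::fprintf(stderr, "allocation failed\n");
            return 1;
        }
        w->value = 42;   // this is the null dereference if the check is missing
        delete w;
        return 0;
    }

With a standard-conforming operator new (no nothrow), the same failure would instead surface as an unhandled std::bad_alloc.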
The funny thing is, if I RDP into the machine, I can start the processes from CMD without any problems. But when the service starts them, they fail.
The machine is running Windows XP SP3. It is nowhere near running out of memory; commit charge is not even 80% of physical RAM.
Are there any special restrictions on how many processes, or how much memory, a service can use, including processes spawned by that service?
Any other ideas as to why these processes cannot allocate memory?
EDIT:
I have had a good look at the crash scenario with SysInternals ProcMon and it doesn't reveal anything (that I can see). Everything looks normal and then it suddenly crashes. Having attached a remote debugger, I can confirm that it crashes after dereferencing the null pointer returned by a C++ new call. This is one of the first objects allocated in the application; it should never fail.
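One way to confirm this (a sketch, assuming the child processes can be rebuilt; ReportAndExit and the message text are made-up names, not from the question) would be to install a new-handler so the out-of-memory condition is reported at the allocation site instead of surfacing later as a null-pointer dereference:

    #include <new>
    #include <cstdlib>
    #include <windows.h>

    // Called by operator new when it cannot satisfy a request.
    void ReportAndExit() {
        OutputDebugStringA("child process: operator new failed\n");
        std::exit(2);   // fail loudly with a recognisable exit code
    }

    int main() {
        std::set_new_handler(ReportAndExit);

        // ...normal start-up; the very first allocations are now guarded...
        char* buffer = new char[64 * 1024];
        buffer[0] = 'x';
        delete[] buffer;
        return 0;
    }

If the handler fires in the children started by the service but not in the ones started from CMD, that at least pins the failure to the allocation itself rather than to some later corruption.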
I also found that if you enable the service option "Allow service to interact with desktop", then all the child processes start correctly. However, they then appear on the desktop when you connect via RDP and, unfortunately, they terminate when you log out of the RDP session = YUK! So this isn't an ideal solution - I would really like to know why the child processes cannot allocate memory after the 6th child process.
Tags: windows, memory, windows-xp, service
Matt Connolly