The question may sound vague, since it is hard to describe the problem in one line, so here it is. I run a PID controller on a Raspberry Pi under Debian (Raspbian), and on every PID update I measure dt, the time difference between cycles. Basically, dt is calculated like this:
    oldtime_ = time_;
    clock_gettime(CLOCK_MONOTONIC, &time_);
    Timer.dt = ((static_cast<int64_t>(time_.tv_sec) * 1000000000
               + static_cast<int64_t>(time_.tv_nsec))
              - (static_cast<int64_t>(oldtime_.tv_sec) * 1000000000
               + static_cast<int64_t>(oldtime_.tv_nsec))) / 1000000000.0;
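For reference, a minimal sketch of the same measurement wrapped in a helper. The Timer struct, member names, and update() function here are assumptions for illustration, not my actual code:

    // Sketch only: compute dt (seconds) between successive calls using CLOCK_MONOTONIC.
    #include <ctime>
    #include <cstdint>

    struct Timer {
        timespec prev{};
        double dt = 0.0;   // seconds elapsed since the previous update()

        void update() {
            timespec now{};
            clock_gettime(CLOCK_MONOTONIC, &now);
            // Convert both timestamps to nanoseconds before subtracting.
            int64_t now_ns  = static_cast<int64_t>(now.tv_sec)  * 1000000000LL + now.tv_nsec;
            int64_t prev_ns = static_cast<int64_t>(prev.tv_sec) * 1000000000LL + prev.tv_nsec;
            dt   = (now_ns - prev_ns) / 1e9;
            prev = now;
        }
    };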
The PID is updated about 400 times per second and normally runs fine, but sometimes Linux decides to spend a lot more time elsewhere before coming back to it. The result is an abnormally large dt: instead of 1/400 = 0.0025 s it can be, say, 0.8 s, which is 320 times larger than expected, and the PID calculation comes out wrong. It looks like this.
I would like to know how to make Raspbian behave a little more like a real-time system.
EDIT
Thanks, anaken78 and everyone who helped. Using the SCHED_RR/SCHED_FIFO real-time scheduler worked fine, and the update rate now stays at 380-400 Hz.
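For anyone landing here later, a minimal sketch of what switching to the real-time scheduler can look like: request SCHED_FIFO with an elevated priority and lock memory so page faults do not add latency. The function name make_realtime and the priority value 80 are arbitrary choices for illustration; this usually needs root or CAP_SYS_NICE.

    #include <sched.h>
    #include <sys/mman.h>
    #include <cstdio>

    bool make_realtime() {
        sched_param sp{};
        sp.sched_priority = 80;                            // valid range 1..99 for SCHED_FIFO
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) { // 0 = current process
            perror("sched_setscheduler");
            return false;
        }
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {     // keep pages resident in RAM
            perror("mlockall");
            return false;
        }
        return true;
    }

Calling something like this once at startup, before entering the PID loop, is what keeps the loop from being preempted by ordinary background tasks.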