How do H.264 or video encoders in general calculate the residual image of two frames?

I am trying to understand how video encoding works for modern encoders, in particular H.264. The documentation often mentions that residual frames are created from the difference between the current P-frame and the last I-frame (assuming the following frames are not used in the prediction). I understand that a YUV color space is used (possibly YV12), and that one image is subtracted from the other to form a residual. What I don't understand is how exactly this is done. I don't think it is the absolute value of the difference, because that would be ambiguous. What is the per-pixel formula to obtain this difference?

1 answer

Subtraction is just one small step in video encoding; the basic principle underlying the majority of modern video codecs is motion estimation, followed by motion compensation. Basically, the motion estimation process generates vectors that describe offsets between macroblocks in successive frames. However, there is always some error in these vectors.
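To make the motion-estimation step concrete, here is a minimal sketch of exhaustive block matching using the common sum-of-absolute-differences (SAD) cost. This is only an illustration, not the search a real H.264 encoder uses (those rely on far smarter strategies); the function names sad_16x16 and full_search are made up for the example, and bounds checking is left to the caller.

    #include <stdint.h>
    #include <stdlib.h>

    /* Sum of absolute differences between a 16x16 block of the current frame
       and a candidate block of the reference frame; both pointers address the
       top-left pixel of their block, 'stride' is the row pitch in bytes. */
    static unsigned sad_16x16(const uint8_t *cur, const uint8_t *ref, int stride)
    {
        unsigned sad = 0;
        for (int y = 0; y < 16; y++)
            for (int x = 0; x < 16; x++)
                sad += (unsigned)abs((int)cur[y * stride + x] - (int)ref[y * stride + x]);
        return sad;
    }

    /* Exhaustive search over a +/-range window: find the offset (best_dx, best_dy)
       whose reference block matches the current block at (bx, by) most closely.
       The caller must keep all candidate blocks inside the reference frame. */
    static void full_search(const uint8_t *cur, const uint8_t *ref, int stride,
                            int bx, int by, int range, int *best_dx, int *best_dy)
    {
        unsigned best = ~0u;
        *best_dx = *best_dy = 0;
        for (int dy = -range; dy <= range; dy++)
            for (int dx = -range; dx <= range; dx++) {
                unsigned cost = sad_16x16(cur + by * stride + bx,
                                          ref + (by + dy) * stride + (bx + dx),
                                          stride);
                if (cost < best) { best = cost; *best_dx = dx; *best_dy = dy; }
            }
    }

The winning offset becomes the motion vector for that macroblock; whatever mismatch remains after applying it is what ends up in the residual.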

So the encoder outputs both the vector offsets and the remaining error: the "residual" is what is left over after motion compensation. In other words, the residual is not simply the difference between the two frames; it is the difference between the two frames once motion compensation has been taken into account. See the "Motion compensated difference" image in the Wikipedia article on motion compensation for a clear illustration; note how much smaller the motion-compensated difference is than the "dumb" frame-to-frame residual.
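So, to answer the per-pixel question directly: the residual is a plain signed subtraction, residual(x, y) = current(x, y) - predicted(x, y), where predicted is the motion-compensated block from the reference frame. It is not an absolute difference, because the sign must be preserved so the decoder can reconstruct the block as predicted + residual. A minimal sketch, assuming 8-bit samples (residual_16x16 is an illustrative name, not an H.264 API):

    #include <stdint.h>

    /* Residual of one 16x16 block: a signed per-pixel subtraction of the
       motion-compensated prediction from the current block. Each value lies
       in [-255, 255], so it needs a signed type wider than 8 bits. */
    static void residual_16x16(const uint8_t *cur, int cur_stride,
                               const uint8_t *pred, int pred_stride,
                               int16_t res[16 * 16])
    {
        for (int y = 0; y < 16; y++)
            for (int x = 0; x < 16; x++)
                res[y * 16 + x] = (int16_t)((int)cur[y * cur_stride + x]
                                          - (int)pred[y * pred_stride + x]);
    }

A real encoder does not store this residual directly; it is then transformed (in H.264, with a small integer transform), quantized, and entropy-coded.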

(The original answer also linked to a PDF that goes over some of the basics.)

A few other notes:

  • Yes, YUV is always used, and typically most encoders work in YV12 or some other chroma-subsampled format.
  • Subtraction has to happen on the Y, U and V frames separately (think of them as three separate channels, all of which need to be encoded). Motion estimation may or may not happen on all of the Y, U and V planes; sometimes encoders do it only on the Y (luminance) values to save a bit of CPU at the expense of quality. A plane-by-plane subtraction is sketched after this list.
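To make the per-plane point concrete, here is a rough sketch that subtracts two YV12 frames plane by plane (a raw frame difference, without motion compensation). The Frame struct and the names diff_plane and diff_frame are assumptions made for this example; in YV12 the chroma planes are half the luma resolution in each dimension.

    #include <stdint.h>

    /* Hypothetical YV12 frame layout: a full-resolution Y plane plus U and V
       planes subsampled by 2 both horizontally and vertically. */
    typedef struct {
        int width, height;   /* luma dimensions */
        uint8_t *y, *u, *v;  /* tightly packed planes */
    } Frame;

    /* Signed per-pixel difference of one plane. */
    static void diff_plane(const uint8_t *a, const uint8_t *b,
                           int16_t *out, int w, int h)
    {
        for (int i = 0; i < w * h; i++)
            out[i] = (int16_t)a[i] - (int16_t)b[i];
    }

    /* Raw (non-motion-compensated) difference of two frames, one plane at a time. */
    static void diff_frame(const Frame *cur, const Frame *ref,
                           int16_t *dy, int16_t *du, int16_t *dv)
    {
        int w = cur->width, h = cur->height;
        diff_plane(cur->y, ref->y, dy, w, h);          /* luma */
        diff_plane(cur->u, ref->u, du, w / 2, h / 2);  /* chroma U */
        diff_plane(cur->v, ref->v, dv, w / 2, h / 2);  /* chroma V */
    }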