Capture all NSWindows as live images, like Mission Control on Mac OS X

I am looking to aggregate live views of all windows. Like Mission Control (Exposé), I want to access the image buffer of any NSWindow or of the screen very quickly. Ideally, I want to composite these live images in my own OpenGL context so that I can manipulate them (scaling and moving the window and screen images around).

Things that are too slow:

  • CGDisplayCreateImage
  • CGWindowListCreateImage
  • CGDisplayIDToOpenGLDisplayMask with CGLCreateContext and CGBitmapContextCreate

Any other ideas? I am trying to achieve 60 fps capture / composite / output, but the best I can get with any of these methods is ~5 fps when capturing the entire screen on a Retina display.
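
For reference, the per-window variant I benchmarked is a call like the sketch below, once per window per frame (windowID stands in for a CGWindowID obtained from CGWindowListCopyWindowInfo):

    // One capture per window per frame: this is the part that is too slow.
    CGImageRef image = CGWindowListCreateImage(CGRectNull,
                                               kCGWindowListOptionIncludingWindow,
                                               windowID,
                                               kCGWindowImageBoundsIgnoreFraming);
    // ...upload to a GL texture, composite, then:
    CGImageRelease(image);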

core-graphics window opengl macos mission-control

1 answer

Unfortunately, I did not find a way to quickly capture the framebuffers of individual windows, but I did figure out the next best thing: a way to quickly capture the live view of an entire screen (or screens) into OpenGL:

Configure AVFoundation

    // Create the capture session and attach a 60 fps screen-capture input.
    _session = [[AVCaptureSession alloc] init];
    _session.sessionPreset = AVCaptureSessionPresetPhoto;

    AVCaptureScreenInput *input = [[AVCaptureScreenInput alloc] initWithDisplayID:kCGDirectMainDisplay];
    input.minFrameDuration = CMTimeMake(1, 60);
    [_session addInput:input];

    // Deliver frames to -captureOutput:didOutputSampleBuffer:fromConnection: below.
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [output setAlwaysDiscardsLateVideoFrames:YES];
    [output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    [_session addOutput:output];

    [_session startRunning];
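
The delegate and cleanup code below use an _textureCache ivar that has to be created before frames arrive. A minimal sketch, assuming you have access to the CGLContextObj and CGLPixelFormatObj of the OpenGL context you will draw with (the accessors here are those of an NSOpenGLView; substitute your own):

    // Create the texture cache once, tied to your GL context.
    CGLContextObj cglContext = [[self openGLContext] CGLContextObj];
    CGLPixelFormatObj cglPixelFormat = [[self pixelFormat] CGLPixelFormatObj];
    CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL,
                               cglContext, cglPixelFormat,
                               NULL, &_textureCache);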

On each AVCaptureVideoDataOutput frame

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        const size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
        const size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer);

        // Wrap the pixel buffer in a GL texture without copying it off the GPU.
        CVOpenGLTextureRef texture;
        CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache,
                                                   pixelBuffer, NULL, &texture);
        CVOpenGLTextureCacheFlush(_textureCache, 0);

        // Manipulate and draw the texture however you want...
        const GLenum target = CVOpenGLTextureGetTarget(texture);
        const GLuint name = CVOpenGLTextureGetName(texture);
        // ...
        glEnable(target);
        glBindTexture(target, name);

        CVOpenGLTextureRelease(texture);
    }
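
For illustration, the "draw" step between glBindTexture and CVOpenGLTextureRelease might be a textured quad like the sketch below (assuming a legacy fixed-function GL context; the target is typically GL_TEXTURE_RECTANGLE_ARB, whose texture coordinates are in pixels, which is why bufferWidth and bufferHeight are kept above):

    // Hypothetical full-viewport quad. The coordinates flip the image
    // vertically; check CVOpenGLTextureIsFlipped() for the actual orientation.
    const GLfloat w = (GLfloat)bufferWidth, h = (GLfloat)bufferHeight;
    glBegin(GL_QUADS);
    glTexCoord2f(0, h); glVertex2f(-1, -1);
    glTexCoord2f(w, h); glVertex2f( 1, -1);
    glTexCoord2f(w, 0); glVertex2f( 1,  1);
    glTexCoord2f(0, 0); glVertex2f(-1,  1);
    glEnd();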

Cleanup

    [_session stopRunning];
    CVOpenGLTextureCacheRelease(_textureCache);

The big difference from some other implementations that turn an AVCaptureVideoDataOutput image into an OpenGL texture is that those use CVPixelBufferLockBaseAddress, CVPixelBufferGetBaseAddress, glTexImage2D, and CVPixelBufferUnlockBaseAddress. The problem with that approach is that it is terribly redundant and slow: CVPixelBufferLockBaseAddress must guarantee that the memory it is about to hand you is not GPU memory, so it copies everything into CPU-accessible memory. That is bad! After all that, we would just copy it straight back to the GPU with glTexImage2D.
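
For contrast, that slow path looks roughly like the sketch below inside the same delegate callback (slowTexture is a hypothetical GL texture name created elsewhere, and GL_BGRA / GL_UNSIGNED_INT_8_8_8_8_REV assumes the common 32BGRA pixel format):

    // Don't do this: locking forces a GPU-to-CPU copy, and glTexImage2D
    // then copies the same pixels straight back to the GPU.
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    void *base = CVPixelBufferGetBaseAddress(pixelBuffer);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint)(CVPixelBufferGetBytesPerRow(pixelBuffer) / 4));
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, slowTexture);
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA,
                 (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                 (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, base);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);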

CVOpenGLTextureCacheCreateTextureFromImage, by contrast, lets us take advantage of the fact that the CVPixelBuffer is already in GPU memory.

Hope this helps someone else... the CVOpenGLTextureCache API is terribly documented, and its iOS counterpart, CVOpenGLESTextureCache, is only slightly better.

60 frames per second at 20% CPU, capturing a 2560x1600 desktop!
