Unity3D on iOS: inspecting the device's camera image from Obj-C

I have a Unity / iOS application that captures a user's photo and displays it in a 3D environment. Now I would like to use CIFaceFeature to find eye positions, which requires dropping down to the native (Objective-C) level. My flow currently looks like this:

Unity → WebCamTexture (encode the image and send it to native; this is SLOW)

Obj-C → CIFaceFeature (find eye coordinates)

Unity → display the eye positions

I have a working prototype, but it's slow because I capture the image in Unity (WebCamTexture) and then send it to Obj-C to run the face detection. It seems there should be a way to simply ask my Obj-C class to "inspect the active camera". That would be much, much faster than encoding and transmitting the image.

So my question is, in a nutshell:

  • Is it possible to ask from Obj-C whether a camera is currently capturing?
  • If so, how do I "snapshot" the image from the current session?

Thanks!

2 answers

You can access the camera's capture stream by modifying CameraCapture.mm in Unity.

I suggest you take a look at an existing plugin called Camera Capture for an example of how additional camera I/O functionality can be added to the capture session / capture pipeline.

To point you in the right direction, take a look at the initCapture function in CameraCapture.mm:

 - (bool)initCapture:(AVCaptureDevice*)device width:(int)w height:(int)h fps:(float)fps 

This is where you can add your own output to the capture session.
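For example, here is a rough sketch of that addition. The member name `_captureSession` and the use of `self` as the delegate are assumptions; check the actual names in your Unity version's CameraCapture.mm, and make sure the class (or a small helper object) adopts AVCaptureVideoDataOutputSampleBufferDelegate. AVFoundation is already imported by that file.

    // Sketch only: attach an extra video data output so every captured frame
    // also reaches our own delegate, without re-encoding anything in Unity.
    AVCaptureVideoDataOutput* faceOutput = [[AVCaptureVideoDataOutput alloc] init];
    faceOutput.alwaysDiscardsLateVideoFrames = YES;
    faceOutput.videoSettings =
        @{ (NSString*)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

    // Run face detection off the main thread so it never stalls the capture.
    dispatch_queue_t faceQueue =
        dispatch_queue_create("com.example.facedetect", DISPATCH_QUEUE_SERIAL);
    [faceOutput setSampleBufferDelegate:self queue:faceQueue];

    if ([_captureSession canAddOutput:faceOutput])
        [_captureSession addOutput:faceOutput];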

Then take a look at the sample code provided by Apple for face detection:

https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
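The core of that approach is running a CIDetector against each captured frame. A minimal sketch of the corresponding sample-buffer delegate callback, assuming the AVCaptureVideoDataOutput from the snippet above delivers frames to this class (this is not code taken from SquareCam itself):

    #import <AVFoundation/AVFoundation.h>
    #import <CoreImage/CoreImage.h>

    // Called for every frame delivered by the video data output added above.
    - (void)captureOutput:(AVCaptureOutput*)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection*)connection
    {
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage* image = [CIImage imageWithCVPixelBuffer:pixelBuffer];

        // Reuse one detector instance in production; created here for brevity.
        // In practice also pass CIDetectorImageOrientation so portrait frames
        // are handled correctly.
        CIDetector* detector =
            [CIDetector detectorOfType:CIDetectorTypeFace
                               context:nil
                               options:@{ CIDetectorAccuracy : CIDetectorAccuracyLow }];

        for (CIFaceFeature* face in [detector featuresInImage:image]) {
            if (face.hasLeftEyePosition && face.hasRightEyePosition) {
                CGPoint left  = face.leftEyePosition;
                CGPoint right = face.rightEyePosition;
                NSLog(@"left eye (%.0f, %.0f), right eye (%.0f, %.0f)",
                      left.x, left.y, right.x, right.y);
            }
        }
    }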

Cheers


Unity 3D allows you to execute custom native code; see the Native Plugins section of the documentation. That way, you can present your own iOS view (with a camera preview, possibly hidden depending on your requirements) and run the Objective-C code there. Then return the eye detection results to Unity (e.g. via UnitySendMessage, sketched below) if you need them in the 3D view.
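A hedged sketch of pushing results back to Unity from the native side. The GameObject name "FaceTracker" and the method name "OnEyesDetected" are placeholders for whatever you define in your own scene; UnitySendMessage itself is declared by the Unity-generated Xcode project (UnityInterface.h), redeclared here only to keep the fragment self-contained in a .mm file:

    #import <Foundation/Foundation.h>
    #import <CoreGraphics/CoreGraphics.h>

    extern "C" void UnitySendMessage(const char* obj, const char* method, const char* msg);

    // Send both eye positions to Unity as a comma-separated string.
    static void ReportEyesToUnity(CGPoint left, CGPoint right)
    {
        NSString* payload = [NSString stringWithFormat:@"%f,%f,%f,%f",
                             left.x, left.y, right.x, right.y];
        UnitySendMessage("FaceTracker", "OnEyesDetected", [payload UTF8String]);
    }

On the Unity side, a GameObject named FaceTracker with a script method `void OnEyesDetected(string msg)` would receive the string and parse the four floats.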

