I am new to Android development - I am using Xamarin.
I am trying to write an application that initiates a camera preview and then constantly scans incoming frames for text (I use Xamarin.Tesseract from NuGet).
In other words, I don't want the user to take a photo and then run OCR on it. Instead, the user just points the camera at some paper with text on it, and the app continuously performs OCR analysis until it finds the specific text I'm looking for, at which point it gives the user a thumbs up.
This is the approach I've reached so far:
Initialize the camera and set the preview callback
_Camera = Android.Hardware.Camera.Open ();
_Camera.SetPreviewCallback (this);
_Camera.StartPreview ();
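(One thing worth checking in the setup above, based on general Android camera behavior: on most devices the preview callback never fires unless the camera also has a preview surface attached. A hedged sketch, assuming the Activity has a TextureView named _textureView, which is my own placeholder name:

_Camera = Android.Hardware.Camera.Open ();
// Without a preview surface/texture, many devices deliver no frames at all
_Camera.SetPreviewTexture (_textureView.SurfaceTexture);
_Camera.SetPreviewCallback (this);
_Camera.StartPreview ();

If frames are arriving, this part is fine and the problem is downstream.)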
In the callback, take bytes representing the current frame and pass it as input image bytes for Xamarin.Tesseract
public async void OnPreviewFrame (byte [] data, Android.Hardware.Camera camera)
{
    await _TesseractApi.SetImage (data); // this hangs
    string text = _TesseractApi.Text;
    // ... check whether text contains the string I'm looking for
}
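(My current suspicion: by default, OnPreviewFrame delivers raw NV21 YUV pixel data, not an encoded image, so an API expecting JPEG/PNG bytes would choke on it. A hedged sketch of converting the frame to JPEG first, using Android's YuvImage — this assumes the preview format is the NV21 default and that Xamarin.Tesseract's SetImage accepts encoded image bytes:

var size = camera.GetParameters ().PreviewSize;
using (var yuv = new Android.Graphics.YuvImage (
    data, Android.Graphics.ImageFormatType.Nv21, size.Width, size.Height, null))
using (var ms = new System.IO.MemoryStream ())
{
    // Compress the raw YUV frame into a JPEG in memory
    yuv.CompressToJpeg (
        new Android.Graphics.Rect (0, 0, size.Width, size.Height), 90, ms);
    byte [] jpegBytes = ms.ToArray ();
    await _TesseractApi.SetImage (jpegBytes);
}

Even if this works, the callback fires many times per second, so it would presumably also need throttling, e.g. SetOneShotPreviewCallback and re-registering after each OCR pass finishes.)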
It currently hangs when passing the byte [] to the Tesseract API. I suspect this is because the bytes in the array are in the wrong encoding, or because I fundamentally misunderstand the Camera API!
Can someone give me a push in the right direction?