How to capture QR code data in a specific area of AVCaptureVideoPreviewLayer using Swift?

I am creating an application for the iPad, and one of its functions is scanning QR codes. I have the QR scanning part working, but the problem is that the iPad screen is very large and I will be scanning small QR codes off a sheet of paper that has many QR codes visible at once. I want to designate a smaller area of the display to be the only region that can actually capture a QR code, so it's easier for the user to scan the specific code they need.

Currently, I have made a temporary UIView with a red border, centered on the screen, as an example of where I want the user to scan QR codes. It looks like this:

[screenshot: camera preview with a centered, red-bordered square marking the intended scan area]

I have searched extensively for a way to restrict QR code capture to a specific region of the AVCaptureVideoPreviewLayer, and what I found were suggestions to use the rectOfInterest property of AVCaptureMetadataOutput. I tried this, but when I set rectOfInterest to the same coordinates and size as the ones I use for my UIView, which displays correctly, I can no longer scan or recognize any QR codes at all. Can someone tell me why the scannable area does not match the location of the visible UIView, and how I can get rectOfInterest to line up with the red border I added to the screen?
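For context on why the view-coordinate rect fails: rectOfInterest is not specified in view points. It is a normalized rectangle whose components run from 0.0 to 1.0 in the capture output's own coordinate space, with the origin at the top left of the unrotated camera image (so in portrait orientation the axes are effectively swapped relative to the preview). A rect like (262, 150, 300, 300) is therefore far outside the valid 0...1 range, and no codes are reported. A rough illustration (the fractional values here are made up for demonstration, not a drop-in fix):

```swift
// rectOfInterest defaults to CGRect(x: 0, y: 0, width: 1, height: 1),
// i.e. the whole frame. All components are fractions of the frame,
// not screen points.
// Hypothetical example: limit scanning to the middle 50% of the frame.
captureMetadataOutput.rectOfInterest = CGRect(x: 0.25, y: 0.25, width: 0.5, height: 0.5)
```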

Here is the code for the scan function that I am currently using:

func startScan() {
    // Get an instance of the AVCaptureDevice class to initialize a device
    // object, providing video as the media type parameter.
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    var error: NSError?
    let input: AnyObject! = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &error)

    if (error != nil) {
        // If any error occurs, log its description and don't continue.
        println("\(error?.localizedDescription)")
        return
    }

    // Initialize the captureSession object and set the input device on it.
    captureSession = AVCaptureSession()
    captureSession?.addInput(input as! AVCaptureInput)

    // Initialize an AVCaptureMetadataOutput object and set it as the
    // output device of the capture session.
    let captureMetadataOutput = AVCaptureMetadataOutput()
    captureSession?.addOutput(captureMetadataOutput)

    // Calculate a centered square rectangle for the red border.
    let size = 300
    let screenWidth = self.view.frame.size.width
    let xPos = (CGFloat(screenWidth) / CGFloat(2)) - (CGFloat(size) / CGFloat(2))
    let scanRect = CGRect(x: Int(xPos), y: 150, width: size, height: size)

    // Create a UIView that will serve as a red square indicating where to
    // place the QR code for scanning.
    scanAreaView = UIView()
    scanAreaView?.layer.borderColor = UIColor.redColor().CGColor
    scanAreaView?.layer.borderWidth = 4
    scanAreaView?.frame = scanRect

    // Set the delegate and use the default dispatch queue to execute the callback.
    captureMetadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
    captureMetadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]
    // This is the line in question: scanRect is in view coordinates,
    // and after setting it here nothing scans.
    captureMetadataOutput.rectOfInterest = scanRect

    // Initialize the video preview layer and add it as a sublayer to the view's layer.
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    videoPreviewLayer?.frame = view.layer.bounds
    view.layer.addSublayer(videoPreviewLayer!)

    // Start video capture.
    captureSession?.startRunning()

    // Initialize the frame used to highlight a detected QR code.
    qrCodeFrameView = UIView()
    qrCodeFrameView?.layer.borderColor = UIColor.greenColor().CGColor
    qrCodeFrameView?.layer.borderWidth = 2
    view.addSubview(qrCodeFrameView!)
    view.bringSubviewToFront(qrCodeFrameView!)

    // Add a button that will be used to close the scan view.
    videoBtn.setTitle("Close", forState: .Normal)
    videoBtn.setTitleColor(UIColor.blackColor(), forState: .Normal)
    videoBtn.backgroundColor = UIColor.grayColor()
    videoBtn.layer.cornerRadius = 5.0
    videoBtn.frame = CGRectMake(10, 30, 70, 45)
    videoBtn.addTarget(self, action: "pressClose:", forControlEvents: .TouchUpInside)
    view.addSubview(videoBtn)

    view.addSubview(scanAreaView!)
}

Update: The reason I don't think this is a duplicate is that the linked question is answered in Objective-C, while my code is in Swift, and for those of us new to iOS it is not easy to translate between the two. In addition, the accepted answer there does not show the actual code change that resolved the problem. It gives a good explanation of the need to use the metadataOutputRectOfInterestForRect method to convert the rectangle's coordinates, but I still can't get that method to work, because without an example I don't understand how it is supposed to be called.
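For anyone hitting the same wall, here is a minimal sketch of how metadataOutputRectOfInterestForRect could be wired into the code above (the names match the question's startScan() function; this is an untested illustration, not a confirmed fix). One known gotcha is that the conversion only produces meaningful values once the session is running:

```swift
// Convert the on-screen scan rect (view/layer coordinates) into the
// normalized coordinate space that rectOfInterest expects.
// Assumes videoPreviewLayer, captureMetadataOutput, and scanRect are
// set up as in the question's startScan() code.
captureSession?.startRunning()
if let previewLayer = videoPreviewLayer {
    // Must be called after the session has started; before that the
    // layer has no video dimensions to base the conversion on.
    captureMetadataOutput.rectOfInterest =
        previewLayer.metadataOutputRectOfInterestForRect(scanRect)
}
```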
