How to adjust the sensitivity of MKAnnotationView?

Currently I have a map view with some annotations on it, using custom images. The problem I'm trying to fix is touch sensitivity: when I try to drag an annotation, it seems I have to touch its exact center for it to receive the touch. Is there a way to enlarge the touchable area?

+8
ios objective-c iphone cocoa-touch
4 answers

To do this, you need to subclass MKAnnotationView to create your own annotation view. In your subclass, override the following methods:

    - (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
        // Return YES if the point is inside an area you want to be touchable.
    }

    - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
        // Return the deepest view that the point is inside of.
    }
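For the original question (making the image easier to grab), a minimal sketch of pointInside: might look like the following; the class name and the 20-point inset are my own illustrative choices, not part of the answer:

    #import <MapKit/MapKit.h>

    // Illustrative subclass that enlarges the touchable area of the view.
    @interface DraggableAnnotationView : MKAnnotationView
    @end

    @implementation DraggableAnnotationView

    - (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
        // Grow the hit area 20 points beyond the view's bounds on every side;
        // a negative inset expands the rectangle.
        CGRect touchArea = CGRectInset(self.bounds, -20.0, -20.0);
        return CGRectContainsPoint(touchArea, point);
    }

    @end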

Overriding these methods also allows interactive subviews (e.g. buttons) inside the annotation view. The default implementation in MKAnnotationView is not strict about pointInside: and hitTest:, because it allows touches that actually land inside one annotation to be delivered to another. It does this by finding the annotation whose center is closest to the touch point and sending the events to that annotation, so that nearby (overlapping) annotations don't block each other from being selected. In your case, however, you probably want the topmost annotation to block the others so the user can select and drag it, so the approach above is probably what you want; if not, it should at least set you on the right path.

EDIT: I was asked in the comments for an example implementation of hitTest:withEvent:. It depends on what you are trying to achieve. The original question was about touching and dragging images in the annotation, whereas in my case I have some buttons inside the annotation that I want to be interactive. Here is my code:

    - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
        UIView *hitView = [super hitTest:point withEvent:event];
        if (hitView != nil) {
            // Ensure that the callout appears above all other views.
            [self.superview bringSubviewToFront:self];
            // We tapped inside the callout.
            if (CGRectContainsPoint(self.resultView.callButton.frame, point)) {
                hitView = self.resultView.callButton;
            } else if (CGRectContainsPoint(self.resultView.addButton.frame, point)) {
                hitView = self.resultView.addButton;
            } else {
                hitView = self.resultView.viewDetailsButton;
            }
            [self preventSelectionChange];
        }
        return hitView;
    }

As you can see, this is pretty simple. The MKAnnotationView implementation (called via super on the first line) only returns the outermost view; it does not descend through the view hierarchy to find which subview the point is actually inside. In my case I just check whether the touch landed in one of my three buttons and return that button. In other cases you might do a simple rectangle-based drill-down through the hierarchy, or a more complex hit test, for example to let touches pass through transparent areas of your view. If you do need to descend further, CGRectContainsPoint can be used just as I used it, but remember to convert the point into the local coordinate space of each view level you recurse into.
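For example, a generic rectangle-based drill-down with that coordinate conversion might look like this (my own sketch of the idea for the same subclass, not the answerer's code):

    - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
        UIView *hitView = [super hitTest:point withEvent:event];
        if (hitView != nil) {
            for (UIView *subview in self.subviews) {
                // Convert the point into the subview's local coordinate
                // space before testing it; each level has its own origin.
                CGPoint localPoint = [self convertPoint:point toView:subview];
                if ([subview pointInside:localPoint withEvent:event]) {
                    return subview;
                }
            }
        }
        return hitView;
    }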

The preventSelectionChange method stops my custom annotation from being selected. I use the view as a custom/interactive callout attached to the map pins, and this keeps the pin's annotation selected rather than letting the touch move the selection to this callout annotation.
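The answer does not show preventSelectionChange itself; a purely hypothetical sketch, assuming the goal is just to undo an unwanted selection change, might be:

    // Hypothetical implementation only: the original answer omits this method.
    // The idea is to keep the selection from moving to this callout annotation
    // by immediately reverting its selected state.
    - (void)preventSelectionChange {
        if (self.selected) {
            [self setSelected:NO animated:NO];
        }
    }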

+17

Did you subclass MKAnnotationView, or did you just set its image property?

Here is the documentation for setting the image property.

Discussion: Assigning a new image to this property also resizes the view frame so that it matches the width and height of the new image. The position of the view frame does not change.

Check the frame size of your annotation view; that frame is the area in which it can receive touches.
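For example, you can log the frame right after assigning the image (a minimal sketch; the annotation variable and asset name are placeholders):

    MKAnnotationView *annotationView =
        [[MKAnnotationView alloc] initWithAnnotation:annotation
                                     reuseIdentifier:@"pin"];
    annotationView.image = [UIImage imageNamed:@"customPin"]; // illustrative asset
    // The frame is resized to match the image, so a small image
    // means a small touch target.
    NSLog(@"annotation view frame: %@", NSStringFromCGRect(annotationView.frame));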

+1

I implemented something similar as follows:

  • Create a UIGestureRecognizer subclass that handles the touches.
  • Subclass UIImageView; this class uses the recognizer subclass to handle your gestures.
  • Use this image view in your annotations.

The gesture recognizer subclass will handle your gestures no matter where on the image you perform them. This should help you.
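A sketch of the recognizer part of this approach, assuming a subclass that treats any touch on the image as a drag (the class name is illustrative):

    #import <UIKit/UIKit.h>
    #import <UIKit/UIGestureRecognizerSubclass.h> // required to set self.state

    // Illustrative recognizer that begins on any touch inside its view,
    // so the whole image is draggable rather than just its center.
    @interface AnnotationDragRecognizer : UIGestureRecognizer
    @end

    @implementation AnnotationDragRecognizer

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        self.state = UIGestureRecognizerStateBegan;
    }

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        self.state = UIGestureRecognizerStateChanged;
    }

    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        self.state = UIGestureRecognizerStateEnded;
    }

    - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
        self.state = UIGestureRecognizerStateCancelled;
    }

    @end

You would then attach it to the image view in the annotation, e.g. [imageView addGestureRecognizer:[[AnnotationDragRecognizer alloc] initWithTarget:self action:@selector(handleDrag:)]], where handleDrag: is your own action method.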

Keep us posted on whether this solution works for you...

0

@jhabbott's solution never worked for me, as I mentioned here.

I have an image and a label side by side. The image was shown by setting the annotation view's image property, and the label by adding a UILabel as a subview.

I overrode point(inside:with:) to extend the touchable area to the UILabel as well (in addition to the image area), and hitTest returned exactly the same view whether I tapped the label or the image. But tapping the label didn't trigger the callback...

Finally, I ended up enlarging the MKAnnotationView frame to wrap the label + image: I set annotationView.image to nil and added my own UIImageView instead.

Since I wanted the anchor point to be in the middle of the image, I had to set a custom one:

    self.frame = CGRect(x: 0, y: 0, width: self.myLabel.frame.width, height: self.myLabel.frame.height)
    self.centerOffset = CGPoint(x: self.frame.width/2, y: self.myImageView.frame.height/2)

Then I removed the point(inside:with:) and hitTest(_:with:) overrides, which no longer did anything.

And now, for the first time, my annotation is fully responsive to touch.

0
