Consider for a moment that each view is backed by a CALayer. IIRC, views are about 44 bytes (plus ivars), layers are about 44 bytes (times three, since there is a model tree, a presentation tree, and a render tree), and layers that do any drawing are backed by bitmap contexts.
Or, for a simpler comparison: each pointer takes about as much memory as one pixel.
I only use tags where they make my life easier:
- Lots of similar views in a nib (a bunch of buttons, each of which picks a different color). I could connect a bunch of outlets, but then the code would have to deal with a bunch of ivars instead of a little arithmetic.
- Lots of subviews with similar functionality (for example, "pages" in a scroll view, or cell views in a container like UITableView). I could track them in an array, but sometimes I feel lazy.
- Whenever I need a view to store an extra integer (like a page number). There are plenty of other ways (subclassing, associated objects, sticking values in layer.style ...). This usually overlaps with the first two cases.
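As a sketch of the first two cases (hypothetical names; a plain UIKit setup with a colors array, a pickColor: action, and a selectedColor property are my assumptions, not anything from the original):

```objc
// Hypothetical example: a row of color-picker buttons told apart by tag,
// so one action method plus a little arithmetic replaces N outlets/ivars.
- (void)makeButtons {
    for (NSInteger i = 0; i < self.colors.count; i++) {
        UIButton *b = [UIButton buttonWithType:UIButtonTypeSystem];
        b.frame = CGRectMake(10 + 44 * i, 10, 40, 40);
        b.tag = i + 1;   // 0 is every view's default tag, so offset by 1
        [b addTarget:self action:@selector(pickColor:)
            forControlEvents:UIControlEventTouchUpInside];
        [self.view addSubview:b];
    }
}

- (void)pickColor:(UIButton *)sender {
    // Recover the index from the tag instead of comparing against outlets.
    self.selectedColor = self.colors[sender.tag - 1];
}
```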
Also, remember that [v viewWithTag:tag] can return v itself, any subview, any subview-of-a-subview, and so on. Consider a FooView class which has a "content view" with tag 1 and a "toolbar view" with tag 2:
    FooView *f1 = ...;
    FooView *f2 = ...;
    [f1.contentView addSubview:f2];
    NSLog(@"%@ %@", f1.toolbarView, f2.toolbarView);
What does it print? Well, both of them might be f2's toolbar!
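One plausible FooView implementation that behaves this way (an assumption for illustration: the accessors are just tag lookups):

```objc
@interface FooView : UIView
@property (nonatomic, readonly) UIView *contentView; // tagged 1
@property (nonatomic, readonly) UIView *toolbarView; // tagged 2
@end

@implementation FooView
- (UIView *)contentView { return [self viewWithTag:1]; }
- (UIView *)toolbarView { return [self viewWithTag:2]; }
@end

// After [f1.contentView addSubview:f2], f2's whole subtree (including its
// own tag-2 toolbar) lives inside f1's subtree, so [f1 viewWithTag:2]
// may find f2's toolbar rather than f1's.
```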
Yes, Apple could make the lookup smarter (it could keep searching until it finds the shallowest match, or search by iterative deepening), but I would assume it performs a simple depth-first search unless the documentation says otherwise.
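Roughly, the presumed behavior looks like this (a guess at the algorithm, not Apple's actual implementation):

```objc
// A depth-first viewWithTag: sketch. It returns the first match the
// recursion encounters -- which is not necessarily the shallowest one.
- (UIView *)viewWithTag:(NSInteger)tag {
    if (self.tag == tag) return self;
    for (UIView *sub in self.subviews) {
        UIView *match = [sub viewWithTag:tag];
        if (match) return match; // first hit wins, however deep it sits
    }
    return nil;
}
```

This is why the FooView example above is ambiguous: a deep match inside an earlier subview can shadow a shallow match in a later one.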