In my puzzle game, the pieces are drawn on screen using a CALayer for each piece. There are 48 pieces (in an 8x6 grid), each of which is 48x48 pixels. I'm not sure if that's too many layers, but if this isn't the best approach I don't know what is, because redrawing the entire screen with Quartz 2D every frame doesn't seem like it would be any faster.
In any case, the images for the pieces come from one large PNG file, which has 24 animation frames for each of 10 different states (so it measures 1152 x 480 pixels), and the animation is done by setting the contentsRect property on each CALayer as I move it.
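For illustration, here is roughly how such a grid of layers gets set up. This is a minimal sketch under my own assumptions (the spriteSheet parameter, the hard-coded grid constants, and the method name are mine, not the actual code):

    // Assumptions: 8x6 grid of 48x48 pieces; the sprite sheet is
    // 1152x480, i.e. 24 frame columns by 10 state rows of 48x48 cells.
    - (void)buildPieceLayersWithSheet:(CGImageRef)spriteSheet
                            intoLayer:(CALayer *)hostLayer
    {
        for (int row = 0; row < 6; row++) {
            for (int col = 0; col < 8; col++) {
                CALayer *piece = [CALayer layer];
                piece.frame = CGRectMake(col * 48.0f, row * 48.0f, 48.0f, 48.0f);
                piece.contents = (id)spriteSheet;
                // contentsRect is in unit coordinates: select one cell
                // out of the 24-column x 10-row sheet.
                piece.contentsRect = CGRectMake(0.0, 0.0, 1.0/24.0, 1.0/10.0);
                [hostLayer addSublayer:piece];
            }
        }
    }

All 48 layers can share the same backing image this way; only position and contentsRect change per piece.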
In fact it works very well, with up to 7 pieces tracking the touch point in the window. The strange thing is that when I first start moving the pieces, it's very jerky for the first 0.5 seconds or so, as if the CPU is busy doing something else, but after that it tracks and updates the screen at 40+ FPS (according to Instruments).
Does anyone have any ideas what could explain this initial stutter?
The only theory I've come up with is that it's decompressing the PNG bits to a temporary location and then discarding them once the animation stops; if that's the case, is there any way to tell Core Animation not to do that?
I could split the PNG file into 10 separate images, but I'm not sure that would help, since they would all (potentially) still have to be in memory at once anyway.
EDIT: OK, as described in the comments on the first answer, I've split the image into ten parts, each now 576 x 96, to stay within the hardware's maximum texture size. It's still not as smooth as it should be, so I've put a bounty on the question.
EDIT2: I've linked one of the images below. What happens is: the user's touch is tracked, and the offset from the start of tracking is calculated (pieces can move horizontally or vertically, and only in one plane). Then one of the images is selected as the layer's contents (depending on the piece type and whether it's moving horizontally or vertically), and the contentsRect property is set to pick out a single 48x48 frame from the larger image, with something like this:
    layer.position = newPos;
    layer.contents = (id)BallImg[imgNum];
    // Each 576x96 image holds 24 frames of 48x48: 12 columns x 2 rows,
    // so select frame (frame % 12, frame / 12) in unit coordinates.
    layer.contentsRect = CGRectMake((1.0/12.0) * (float)(frame % 12),
                                    0.5 * (float)(frame / 12),
                                    1.0/12.0, 0.5);
By the way, my theory that it was decompressing the original image afresh each time was wrong. I wrote code to copy the raw pixels from the decoded PNG into a new CGImage when the application loads, and it made no difference.
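For reference, the pre-decoding attempt looked roughly like this (a sketch, not the exact code; forceDecode is my name for it): it draws the PNG into a bitmap context once at load time and snapshots the result, so the pixels are already decoded before any animation starts.

    CGImageRef forceDecode(CGImageRef src)
    {
        size_t w = CGImageGetWidth(src);
        size_t h = CGImageGetHeight(src);
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        // Let Quartz allocate the pixel buffer; drawing the source image
        // into it forces decompression here, up front.
        CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, w * 4, space,
                                                 kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(space);
        CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), src);
        CGImageRef decoded = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);
        return decoded; // caller releases with CGImageRelease()
    }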
The next thing I'll try is copying each frame into a separate CGImage, which at least gets rid of the ugly contentsRect calculation.
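Something along these lines should do it, assuming each 576x96 strip holds 12x2 frames of 48x48 (sliceFrames is a hypothetical helper name):

    // Cut one 576x96 strip into 24 standalone 48x48 CGImages, so each
    // per-frame update becomes just: layer.contents = (id)frames[frame];
    void sliceFrames(CGImageRef strip, CGImageRef outFrames[24])
    {
        for (int i = 0; i < 24; i++) {
            CGRect cell = CGRectMake((i % 12) * 48, (i / 12) * 48, 48, 48);
            outFrames[i] = CGImageCreateWithImageInRect(strip, cell);
        }
    }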

EDIT3: Further research suggests this is a touch-tracking issue, not a Core Animation issue. I found a basic sample application that tracks touches, commented out the code that actually redraws the screen, and the NSLog() output shows exactly the same problem I've been seeing: a long delay between the touchesBegan event and the first touchesMoved event.
    2009-06-05 01:22:37.209 TouchDemo[234:207] Begin Touch ID 0 Tracking with image 2
    2009-06-05 01:22:37.432 TouchDemo[234:207] Touch ID 0 Tracking with image 2
    2009-06-05 01:22:37.448 TouchDemo[234:207] Touch ID 0 Tracking with image 2
    2009-06-05 01:22:37.464 TouchDemo[234:207] Touch ID 0 Tracking with image 2
    2009-06-05 01:22:37.480 TouchDemo[234:207] Touch ID 0 Tracking with image 2
The typical gap between touchesMoved events is about 20 ms; the gap between touchesBegan and the first touchesMoved is over 200 ms, ten times that. And that's with no calculations or screen updates at all, just an NSLog call. Sigh. I think I'll open a separate question for this.
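For completeness, the instrumentation amounts to nothing more than this kind of thing in the sample app's view (a sketch; touchID and imgNum stand in for the sample's own state):

    // No drawing at all -- just log each event, which is enough to show
    // the long gap between touchesBegan and the first touchesMoved.
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
    {
        NSLog(@"Begin Touch ID %d Tracking with image %d", touchID, imgNum);
    }

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
    {
        NSLog(@"Touch ID %d Tracking with image %d", touchID, imgNum);
    }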