iOS frame animation script

There are a few SO questions related to frame-by-frame animation (e.g. frame-by-frame animation and other similar questions), however I feel mine is different, so here goes.

This is partly a design question from someone who has little experience with iOS.

I'm not sure "frame by frame" is the correct description of what I want to do, so let me describe it. Basically, I have a "script" for an animated movie, and I would like to play this script.
This script is a JSON file that describes a set of scenes. Each scene has several elements, such as a background image, a list of actors with their positions, and a background sound clip. In addition, each actor and background has an image file that represents it. (It is a little more complicated than that - each actor also has a "behavior", for example how he blinks, how he talks, etc.) So my job is to follow the script data, look up the actors and the background, and on each frame place the actors at their assigned positions, draw the correct background and play the sound file.
A movie can be paused and scrubbed forward or backward, similar to the YouTube player controls.
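For concreteness, here is a minimal sketch of what the script model might look like on the iOS side. The JSON layout and all field names here are my own assumptions, purely for illustration:

```swift
import Foundation
import CoreGraphics

// Hypothetical model for the JSON "script"; all names are illustrative.
struct Script: Codable {
    let scenes: [Scene]
}

struct Scene: Codable {
    let backgroundImage: String   // file name of the backdrop image
    let soundClip: String         // file name of the background audio
    let actors: [ActorCue]
}

struct ActorCue: Codable {
    let actorId: String           // refers to an actor definition (image + behavior)
    let x: CGFloat                // scripted position on screen
    let y: CGFloat
    let startTime: TimeInterval   // when the actor appears within the scene
}

// Loading it is then a plain Codable decode:
// let script = try JSONDecoder().decode(Script.self, from: jsonData)
```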
Most of the questions I saw that relate to frame animation have different requirements than mine (I will talk about some additional requirements later). They usually suggest using the animationImages property of UIImageView. This is great for animating a button or a checkbox, but they all assume a short, predefined set of images to play.
If I had to go with animationImages, I would have to pre-create all the images up front, and my gut feeling is that it will not scale (think about 30 fps for one minute: you get 60 * 30 = 1800 images; scrubbing and pause/play also seem complicated in this case).

So I'm looking for the right way to do this. My instinct (and I am learning more as I go) is that there are probably three or four main ways to do it.

  • Use Core Animation, defining "key points" and animating the transitions between them. For example, if an actor must be at point A at time t1 and at point B at time t2, then all I need is to animate what happens in between. I did something similar in ActionScript in the past, and it was nice, but it was particularly hard to implement scrubbing and keep everything synchronized, so I was not a big fan of the approach. Imagine that you need to pause in the middle of an animation, or scrub to the middle of an animation - it is doable, but not nice (see the first sketch after this list).
  • Set up a timer that fires, say, 30 times per second, and on each tick consult the model (the JSON script plus the descriptions of the actors and background) and draw whatever needs to be drawn at that time, using the Quartz 2D API and drawRect. This is probably the simplest approach, but I don't have enough experience to tell how well it will perform on different devices; it is probably CPU-bound, and it all depends on the amount of computation I need to do on each tick and the amount of effort iOS needs to draw it all. I have no clue (a sketch of this approach also follows the list).
  • Similar to 2, but use OpenGL for the drawing. I prefer 2 because its API is easier, but maybe OpenGL is more suitable resource-wise.
  • Use a game framework such as cocos2d, which I have never used before but which seems to solve more or less similar problems. It seems to have a nice API, so I would be happy to use it if it can cover all my requirements.
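To make option 1 concrete, here is a minimal sketch of my own (an assumption, not a worked-out solution): one CAKeyframeAnimation per actor layer, with the layer's clock frozen via its CAMediaTiming properties so that timeOffset can be used to pause and scrub.

```swift
import UIKit

// Option 1 sketch: move an actor layer from point A (at t1) to point B (at t2)
// on the overall movie timeline, then drive the timeline manually.
func addMove(to actorLayer: CALayer,
             from pointA: CGPoint, at t1: CFTimeInterval,
             to pointB: CGPoint, at t2: CFTimeInterval,
             movieDuration: CFTimeInterval) {
    // Freeze the layer's clock first so the animation is driven by timeOffset.
    actorLayer.speed = 0

    let move = CAKeyframeAnimation(keyPath: "position")
    // Hold at A until t1, move to B between t1 and t2, then hold at B.
    move.values = [pointA, pointA, pointB, pointB].map { NSValue(cgPoint: $0) }
    move.keyTimes = [0, t1 / movieDuration, t2 / movieDuration, 1]
        .map { NSNumber(value: $0) }
    move.duration = movieDuration
    move.isRemovedOnCompletion = false
    move.fillMode = .forwards
    actorLayer.add(move, forKey: "move")
}

// Scrubbing (or pausing) is then just a matter of setting the frozen layer's
// timeOffset to the desired movie time.
func scrub(_ actorLayer: CALayer, to movieTime: CFTimeInterval) {
    actorLayer.timeOffset = movieTime
}
```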
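And a similarly hedged sketch of option 2, using CADisplayLink as the timer and redrawing the model's state for the current movie time in draw(_:). MovieModel is a placeholder interface over the parsed script, not anything that exists yet:

```swift
import UIKit

// Placeholder interface over the parsed JSON script.
struct ActorPlacement {
    let image: UIImage
    let position: CGPoint
}

protocol MovieModel {
    func backgroundImage(at time: TimeInterval) -> UIImage?
    func actorPlacements(at time: TimeInterval) -> [ActorPlacement]
}

// Option 2 sketch: a display link ticks ~30 times per second; each tick
// advances the movie clock and redraws whatever the model says should be
// on screen at that time.
final class MovieView: UIView {
    var model: MovieModel?
    var currentTime: TimeInterval = 0           // scrub by setting this
    private var displayLink: CADisplayLink?
    private var lastTimestamp: CFTimeInterval = 0

    func play() {
        lastTimestamp = CACurrentMediaTime()
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.preferredFramesPerSecond = 30
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    func pause() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc private func tick(_ link: CADisplayLink) {
        currentTime += link.timestamp - lastTimestamp
        lastTimestamp = link.timestamp
        setNeedsDisplay()                       // triggers draw(_:) below
    }

    override func draw(_ rect: CGRect) {
        guard let model = model else { return }
        // Background first, then every actor at its scripted position.
        model.backgroundImage(at: currentTime)?.draw(in: bounds)
        for placement in model.actorPlacements(at: currentTime) {
            placement.image.draw(at: placement.position)
        }
    }
}
```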

In addition to the requirements I just described (play the movie given its "script" and the descriptions of the actors, backgrounds and sounds), there is another set of requirements -

  • The movie must be playable in full-screen mode or in partial-screen mode (where the rest of the screen is used for other controls).
  • I'm starting with the iPhone; naturally the iPad should follow.
  • I would like to create a thumbnail of the movie for local use on the phone (to display it in the gallery in my application). The thumbnail could simply be the first frame of the movie.
  • I want to be able to "export" the result as a movie that can easily be uploaded to YouTube or Facebook.

So the big question is whether any of the proposed implementations 1-4 (or others that you could suggest) can somehow export such a movie.
If none of the four can handle the export task, I have an alternative: use a server running ffmpeg that accepts a bundle of all the movie's images (I would have to draw them on the phone and upload them in order), and then the server would compile all the images, together with the soundtrack, into one movie.
Obviously, to keep things simple, I would prefer to do it without a server, i.e. be able to export the movie from the iPhone, but if that is too much to ask, then the last requirement would be at least the ability to export the set of all images (the frames of the movie), so I can bundle them and upload them to the server.

The duration of a movie should be one or two minutes. I hope this question is not too long and that it's clear...

Thanks!

+4
2 answers

Well written question. For your video export, check out AVFoundation (available since iOS 4). If I were going to implement this, I would try #1 or #4. I think #1 might be the fastest to just try out, but that is probably because I have no experience with cocos2d. I think you can pause and scrub Core Animation: check the CAMediaTiming protocol that it adopts.
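For reference, the pause/resume part of that suggestion usually comes down to the CAMediaTiming technique from Apple's Technical Q&A QA1673; a minimal sketch:

```swift
import UIKit

// Pause all animations attached to a layer by freezing its clock.
func pauseAnimations(of layer: CALayer) {
    let pausedTime = layer.convertTime(CACurrentMediaTime(), from: nil)
    layer.speed = 0
    layer.timeOffset = pausedTime
}

// Resume them, compensating for the time spent paused.
func resumeAnimations(of layer: CALayer) {
    let pausedTime = layer.timeOffset
    layer.speed = 1
    layer.timeOffset = 0
    layer.beginTime = 0
    let timeSincePause = layer.convertTime(CACurrentMediaTime(), from: nil) - pausedTime
    layer.beginTime = timeSincePause
}
```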

+1

Ran, you have several options. You won't find a "complete solution", but you can use existing libraries to skip a bunch of implementation and performance issues. You can, of course, try to build all of this in OpenGL, but my advice is to go with a different approach. I suggest that you render the entire "video" frame by frame on the device, based on your JSON settings. That basically comes down to setting up your scene elements and then determining the position of each element at times [0, 1, 2, ...], where each number indicates a frame at a certain frame rate (15, 20 or 24 FPS will be more than enough). Please take a look at my library for non-trivial iOS animations; in it you will find a class called AVOfflineComposition, which does a "comp a set of elements and save them to a video file on disk" operation. Obviously this class does not do everything you need, but it is a good starting point for the basic logic of creating a comp from N elements and writing the result to a video file. The point of creating the comp is that all of your code that reads the settings and places objects at specific positions in the comp can run offline, and the result you get at the end is a video file. Compare this to all the details involved in keeping all the elements in memory and then advancing faster or slower depending on how quickly everything renders.
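To illustrate the offline-comp idea in general terms (this is not the library's actual API, just a stock-UIKit sketch reusing the hypothetical MovieModel placeholder from earlier): walk the timeline frame by frame, render each frame off screen, and hand every finished image to whatever writes the video rather than keeping 1800 full-size images in memory.

```swift
import UIKit

// Render the whole movie off screen, one frame at a time, at a fixed frame
// rate. Each finished frame is passed to `handle`, e.g. a video writer.
func renderFrames(model: MovieModel, size: CGSize,
                  duration: TimeInterval, fps: Int,
                  handle: (Int, UIImage) -> Void) {
    let renderer = UIGraphicsImageRenderer(size: size)
    let frameCount = Int(duration * Double(fps))

    for frameIndex in 0..<frameCount {
        let t = TimeInterval(frameIndex) / TimeInterval(fps)
        let frame = renderer.image { _ in
            // Same per-frame drawing logic as on screen, just off screen.
            model.backgroundImage(at: t)?.draw(in: CGRect(origin: .zero, size: size))
            for placement in model.actorPlacements(at: t) {
                placement.image.draw(at: placement.position)
            }
        }
        handle(frameIndex, frame)
    }
}
```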

The next step is to create a single audio file that is the length of the "movie" of all the composited frames and that includes every sound at its specific time. This basically means mixing the audio at runtime and saving the result to an output file, so that the result can easily be played back with AVAudioPlayer. You can take a look at the very simple PCM mixer code I wrote for this kind of thing, but you might want to consider a more complete audio engine such as theamazingaudioengine.
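Stripped of all file I/O, the core of such a PCM mix is just summing samples with clamping. A rough sketch, assuming both tracks are already decoded to 16-bit PCM at the same sample rate:

```swift
import Foundation

// Mix two 16-bit PCM tracks sample by sample, clamping so loud overlaps
// don't wrap around. Reading/writing the samples (e.g. via AVAudioFile or
// Core Audio) is left out of this sketch.
func mixPCM(_ trackA: [Int16], _ trackB: [Int16]) -> [Int16] {
    let length = max(trackA.count, trackB.count)
    var mixed = [Int16](repeating: 0, count: length)
    for i in 0..<length {
        let a = i < trackA.count ? Int32(trackA[i]) : 0
        let b = i < trackB.count ? Int32(trackB[i]) : 0
        mixed[i] = Int16(clamping: a + b)   // clamp into [-32768, 32767]
    }
    return mixed
}
```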

Once you have an audio file and a video file, they can be played back together and kept in sync using the AVAnimatorMedia class. Take a look at the AVSync example source code, which shows tightly synchronized audio playback and movie display.

Your last requirement can be handled with the AVAssetWriterConvertFromMaxvid class; it implements the logic that reads a .mvid movie file and writes it as H.264 encoded video using the hardware H.264 encoder on the iPhone or iPad. With this code you will not need to write a server-side ffmpeg module. Besides, that would not work anyway, because it would take far too long to upload all the uncompressed video data to your server. You need to compress the video to H.264 before it can be uploaded or emailed from the app.
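If you would rather stay with stock AVFoundation than adopt the library, the same on-device H.264 export can be sketched with AVAssetWriter. This is a simplified sketch with minimal error handling, and the pixel-format details are assumptions; a real implementation would stream frames from the renderer instead of holding them all in an array.

```swift
import AVFoundation
import UIKit

// Feed rendered UIImage frames to an AVAssetWriter that encodes H.264 on
// device, so nothing uncompressed ever has to leave the phone.
func exportMovie(frames: [UIImage], size: CGSize, fps: Int32, to outputURL: URL) throws {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(size.width),
        AVVideoHeightKey: Int(size.height)
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB,
            kCVPixelBufferWidthKey as String: Int(size.width),
            kCVPixelBufferHeightKey as String: Int(size.height)
        ])
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for (index, frame) in frames.enumerated() {
        while !input.isReadyForMoreMediaData { usleep(10_000) }   // crude back-pressure
        guard let pool = adaptor.pixelBufferPool, let cgImage = frame.cgImage else { continue }

        var buffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(nil, pool, &buffer)
        guard let pixelBuffer = buffer else { continue }

        // Draw the frame into the pixel buffer's backing memory.
        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                width: Int(size.width), height: Int(size.height),
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        context?.draw(cgImage, in: CGRect(origin: .zero, size: size))
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

        let frameTime = CMTime(value: CMTimeValue(index), timescale: fps)
        if !adaptor.append(pixelBuffer, withPresentationTime: frameTime) { break }
    }

    input.markAsFinished()
    writer.finishWriting { }   // asynchronous; check writer.status when it fires
}
```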

0
