Is Objective-C fast enough for DSP / audio programming?
Real-time rendering
Definitely not. The Objective-C runtime and its libraries are simply not designed for the demands of real-time rendering. The reality is that it is nearly impossible to guarantee that using the ObjC runtime, or libraries such as Foundation (or even CoreFoundation), will not cause your renderer to miss its deadline.
The common case is locking: even a simple heap allocation (malloc, new/new[], [[NSObject alloc] init]) will likely require a lock.
To use ObjC is to use libraries and a runtime that assume locks are acceptable at any point during their execution. A lock can suspend your render thread (for example, inside your render callback, while waiting for the lock to be acquired). You can then miss your deadline because the render thread is held up, ultimately resulting in dropouts / glitches.
Ask a pro audio plugin developer: they will tell you that blocking inside the real-time rendering domain is forbidden. You cannot, for example, hit the file system or make heap allocations, because you have no practical upper bound on the time those operations take to complete.
Here's a good introduction: http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing
Offline rendering
Yes, for high-level messaging it would be acceptably fast in most scenarios. At the lower levels, I recommend against using ObjC because it would be wasteful: rendering could take many, many times longer if ObjC messaging is used at that level (compared to a C or C++ implementation).
See also: Will my iPhone app take a performance hit if I use Objective-C for low level code?
justin