At around 18 minutes into WWDC 2015 Session 409, the discussion suggests that generic functions can be optimized via generic specialization when Whole Module Optimization is turned on. Unfortunately my tests, which I'm not sure are valid, didn't show anything of the sort.
I ran some simple tests comparing the following two functions to see whether their performance would end up the same:
func genericMax<T : Comparable>(x: T, y: T) -> T {
    return y > x ? y : x
}

func intMax(x: Int, y: Int) -> Int {
    return y > x ? y : x
}
Simple XCTest:
func testPerformanceExample() {
    self.measureBlock {
        let x: Int = Int(arc4random_uniform(9999))
        let y: Int = Int(arc4random_uniform(9999))
        for _ in 0...1000000 {
            // let _ = genericMax(x, y: y)
            let _ = intMax(x, y: y)
        }
    }
}
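For the genericMax timings below, I used an equivalent test with the generic call enabled instead of intMax. Sketched here for completeness; the test name is arbitrary, the body is otherwise identical:

func testGenericMaxPerformance() {
    self.measureBlock {
        let x: Int = Int(arc4random_uniform(9999))
        let y: Int = Int(arc4random_uniform(9999))
        for _ in 0...1000000 {
            let _ = genericMax(x, y: y)   // generic version instead of intMax
        }
    }
}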
What happened
Without optimization, the two measurements differed, as you would expect:
- genericMax: 0.018 s
- intMax: 0.005 s
However, with Whole Module Optimization turned on, the measurements were still not similar:
- genericMax: 0.014 s
- intMax: 0.004 s
What I expected
With Whole Module Optimization turned on, I expected the two calls to take similar time. This makes me think my test is flawed.
Question
Assuming my tests are wrong or poorly designed, how could I better measure whether Whole Module Optimization applies generic specialization here?
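One idea I'm not sure about: since the results are discarded and the inputs never change inside the loop, the optimizer might be able to remove most of the work in both versions, which would make the comparison meaningless. Would a variant along these lines be a fairer measurement? (The test name and the accumulation trick are just my guess at a better setup.)

func testPerformanceUsingResults() {
    self.measureBlock {
        var total = 0
        for i in 0...1000000 {
            let x = Int(arc4random_uniform(9999)) + i   // vary the inputs each iteration
            let y = Int(arc4random_uniform(9999))
            total += genericMax(x, y: y)                // or intMax(x, y: y)
        }
        XCTAssert(total >= 0)   // consume the result so the loop cannot be optimized away
    }
}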