I came up with a much simpler implementation:
```scala
def iterUnzip[A1, B1, A2, B2](it: Iterator[(A1, B1)],
                              fA: Iterator[A1] => Iterator[A2],
                              fB: Iterator[B1] => Iterator[B2]) =
  it.toStream match {
    case s => fA(s.map(_._1).toIterator).zip(fB(s.map(_._2).toIterator))
  }
```
The idea is to convert the iterator to a Stream. A Scala Stream is lazy, but it also memoizes its elements. This effectively gives the same buffering mechanism as @AlexeyRomanov's solution, just in a more compact form. The only drawback is that Stream keeps its memoized elements on the stack rather than in an explicit queue, so if fA and fB consume elements at very different speeds, you can get a StackOverflowError.
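For illustration, here is a minimal sketch (not part of the solution itself) of the memoization behaviour that iterUnzip relies on: the same Stream can be traversed several times, but each element is pulled from the underlying iterator only once.

```scala
// Minimal sketch of Stream memoization (illustrative only, not part of iterUnzip).
// toStream evaluates the head eagerly, so "pulled: 1" is printed right here.
val s = Iterator(1, 2, 3).map { x => println("pulled: " + x); x }.toStream

s.map(_ * 10).toList // forces the rest: prints "pulled: 2" and "pulled: 3"
s.map(_ + 1).toList  // prints nothing: all three elements are already memoized
```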
Test that the result is lazy:
```scala
val iter = Stream.from(0)
  .map(x => (x, x + 1))
  .map(x => { println("fetched: " + x); x })
  .take(5)
  .toIterator

iterUnzip(
  iter,
  (_: Iterator[Int]).flatMap(x => List(x, x)),
  (_: Iterator[Int]).map(_ + 1)
).toList
```
Result:
```
fetched: (0,1)
iter: Iterator[(Int, Int)] = non-empty iterator
fetched: (1,2)
fetched: (2,3)
fetched: (3,4)
fetched: (4,5)
res0: List[(Int, Int)] = List((0,2), (0,3), (1,4), (1,5), (2,6))
```
I also tried quite hard to get a StackOverflowError by making the iterators very uneven, but couldn't.
```scala
val iter = Stream.from(0).map(x => (x, x + 1)).take(10000000).toIterator

iterUnzip(
  iter,
  (_: Iterator[Int]).flatMap(x => List.fill(1000000)(x)),
  (_: Iterator[Int]).map(_ + 1)
).size
```
It works fine with `-Xss5m` and produces:

```
res10: Int = 10000000
```
So, overall, this is a reasonably good and concise solution as long as you don't hit such extreme cases.