`yield from`, like `for item in x: yield item`, is linear per iterable. However, function calls are not free, and because of the nesting in your `l`, when you double N you don't just double the number of terms, you also double the depth of the generator chain, so each value has to be relayed up through more `yield from` frames. Anything that scales with the number of calls — the recursive `flatten` invocations themselves, the delegation overhead of `yield from` — therefore grows roughly quadratically, regardless of which of those overheads dominates. If this is right, then we expect that a list with the same number of elements and no nesting will be linear, average nesting will be intermediate, and heavy nesting will be super-linear. And this is what we see:
import timeit s = """ from collections import Iterable def flatten(l): for e in l: if isinstance(e, Iterable): yield from flatten(e) else: yield e def build_lots(N): l = [-1] for i in range(N): l = [i, [l], i] return l def build_some(N): l = [-1] for i in range(N): l = [i]+l+[i] if i % 2 else [i,[l],i] return l def build_none(N): return range(2*N+1) """ def get_time(size, which_build, n=100): setup = s + "l = build_{}({});".format(which_build, size) ans = timeit.Timer(setup=setup, stmt="z=list(flatten(l))").timeit(n) return ans print('length', 'none','some','lots') for w in range(0, 500, 50): print(2*w+1, get_time(w, 'none'), get_time(w, 'some'), get_time(w, 'lots'))
produces
```
length none some lots
1 0.0012789969332516193 0.0006600483320653439 0.000653265044093132
101 0.030214487109333277 0.06863495009019971 0.10554128512740135
201 0.05980244372040033 0.188231083098799 0.3237948380410671
301 0.08960435865446925 0.3752179825678468 0.6493003228679299
401 0.11986956000328064 0.6066137161105871 1.147628225851804
501 0.14737469609826803 0.9323012446984649 1.7087256000377238
601 0.18555369088426232 1.2575508910231292 2.2957410947419703
701 0.20820995513349771 1.712264522910118 3.094764341134578
801 0.23618148919194937 2.100640726275742 4.079551971051842
901 0.26863432209938765 2.617169467266649 4.858607416041195
```
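If the depth of the generator chain really is the culprit, a flatten that keeps its own explicit stack of iterators should stay linear even on the `build_lots` input, because each element is yielded exactly once instead of being relayed through every enclosing generator. This is a hypothetical rewrite for illustration (`flatten_iterative` is my name, not part of the benchmark above), with an extra guard so strings don't recurse forever:

```python
from collections.abc import Iterable

def flatten_iterative(l):
    # Hypothetical stack-based alternative: instead of nested generators
    # delegating with `yield from`, keep one explicit stack of iterators.
    stack = [iter(l)]
    while stack:
        try:
            e = next(stack[-1])
        except StopIteration:
            # Innermost iterator exhausted; resume the enclosing one.
            stack.pop()
            continue
        if isinstance(e, Iterable) and not isinstance(e, (str, bytes)):
            stack.append(iter(e))
        else:
            yield e
```

On the same inputs this produces the elements in the same order as the recursive version, but the per-element cost no longer depends on how deeply that element was nested.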
