Sorting is not the fastest way to do this; it can be done in O(N) time using a set (hash table).
>>> x = [8, 2, 3, 4, 5]
>>> y = [6, 3, 7, 2, 1]
>>> set_y = set(y)
>>> next((a for a in x if a in set_y), None)
2
Or:
from itertools import ifilter  # Python 2

next(ifilter(set(y).__contains__, x), None)
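In Python 3 there is no itertools.ifilter; the built-in filter is already lazy, so it can be dropped in directly:

>>> next(filter(set(y).__contains__, x), None)
2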
This is what it does:
>>> def foo(x, y):
...     seen = set(y)
...     for item in x:
...         if item in seen:
...             return item
...     return None
...
>>> foo(x, y)
2
To show the time difference between the methods (the naive nested scan, sorting plus binary search, and set lookup), here are some timings. I had to do this in order to refute an amazing number of people who thought binary search was faster ...:
from itertools import ifilter
from bisect import bisect_left
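The benchmarked functions themselves aren't reproduced in the answer; the following is my sketch of what they plausibly looked like, reconstructed from the function names and the behaviour implied by the results below, not necessarily the exact code that was timed:

def common_set(x, y):
    # Build a hash set of y once (O(len(y))), then scan x (O(len(x))).
    seen = set(y)
    return next((item for item in x if item in seen), None)

def common_b_sort(x, y):
    # Sort a copy of y, then binary-search it for each element of x:
    # O(len(y) log len(y)) to sort plus O(len(x) log len(y)) to search.
    sorted_y = sorted(y)
    for item in x:
        i = bisect_left(sorted_y, item)
        if i != len(sorted_y) and sorted_y[i] == item:
            return item
    return None

def common_naive(x, y):
    # Nested linear scan: O(len(x) * len(y)) in the worst case.
    for item in x:
        if item in y:
            return item
    return None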
The test data was:
a = [1, 2, 3, 9, 1, 1] * 100000
b = [44, 11, 23, 9, 10, 99] * 10000
c = [1, 7, 2, 4, 1, 9, 9, 2] * 1000000
Results:
running tests - time limit of 20 seconds
[ 9 ] Function: common_set, a, b. Time: 0.00569520707241
[ 9 ] Function: common_b_sort, a, b. Time: 0.0182240340602
[ 9 ] Function: common_naive, a, b. Time: 0.00978832505249
[ 7 ] Function: common_set, c, d. Time: 0.249175872911
[ 7 ] Function: common_b_sort, c, d. Time: 1.86735751332
[ 7 ] Function: common_naive, c, d. Time: 0.264309220865
[ 40000 ] Function: common_set, e, f. Time: 0.00966861710078
[ 40000 ] Function: common_b_sort, e, f. Time: 0.0505980508696
[ ] Function: common_naive, e, f. Time: Too long!!
[ None ] Function: common_set, g, h. Time: 1.11300018578
[ None ] Function: common_b_sort, g, h. Time: 14.9472068377
[ ] Function: common_naive, g, h. Time: Too long!!
[ 5411743 ] Function: common_set, i, j. Time: 1.88894859542
[ 5411743 ] Function: common_b_sort, i, j. Time: 6.28617268396
[ 5411743 ] Function: common_naive, i, j. Time: 1.11231867458
This gives you an idea of how it will scale for large inputs: O(N) vs O(N log N) vs O(N^2).
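For reference, here is a minimal harness in the spirit of the output above. This is a hypothetical helper of mine, not the original test script; the 20-second limit would need a watchdog thread or subprocess, which is omitted here:

import time

def time_call(func, x, y):
    # Hypothetical helper: time one call and print a line shaped like the
    # results above ([ result ] Function: name. Time: seconds).
    start = time.time()
    result = func(x, y)
    print('[ %s ] Function: %s. Time: %s' % (result, func.__name__, time.time() - start))

time_call(common_set, a, b)     # e.g. [ 9 ] Function: common_set. Time: ...
time_call(common_b_sort, a, b)  # e.g. [ 9 ] Function: common_b_sort. Time: ...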