If memory is a problem, and if you know the size of the array ahead of time, you probably won't want to read the entire file into memory first. Something like this is probably more appropriate:
From a couple of quick (and unexpected) tests on my machine, it seems that `map` might not even be needed:

```python
# Allocate memory (np.empty would work too and be marginally faster).
a = np.zeros((3000, 300), dtype=np.float32)
with open(filename) as f:
    for i, line in enumerate(f):
        a[i, :] = line.split()
```
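The reason `map` is unnecessary is that NumPy performs the string-to-float conversion itself when a sequence of strings is assigned into a float array. A minimal sketch:

```python
import numpy as np

# NumPy casts each string to float32 during the assignment, so no
# explicit map(float, ...) is needed.
a = np.zeros((2, 3), dtype=np.float32)
a[0, :] = "1.5 2.0 3.25".split()  # row of str -> float32
```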
It may not be the fastest option, but it is the most memory-efficient.
Some tests:
```python
import numpy as np

def func1():
    # No map -- and pretty speedy :-).
    a = np.zeros((3000, 300), dtype=np.float32)
    with open('junk.txt') as f:
        for i, line in enumerate(f):
            a[i, :] = line.split()

def func2():
    a = np.zeros((3000, 300), dtype=np.float32)
    with open('junk.txt') as f:
        for i, line in enumerate(f):
            a[i, :] = map(np.float32, line.split())

def func3():
    a = np.zeros((3000, 300), dtype=np.float32)
    with open('junk.txt') as f:
        for i, line in enumerate(f):
            a[i, :] = map(float, line.split())

import timeit
print timeit.timeit('func1()', setup='from __main__ import func1', number=3)  # 1.36s
print timeit.timeit('func2()', setup='from __main__ import func2', number=3)  # 11.53s
print timeit.timeit('func3()', setup='from __main__ import func3', number=3)  # 1.72s
```
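Note that the benchmark above is Python 2 (`print` statement, `map` returning a list). A self-contained Python 3 sketch of the same comparison, which writes its own small throwaway file (the `junk_demo.txt` name and the smaller 300×30 shape are arbitrary choices for the demo, not from the original tests), might look like this:

```python
import timeit
import numpy as np

ROWS, COLS = 300, 30  # smaller than the original 3000x300 so the demo runs quickly

# Write a throwaway whitespace-delimited file to parse.
with open('junk_demo.txt', 'w') as f:
    for _ in range(ROWS):
        f.write(' '.join(str(float(j)) for j in range(COLS)) + '\n')

def no_map():
    # Let NumPy convert the strings during assignment.
    a = np.zeros((ROWS, COLS), dtype=np.float32)
    with open('junk_demo.txt') as f:
        for i, line in enumerate(f):
            a[i, :] = line.split()
    return a

def with_map():
    # In Python 3, map returns an iterator, so it must be
    # materialized with list() before assigning to a slice.
    a = np.zeros((ROWS, COLS), dtype=np.float32)
    with open('junk_demo.txt') as f:
        for i, line in enumerate(f):
            a[i, :] = list(map(float, line.split()))
    return a

print(timeit.timeit(no_map, number=3))
print(timeit.timeit(with_map, number=3))
```

Absolute timings will differ from the numbers quoted above, but the ranking (no `map` beating `map(np.float32, ...)`) should hold.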