Reading a file using "open()" vs "with open()"

I know there are many articles and questions about reading files in Python, but I still wonder why Python offers several ways to do the same task. What I really want to know is: what effect do these two methods have on performance?

Tags: performance, python, file-io
1 answer

Using the with statement is not about performance; I don't think there is any performance benefit or penalty to using it, as long as you perform the same cleanup yourself that with performs automatically.

When you use the with statement with the open() function, you do not need to close the file at the end, because with will close it for you automatically.
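For illustration, here is a minimal sketch of the two forms (the filename 'a.txt' is just a placeholder); the try/finally in the first version is what with effectively replaces:

    # Manual cleanup: the file must be closed explicitly; try/finally makes
    # sure the handle is released even if an error occurs while reading.
    f = open('a.txt', 'r')
    try:
        data = f.read()
    finally:
        f.close()

    # The with statement performs the same cleanup automatically: the file is
    # closed as soon as the block exits, whether normally or via an exception.
    with open('a.txt', 'r') as f:
        data = f.read()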

In addition, the with statement is not only for opening files; it works with any context manager. Basically, if you have an object that you want to be sure gets cleaned up when you are finished with it, or if any error occurs, you can define it as a context manager, and the with statement will call its __enter__() and __exit__() methods on entering and exiting the block. According to PEP 343:

This PEP adds a new statement "with" to the Python language to make it possible to factor out standard uses of try/finally statements.

In this PEP, context managers provide __enter__() and __exit__() methods that are invoked by the with statement on entry to and exit from the body of the with statement.
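As a minimal sketch (not from the original answer; the class name is hypothetical), this is what such a context manager looks like:

    # A minimal, hypothetical context manager: __enter__ runs on entering the
    # with block, __exit__ runs on leaving it, even if an exception was raised.
    class ManagedResource:
        def __enter__(self):
            print('acquiring resource')
            return self  # bound to the name after "as"

        def __exit__(self, exc_type, exc_value, traceback):
            print('releasing resource')
            return False  # returning False does not suppress exceptions

    with ManagedResource() as res:
        print('using', res)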

Also, a quick test of performance with and without with:

    In [14]: def foo():
       ....:     f = open('a.txt','r')
       ....:     for l in f:
       ....:         pass
       ....:     f.close()
       ....:

    In [15]: def foo1():
       ....:     with open('a.txt','r') as f:
       ....:         for l in f:
       ....:             pass
       ....:

    In [17]: %timeit foo()
    The slowest run took 41.91 times longer than the fastest. This could mean that an intermediate result is being cached
    10000 loops, best of 3: 186 µs per loop

    In [18]: %timeit foo1()
    The slowest run took 206.14 times longer than the fastest. This could mean that an intermediate result is being cached
    10000 loops, best of 3: 179 µs per loop

    In [19]: %timeit foo()
    The slowest run took 202.51 times longer than the fastest. This could mean that an intermediate result is being cached
    10000 loops, best of 3: 180 µs per loop

    In [20]: %timeit foo1()
    10000 loops, best of 3: 193 µs per loop

    In [21]: %timeit foo1()
    10000 loops, best of 3: 194 µs per loop