How can I use a file in a command and redirect output to the same file without truncating it?

Basically I want to take input from a file, delete a line from it, and send the output back to the same file. Something along these lines, if that makes it clearer:

grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name > file_name 

However, when I do this, I end up with an empty file. Any thoughts?


You cannot do this, because bash processes the redirections first and then executes the command. Thus, by the time grep looks at file_name, it is already empty.
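You can see that ordering for yourself with a throwaway file (a minimal demonstration added here, not part of the original answer):

 echo 'some data' > demo.txt
 wc -c demo.txt > demo.txt   # the redirection truncates demo.txt before wc reads it
 cat demo.txt                # prints "0 demo.txt": wc saw an already-empty file
 rm demo.txt

You can use a temporary file, though: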

 #!/bin/sh
 tmpfile=$(mktemp)
 grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name > "${tmpfile}"
 cat "${tmpfile}" > file_name   # overwrite in place, preserving file_name's inode and permissions
 rm -f "${tmpfile}"

In a script like this, consider using mktemp to create the temporary file, but note that mktemp is not POSIX.
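If you need to stay strictly POSIX, a rough sketch of a fallback (the file name scheme here is illustrative only, and is weaker than mktemp against collisions):

 #!/bin/sh
 tmpfile="${TMPDIR:-/tmp}/filter.$$"      # hypothetical PID-based name; not as safe as mktemp
 trap 'rm -f "$tmpfile"' EXIT INT TERM    # clean up even if the script is interrupted
 grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name > "$tmpfile"
 cat "$tmpfile" > file_name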


Use sponge for such tasks. It is part of moreutils. sponge soaks up all of its standard input before opening the output file, which is exactly what is needed here.

Try the following command:

  grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | sponge file_name 

Use sed instead:

 sed -i '/seg[0-9]\{1,\}\.[0-9]\{1\}/d' file_name 

Try this simple approach:

 grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | tee file_name 

This time your file will not be empty :) and the output is also printed to your terminal. (Be aware, though, that tee truncates file_name as soon as it starts; this relies on grep reading the file before that happens, which usually works for small files but is not guaranteed, and can lose data on large files.)


You cannot use the redirection operator ( > or >> ) on the same file, because the redirection has higher priority and the file is created/truncated before the command is even invoked. To avoid this, use appropriate tools such as tee, sponge, sed -i, or any other tool that can write its results to the file itself (for example, sort file -o file).
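For instance, sort is safe to point at its own input, because it reads everything before it writes, while the redirected form is not:

 sort -o file_name file_name   # safe: sort reads all input before opening its output
 sort file_name > file_name    # unsafe: the shell truncates file_name before sort starts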

In fact, redirecting output back into the input file does not really make sense, and you should use a proper in-place editor for this, for example the Ex editor (part of Vim):

 ex '+g/seg[0-9]\{1,\}\.[0-9]\{1\}/d' -scwq file_name 

Where:

  • '+cmd' / -c - run the given Ex/Vim command
  • g/pattern/d - delete the lines matching the pattern, using the :global command (see :help :g )
  • -s - silent mode ( man ex )
  • -c wq - execute the :write and :quit commands



You can use sed to achieve the same (as already shown in other answers); however, in-place editing ( -i ) is a non-standard FreeBSD extension (it may behave differently between Unix/Linux systems), and sed is fundamentally a stream editor, not a file editor. See: Is there a practical Ex-mode?
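To illustrate how the in-place flag differs between the two main implementations (both invocations below are the standard form for their respective sed):

 sed -i '/seg[0-9]\{1,\}\.[0-9]\{1\}/d' file_name      # GNU sed: backup suffix is optional and attached to -i
 sed -i '' '/seg[0-9]\{1,\}\.[0-9]\{1\}/d' file_name   # BSD/macOS sed: -i requires a (possibly empty) suffix argument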


An alternative one-liner is to capture the contents of the file in a variable first:

 VAR=$(cat file_name); echo "$VAR" | grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' > file_name

(Note that the command substitution strips any trailing newlines from the file's contents and echo adds a single one back, so the very end of the file may change slightly.)

There is also ed (as an alternative to sed -i ):

 # cf. http://wiki.bash-hackers.org/howto/edit-ed
 printf '%s\n' H 'g/seg[0-9]\{1,\}\.[0-9]\{1\}/d' wq | ed -s file_name

You can use the slurp approach with POSIX Awk:

 !/seg[0-9]\{1,\}\.[0-9]\{1\}/ {
     q = q ? q RS $0 : $0      # accumulate every non-matching line in memory
 }
 END {
     print q > ARGV[1]         # after all input is read, overwrite the input file
 }

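A plausible one-line invocation (reconstructed here; the original answer only linked to an example) is:

 awk '!/seg[0-9]\{1,\}\.[0-9]\{1\}/ { q = q ? q RS $0 : $0 } END { print q > ARGV[1] }' file_name

This works because awk only opens ARGV[1] for writing in the END block, after the entire file has been read into q.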


You can do this with process substitution.

This is a bit of a hack: bash opens all the pipes asynchronously, and we have to work around that with sleep, so YMMV.

In your example:

 grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name > >(sleep 1 && cat > file_name) 
  • >(sleep 1 && cat > file_name) spawns the process that will receive grep's output (through a FIFO or /dev/fd entry, not a regular temporary file)
  • sleep 1 delays the write for a second, to give grep time to read the input file
  • finally, cat > file_name writes the output back to the file

Since this question is the top result in search engines, here is a one-liner from https://serverfault.com/a/547331 that uses a subshell instead of sponge (which is often not part of a vanilla installation such as OS X):

 echo "$(grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name)" > file_name 

Or the general case:

 echo "$(cat file_name)" > file_name 

Test with https://askubuntu.com/a/752451 :

 printf "hello\nworld\n" > file_uniquely_named.txt && for ((i=0; i<1000; i++)); do echo "$(cat file_uniquely_named.txt)" > file_uniquely_named.txt; done; cat file_uniquely_named.txt; rm file_uniquely_named.txt 

This should print:

 hello
 world

Whereas calling cat file_uniquely_named.txt > file_uniquely_named.txt in the current shell:

 printf "hello\nworld\n" > file_uniquely_named.txt && for ((i=0; i<1000; i++)); do cat file_uniquely_named.txt > file_uniquely_named.txt; done; cat file_uniquely_named.txt; rm file_uniquely_named.txt 

Prints an empty string.

I have not tested this on large files (possibly more than 2 or 4 GB).

I took this answer from Hart Simha and Kos .


I usually use the tee program to do this:

 grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | tee file_name 

Note, though, that tee does not actually use a temporary file; as with the tee answer above, this relies on grep reading file_name before tee truncates it, so it is not reliable for large files.


Try this:

 echo -e "AAA\nBBB\nCCC" > testfile cat testfile AAA BBB CCC echo "$(grep -v 'AAA' testfile)" > testfile cat testfile BBB CCC 

The following will do the same thing that sponge does, without requiring moreutils:

  shuf --output=file --random-source=/dev/zero 

The --random-source=/dev/zero part makes shuf do its job without doing any shuffling at all, so it buffers your input without changing it.
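Applied to the question's example, the pipeline would look something like this (assuming GNU shuf from coreutils, which provides --output and --random-source):

 grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | shuf --output=file_name --random-source=/dev/zero

Like sponge, shuf has to read all of its input before it can write anything, so file_name is not truncated early.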

However, it is true that using a temporary file is best for performance reasons. So, here is the function I wrote that will do this for you in a generalized way:

 # Pipes a file into a command, and pipes the output of that command
 # back into the same file, ensuring that the file is not truncated.
 # Parameters:
 #     $1: the file.
 #     $2: the command. (With $3... being its arguments.)
 # See https://stackoverflow.com/a/55655338/773113
 function siphon {
     local tmp=$(mktemp)
     local file="$1"
     shift
     "$@" < "$file" > "$tmp"   # "$@" (not $*) so quoted arguments survive intact
     mv "$tmp" "$file"
 }
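For the question's example, the call would then be:

 siphon file_name grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}'

Note that mv replaces file_name with a new file, so the original inode (and possibly its permissions and ownership) is not preserved.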

Maybe you can do it like this:

 grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | cat > file_name

(Be warned: this has the same race as a plain redirection. The shell truncates file_name when it sets up cat's redirection, so grep may find the file already empty.)


