How to process every second file in bash?

I have a directory with dozens of files and would like to do something with every second file in it. So far I have only used find, but that processes all the files:

 find ./dir/ -type f -exec cat {} \; 
Tags: scripting, bash, find
4 answers
    cnt=0
    for file in $(find ./dir -type f); do   # fine if there are not too many matches
        let cnt=cnt+1
        if [ $cnt -eq 2 ]; then
            echo $file                      # do something with the file
            cnt=0                           # reset, so the next alternate file is picked up
        fi
    done
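Note that the `$(find ...)` loop above word-splits on whitespace, so filenames containing spaces break it. A safer sketch of the same counter idea, assuming GNU find's `-print0` and `sort -z` (the `demo_dir` sample directory is only for illustration):

```shell
#!/bin/bash
# Word-splitting-safe variant of the counter loop: filenames with
# spaces or newlines survive because find emits NUL-delimited paths.
demo_dir=$(mktemp -d)          # hypothetical sample directory for illustration
touch "$demo_dir/f1" "$demo_dir/f2" "$demo_dir/f3" "$demo_dir/f4"

cnt=0
selected=()                    # collect every second file here
while IFS= read -r -d '' file; do
    cnt=$((cnt+1))
    if [ "$cnt" -eq 2 ]; then
        selected+=("$file")    # "do something": here we just remember it
        cnt=0
    fi
done < <(find "$demo_dir" -type f -print0 | sort -z)

printf '%s\n' "${selected[@]}"
```

Process substitution (`< <(...)`) keeps the loop in the current shell, so `cnt` and `selected` survive after the loop ends.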

or

 second_file=$(find -type f | head -2 | tail -1); 
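The `head`/`tail` trick generalizes to picking the Nth file; a small sketch (`n` and `demo_dir` are illustrative names, and `sort` makes the order deterministic):

```shell
#!/bin/bash
# Pick the n-th file from a directory listing via head/tail.
n=3
demo_dir=$(mktemp -d)          # hypothetical sample directory for illustration
touch "$demo_dir/a" "$demo_dir/b" "$demo_dir/c"
nth_file=$(find "$demo_dir" -type f | sort | head -n "$n" | tail -1)
echo "$nth_file"
```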
    for file in `find dir -type f | awk 'NR % 2 == 0'`; do
        echo $file
    done

NR is awk's current line number. To get the odd lines instead, use NR % 2 == 1 .
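A quick self-contained demonstration of the NR selection (the input lines here are made up for illustration):

```shell
#!/bin/bash
# awk keeps a running record counter in NR; a pattern with no action
# acts as a filter, printing only the matching lines.
evens=$(printf 'a\nb\nc\nd\n' | awk 'NR % 2 == 0')   # lines 2 and 4
odds=$(printf 'a\nb\nc\nd\n' | awk 'NR % 2 == 1')    # lines 1 and 3
echo "$evens"
echo "$odds"
```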


If you want to run mycmd on each alternate file, this may help:

  find ./dir -type f | sort -n | sed -n '1~2!p' | sed 's/^/mycmd /' | sh 

I copied the sed expression from "How to delete every other line using sed?".
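For what it's worth, `1~2` is a GNU sed extension (first~step addressing: line 1, then every 2nd line). A minimal demonstration of why `1~2!p` yields the even-numbered lines:

```shell
#!/bin/bash
# "1~2" matches lines 1, 3, 5, ...; negating with "!" and printing
# (-n suppresses default output) gives the complementary even lines.
evens=$(printf '1\n2\n3\n4\n5\n' | sed -n '1~2!p')
echo "$evens"    # lines 2 and 4
```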


I had every file twice and needed to delete every second one. find returned the files in arbitrary order, so I added sorting. Now it looks like this:

    #!/bin/bash
    DIRNAME="<directoryNameContainingYourFiles>"
    for file in `find $DIRNAME -type f | sort | awk 'NR % 2 == 0'`; do
        echo "going to modify" $file
        # ls -laFh $file          # show file details
        # rm $file                # delete file
        # mv $file <newDirName>   # move file to <newDirName>
    done

Put this in a file called scriptName, make it executable with

 chmod +x scriptName 

and run it by calling

 ./scriptName 
