Bash: iterate over files listed in a text file and move them

I have a directory (directory A) containing 10,000 files. I want to move some of them to directory B and the rest to directory C. I created one text file listing the names of all the files that should go to directory B, and another listing the names of all the files that should go to directory C. How do I write a bash for loop to move these files to their new directories?

pseudo code:

    for each file in text file B:
        move file from directory A to directory B

    for each file in text file C:
        move file from directory A to directory C
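A minimal bash sketch of the pseudo code above, assuming one file name per line in each list (the list names listB.txt and listC.txt, and the directories A, B, C, are placeholders):

```shell
# Demo setup (placeholder names): directory A with two files, plus the two lists.
mkdir -p A B C
touch A/one.txt A/two.txt
printf 'one.txt\n' > listB.txt
printf 'two.txt\n' > listC.txt

# Read listB.txt line by line; each line is one file name to move from A to B.
while IFS= read -r name; do
    mv "A/$name" B/
done < listB.txt

# Same pattern for listC.txt into C.
while IFS= read -r name; do
    mv "A/$name" C/
done < listC.txt
```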

Sorry if this has been asked elsewhere, but I spent hours trying to figure out bash and I just don't get it. I could not find a similar enough question in another thread that I could understand (maybe I just don't know the right search terms).

I have something like this, but I could not get it to work:

    FILES=[don't know what goes here? An array? A list?

Can I just specify the name of the text file, and if so, how should the names in it be formatted: one per line (name1.ext, then name2.ext) or space-separated (name1.ext name2.ext)?]

    for f in $FILES; do mv $f /B/$f [not sure about the second argument to mv]; done

THX

BTW: Mac OS X 10.6.8 (Snow Leopard), Apple Terminal 2.1.2 (273.1), Bash 3.2.

+6
5 answers
    cat file-list.txt | while read i; do
        # TODO: your "mv" command here. "$i" will be a line from
        # the text file.
    done
+18

Bash FAQ #1: "How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?"

If the file name stays the same, the second argument to mv can be just the destination directory.
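Putting that together with the FAQ's read loop, a minimal runnable sketch (file-list.txt is the list name used above; the demo file names are placeholders):

```shell
# Demo setup (placeholder names): two files and a list naming them.
mkdir -p B
touch a.txt b.txt
printf 'a.txt\nb.txt\n' > file-list.txt

# Quote "$i" so names with spaces survive; since the names do not change,
# the last argument to mv can be just the destination directory.
while IFS= read -r i; do
    mv "$i" B/
done < file-list.txt
```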

+3

Run this script from the directory that contains your files:

    TO_B=file1.txt
    TO_C=file2.txt

    for file in $(cat "$TO_B"); do
        mv "${file}" B/
    done

    for file in $(cat "$TO_C"); do
        mv "${file}" C/
    done
+1

Even when there are a thousand user directories to move, this does not take much time.

    cat deleteduser | while read i; do mv -v "$i" ../deleted_user; done

Here deleteduser is the file containing the list of user names, and ../deleted_user is the destination directory.
+1

Do you need to use Bash? What about Perl or Kornshell? The problem here is that Bash (at least your version) does not have associative arrays (hashes), which Kornshell and Perl do. That means there is no easy way to track where each file goes. Imagine this in Perl:

    my %directoryB;    # Hash of the files to move to directory B
    my %directoryC;    # Hash of the files to move to directory C

    open (TEXT_FILE_B, "textFileB") or die qq(Can't open "textFileB");
    while (my $line = <TEXT_FILE_B>) {
        chomp $line;
        $directoryB{$line} = 1;
    }
    close TEXT_FILE_B;

    open (TEXT_FILE_C, "textFileC") or die qq(Can't open "textFileC");
    while (my $line = <TEXT_FILE_C>) {
        chomp $line;
        $directoryC{$line} = 1;
    }
    close TEXT_FILE_C;

The above lines create two hashes: one for the files that should move to directory B, and one for the files that should move to directory C. Now I just look each file up in the hashes and decide:

    foreach my $file (@directory) {    # A little cheating here...
        if (exists $directoryB{$file}) {
            move ($file, $directoryB);
        }
        elsif (exists $directoryC{$file}) {
            move ($file, $directoryC);
        }
    }

My if statements check whether a key is defined in each hash. If it is, I know the file should be moved to that directory. I only need to read the two text files once; after that, the two hashes record which directory each file moves to.


In Bash, however, we have no hashes, so we will use grep to check whether the file name appears in either list. I assume you have one file name per line.

    # $directoryB and $directoryC hold the two destination directory paths.
    ls | while read file
    do
        if grep -q "^${file}$" textFileB
        then
            mv "$file" "$directoryB"
        elif grep -q "^${file}$" textFileC
        then
            mv "$file" "$directoryC"
        fi
    done

grep -q searches each text file for a line that exactly matches the file name; if one is found, the file is moved to the corresponding directory. This is not very efficient, since it rescans a text file for every file in the directory, but it works, and with only 10,000 files the whole operation should take just a few minutes.

0
