Too many of these answers use shell expansion to hold the results of find. This is not something you should do lightly.
Say I have 30,000 songs, and the names of those songs average about 30 characters. Even setting the whitespace problem aside, my find will return over 1,000,000 characters, and it is very likely that my command-line buffer is not that large. If I did something like this:
for file in $(find . -name "*.mp3")
do
    echo "some sort of processing on $file"
done
The problem (besides the spaces in the file names) is that your command-line buffer will simply drop whatever overflows from find. It can even fail in worse ways.
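A minimal sketch of the whitespace failure, using a hypothetical file name in a throwaway directory: the unquoted `$(find ...)` expansion is split on whitespace, so one file name with a space arrives as two loop iterations.

```shell
#!/bin/sh
# Hypothetical demo directory and file name -- not from the original answer.
dir=$(mktemp -d)
cd "$dir"
touch "two words.mp3"

count=0
for file in $(find . -name "*.mp3")   # unquoted expansion: word splitting!
do
    count=$((count + 1))
done
echo "$count"    # prints 2, not 1 -- the single name was split at the space
```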
This is why the xargs command was invented. It guarantees that the command-line buffer is never overflowed: it executes the command following xargs as many times as necessary, batching the arguments so each invocation stays within the limit:
$ find . -name "*.mp3" | xargs ...
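That batching behaviour is easy to see by forcing a small batch size. The sketch below (a toy example, not part of the original answer) caps each invocation at two arguments with -n 2, so xargs runs echo three times for five inputs:

```shell
#!/bin/sh
# Five arguments, batches of two: xargs invokes echo three times,
# producing three output lines ("a b", "c d", "e").
runs=$(printf '%s\n' a b c d e | xargs -n 2 echo | wc -l)
echo "$runs"   # 3
```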
Of course, using xargs this way still chokes on whitespace in file names, but modern find and xargs implementations have a way around that:
$ find . -name "*.mp3" -print0 | xargs --null ...
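Here -print0 terminates each name with a NUL byte and xargs --null (or its short form -0) splits only on NUL, so embedded spaces pass through intact. A small sketch with hypothetical file names:

```shell
#!/bin/sh
# Hypothetical demo: one plain name and one name with an embedded space.
dir=$(mktemp -d)
touch "$dir/plain.mp3" "$dir/with space.mp3"

# NUL-delimited pipeline: each name reaches printf as one whole argument.
found=$(find "$dir" -name "*.mp3" -print0 | xargs -0 -n1 printf '%s\n' | wc -l)
echo "$found"   # 2 -- the space did not split the second name
```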
If you can guarantee that the file names contain no tabs or newlines (embedded spaces are handled, as shown below), piping find into a while read loop is better still:
find . -name "*.mp3" | while read file
do
    echo "some sort of processing on $file"
done
The pipe feeds file names into while read long before any command-line buffer could overflow. Even better, read with a single variable reads the entire line and puts everything on that line into $file. This is still not ideal, because read delimits its input on newlines and, without -r, mangles backslashes, so file names such as:
I will be in \n your heart in two lines.mp3
I love song names with  multiple  spaces.mp3
I \t have \t a \t thing \t for \t tabs.mp3
still fail. The name containing \n is split across two read calls, and if $file is later expanded unquoted, the runs of tabs and spaces collapse, leaving values like:

I will be in
your heart in two lines.mp3
I love song names with multiple spaces.mp3
I have a thing for tabs.mp3
To get around this, use find ... -print0 to emit NUL-terminated names, then read them back in a bash while loop: clear IFS so whitespace is not trimmed, pass -r so backslashes survive, and pass -d '' so read delimits on the NUL byte.
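Putting the pieces together, a sketch of the fully robust pattern (hypothetical file names; requires bash for read -d '' and the process substitution, which keeps the loop out of a subshell so $count survives):

```shell
#!/bin/bash
# Hypothetical demo: names with double spaces, a backslash, and a newline.
dir=$(mktemp -d)
touch "$dir/two  spaces.mp3" "$dir/back\\slash.mp3"
touch "$dir/new
line.mp3"

count=0
while IFS= read -r -d '' file        # IFS= keeps whitespace, -r keeps backslashes
do
    count=$((count + 1))             # "$file" holds the exact name here
done < <(find "$dir" -name "*.mp3" -print0)
echo "$count"   # 3 -- every name, however hostile, is one iteration
```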