This extends the answer Phil already provided. Adding parallelism to it is trivial in bash if you use xargs for the call.
Here is the code:
xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective}: %{http_code}\n' < url.lst
-n1: use only one value (from the list) as an argument per curl call
-P10: keep at most 10 curl processes alive at any time (i.e. 10 parallel connections)
Check the --write-out parameter in the curl manual for more data you can extract from it (timings, etc.).
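To see how xargs fans out the work without touching the network, you can substitute a trivial command for curl. The example URLs below are placeholders, not real hosts:

```shell
# Each input line becomes one argument to one process (-n1),
# with up to 10 processes running at a time (-P 10).
printf '%s\n' http://a.example http://b.example http://c.example > url.lst
xargs -n1 -P 10 echo "checking" < url.lst
```

With -P, output lines may interleave in any order, since the processes run concurrently.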
In case this helps someone, this is the call I'm using now:
xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective};%{http_code};%{time_total};%{time_namelookup};%{time_connect};%{size_download};%{speed_download}\n' < url.lst | tee results.csv
It simply outputs a bunch of data into a csv file that can be imported into any office tool.
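Since the fields are semicolon-separated, the file is also easy to post-process on the command line. A minimal sketch, using hypothetical sample rows in the field order produced above (url_effective;http_code;time_total;time_namelookup;time_connect;size_download;speed_download):

```shell
# Hypothetical sample output, for illustration only.
cat > results.csv <<'EOF'
http://a.example/;200;0.120;0.010;0.030;5120;42666
http://b.example/;404;0.450;0.020;0.050;312;693
EOF

# Flag every URL that did not return 200.
awk -F';' '$2 != 200 {print "FAILED:", $1, $2}' results.csv

# Show the slowest request (sort numerically on time_total, descending).
sort -t';' -k3 -rn results.csv | head -n 1
```

The same awk/sort one-liners work unchanged on the real results.csv from the xargs run.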
estani, Mar 13 '14 at 13:20