One way is with sed:
sed 's/$/p/' linesfile | sed -n -f - datafile
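Here the first sed turns each line number N into the sed command "Np", and the resulting script is fed to a second sed via -f - (read the script from stdin); -n suppresses all other output. A small sketch, using hypothetical linesfile and datafile contents:

```shell
# Hypothetical inputs: line numbers 2 and 5, and a six-line data file
printf '%s\n' 2 5 > linesfile
printf '%s\n' a b c d e f > datafile

sed 's/$/p/' linesfile                        # generates the script: 2p and 5p
sed 's/$/p/' linesfile | sed -n -f - datafile # prints lines 2 and 5: b and e
```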
You can use the same trick with awk:
sed 's/^/NR==/' linesfile | awk -f - datafile
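This time sed turns each number N into the awk pattern "NR==N"; a pattern with no action triggers awk's default action, which is to print the line. The same hypothetical files as a sketch:

```shell
# Hypothetical inputs: line numbers 2 and 5, and a six-line data file
printf '%s\n' 2 5 > linesfile
printf '%s\n' a b c d e f > datafile

sed 's/^/NR==/' linesfile                     # generates the program: NR==2 and NR==5
sed 's/^/NR==/' linesfile | awk -f - datafile # default action prints lines 2 and 5
```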
Edit - Alternative for Huge Files
If the number of lines is huge, it is impractical to hold the whole files in memory. A solution in that case is to sort the line-number file and read one number at a time, advancing through the data file in a single pass. The following was tested with GNU awk:
extract.awk
BEGIN {
  getline n < linesfile
  if(length(ERRNO)) {
    print "Unable to open linesfile '" linesfile "': " ERRNO > "/dev/stderr"
    exit
  }
}
NR == n {
  print
  if(!(getline n < linesfile)) {
    if(length(ERRNO))
      print "Unable to open linesfile '" linesfile "': " ERRNO > "/dev/stderr"
    exit
  }
}
Run it as follows:
awk -v linesfile="$linesfile" -f extract.awk infile
Testing:
printf '%s\n' 2 4 7 8 10 13 | awk -v linesfile=/dev/stdin -f extract.awk <(paste <(seq 50e3) <(seq 50e3 | tac))
Output:
2	49999
4	49997
7	49994
8	49993
10	49991
13	49988
Thor