Fastest / most compact bash one-liner to extract specific lines from a file

I want to extract only the lines with specific line numbers from a file (I have about 20-50 line numbers; the file has about 30,000 lines). So far, the shortest way I've found is, for example:

gawk 'BEGIN {split("13193,15791,16891", t, ","); for (i in t) A[t[i]]} NR in A' <file_name>

but it seems like I should be able to shorten this further. I looked at sed, but it seems that for each line number I need both -n and a p command, and also at cat -n with grep, but both are more verbose than the above. Does anyone know a better way?

3 answers

Sed:

sed -n "13193p;15791p;16891p" file_name
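If the line numbers live in a shell variable rather than being typed by hand, the `"Np;Np;..."` expression can be generated rather than written out (a sketch; the variable names and demo file are made up):

```shell
# 5-line demo file (illustrative)
printf '1\n2\n3\n4\n5\n' > /tmp/demo_file
lines="2,4"
# Turn "2,4" into "2p;4p": one number per line, append p, rejoin with ;
expr=$(echo "$lines" | tr ',' '\n' | sed 's/$/p/' | paste -sd';' -)
sed -n "$expr" /tmp/demo_file
```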

With awk, reading the line numbers from a file:

gawk 'FNR==NR {line[$1]; next} FNR in line' line_numbers file_name

This might work for you (GNU sed?):

sed 's/$/p/' file_of_line_numbers | sed -nf - source
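A small runnable demo of this pipeline (file names are illustrative): the first sed turns each number `N` into the command `Np`, and the second sed reads that generated script from stdin via `-f -` (GNU sed) while suppressing default output with `-n`:

```shell
# Line numbers, one per line, and a 3-line source file (illustrative paths)
printf '1\n3\n' > /tmp/nums
printf 'alpha\nbeta\ngamma\n' > /tmp/src
# Generate "1p\n3p" and execute it as a sed script against the source
sed 's/$/p/' /tmp/nums | sed -nf - /tmp/src
```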
