Unix - head AND tail of file

Say you have a text file. What command would you use to view the top 10 and bottom 10 lines of the file at the same time?

i.e. if the file is 200 lines long, then view lines 1-10 and 190-200 in one go.

+82
linux scripting unix bash shell
Dec 24 '11 at 12:59
15 answers

You can simply:

(head; tail) < file.txt 

Note: duplicate lines will be printed if the number of lines in file.txt is less than head's default line count plus tail's default line count (10 each).
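
For example, a quick illustrative run (assuming GNU coreutils, where head leaves the shared file offset just past the lines it printed when the input is a seekable file):

 seq 200 > file.txt          # hypothetical 200-line sample file
 (head; tail) < file.txt     # prints lines 1-10, then 191-200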

+142
Dec 24 '11 at 13:29

ed is the standard text editor:

 $ echo -e '1+10,$-10d\n%p' | ed -s file.txt 
+15
Dec 24 '11 at 13:06

For a pure stream (for example, the output of a command), you can use tee to fork the stream and send one copy to head and one to tail. This uses bash's >( list ) process substitution feature (plus /dev/fd/N):

 ( COMMAND | tee /dev/fd/3 | head ) 3> >( tail ) 

or use /dev/fd/N (or /dev/stderr) plus subshells with some elaborate redirection:

 ( ( seq 1 100 | tee /dev/fd/2 | head 1>&3 ) 2>&1 | tail ) 3>&1
 ( ( seq 1 100 | tee /dev/stderr | head 1>&3 ) 2>&1 | tail ) 3>&1

(None of these will work in csh or tcsh.)

For something with a little more control, you can use this Perl command:

 COMMAND | perl -e 'my $size = 10; my @buf = (); while (<>) { print if $. <= $size; push(@buf, $_); if ( @buf > $size ) { shift(@buf); } } print "------\n"; print @buf;' 
+8
Dec 04 '13 at 23:34

The problem here is that stream-oriented programs do not know the length of the file in advance (because there may not be one, if it is a real stream).

Tools like tail buffer the last n lines, wait for the end of the stream, and then print.

If you want to do this in one command (that works with any offset and does not repeat lines when they overlap), you have to emulate that buffering behaviour yourself.

Try this awk:

 awk -v offset=10 '{ if (NR <= offset) print; else { a[NR] = $0; delete a[NR-offset] } } END { for (i=NR-offset+1; i<=NR; i++) print a[i] }' yourfile 
+2
Dec 24 '11 at 13:35

The first 10 lines of file.ext, then its last 10 lines:

cat file.ext | head -10 && cat file.ext | tail -10

The last 10 lines of the file, then the first 10:

cat file.ext | tail -10 && cat file.ext | head -10

You can then pipe the output somewhere else:

(cat file.ext | head -10 && cat file.ext | tail -10 ) | your_program

+1
Dec 24 '18

head -10 file.txt; tail -10 file.txt

Other than that, you will need to write your own program / script.

+1
Dec 24 '11 at 13:05

Well, you can always chain them together: head filename_foo && tail filename_foo . If that is not enough, you can write your own bash function in your .profile or whatever startup file you use:

 head_and_tail() { head "$1" && tail "$1"; } 

And later call it from the command line: head_and_tail filename_foo .

+1
Dec 24 '11 at 13:15

Why not use sed for this task?

sed -n -e 1,+9p -e 190,+9p textfile.txt
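
If you would rather not hardcode the starting line (190 above), a small sketch of the same idea that computes it first with wc -l (this assumes a regular file, since it is read twice):

 start=$(( $(wc -l < textfile.txt) - 9 ))   # first line of the last 10
 sed -n -e '1,10p' -e "${start},\$p" textfile.txt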

+1
Mar 30 '18

I wrote a simple Python application for this: https://gist.github.com/garyvdm/9970522

It handles pipes (streams) as well as files.

+1
Apr 04 '14 at 8:35

To handle pipes (streams) as well as files, add this to your .bashrc or .profile file:

 headtail() { awk -v offset="$1" '{ if (NR <= offset) print; else { a[NR] = $0; delete a[NR-offset] } } END { for (i=NR-offset+1; i<=NR; i++) print a[i] }' ; } 

Then you can not only

 headtail 10 < file.txt 

but also

 a.out | headtail 10 

(This still adds spurious blank lines when 10 exceeds the input's length, unlike plain old a.out | (head; tail) . Thanks, previous answerers.)

Note: headtail 10 , not headtail -10 .
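
To illustrate the blank-line caveat mentioned above, a quick hypothetical run with an input shorter than the offset:

 seq 3 | headtail 10    # prints 1, 2, 3, then ten empty lines from the END loop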

0
Apr 22 '13 at 21:22

Building on the ideas above (tested in bash and zsh), but using an alias "hat" (Head And Tail):

 alias hat='(head -5 && echo "^^^------vvv" && tail -5) < '
 hat large.sql
0
Oct 20 '14 at 11:06

Based on J.F. Sebastian's comment:

 cat file | { tee >(head >&3; cat >/dev/null) | tail; } 3>&1 

This way you can process the first line and the rest of the lines differently in one pipe, which is useful for working with CSV data:

 { echo N; seq 3;} | { tee >(head -n1 | sed 's/$/*2/' >&3; cat >/dev/null) | tail -n+2 | awk '{print $1*2}'; } 3>&1 
 N*2
 2
 4
 6
0
Sep 21 '16 at 19:48

It took a while to arrive at this solution, which seems to be the only one that covers all the use cases (so far):

 command | tee full.log | stdbuf -i0 -o0 -e0 awk -v offset=${MAX_LINES:-200} \
   '{ if (NR <= offset) print; else { a[NR] = $0; delete a[NR-offset]; printf "." > "/dev/stderr" } }
    END { print "" > "/dev/stderr"; for (i = NR-offset+1 > offset ? NR-offset+1 : offset+1; i <= NR; i++) print a[i] }'

Feature List:

  • live output for the head (obviously not possible for the tail)
  • the full output is also saved to an external file (full.log)
  • a progress bar: one dot per line after MAX_LINES, very useful for long-running tasks
  • the progress bar goes to stderr, keeping the dots separate from the head + tail output (very convenient if you want to pipe stdout onward)
  • avoids incorrect log ordering due to buffering (stdbuf)
  • avoids duplicated output when the total number of lines is less than head + tail.
0
Jun 30 '17 at 15:17
 (sed -u 10q; echo ...; tail) < file.txt 

Another variation on the (head; tail) theme, but avoiding the initial buffer-fill problem with small files.

0
Sep 06 '17 at 12:28

I had been looking for this solution for a while. I tried it myself with sed, but the problem of not knowing the length of the file/stream in advance was insurmountable. Of the options above, I liked Camille Goudeseune's awk solution. He noted that his solution left extra blank lines in the output with a sufficiently small data set. Here is a modification of his solution that removes those extra lines.

 headtail() { awk -v offset="$1" '{ if (NR <= offset) print; else { a[NR] = $0; delete a[NR-offset] } } END { a_count=0; for (i in a) {a_count++}; for (i=NR-a_count+1; i<=NR; i++) print a[i] }' ; } 
0
Oct 18 '17 at 2:51


