Redirect printf to file in awk

I have a simple bash script.

The goal is to tail the HTTP access log file (test.log) and write the updated per-IP counts to a file (out.log):

 stdbuf -o0 tail -f test.log | awk -F'[ "]+' '{
     ipcount[$1]++
     print "test" > "out.log"   # truncate out.log
     for (i in ipcount) {
         printf "%15s - %d\n", i, ipcount[i] >> "out.log"
         printf "%15s - %d\n", i, ipcount[i]
     }
 }'

The basic logic works. My only problem is the redirection to "out.log", which doesn't seem to do anything. The last printf prints the expected result to standard output, but the two writes to "out.log" produce no output in the file, and I cannot understand why. out.log has full permissions (777).

1 answer

This should work for you:

 tail -f test.log | awk -F'[ "]+' -v out_file="out.log" '{
     val_count[$1]++
     print "" > out_file
     for (i in val_count) {
         printf "%15s - %d\n", i, val_count[i] >> out_file
         printf "%15s - %d\n", i, val_count[i]
     }
     close(out_file)
 }'

(Note: I have moved the name of the output file into a command-line variable to reduce repetition.)

There is one fatal problem in the original version: print "test" > "out.log" only truncates out.log the first time it runs. All subsequent writes simply append to it, because the file is already open. As a minor issue, awk likes to buffer output, so the content is only flushed to the file intermittently.

To fix this, we need to close the file after each iteration. This forces the output to be flushed to out.log, and it forces the > redirection to truncate the file again on the next iteration. If you did not need to truncate on each iteration, a simple fflush(out_file) would be enough.
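A minimal sketch of the fflush variant, in case truncation is not needed. It uses a finite input instead of tail -f and a hypothetical counts.log output file, purely for illustration:

```shell
rm -f counts.log
printf 'a\nb\na\n' | awk -v out_file="counts.log" '{
    count[$1]++
    printf "%s %d\n", $1, count[$1] >> out_file
    # flush without closing: writes appear immediately,
    # but the file is never re-truncated, so it keeps growing
    fflush(out_file)
}'
cat counts.log
```

Because the file is never closed, >> keeps appending, so counts.log ends up with one line per input record rather than a single final snapshot.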


To illustrate the problem more clearly ...

This produces an output.txt with many lines, because the file is truncated only once (on the first iteration):

 ls -l | awk '{ print "This file has many lines" > "output.txt"; }' 

This produces an output.txt with a single line, because the file is truncated on every iteration:

 ls -l | awk '{ print "This file has one line" > "output.txt"; close("output.txt"); }' 