How to view AWS CloudWatch logs in real time (like tail -f)?

I can view the logs using the following command:

aws logs get-log-events --log-group-name groupName --log-stream-name streamName --limit 100 

What command can I use to get tail -f-like behavior, so that I can see the log in real time?

+42
amazon-cloudwatch aws-cli
11 answers

Take a look at awslogs.

If you happen to work with Lambda / API Gateway, check out apilogs.

+28

I was very disappointed with awslogs and cwtail, so I created my own tool called Saw, which efficiently streams CloudWatch logs to the console (and colorizes JSON output):

You can install it on macOS with:

 brew tap TylerBrock/saw
 brew install saw

It has many useful features, such as the ability to automatically expand (indent) JSON output (try running the tool with --expand):

 saw watch my_log_group --expand 

Do you have a Lambda whose error logs you want to see? No problem:

 saw watch /aws/lambda/my_func --filter error 

Saw works admirably because the output is easy to read, and you can stream logs from an entire log group, not just from one stream in the group. Filtering and watching streams with a specific prefix is easy too!

+72

I just discovered cwtail and it works well (for tailing a Lambda function's CloudWatch logs).

Install:

 npm install -g cwtail 

To list the log groups:

 cwtail -l 

Then, once you have picked which log group to tail:

 cwtail -f /aws/lambda/ExampleFunction 
+8

Since CloudWatch logs can be delayed (that is, not "real time" by a strict definition), the script below parses the previous events for the last timestamp and starts the next iteration there:

 #!/bin/bash
 group_name='<log-group-name>'
 stream_name='<log-stream-name>'
 start_seconds_ago=300

 start_time=$(( ( $(date -u +"%s") - $start_seconds_ago ) * 1000 ))
 while [[ -n "$start_time" ]]; do
   loglines=$( aws --output text logs get-log-events --log-group-name "$group_name" --log-stream-name "$stream_name" --start-time $start_time )
   [ $? -ne 0 ] && break
   next_start_time=$( sed -nE 's/^EVENTS.([[:digit:]]+).+$/\1/ p' <<< "$loglines" | tail -n1 )
   [ -n "$next_start_time" ] && start_time=$(( $next_start_time + 1 ))
   echo "$loglines"
   sleep 15
 done
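To see what the sed extraction does, here is a quick check against made-up sample lines shaped like the tab-separated EVENTS rows that `aws --output text` emits (the timestamps are invented for illustration, and the real column order may vary by CLI version):

```shell
# Two invented EVENTS rows in the text-output shape: tab-separated fields,
# with a millisecond timestamp as the first numeric column.
loglines=$(printf 'EVENTS\t1585000000001\thello world\t1585000000005\nEVENTS\t1585000000002\tgoodbye\t1585000000006\n')

# Equivalent extraction to the script above: capture the first digit run
# on each EVENTS row, then keep only the last match.
next_start_time=$( printf '%s\n' "$loglines" | sed -nE 's/^EVENTS.([[:digit:]]+).+$/\1/p' | tail -n1 )
echo "$next_start_time"
```

This prints 1585000000002, the number from the last EVENTS row, which the script then increments by one millisecond to avoid re-fetching the same event.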

Or, if you want to tail the entire log group, use filter-log-events without a stream name:

 #!/bin/bash
 group_name='<log-group-name>'
 start_seconds_ago=300

 start_time=$(( ( $(date -u +"%s") - $start_seconds_ago ) * 1000 ))
 while [[ -n "$start_time" ]]; do
   loglines=$( aws --output text logs filter-log-events --log-group-name "$group_name" --interleaved --start-time $start_time )
   [ $? -ne 0 ] && break
   next_start_time=$( sed -nE 's/^EVENTS.([^[:blank:]]+).([[:digit:]]+).+$/\2/ p' <<< "$loglines" | tail -n1 )
   [ -n "$next_start_time" ] && start_time=$(( $next_start_time + 1 ))
   echo "$loglines"
   sleep 15
 done

I also keep these scripts as a GitHub gist: https://gist.github.com/tekwiz/964a3a8d2d84ff4c8b5288d9a703fbce .

Warning: the code and scripts above were written for my macOS system, which is set up with Homebrew and GNU coreutils, so some command options may need adjusting for your system. Edits are welcome :)

+6

For efficient CloudWatch log tailing, I created a tool called cw.

It is very easy to install (supports brew, snap, and scoop), fast (it ships as a native binary per hardware architecture, with no intermediate runtime), and has a feature set that makes life easier.

Your cw example would be:

 cw tail -f groupName:streamName 
+6

Note that tailing logs is now a supported feature of the official awscli, albeit only in awscli v2, which has not yet been released at the time of writing. Tailing and following logs (like tail -f) can now be done with something like:

 aws logs tail $group_name --follow 

To install v2, see the instructions on this page. The feature was implemented in this PR. To see it demonstrated at the last re:Invent conference, watch this video.

In addition to following logs, it lets you view logs from a given time using --since, which accepts absolute or relative times:

 aws logs tail $group_name --since 5d 

To keep the awscli v1 and v2 versions separate, I installed awscli v2 into its own Python virtual environment and activate it only when I need to use awscli v2.

+5

I created a JetBrains plugin called awstail to do this :)

+4

You can use awslogs, a Python package for tailing AWS CloudWatch logs.

Install it with

 pip install awslogs 

List all groups with

 awslogs groups 

Then pick a group and watch it:

 awslogs get staging-cluster --watch 

You can also filter logs with match patterns:

 # tail logs of a cluster
 awslogs get staging-cluster --watch

 # tail logs of a lambda function
 awslogs get /aws/lambda/some-service --watch

 # print all logs containing "error"
 awslogs get staging-cluster --watch --filter-pattern="error"

 # print all logs *not* containing "error"
 awslogs get staging-cluster --watch --filter-pattern="-error"

See the project's readme for more information on using awslogs.

0

The AWS CLI does not have a live tail -f option.

The other tools mentioned above do provide tailing, but I tried them all (awslogs, cwtail) and found them disappointing. They loaded events slowly, were often unreliable, were useless at displaying JSON log data, and had only primitive query options.

I needed a very fast and simple log viewer that would let me instantly and easily see errors and application status. The CloudWatch log viewer is slow, and CloudWatch Insights can take > 1 minute for some fairly simple queries.

So I created SenseLogs, a free AWS CloudWatch Logs viewer that runs entirely in your browser. No server-side services are required. SenseLogs transparently downloads log data and stores events in the browser's application cache for immediate viewing, smooth infinite scrolling, and full-text queries. SenseLogs has live tail with infinite back-scrolling. See https://github.com/sensedeep/senselogs/blob/master/README.md for details.

0

Here is a bash script you can use. The script requires the AWS command line interface and jq.

 #!/bin/bash
 # Bail out if anything fails, or if we do not have the required variables set
 set -o errexit -o nounset

 LOG_GROUP_NAME=$1
 LOG_BEGIN=$(date --date "${2-now}" +%s)
 LOG_END=$(date --date "${3-2 minutes}" +%s)
 LOG_INTERVAL=5
 LOG_EVENTIDS='[]'

 while (( $(date +%s) < $LOG_END + $LOG_INTERVAL )); do
   sleep $LOG_INTERVAL
   LOG_EVENTS=$(aws logs filter-log-events --log-group-name $LOG_GROUP_NAME --start-time "${LOG_BEGIN}000" --end-time "${LOG_END}000" --output json)
   echo "$LOG_EVENTS" | jq -rM --argjson eventIds "$LOG_EVENTIDS" '.events[] as $event | select($eventIds | contains([$event.eventId]) | not) | $event | "\(.timestamp / 1000 | todateiso8601) \(.message)"'
   LOG_EVENTIDS=$(echo "$LOG_EVENTS" | jq -crM --argjson eventIds "$LOG_EVENTIDS" '$eventIds + [.events[].eventId] | unique')
 done

Usage: save the file, chmod +x it, and run it: ./cloudwatch-logs-tail.sh log-group-name . The script also accepts start and end time parameters, which default to now and 2 minutes from now, respectively. You can specify any strings that date --date can parse for these parameters.
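As a quick illustration of the relative time strings that date --date accepts (this assumes GNU coreutils date, which the script uses):

```shell
# GNU date parses relative expressions such as "now" and "2 minutes".
base=$(date --date "now" +%s)
future=$(date --date "2 minutes" +%s)

# "2 minutes" means two minutes from now, i.e. roughly 120 seconds later.
echo $(( future - base ))
```

Strings like "yesterday", "10 minutes ago", or an absolute "2020-01-01 12:00" work the same way, which is what makes the default parameters above possible.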

How it works: the script stores a list of event IDs that were displayed, which is empty to start with. It queries CloudWatch logs to retrieve all log entries for a specified time interval, and displays those that do not match our list of event IDs. It saves all event IDs for the next iteration.

The script polls every few seconds (set by LOG_INTERVAL in the script) and keeps polling for one extra interval after the end time to account for the delay between log ingestion and availability.

Note that this script is not well suited to continuously tailing logs for more than a few minutes, because the query results it receives from AWS grow with each log item added. It is fine for quick runs, though.
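The deduplication idea is independent of AWS; here is a minimal sketch of the same "remember seen IDs, print only new lines" logic in plain shell (the event IDs and the emit_new helper name are made up for illustration):

```shell
# Print only lines whose first field (an event ID) has not been seen before.
# Input lines look like "<eventId> <message>".
emit_new() {
  seen=""
  while read -r id msg; do
    case " $seen " in
      *" $id "*) ;;                # ID already printed: skip the line
      *) seen="$seen $id"          # remember the ID and emit the message
         echo "$msg" ;;
    esac
  done
}

# A repeated poll delivers event e1 twice; only two unique messages print.
printf 'e1 first\ne2 second\ne1 first\n' | emit_new
```

The real script stores the IDs in a JSON array so they survive across polls, whereas this sketch only remembers them within one stream of input.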

0

This is currently not a CLI feature; the CLI simply wraps the HTTP API for CloudWatch Logs. You could trivially emulate the functionality with a shell script:

 #! /bin/sh
 end_time=$(($(date +"%s") * 1000))
 aws logs get-log-events --log-group-name groupName --log-stream-name streamName --end-time $end_time

 while :
 do
   start_time=$end_time
   end_time=$(($(date +"%s") * 1000))
   aws logs get-log-events --log-group-name groupName --log-stream-name streamName --start-time $start_time --end-time $end_time
   sleep 1
 done

Disclaimer: this will not work on Windows, and there may be a better way to get the time in milliseconds.
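On the "better way to get time in milliseconds" point: GNU date supports sub-second precision via %N, while BSD/macOS date does not, so one hedged approach is to try %N and fall back to whole seconds (the now_ms helper name is just for illustration):

```shell
# Millisecond timestamp helper. GNU date understands %N (nanoseconds);
# other implementations print the unknown format characters literally.
now_ms() {
  ns=$(date +%s%N)
  case "$ns" in
    *[!0-9]*) echo $(( $(date +%s) * 1000 )) ;;  # %N unsupported: seconds * 1000
    *)        echo $(( ns / 1000000 )) ;;        # epoch nanoseconds -> milliseconds
  esac
}

now_ms
```

With GNU coreutils this gives true millisecond resolution; elsewhere it degrades gracefully to the seconds * 1000 value the script above already uses.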

-1
