This blog post is part of the homework for Week 2 (Distributed Tracing) of Andrew Brown's AWS Free Cloud Bootcamp: sending logs to CloudWatch Logs from the containerised Cruddur Flask application.
What is "AWS CloudWatch Logs"
You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources.
CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis. CloudWatch Logs enables you to see all of your logs, regardless of their source, as a single and consistent flow of events ordered by time.
CloudWatch Logs also supports querying your logs with a powerful query language, auditing and masking sensitive data in logs, and generating metrics from logs using filters or the embedded metric format.
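As a small taste of that query capability, here is a sketch of running a CloudWatch Logs Insights query from Python with boto3 (the "cruddur" log group and the query string are illustrative, and the snippet assumes AWS credentials and a region are already configured):

```python
import time
import boto3

# Assumes AWS credentials/region are configured and the log group exists
logs = boto3.client("logs")

# Start a Logs Insights query over the last hour
now = int(time.time())
query = logs.start_query(
    logGroupName="cruddur",
    startTime=now - 3600,
    endTime=now,
    queryString="fields @timestamp, @message | sort @timestamp desc | limit 20",
)

# Poll until the query finishes, then print each result row
results = logs.get_query_results(queryId=query["queryId"])
while results["status"] in ("Running", "Scheduled"):
    time.sleep(1)
    results = logs.get_query_results(queryId=query["queryId"])

for row in results["results"]:
    print({field["field"]: field["value"] for field in row})
```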
What is Watchtower?
Watchtower is a log handler for AWS CloudWatch Logs.
CloudWatch Logs is a log management service built into AWS. It is conceptually similar to services like Splunk, Datadog, and Loggly, but is more lightweight, cheaper, and tightly integrated with the rest of AWS.
Watchtower, in turn, is a lightweight adapter between the Python logging system and CloudWatch Logs. It uses the boto3 AWS SDK, and lets you plug your application logging directly into CloudWatch without the need to install a system-wide log collector like awscli-cwlogs and round-trip your logs through the instance's syslog. It aggregates logs into batches to avoid sending an API request for each log message, while guaranteeing a delivery deadline (60 seconds by default).
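To make that batching behaviour concrete, here is a minimal standalone sketch (separate from the Flask setup below). The `send_interval` argument is Watchtower's delivery deadline in seconds; note that newer Watchtower releases renamed `log_group` to `log_group_name`, so check the docs for the version you install:

```python
import logging
import watchtower

# Assumes AWS credentials and a default region are configured
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Messages are buffered and shipped in batches; send_interval (seconds)
# is the delivery deadline mentioned above (60 by default)
handler = watchtower.CloudWatchLogHandler(log_group="demo", send_interval=60)
logger.addHandler(handler)

for i in range(5):
    logger.info("message %d", i)  # batched, not one API call per message
```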
Steps to use Watchtower to send logs to AWS CloudWatch Logs
1. Add `watchtower` to `requirements.txt`:

```txt
# For sending logs to AWS CloudWatch Logs
watchtower
```
1.1. Run this command in the terminal to install the libraries for local development:

```sh
pip install -r requirements.txt
```
2. Add these environment variables in `docker-compose.yml` for the related service `backend-flask`:

```yaml
services:
  backend-flask:
    environment:
      AWS_DEFAULT_REGION: "${AWS_DEFAULT_REGION}"
      AWS_ACCESS_KEY_ID: "${AWS_ACCESS_KEY_ID}"
      AWS_SECRET_ACCESS_KEY: "${AWS_SECRET_ACCESS_KEY}"
```
Note: passing AWS_REGION doesn't seem to get picked up by boto3, so pass AWS_DEFAULT_REGION instead.
2.1. Set the environment variables in the local bash terminal:

```sh
export AWS_DEFAULT_REGION=<aws_region_name>
export AWS_ACCESS_KEY_ID=<aws_access_key>
export AWS_SECRET_ACCESS_KEY=<aws_secret_access_key>
```
Optional: when using Gitpod as a CDE (cloud development environment), tell Gitpod to remember these environment variables when relaunching our workspaces:

```sh
gp env AWS_DEFAULT_REGION=<aws_region_name>
gp env AWS_ACCESS_KEY_ID=<aws_access_key>
gp env AWS_SECRET_ACCESS_KEY=<aws_secret_access_key>
```
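Before wiring anything into the app, it may be worth a quick sanity check that boto3 actually resolves the credentials and region from these environment variables. A minimal sketch (not part of the bootcamp steps):

```python
import boto3

# Prints the region and identity boto3 resolved from
# AWS_DEFAULT_REGION / AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
session = boto3.session.Session()
print("Region:", session.region_name)

identity = boto3.client("sts").get_caller_identity()
print("Account:", identity["Account"])
print("ARN:", identity["Arn"])
```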
3. Add the following code in `app.py` to configure the logger handler to use CloudWatch:

```python
import watchtower
import logging
from time import strftime
from flask import request  # used by the after_request logger below; app.py may already import this

# Configure a logger that writes to both the console and CloudWatch Logs
LOGGER = logging.getLogger(__name__)
LOGGER.setLevel(logging.DEBUG)
console_handler = logging.StreamHandler()
cw_handler = watchtower.CloudWatchLogHandler(log_group="cruddur")
LOGGER.addHandler(console_handler)
LOGGER.addHandler(cw_handler)
```
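Once the handler is attached, the same `LOGGER` can be used anywhere in the application, not just in the request hook below. A minimal sketch of logging from inside an endpoint (the `/api/health` route and message here are hypothetical, not part of the bootcamp code):

```python
@app.route("/api/health", methods=["GET"])
def health_check():
    # This message goes to both the console handler and CloudWatch Logs
    LOGGER.info("health check endpoint was hit")
    return {"status": "ok"}, 200
```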
4. Add the following code in `app.py` to log at the post-processing stage of each request. In Flask, `@app.after_request` is a decorator that allows you to register a function to be called after a request has been processed, but before the response is sent back to the client.

```python
@app.after_request
def after_request(response):
    timestamp = strftime('[%Y-%b-%d %H:%M]')
    LOGGER.error('%s %s %s %s %s %s', timestamp, request.remote_addr,
                 request.method, request.scheme, request.full_path,
                 response.status)
    return response
```
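This follows the bootcamp code, which logs every request at the ERROR level even though the message is purely informational; for routine request logging, `LOGGER.info(...)` would be the more conventional choice. Note that the snippet relies on Flask's `request` object, imported in the step above.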
5. Run this command in the terminal to start the application:

```sh
docker compose up
```
6. Perform an API request, e.g. by visiting an API endpoint in the browser. In AWS CloudWatch Logs, under the log group "cruddur", observe the logs that appear.
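You can also confirm programmatically that the log events arrived. A minimal sketch using boto3's `filter_log_events` (the log group name matches the handler configuration above):

```python
import boto3

# Lists the most recent log events in the "cruddur" log group
logs = boto3.client("logs")
response = logs.filter_log_events(logGroupName="cruddur", limit=10)

for event in response["events"]:
    print(event["timestamp"], event["message"])
```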