Project-01: Optimizing Cloud Costs with Automated AWS Resource Tracking
Managing cloud resources effectively is crucial for any organization leveraging AWS. It’s easy to lose track of instances, volumes, and buckets, resulting in unexpected costs. To tackle this issue, I built a simple shell script that automates daily resource tracking and report generation using the AWS CLI and jq for JSON parsing. This solution helps identify underutilized resources, enabling teams to make data-driven decisions and reduce unnecessary expenses.
In this blog, I’ll walk through how the script works, how to set it up with cron for automation, and how it can help optimize cloud costs for your organization.
Why Automated Resource Tracking?
AWS provides powerful cloud services, but without proper visibility and management, costs can spiral out of control. Stopping or deleting unused resources manually is time-consuming and error-prone, especially in dynamic environments where resources are frequently spun up and down. Automated resource tracking offers the following benefits:
Cost Control: Identify and manage underutilized or idle resources.
Improved Visibility: Gain a clear overview of active resources and storage volumes.
Time Savings: Eliminate the need for manual monitoring and reporting.
Proactive Management: Avoid billing surprises by keeping track of growing storage costs.
Setting Up the Script
The script uses the AWS CLI and jq to track the status of EC2 instances, S3 buckets, and EBS volumes. Before running the script, make sure to configure your AWS CLI using:
aws configure
This command will prompt you to enter your AWS credentials (AWS Access Key ID and AWS Secret Access Key), default region name, and output format. Setting up the credentials is crucial, as it ensures the script can interact with your AWS resources.
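It’s also worth a quick sanity check that the CLI can actually reach your account before running the script; this call isn’t part of the script itself, but it fails fast if the credentials are wrong:
# Returns the account ID, ARN, and user ID for the configured credentials
aws sts get-caller-identity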
Here’s the script I created:
Script Overview:
#!/bin/bash
# Author: Pakeeza Saeed
# Date: 08-10-2024
# Description: A script to track and report AWS resources using the AWS CLI and jq.

# List the IDs of all EC2 instances in the configured region.
list_ec2_instances() {
    echo "============"
    echo "EC2 Instances:"
    aws ec2 describe-instances | jq '.Reservations[].Instances[].InstanceId'
}

# List all S3 buckets in the account.
list_s3_buckets() {
    echo "==============="
    echo "S3 Buckets:"
    aws s3 ls
}

# Report the ID, size, state, type, and availability zone of every EBS volume.
list_ebs_volumes() {
    echo "=============="
    echo "EBS Volumes:"
    aws ec2 describe-volumes | jq '.Volumes[] | {VolumeId: .VolumeId, Size: .Size, State: .State, VolumeType: .VolumeType, AvailabilityZone: .AvailabilityZone}'
}

# Print the full report, section by section.
main_function() {
    echo "====================="
    echo "AWS Resource Usage Report"
    list_ec2_instances
    list_s3_buckets
    list_ebs_volumes
    echo "====================="
    echo "Report generation complete!"
}

main_function
How It Works:
EC2 Instances: The script lists all EC2 instance IDs, giving a quick inventory of every instance in the region (a variant below adds each instance’s state).
S3 Buckets: Lists all S3 buckets to track storage usage and organization.
EBS Volumes: Retrieves key details like volume ID, size, and state to identify and clean up unused storage.
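The base script only prints instance IDs, which doesn’t tell you whether an instance is running or stopped. Here’s a minimal variant of list_ec2_instances that also emits the state, using the same describe-instances output (the extended jq filter is my suggestion, not part of the original script):
# Variant: include each instance's state (running, stopped, ...) next to its ID
list_ec2_instances() {
    echo "============"
    echo "EC2 Instances:"
    aws ec2 describe-instances | jq '.Reservations[].Instances[] | {InstanceId: .InstanceId, State: .State.Name}'
}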
Scheduling the Script Using cron
To automate this script, I used cron, a time-based job scheduler, to run it daily at 11:10 AM and log the results to a file.
Crontab Entry:
10 11 * * * bash /home/vagrant/scripts/aws_resource_usage.sh >> /home/vagrant/aws_resource_usage.log 2>&1
10 11 * * *: Schedules the script to run daily at 11:10 AM.
bash /home/vagrant/scripts/aws_resource_usage.sh: Specifies the full path to the script and uses bash to execute it.
>> /home/vagrant/aws_resource_usage.log 2>&1: Redirects both output and errors to a log file for review.
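One caveat worth noting: cron starts jobs with a minimal environment, so if aws or jq live outside the default /usr/bin:/bin search path (for example under /usr/local/bin), the job can fail silently. Adding a PATH line at the top of the crontab covers this; the exact path below is an assumption, so match it to where which aws points on your system:
# Ensure cron can find aws and jq (adjust to your install locations)
PATH=/usr/local/bin:/usr/bin:/bin
10 11 * * * bash /home/vagrant/scripts/aws_resource_usage.sh >> /home/vagrant/aws_resource_usage.log 2>&1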
Setting Up the Script and Cron Job
Follow these steps to set up the AWS resource tracker on your system:
Create the Script: Save the script as aws_resource_usage.sh in your desired directory (/home/vagrant/scripts in this example).
Make the Script Executable:
chmod +x /home/vagrant/scripts/aws_resource_usage.sh
Set Up the Cron Job: Open the crontab editor using:
crontab -e
Add the cron entry to schedule the script:
10 11 * * * bash /home/vagrant/scripts/aws_resource_usage.sh >> /home/vagrant/aws_resource_usage.log 2>&1
Save and exit the editor. This ensures that the script will run daily at 11:10 AM and log the results to /home/vagrant/aws_resource_usage.log.
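Before waiting for the schedule to fire, two quick sanity checks are worth running: execute the script once by hand so any credential or jq errors surface immediately, and list the crontab to confirm the entry was saved:
# Run the report once manually to confirm credentials and output
bash /home/vagrant/scripts/aws_resource_usage.sh

# List the current user's crontab to verify the new entry
crontab -l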
Reviewing the Output
Once the script runs, you can view the report in the log file:
cat /home/vagrant/aws_resource_usage.log
The log will display the current state of EC2 instances, S3 buckets, and EBS volumes, making it easy to spot underutilized resources.
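Since the log grows by one report per day, a couple of standard commands make it easier to navigate (the -A 5 context count is arbitrary; tune it to how many volumes you typically have):
# Jump to the EBS sections, printing five lines of context after each match
grep -A 5 "EBS Volumes" /home/vagrant/aws_resource_usage.log

# Watch new reports arrive as cron appends them
tail -f /home/vagrant/aws_resource_usage.log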
How It Helps Save Costs
Spot Idle EC2 Instances: Identify instances that are stopped but whose attached EBS volumes still incur storage charges.
Review S3 Bucket Usage: Track which buckets are active and review their size.
Analyze EBS Volumes: Find and clean up unused EBS volumes that are still being billed (one way to surface them is shown below).
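Unattached volumes are usually the easiest win. The describe-volumes call accepts a status filter, so a one-off command like this (my suggestion, not part of the daily report) surfaces volumes in the available state, meaning they are not attached to any instance but are still billed:
# List volumes not attached to any instance but still incurring storage charges
aws ec2 describe-volumes --filters Name=status,Values=available | jq '.Volumes[] | {VolumeId: .VolumeId, Size: .Size, AvailabilityZone: .AvailabilityZone}'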
With daily reports, the team can quickly assess resource utilization, optimize cloud usage, and implement cost-saving measures proactively.
Conclusion
Cloud resource tracking is essential for any organization using AWS. This simple script, combined with automation, can significantly improve visibility and help control cloud expenses. By automating resource reports, teams can focus more on optimization rather than manual tracking.
If you have any suggestions or questions, feel free to reach out!
#AWS #CloudManagement #Automation #DevOps #CloudCostOptimization #ShellScripting #AWSCLI #CostControl #CloudComputing