AWS S3 Bucket Magecart Attacks and How to Prevent Them
Automated attacks against AWS S3 buckets with publicly writable objects are currently being carried out by so-called Magecart groups: hacking groups that specialize in compromising systems and software to steal credit card information. These are typically widespread mass attacks; most attempts fail, but even a small number of successes makes them very profitable.
The current campaign, which started in early April 2019 and was discovered and documented by RiskIQ, scans for misconfigured S3 buckets that allow public write access (anyone with an AWS account can edit the files). The attackers then look for JavaScript files and append their skimming code at the end to capture credit card information.
Rules of Thumb
To prevent this kind of problem, you can choose from several methods, processes and even AWS services that help you mitigate such issues:
- Whitelisting: Explicitly allow access to the people / processes who need it and disallow access for everyone else (allow a few, disallow all others)
- Principle of least privilege: For every access you grant, give only the least permissions needed (e.g. if someone needs read access to some files in a bucket, don't give them the right to modify ACLs)
- Block public access: If possible, block all public access to buckets and their contents at the account level (see the sketch after this list)
- Use Trusted Advisor: It can automatically check your S3 bucket permissions and is a great AWS service for improving the overall security posture of your resources
- Enable CloudTrail: Even though it doesn't prevent access, it records API activity so you can detect and inspect misbehaviour afterwards.
- Backups and Object Versioning: Same as before, it doesn't prevent access, but if something happens, you can easily roll back to a known good state.
- Auto Remediate Unintended Permissions: AWS even published a nice blog post about this problem and how you can automatically remediate unwanted permissions in your Amazon S3 object ACLs with CloudWatch Events
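For example, here is a minimal sketch of blocking all public access at the account level with boto3 (the account ID is a placeholder; the put_public_access_block call is part of the S3 Control API):

```python
import boto3

# "123456789012" is a placeholder; use your own AWS account ID.
ACCOUNT_ID = "123456789012"

s3control = boto3.client("s3control")

# Turn on all four S3 Block Public Access settings at the account level.
s3control.put_public_access_block(
    AccountId=ACCOUNT_ID,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore existing public ACLs
        "BlockPublicPolicy": True,      # reject new public bucket policies
        "RestrictPublicBuckets": True,  # restrict access to public buckets
    },
)
```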
So even if you have all the recommended best practices in place, you may want to know for sure whether any buckets are badly configured and prone to these kinds of attacks.
You can use a tool like S3Scanner in your pipelines to regularly scan all your buckets and notify your team about any problems.
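If you prefer to stay within the AWS SDK, a minimal boto3 sketch along the same lines could list your buckets and flag ACL grants to everyone or to any authenticated AWS user (the grantee URIs are the standard S3 ACL group URIs):

```python
import boto3

# Standard S3 ACL group URIs that indicate public / any-AWS-user access.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

# Check every bucket's ACL and warn about grants to the public groups.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") in PUBLIC_GRANTEES:
            print(f"WARNING: {name} grants {grant['Permission']} to {grant['Grantee']['URI']}")
```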
Below is an example of using AWS services to automatically scan all new buckets, or to react whenever access permissions on a bucket change:
- Create a new CloudTrail trail for all S3 API events
- Create new CloudWatch Events rules for the S3 API events you are interested in (e.g. CreateBucket, PutBucketPolicy)
- Create a new Lambda function (like this one) to be triggered by the new CloudWatch S3 API events you are interested in
- Use $.detail.requestParameters.bucketName in the CloudWatch Event CodeBuild input to hand the bucket name over to the resource that gets triggered in the next step
- Trigger the target of your choosing (e.g. a Lambda function)
- Get notified by SNS (or configure Slack alerts in your function)
CloudTrail for S3 API events
To capture any events which may change S3 bucket or object permissions, you first have to enable CloudTrail for S3 events.
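A minimal sketch of doing this with boto3, assuming you already have a trail (the trail name my-trail is a placeholder):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# "my-trail" is a placeholder for the name of your existing trail.
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,  # bucket-level API calls
            "DataResources": [
                # "arn:aws:s3:::" selects objects in all buckets of the account.
                {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::"]},
            ],
        }
    ],
)
```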
CloudWatch Events
In CloudWatch Events we then create a new rule and select the S3 events we are interested in:
Choose Simple Storage Service as Service Name and Bucket Level Operations for the Event Type.
Additionally, you can create another rule and choose Object Level Operations to get all S3 object related events, too.
Interesting events to watch out for regarding permission misconfiguration are listed below (a sketch of a matching CloudWatch Events rule follows the lists):
Bucket Events
- CreateBucket
- DeleteBucketPolicy
- PutBucketAcl
- PutBucketPolicy
- PutBucketPublicAccessBlock
- PutBucketWebsite
Object Events
- PutObject
- PutObjectAcl
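Instead of clicking through the console, you could also create such a rule with boto3; here is a sketch for the bucket-level events above (the rule name is a placeholder):

```python
import json
import boto3

events = boto3.client("events")

# Match the bucket-level S3 API calls (delivered via CloudTrail) listed above.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": [
            "CreateBucket",
            "DeleteBucketPolicy",
            "PutBucketAcl",
            "PutBucketPolicy",
            "PutBucketPublicAccessBlock",
            "PutBucketWebsite",
        ],
    },
}

# "s3-permission-changes" is a placeholder rule name.
events.put_rule(Name="s3-permission-changes", EventPattern=json.dumps(pattern))
```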
CloudWatch Event Target
Here you can choose the Lambda function you want to trigger (or any other service you want to be triggered by a new event).
As for the inputs, you can use any field that is present in the CloudWatch Event payload.
For example, use the Input Transformer to extract the S3 bucket name from the event and assign it to the $BUCKET environment variable:
Input Path:
{"bucketname":"$.detail.requestParameters.bucketName"}
Input Template:
{"environmentVariablesOverride": [ { "name": "BUCKET", "value": <bucketname> } ]}
Target
Here it is up to you whether you use a custom Lambda function or just publish to an AWS SNS topic to get notified. Some ideas for you to think about:
- Automatic remediation of unwanted permissions
- Notifications to security/devops teams about changes which may be security-relevant (e.g. overly public permissions on objects/buckets)
- Notifications to the user who may have created the faulty permissions
In most cases you will choose a custom Lambda function to fit your needs. But keep in mind that you could also trigger a CodeBuild project, a CodePipeline, a Batch job or another event bus, or publish to an SQS queue and get notified by SNS.
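To give you an idea, here is a minimal, hedged sketch of such a Lambda function that resets a changed bucket ACL to private and notifies an SNS topic (the topic ARN is a placeholder, and real code would need error handling):

```python
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

# Placeholder ARN of the SNS topic used to alert the security team.
TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:s3-permission-alerts"

def handler(event, context):
    # Bucket name as delivered in the CloudTrail detail of the event.
    bucket = event["detail"]["requestParameters"]["bucketName"]

    # Auto-remediate: reset the bucket ACL to private.
    s3.put_bucket_acl(Bucket=bucket, ACL="private")

    # Notify the team about the change and the remediation.
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject=f"S3 ACL reset on {bucket}",
        Message=f"The ACL of bucket {bucket} was changed and has been reset to private.",
    )
```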
Final words
As with most problems in the DevOps space, there is more than one possible solution. Choose the one which fits your project best and adapt it if necessary!
Whether you create your own custom Lambda function, use AWS Trusted Advisor or just get notified via SNS / Slack: always keep it automated, maintainable and simple (KISS, "Keep It Simple, Stupid"). Finally, always follow the principle of least privilege and whitelisting, as it will make your life easier and more secure.