S3 Uploads — Proxies vs Presigned URLs vs Presigned POSTs

Zac Charles
9 min read · Apr 12, 2020


We were recently designing an API at work that would enable restaurant owners to request changes to their logo and other images. The idea is that restaurant owners use a website to upload a new image, which triggers a review process. I was asked to advise on how we should handle the image upload.

Initially, there were two options being considered. The first was to encode the image using Base64 and include it in the JSON of the API request that starts the review process. The other option was to upload the image to S3 using a presigned URL, then pass the S3 object details to the API instead.

In this post, I’ll cover these two popular solutions and another lesser-known solution that I preferred in this case. I’ll also share some example code and point out some gotchas.

Regardless of how files get into AWS, the best place to store them will usually be S3 because it provides cheap, reliable, and durable storage with a simple API. Once a file is in S3, it can easily be used from other AWS services. The real problem is how to get a file from a client into S3.

Solutions to this problem can be grouped into two categories. The first is the cloud native way where the client sends the file directly to S3, and the other involves one or more proxy services forwarding the file from the client to S3.

Proxy Solutions

Single Request

This is the natural progression of the way we’ve uploaded files for a long time. In this solution, you provide an API that the client sends data to and the API stores the data in S3. The API may perform other actions as well, such as starting a review process.

A typical setup uses API Gateway and Lambda as the proxy, but this could be any AWS service(s).

Two Requests

Having additional services between the client and S3 adds complexity and cost. As discussed by Yan Cui in a recent post, it can also add limits.

Lambda, for example, has a 6 MB limit on the size of synchronous payloads (which includes API Gateway requests).

Yan’s post offered API Gateway Service Integrations as a way of removing Lambda from the equation. In this solution, the client calls API Gateway once to upload the file, then again to perform the other actions. The second API call contains details of the file in S3 instead of the file itself.

Lambda is gone from the first call and its 6 MB limit went with it. However, API Gateway itself has a 10 MB payload size limit.

If 10 MB is enough for you, this might be an acceptable solution. In my example case, we want to allow uploading images up to 20 MB, so this won’t work. Luckily, files can be uploaded directly to S3.

Two Requests: Bonus

I just noticed that Yan’s post has been updated to include an additional solution suggested on Twitter by Timo Schilling.

This solution involves creating a CloudFront distribution and intercepting the client’s PUT using Lambda@Edge. The Lambda@Edge function performs any necessary auth, then redirects the client to a presigned URL (see below).

This gets around API Gateway’s 10 MB limit (CloudFront has a 20 GB default limit), but it also adds significant operational overhead, cost, and complexity.

It’s a clever option and could be worth considering. For my use case, it falls short in the same ways as using presigned URLs directly. More below.

Direct Solutions

Please don’t run off and open your S3 bucket up to public writes. The proper way to securely allow clients to read and write private S3 objects is using short-lived, presigned requests.

In these direct solutions, server-side code uses its IAM credentials to presign an S3 request. The client can then execute the presigned request to read or write directly to S3.

For a browser to PUT or POST directly to your S3 bucket, you need to configure CORS. Below is a rule that works for both presigned URLs (PUTs) and presigned POSTs.
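Here’s a permissive example in the JSON format the S3 console and API accept. The wildcard origin is only for testing; tighten AllowedOrigins to your site’s origin in production.

```json
[
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["PUT", "POST"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```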

Next, the client needs a way of getting the presigned request details. They could be sent down when the page loads, but we’ll continue with the API approach, which requires three requests in total.

The first request asks our API to sign a write request. The second executes that request to upload a file directly to S3. The final request to our API contains the S3 object details instead of the file itself and performs the other actions.

Let’s take a look at the two ways S3 write requests can be presigned.

Presigned URL

This is the one people have usually heard of. A presigned URL is just the URL of an S3 object with a bunch of query string parameters. The query string parameters are the magic part. They contain the signature and other security related data. Below is a simplified presigned URL, formatted for readability.
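Something like the following, where the bucket, key, credential, and signature values are all placeholders:

```
https://example-bucket.s3.eu-west-1.amazonaws.com/users/zac/image.jpg
  ?X-Amz-Algorithm=AWS4-HMAC-SHA256
  &X-Amz-Credential=AKIAIOSFODNN7EXAMPLE%2F20200412%2Feu-west-1%2Fs3%2Faws4_request
  &X-Amz-Date=20200412T120000Z
  &X-Amz-Expires=300
  &X-Amz-SignedHeaders=host
  &X-Amz-Signature=0000000000000000000000000000000000000000000000000000000000000000
```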

In JavaScript, the getSignedUrl or getSignedUrlPromise methods are used to generate presigned URLs. Signing is in-memory with no network requests.
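A minimal backend sketch using the AWS SDK for JavaScript (v2); the bucket name and key here are placeholders.

```javascript
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

// Presign a PutObject request entirely in memory.
const getUploadUrl = () =>
  s3.getSignedUrlPromise('putObject', {
    Bucket: 'example-bucket',   // placeholder
    Key: 'users/zac/image.jpg', // placeholder
    Expires: 300                // the URL expires after five minutes
  });
```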

In the above example, I’ve included the optional Expires param to make this presigned URL expire after five minutes. You can include other URI Request Parameters from the S3 PutObject operation to further restrict what the client can do. Options set now are signed and verified by S3 later.

Below is a simple example of uploading a file from a <input type="file" /> to a presigned URL. It assumes the url variable contains the presigned URL.
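A minimal sketch; fileInput is assumed to be the file input element from above.

```javascript
// Wrap FileReader in a Promise so it can be awaited.
const readFile = (file) =>
  new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result);
    reader.onerror = () => reject(reader.error);
    reader.readAsArrayBuffer(file);
  });

const uploadFile = async () => {
  const file = fileInput.files[0];
  const body = await readFile(file); // ArrayBuffer

  // PUT the raw bytes to the presigned URL.
  const response = await fetch(url, { method: 'PUT', body });

  if (!response.ok) {
    throw new Error(`Upload failed: ${response.status}`);
  }
};
```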

I wrapped the FileReader in a Promise to make it reusable and easier to read. After reading the file into an ArrayBuffer, the Fetch API is used to send it off to the presigned URL in a PUT request. That’s all there is to it.

This solution doesn’t meet our requirements, though. There is no way to limit the size of the uploaded file. In fact, a malicious client could upload 5 GB files each time. The good news is that we’re not paying for that traffic, but we will pay for the storage. We could use S3 Events or lifecycle policies to work around this, but wouldn’t it be nice if we could just set a maximum file size?

Gotchas

  1. Be careful with small Expires values. I had issues with a 60 second expiration due to clock skew. The AccessDenied error is not very helpful.
  2. Ensure your Lambda function’s IAM role has the s3:PutObject permission or you’ll also get AccessDenied when trying to upload.
  3. Make sure to use the PUT method, not POST.
  4. Don’t forget CORS! (See above)

Presigned POST

This is the lesser-known solution I mentioned. The documentation is quite intimidating, but I promise it’s just as easy as presigned URLs while also being more powerful.

It’s more powerful because of the POST policy feature. A POST policy is simply a set of conditions you specify when creating the presigned POST. Using it, you can allow certain MIME types and file extensions, allow multiple files to be uploaded under a given key prefix, restrict the file size, and more.

In JavaScript, presigned POSTs are created with the createPresignedPost method. Again, it’s all done in-memory with no network requests.
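A minimal backend sketch (bucket name and key are placeholders); the callback is wrapped in a Promise for convenience.

```javascript
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

const createUploadPost = () =>
  new Promise((resolve, reject) => {
    const params = {
      Bucket: 'example-bucket', // placeholder
      Expires: 300,
      Fields: {
        key: 'users/zac/image.jpg' // placeholder; note the lowercase "key" inside Fields
      },
      Conditions: [
        ['content-length-range', 0, 524288] // 0 bytes to 512 KiB
      ]
    };

    s3.createPresignedPost(params, (err, data) =>
      err ? reject(err) : resolve(data)
    );
  });
```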

The params are similar to getSignedUrl except Key has moved inside Fields (and is now lowercase) and Conditions has been added. In this example, there is one condition limiting the file size to between 0 bytes and 512 KiB.

The object returned by createPresignedPost contains the url to POST to and the fields that must be sent to S3. Below is an example. Except for the first two, the fields are the same as the query parameters in a presigned URL.
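The shape looks roughly like this; every value here is a placeholder.

```json
{
  "url": "https://s3.eu-west-1.amazonaws.com/example-bucket",
  "fields": {
    "key": "users/zac/image.jpg",
    "Policy": "eyJleHBpcmF0aW9uIjoiLi4uIn0=",
    "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
    "X-Amz-Credential": "AKIAIOSFODNN7EXAMPLE/20200412/eu-west-1/s3/aws4_request",
    "X-Amz-Date": "20200412T120000Z",
    "X-Amz-Signature": "0000000000000000000000000000000000000000000000000000000000000000"
  }
}
```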

It’s very easy to use presigned POSTs in the client. The code below assumes data was retrieved from the backend and contains the above object.
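A minimal sketch; file is assumed to come from the same file input as before.

```javascript
const uploadFile = async (data, file) => {
  const formData = new FormData();

  // Copy every presigned field into the form body.
  Object.entries(data.fields).forEach(([name, value]) => {
    formData.append(name, value);
  });

  // The file must be the last field appended (see gotchas below).
  formData.append('file', file);

  // Don't set a Content-Type header; the browser adds the multipart boundary itself.
  const response = await fetch(data.url, { method: 'POST', body: formData });

  if (!response.ok) {
    throw new Error(`Upload failed: ${response.status}`);
  }
};
```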

Unlike presigned URLs, we don’t need a FileReader to read the file. We just need to build up a FormData using the fields created earlier.

Gotchas

  1. file must be the last field added to FormData.
  2. Make sure to use the POST method, not PUT.
  3. Do not set the Content-Type header in fetch. If you do, the browser won’t set the correct boundary and you’ll get a MalformedPOSTRequest error.
  4. Again, be careful with small Expires values.
  5. Again, ensure your function’s IAM role has the s3:PutObject permission.
  6. Don’t forget CORS! (See above)

Security

In the proxy solutions, auth should be performed by one of the proxy services. That could be an API Gateway Lambda Authorizer, the Lambda handler, your service running on ECS, etc. In the direct solutions, auth should be performed when the client requests a presigned URL or POST.

It’s a good idea to include the user’s ID in the file names to easily trace who created the file. For example, you may concatenate the user’s ID with a random string (like a UUID).
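For example, a sketch along these lines, where the uuid package and the key format are just an illustration:

```javascript
const { v4: uuidv4 } = require('uuid');

// e.g. "users/zac/9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d.jpg"
const key = `users/${userId}/${uuidv4()}.jpg`;
```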

Depending on your use case, you could take this a step further and use only the user’s ID without a random string. This would mean each user can only upload or replace that one file. When the file is consumed (request #3) you could move it to another bucket, delete it, or simply rename it.

Some AWS SDKs, such as the .NET SDK, will interpret ../ in a key name before signing. This means that users/zac/../jon/image.jpg will be signed as users/jon/image.jpg. Others, like the JavaScript SDK, will use your params literally. In any case, you should avoid including user input in keys. If you must, then you should sanitise the input.

Presigned POSTs allow you to sign a key prefix condition. This lets the client use any key as long as it starts with, for example, users/zac/. The client can include ../ in the key, but this isn’t a problem as S3 will treat it literally instead of interpreting it (the console will display a folder named ..).
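The condition might look like this (bucket and prefix are placeholders):

```javascript
const params = {
  Bucket: 'example-bucket', // placeholder
  Expires: 300,
  Conditions: [
    ['starts-with', '$key', 'users/zac/'], // the client chooses the rest of the key
    ['content-length-range', 0, 524288]
  ]
};
```

Since the key isn’t fixed in Fields, the client then supplies the key field itself when building the FormData.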

Only presigned POSTs can restrict uploads based on MIME type or file extension, but neither of these is sufficient protection against bad actors. In addition, you can therefore use a library such as file-type to verify the file’s actual type. These libraries look for magic numbers near the start of the file.

Instead of getting the entire file from S3 for this check, you can use the Range header with GetObject to partially read the file.
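A sketch of that check, assuming the CommonJS file-type package (which exposes a fromBuffer helper):

```javascript
const AWS = require('aws-sdk');
const FileType = require('file-type');

const s3 = new AWS.S3();

const isJpeg = async (bucket, key) => {
  // Fetch only the first few KB; magic numbers live near the start of the file.
  const { Body } = await s3
    .getObject({ Bucket: bucket, Key: key, Range: 'bytes=0-4099' })
    .promise();

  // e.g. { ext: 'jpg', mime: 'image/jpeg' }, or undefined if unrecognised.
  const type = await FileType.fromBuffer(Body);

  return type !== undefined && type.mime === 'image/jpeg';
};
```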

Transfer Acceleration

With the client uploading directly to S3, you can use Transfer Acceleration to improve the experience of your users.

Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.

Transfer Acceleration can make uploads faster. It’s of most benefit to users geographically far away from your S3 bucket. AWS hosts a speed comparison tool to promote the feature. It comes at a price, but it may be worth it.

To use Transfer Acceleration, you first need to enable it via the AWS Console or CloudFormation, then add useAccelerateEndpoint: true to the S3 client you use to presign requests.
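For example, with the AWS SDK for JavaScript (v2):

```javascript
const AWS = require('aws-sdk');

// Presign requests against the accelerated endpoint.
// The bucket itself must already have Transfer Acceleration enabled.
const s3 = new AWS.S3({ useAccelerateEndpoint: true });
```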

Example Code

A working example of uploading using presigned URL and presigned POST is available in this GitHub repo.

After you clone the repo, run npm install followed by serverless deploy inside the backend directory. This will create an S3 bucket, two Lambda functions, and an API Gateway REST API.

Then, you can open frontend\client.html in your browser. Replace the example API URL with the one from your deployed backend and you’re ready.

Have a look around the code. url-signer.js and presigned-url.js show the backend and frontend for presigned URLs, respectively. post-signer.js and presigned-post.js show the same for presigned POST. To keep things simple, I’ve left out most error checking. Check the developer tools if you get stuck.

For more like this, please follow me on Medium and Twitter.
