How to Upload Large Files to S3, Efficiently

If you try to upload large files to S3 from the browser console, you might hit errors, or the upload might fail outright. For better performance, and to make full use of your network bandwidth, use the S3 multipart upload API.

Amazon S3 allows a maximum size of 5 TB for each object. But what counts as a large file upload to S3? Amazon Web Services recommends multipart upload for objects larger than 100 MB. And if your file exceeds 5 GB, the limit S3 places on a single PUT request, you have to use multipart upload.

With multipart upload, a large file is split into parts that are uploaded in parallel, reducing the time your upload needs to complete. You can use the multipart upload API to upload a large file to S3 straight from the AWS CLI:

If your file is 8 MB or larger, navigate to its folder in your terminal and run:

aws s3 cp example.png s3://bucketname

Of course, replace example.png with your file name and bucketname with your bucket name.

If your file is between 5 MB and 8 MB, lower the multipart threshold first, like so:

aws configure set default.s3.multipart_threshold 5MB
aws s3 cp example.png s3://bucketname

Note:
You don’t have to use the cp command; any of the aws s3 upload commands (cp, mv, sync) uses multipart upload whenever the file is larger than multipart_threshold (which is 8 MB by default).
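
If you upload large files routinely, you can also set these options once in ~/.aws/config instead of configuring them per command. This is a sketch; the values below are illustrative assumptions, not recommendations:

[default]
s3 =
  multipart_threshold = 64MB
  multipart_chunksize = 16MB
  max_concurrent_requests = 20

Here multipart_chunksize controls the size of each part and max_concurrent_requests controls how many parts upload in parallel.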

If you encounter errors, try updating your AWS CLI first.
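
You can check which version you have installed with:

aws --version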

If the process fails, consult the AWS documentation on troubleshooting failed S3 uploads.
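
One cleanup step worth knowing: a failed multipart upload can leave orphaned parts sitting in your bucket, and you are billed for that storage until they are removed. You can find and abort incomplete uploads with the lower-level s3api commands (the bucket name, key, and upload ID below are placeholders):

aws s3api list-multipart-uploads --bucket bucketname
aws s3api abort-multipart-upload --bucket bucketname --key example.png --upload-id UPLOAD_ID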

To use multipart upload via an SDK, consult the documentation for the specific SDK you are using; most of them can split and parallelize the upload for you.
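
For example, here is a minimal sketch using the Python SDK (boto3), whose transfer manager handles multipart uploads automatically; the threshold, chunk size, file name, and bucket name are placeholder assumptions:

import boto3
from boto3.s3.transfer import TransferConfig

# Switch to multipart upload above 8 MB and upload up to 10 parts in parallel.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,  # bytes
    multipart_chunksize=8 * 1024 * 1024,  # size of each part
    max_concurrency=10,
    use_threads=True,
)

s3 = boto3.client("s3")

# upload_file splits the file into parts, uploads them in parallel,
# and completes (or retries) the multipart upload for you.
s3.upload_file("example.png", "bucketname", "example.png", Config=config)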

For more information on S3 large file uploads, see the Amazon S3 User Guide.

*If you want to increase your upload speed further, S3 Transfer Acceleration is an option. But beware: depending on where you upload from, this service may have zero impact in your use case. So before committing to Transfer Acceleration, you might want to try the Amazon S3 Transfer Acceleration Speed Comparison tool.
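
If the comparison looks favorable, enabling it takes two commands, sketched here with a placeholder bucket name: one to turn acceleration on for the bucket, and one to tell the CLI to use the accelerate endpoint:

aws s3api put-bucket-accelerate-configuration --bucket bucketname --accelerate-configuration Status=Enabled
aws configure set default.s3.use_accelerate_endpoint true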


