From time to time you need to upload a very large file to AWS S3. While this is not an issue for small files, large files are problematic: the longer the transfer runs, the higher the chance it is interrupted, and once interrupted you have to start the upload again from the beginning.
An alternative is multipart upload: you upload the file in parts, and S3 assembles them into a single object. It is a wonderful option, but it is less known, sparsely documented, and has few working examples.
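To make the flow concrete, here is a minimal sketch of the three `aws s3api` calls behind a multipart upload. The bucket, key, and chunk file names are placeholders, and the JSON-building helper is my own illustration, not part of the script described here:

```shell
#!/bin/sh
# Sketch of the multipart flow. BUCKET, KEY and the chunk path are
# placeholders for illustration only.
multipart_sketch() {
    BUCKET=my-bucket
    KEY=big-file.img

    # 1. Open the upload; S3 hands back an UploadId used by all later calls.
    UPLOAD_ID=$(aws s3api create-multipart-upload \
        --bucket "$BUCKET" --key "$KEY" \
        --query UploadId --output text)

    # 2. Send each chunk as a numbered part; S3 returns an ETag per part.
    ETAG1=$(aws s3api upload-part \
        --bucket "$BUCKET" --key "$KEY" --upload-id "$UPLOAD_ID" \
        --part-number 1 --body /dev/shm/chunk.000 \
        --query ETag --output text)

    # 3. Ask S3 to assemble the parts into one object.
    aws s3api complete-multipart-upload \
        --bucket "$BUCKET" --key "$KEY" --upload-id "$UPLOAD_ID" \
        --multipart-upload "$(printf '1 %s\n' "$ETAG1" | build_parts_json)"
}

# Helper (an assumption of mine): turn "PartNumber ETag" lines into the
# JSON structure that complete-multipart-upload expects.
build_parts_json() {
    printf '{"Parts":['
    first=1
    while read -r num etag; do
        [ "$first" = 1 ] || printf ','
        first=0
        printf '{"PartNumber":%s,"ETag":%s}' "$num" "$etag"
    done
    printf ']}'
}
```

The key point is the `UploadId`: every part is tied to it, which is what makes resuming possible later.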
The script creates chunk files in memory (tmpfs), which is usually available as /dev/shm. If your system mounts tmpfs elsewhere, adjust the script accordingly.
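The chunking step itself can be sketched with `split(1)`. This is a hedged example, assuming GNU split and a tmpfs mount at /dev/shm; the `CHUNK_DIR` and `CHUNK_SIZE` variables are my own names, not the script's:

```shell
#!/bin/sh
# Carve a file into fixed-size pieces under tmpfs so the chunks never
# touch disk. Override CHUNK_DIR / CHUNK_SIZE if your system differs.
split_to_tmpfs() {
    CHUNK_DIR=${CHUNK_DIR:-/dev/shm/chunks}
    CHUNK_SIZE=${CHUNK_SIZE:-10m}
    mkdir -p "$CHUNK_DIR"
    # GNU split writes numbered pieces: chunk.000, chunk.001, ...
    split -b "$CHUNK_SIZE" -d -a 3 "$1" "$CHUNK_DIR/chunk."
}

# Usage: split_to_tmpfs /path/to/huge-file
```

Keeping the pieces in tmpfs means they cost RAM while they exist, so with large chunk sizes it pays to delete each chunk right after its part is uploaded.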
The chunk size I chose (10m) is probably too small for you. I picked it as a nod to P2P protocols, whose chunks are typically about 8-10 MB. Feel free to increase it; keep in mind that S3 requires every part except the last to be at least 5 MB, and allows at most 10,000 parts per upload.
Note the UPLOAD_THREADS variable: it is not actually used in this version, and uploading happens in a single thread.
If the upload is interrupted, you can simply start the script again. Once all the parts have been uploaded, the script combines them into one huge file.
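The resume behaviour rests on asking S3 which parts it already holds. Here is a hedged sketch of that check; `BUCKET`, `KEY`, and `UPLOAD_ID` are placeholders, and `part_is_done` is a helper of my own for illustration:

```shell
#!/bin/sh
# Resume check: list-parts is the real s3api call that reports which
# part numbers S3 already has for a given UploadId.
uploaded_parts() {
    aws s3api list-parts \
        --bucket "$BUCKET" --key "$KEY" --upload-id "$UPLOAD_ID" \
        --query 'Parts[].PartNumber' --output text
}

# Return success if part $1 appears in the space-separated list $2,
# so a re-run can skip chunks that are already on S3.
part_is_done() {
    for p in $2; do
        [ "$p" = "$1" ] && return 0
    done
    return 1
}

# Typical loop: DONE=$(uploaded_parts); part_is_done "$N" "$DONE" || upload part N
```

Because parts are addressed by number under a stable UploadId, re-running the script only spends bandwidth on the chunks that are still missing.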