If disk space is at a premium on the machine where the file originates, then uploading the file to S3 as-is, and then downloading, compressing, and re-uploading it on an EC2 instance in the same region as the S3 bucket, is actually quite reasonable (counterintuitive as it may seem), for one simple reason:
AWS does not charge you for bandwidth between EC2 and S3 within the same region.
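For concreteness, here is a minimal sketch of that round trip as you might run it on the EC2 instance, written with boto3 purely for illustration (the bucket name, key, and temp paths are placeholders, not anything from the question):

import gzip
import shutil

import boto3

BUCKET = "the-bucket"   # placeholder names, for illustration only
KEY = "the/path"

s3 = boto3.client("s3")

# Pull the uncompressed object down to the instance; within the same
# region this transfer costs nothing.
s3.download_file(BUCKET, KEY, "/tmp/original")

# Compress it locally on the instance's disk.
with open("/tmp/original", "rb") as src, gzip.open("/tmp/original.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Push the compressed copy back up; again, no bandwidth charge in-region.
s3.upload_file("/tmp/original.gz", BUCKET, KEY + ".gz")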
This is a perfect job for a spot instance ... and a good example of using SQS to tell the spot machine what work it should do.
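If you go that route, the SQS side can be as simple as dropping a message that names the object to compress, with the spot instance long-polling for work. A rough sketch with boto3 (the queue URL and message format are made up for illustration):

import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/compress-jobs"  # placeholder

sqs = boto3.client("sqs")

# Producer: tell the spot machine which object needs compressing.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="the-bucket/the/path")

# Consumer, running on the spot instance: long-poll for work, do it, delete it.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    bucket, _, key = msg["Body"].partition("/")
    # ... download bucket/key, compress, re-upload (as above) ...
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])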
On the other hand ... you do use more of your local bandwidth transferring the file if you do not compress it first.
If you are a coder, you could write a utility similar to one I wrote for internal use (this is not a plug; it is not currently available for release), which compresses files (via external tools) and uploads them to S3 on the fly.
It works something like this command-line pseudocode:
cat input_file | gzip -9c | stream-to-s3 --bucket 'the-bucket' --key 'the/path'
This is a simplified usage example, to illustrate the concept. Of course, my stream-to-s3 utility takes a number of other arguments, including x-amz-meta metadata and the AWS access key and secret, but you get the idea, perhaps.
Common compression utilities such as gzip, pigz, bzip2, pbzip2, xz, and pixz can read the source file from STDIN and write compressed data to STDOUT without writing the compressed version of the file to disk.
The utility I use reads the file data from its STDIN pipe and, using S3 Multipart Upload (even for small files that do not technically need it, since Multipart Upload conveniently does not require you to know the file size in advance), simply keeps sending data to S3 until it reaches EOF on its input stream. It then completes the multipart upload and verifies that everything succeeded.
I use this utility to build and upload entire compressed archives without ever touching a single block of disk space. Again, it was not particularly difficult to write, and it could be done in a number of languages. I did not even use an S3 SDK; I rolled my own from scratch, using a standard HTTP user agent and the S3 API documentation.
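My utility does not use an SDK, but if you want to see roughly what the streaming idea looks like without writing the raw HTTP yourself, here is a stripped-down sketch using boto3 for illustration (bucket and key names are placeholders). It reads from STDIN and keeps uploading parts until EOF, then completes the multipart upload:

import sys

import boto3

BUCKET = "the-bucket"          # placeholders, for illustration only
KEY = "the/path"
PART_SIZE = 8 * 1024 * 1024    # every part except the last must be >= 5 MiB

s3 = boto3.client("s3")
upload_id = s3.create_multipart_upload(Bucket=BUCKET, Key=KEY)["UploadId"]

parts = []
try:
    part_number = 1
    while True:
        chunk = sys.stdin.buffer.read(PART_SIZE)
        if not chunk:
            break  # EOF on the input stream
        resp = s3.upload_part(Bucket=BUCKET, Key=KEY, PartNumber=part_number,
                              UploadId=upload_id, Body=chunk)
        parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
        part_number += 1
    s3.complete_multipart_upload(Bucket=BUCKET, Key=KEY, UploadId=upload_id,
                                 MultipartUpload={"Parts": parts})
except Exception:
    # On any failure, abort so incomplete parts don't linger and accrue charges.
    s3.abort_multipart_upload(Bucket=BUCKET, Key=KEY, UploadId=upload_id)
    raise

You would then invoke it much like the pseudocode above, e.g. gzip -9c < input_file | python stream_to_s3.py (a hypothetical script name).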