Django storages not detecting modified static files

I use django-storages and Amazon S3 for my static files. Following the documentation, I put these settings in my settings.py:

    STATIC_URL = 'https://mybucket.s3.amazonaws.com/'
    ADMIN_MEDIA_PREFIX = 'https://mybucket.s3.amazonaws.com/admin/'

    INSTALLED_APPS += (
        'storages',
    )

    DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
    AWS_ACCESS_KEY_ID = 'mybucket_key_id'
    AWS_SECRET_ACCESS_KEY = 'mybucket_access_key'
    AWS_STORAGE_BUCKET_NAME = 'mybucket'
    STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'

The first time I ran the static collection, everything worked correctly and my static files were uploaded to my S3 bucket.

However, after making changes to my static files and running python manage.py collectstatic, this is displayed even though the static files have changed:

    -----> Collecting static files
    0 static files copied, 81 unmodified.

However, if I rename a modified static file, it is copied to my S3 bucket correctly.

Why doesn't django-storages upload my modified static files? Is this a configuration problem, or something deeper?

+7
python django amazon-s3 amazon-web-services
4 answers

collectstatic skips a file if the "target" copy is "younger" (more recently modified) than the source file. The Amazon S3 storage backend seems to be returning the wrong date for your files.

You can examine the [code][1] and debug the server responses. There may be a problem with the time zone.
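
For illustration, here is a minimal sketch of that comparison (an assumption, simplified from what collectstatic actually does, not Django's exact code): if the storage backend reports a last-modified time that is not older than the local file's, the copy is skipped, so a timezone-shifted date from S3 makes every file look up to date.

    import os
    from datetime import datetime, timezone

    def looks_up_to_date(local_path, remote_modified_time):
        """remote_modified_time is assumed to be a timezone-aware datetime
        reported by the storage backend (e.g. S3)."""
        local_mtime = datetime.fromtimestamp(os.path.getmtime(local_path),
                                             tz=timezone.utc)
        # collectstatic skips the copy when this comparison is True.
        return remote_modified_time >= local_mtime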

Or you can simply pass the --clear option to collectstatic so that all files are deleted from S3 before collecting:
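
    python manage.py collectstatic --clear

Note that --clear first deletes every file in the target storage, so use it with care.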

+13

https://github.com/antonagestam/collectfast

From the README: a custom management command that compares each file's MD5 sum with its ETag on S3 and skips the copy if the two are the same. This makes collectstatic run MUCH faster if you use git as a version-control system, since git updates timestamps.
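
Setup looks roughly like the sketch below; the exact settings have changed across Collectfast versions, so treat this as an assumption and check the project's README:

    # settings.py -- sketch based on Collectfast's README
    INSTALLED_APPS = (
        'collectfast',  # must be listed before 'django.contrib.staticfiles'
        # ...
        'django.contrib.staticfiles',
        'storages',
    )
    AWS_PRELOAD_METADATA = True  # required by early versions of Collectfast

With 'collectfast' listed before 'django.contrib.staticfiles', its collectstatic command takes precedence, so python manage.py collectstatic picks it up automatically.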

+5

Create a settings file used only for collectstatic, with this configuration:

    TIME_ZONE = 'UTC'
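
For example, if your settings live in a settings package, the new module might look like this (the base module name is an assumption; adjust the import to your layout):

    # settings/collectstatic.py -- minimal sketch
    from .base import *

    TIME_ZONE = 'UTC'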

Then run collectstatic with those specific settings:

    python manage.py collectstatic --settings=settings.collectstatic
+2

This question is a bit old, but in case it helps someone in the future, I figured I would share my experience. Following the advice in the other answers, I confirmed that for me this was caused by a time-zone difference: my Django time setting wasn't wrong, but it was set to EST while S3 uses GMT. In testing, I rolled back to django-storages 1.1.5, where collectstatic did seem to work. Partly out of personal preference, I didn't want to either a) roll back three versions of django-storages and lose any potential bug fixes, or b) change the time zones for components of my project for what basically comes down to a convenience function (albeit an important one).

I wrote a short script to do the same job as collectstatic without the above changes. It will need a little modification for your application, but it should work for standard cases if it is placed at the project level and "static_dirs" is replaced with the names of your project's apps. It runs from the terminal with "python whatever_you_call_it.py -e environment_name" (the environment name selects which bucket to push to).

    import argparse
    import os
    import time
    from datetime import datetime, timedelta

    import pytz
    from boto3.session import Session

    utc = pytz.UTC

    DEV_BUCKET_NAME = 'dev-homfield-media-root'
    PROD_BUCKET_NAME = 'homfield-media-root'
    static_dirs = ['accounts', 'messaging', 'payments', 'search', 'sitewide']


    def main():
        try:
            parser = argparse.ArgumentParser(
                description='Homfield Collectstatic. Our version of '
                            'collectstatic to fix the django-storages bug.')
            parser.add_argument('-e', '--environment', type=str, required=True,
                                help='Name of environment (dev/prod)')
            args = parser.parse_args()
            vargs = vars(args)
            if vargs['environment'] == 'dev':
                selected_bucket = DEV_BUCKET_NAME
                print("\nAre you sure? You're about to push to the DEV bucket. (Y/n)")
            elif vargs['environment'] == 'prod':
                selected_bucket = PROD_BUCKET_NAME
                print("Are you sure? You're about to push to the PROD bucket. (Y/n)")
            else:
                raise ValueError
            acceptable = ['Y', 'y', 'N', 'n']
            confirmation = input().strip()
            while confirmation not in acceptable:
                print("That's an invalid response. (Y/n)")
                confirmation = input().strip()
            if confirmation in ('Y', 'y'):
                run(selected_bucket)
            else:
                print("Collectstatic aborted.")
        except Exception as e:
            print(type(e))
            print("An error occurred. S3 staticfiles may not have been updated.")


    def run(bucket_name):
        # Open a session with S3 (fill in your own credentials).
        session = Session(aws_access_key_id='{aws_access_key_id}',
                          aws_secret_access_key='{aws_secret_access_key}',
                          region_name='us-east-1')
        s3 = session.resource('s3')
        bucket = s3.Bucket(bucket_name)

        # Loop through each app's static directory.
        for directory in static_dirs:
            root_dir = './' + directory + '/static'
            print('Checking directory: %s' % root_dir)

            # Walk every file in every subdirectory.
            for dir_name, subdir_list, file_list in os.walk(root_dir):
                for fname in file_list:
                    try:
                        if fname == '.DS_Store':
                            continue

                        # Find the file's last-modified time and normalize it
                        # to UTC (the +5 hours compensates for the EST offset).
                        full_path = dir_name + '/' + fname
                        last_mod_string = time.ctime(os.path.getmtime(full_path))
                        file_last_mod = datetime.strptime(
                            last_mod_string, '%a %b %d %H:%M:%S %Y') + timedelta(hours=5)
                        file_last_mod = utc.localize(file_last_mod)

                        # Truncate the path for S3, then find the object and
                        # replace it if the local copy is newer.
                        s3_path = full_path[full_path.find('static'):]
                        found = False
                        for key in bucket.objects.all():
                            if key.key == s3_path:
                                found = True
                                last_mod_date = key.last_modified
                                if last_mod_date < file_last_mod:
                                    key.delete()
                                    s3.Object(bucket_name, s3_path).put(
                                        Body=open(full_path, 'rb'),
                                        ContentType=get_mime_type(full_path))
                                    print("\tUpdated : " + full_path)
                        if not found:
                            # The file is not on S3 yet, so it is new; upload it.
                            print("\tFound a new file. Uploading : " + full_path)
                            s3.Object(bucket_name, s3_path).put(
                                Body=open(full_path, 'rb'),
                                ContentType=get_mime_type(full_path))
                    except Exception:
                        print("ALERT: Big time problems with: " + full_path +
                              ". I'm bowin' out dawg, this shitz on u.")


    def get_mime_type(full_path):
        last_index = full_path.rfind('.')
        if last_index < 0:
            return 'application/octet-stream'
        extension = full_path[last_index:]
        try:
            return {
                '.js': 'application/javascript',
                '.css': 'text/css',
                '.txt': 'text/plain',
                '.png': 'image/png',
                '.jpg': 'image/jpeg',
                '.jpeg': 'image/jpeg',
                '.eot': 'application/vnd.ms-fontobject',
                '.svg': 'image/svg+xml',
                '.ttf': 'application/octet-stream',
                '.woff': 'application/x-font-woff',
                '.woff2': 'application/octet-stream',
            }[extension]
        except KeyError:
            print("ALERT: Couldn't match mime type for " + full_path +
                  ". Sending to S3 as application/octet-stream.")
            return 'application/octet-stream'


    if __name__ == '__main__':
        main()
0
