Caching S3 Objects
22 May 2008
So I just deployed a new version of Spokt.com with the static content hosted on Amazon S3. I underestimated the number of GET requests the static assets would generate, so I'm trying to reduce them by setting a far-future Cache-Control header on each object. I hope it works.
require 'rubygems'
require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => 'access id not shown',
  :secret_access_key => 'secret key not shown'
)

photos = AWS::S3::Bucket.find('media.spokt.us', :prefix => 'images/')
puts "#{photos.size} files found."

photos.each do |photo|
  puts "updating #{photo.key}..."
  photo.cache_control = 'max-age=315360000'
  photo.save({:access => :public_read})
end
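For reference, the `max-age` directive is expressed in seconds, and 315360000 works out to ten 365-day years. A quick sanity check (the constant names here are just for illustration, not from the script above):

```ruby
# Cache-Control max-age is in seconds.
# 60 * 60 * 24 * 365 = one 365-day year in seconds.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
MAX_AGE = SECONDS_PER_YEAR * 10

puts MAX_AGE  # prints 315360000 -- the value used in the script above
```

That tells browsers (and intermediate caches) they may reuse the image for up to ten years without re-requesting it.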
To verify the Cache-Control header was actually set, I ran a curl command to inspect the response headers:
$ curl -I http://media.spokt.us/images/contentBoxHeader_04.jpg
HTTP/1.1 200 OK
x-amz-id-2: kEPTZ1ZdNo2nGsUnel5wDwsGi1pXTrkk6XGtSKKzb7zZguJjIwpaUCoUgESYbzkA
x-amz-request-id: 5DD73A7EEB692C0D
Date: Fri, 23 May 2008 05:09:53 GMT
x-amz-meta-s3fox-filesize: 322
x-amz-meta-s3fox-modifiedtime: 1183963918000
Cache-Control: max-age=315360000
Last-Modified: Fri, 23 May 2008 05:05:16 GMT
ETag: "1db8a1d3cb59acda667de962516c3cef"
Content-Type: image/jpeg
Content-Length: 322
Server: AmazonS3
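The same check could be scripted in Ruby with the standard library, which is handy for verifying a whole bucket's worth of objects. This is a sketch I haven't run against the bucket; the helper names are my own, and `cache_control_for` assumes the URL is reachable over plain HTTP:

```ruby
require 'net/http'
require 'uri'

# Fetch a URL's response headers via an HTTP HEAD request (no body transferred)
# and return the Cache-Control header, or nil if it is absent.
def cache_control_for(url)
  uri = URI.parse(url)
  response = Net::HTTP.start(uri.host, uri.port) { |http| http.head(uri.path) }
  response['Cache-Control']
end

# Extract the max-age value, in seconds, from a Cache-Control header string.
def max_age_seconds(cache_control)
  return nil if cache_control.nil?
  cache_control =~ /max-age=(\d+)/ ? $1.to_i : nil
end

# Usage (needs network access to the bucket):
#   header = cache_control_for('http://media.spokt.us/images/contentBoxHeader_04.jpg')
#   puts max_age_seconds(header)
```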
There it is… now we'll see whether the GET request count drops tomorrow.