### Setup S3 & CloudFront

To get started, we need to sign up for Amazon's S3 and CloudFront services. If you already have an Amazon account, you'll just need to log in and finish the signup. If not, create an account, then sign up for S3 and CloudFront. Signing up simply adds the service to your account; there's nothing complicated involved.

Once you've signed up, you'll get an Access Key ID and Secret Access Key which can be found under "Your Account" > "Security Credentials". This is basically your username and password for accessing S3.

#### Setup S3 Bucket For Files

First we need to create a bucket to store all our files in. For more information on "buckets" read "Amazon S3 Buckets Described in Plain English".

To do this, we'll first log into our S3 account using the Access Key ID and Secret Access Key with an application like Transmit (OS X), which is what I'll be using. To see more apps or browser add-ons for accessing S3 see "Amazon S3 Simple Storage Service – Everything You Wanted to Know".

Once signed in, we'll create a bucket to put our files in. I've named mine "files.jremick.com". Bucket names must be unique, must be between 3 and 63 characters, and can contain letters, numbers and dashes (but can't end with a dash).
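The naming rules above can be expressed as a quick check. This is a simplified sketch of the rules as stated here (plus dots, which the example bucket name uses), not Amazon's full validation:

```python
import re

def is_valid_bucket_name(name):
    # Rules from above: 3-63 characters, letters, numbers, dashes
    # (and dots, as in "files.jremick.com"), can't end with a dash.
    if not 3 <= len(name) <= 63:
        return False
    # Must start and end with a letter or number
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name) is not None

print(is_valid_bucket_name("files.jremick.com"))  # True
print(is_valid_bucket_name("ends-with-dash-"))    # False
```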

By unique, they mean unique across the entire AWS network, so it's a good idea to use something like a domain name.

The files we put in this bucket can now be accessed at "files.jremick.com.s3.amazonaws.com". However, this URL is pretty long, and we can quickly set up a shorter one by adding a new CNAME entry at our web host.
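As a small illustration of the two URL patterns involved (using a hypothetical "logo.png" object):

```python
def s3_url(bucket, key):
    # Default virtual-hosted-style URL for an object in a bucket
    return "http://%s.s3.amazonaws.com/%s" % (bucket, key)

def cname_url(subdomain, key):
    # The same object once a CNAME points the subdomain at the bucket
    return "http://%s/%s" % (subdomain, key)

print(s3_url("files.jremick.com", "logo.png"))
# http://files.jremick.com.s3.amazonaws.com/logo.png
print(cname_url("files.jremick.com", "logo.png"))
# http://files.jremick.com/logo.png
```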

#### Setup Custom S3 Subdomain

To shorten the default URL we'll create a CNAME entry as I've done below (this is at your web host). I've chosen "files" as my subdomain but you could use whatever you like.

Now we can access the bucket's files at "files.jremick.com". Much better! Then simply upload the files you want to the "files.jremick.com" bucket.

Once your files are uploaded, you'll want to set the ACL (Access Control List) to allow everyone to read the files (if you want them public). In Transmit, simply right click, select Get Info, set "Read" to "World" under Permissions and click "Apply to enclosed items...". This gives everyone read access to all the files within the bucket.

By default, files uploaded to your S3 account only allow read and write access to the owner. So if you upload new files later on, you'll need to repeat these steps or apply different permissions for just those files.

#### Create CloudFront Distribution

Now that we've set up S3, created a shorter URL and uploaded our files, we'll want to make those files accessible through CloudFront for lower latency and faster load times. To do this, we need to create a CloudFront distribution.

Log into your AWS account and navigate to your Amazon CloudFront management console (under "Your Account" drop down menu). Then click the "Create Distribution" button.

We'll select the origin bucket (the bucket we created earlier), turn on logging if you'd like, specify a CNAME and comments, and finally enable or disable the distribution. You don't have to enter a CNAME or comments, but we'll want to set up a shorter URL later like we did for S3. I'd like to use "cdn.jremick.com", so that's what I'm setting here.

As you can see, the default URL is pretty ugly. That's not something you're going to want to try to remember. So now let's set up a CNAME for a pretty, short URL.

#### Setup Custom CloudFront Subdomain

To set up the custom CloudFront subdomain, we'll go through the same process as we did for S3.

Now we can access files through CloudFront using "cdn.jremick.com".

#### How It All Works

When someone accesses a file through your S3 bucket, it acts just like a regular file host. When someone accesses a file through CloudFront, though, it requests the file from your S3 bucket (the origin) and caches it at the CDN server closest to the original request for all subsequent requests. It's a little more complicated than that, but that's the general idea.

Think of a CDN as a smart network that can determine the fastest possible route for delivering a request. For example, if the closest server is bogged down with traffic, it may be faster to get the file from a server a little farther away but with less traffic, so CloudFront will deliver the requested file from that location instead.
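The origin-plus-edge-cache behavior described above can be sketched as a toy model. This is just an illustration of the caching idea, not how CloudFront is actually implemented:

```python
class EdgeCache:
    """Toy model of a CDN edge server: fetch from the origin once,
    then serve every subsequent request from the local cache."""

    def __init__(self, origin):
        self.origin = origin   # dict of path -> content, standing in for the S3 bucket
        self.cache = {}

    def get(self, path):
        if path in self.cache:
            return "HIT", self.cache[path]
        content = self.origin[path]   # only the first request reaches the origin
        self.cache[path] = content
        return "MISS", content

origin = {"/style.css": "body { color: red }"}
edge = EdgeCache(origin)
print(edge.get("/style.css")[0])  # MISS (fetched from origin, now cached)
print(edge.get("/style.css")[0])  # HIT (served from the edge)
```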

#### Caching Problems

Once a file is cached on CloudFront's network servers, it doesn't get replaced until it expires and is automatically removed (after 24 hours of inactivity by default). This can be a major pain if you're trying to push updates out immediately. To get around this, you'll need to version your files. For example, "my-stylesheet.css" could be "my-stylesheet-v1.0.css". Then, when you make an update that needs to go out immediately, you would change the name to "my-stylesheet-v1.1.css" or something similar.
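A tiny helper like this (hypothetical, not part of any AWS tooling) shows the renaming scheme:

```python
def versioned_name(filename, version):
    # "my-stylesheet.css" + "1.1" -> "my-stylesheet-v1.1.css"
    # Assumes the filename has an extension.
    base, _, ext = filename.rpartition(".")
    return "%s-v%s.%s" % (base, version, ext)

print(versioned_name("my-stylesheet.css", "1.0"))  # my-stylesheet-v1.0.css
print(versioned_name("my-stylesheet.css", "1.1"))  # my-stylesheet-v1.1.css
```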

### Performance Testing

Our content is uploaded to our S3 bucket, our CloudFront distribution is deployed and our custom subdomains are set up for easy access. It's time to put it to the test to see what kind of performance benefits we can expect.

I've set up 44 example images ranging in size from approximately 2KB up to 45KB. You might be thinking that this is more images than most websites load on a single page. That may be true, but many websites such as portfolios, ecommerce sites and blogs load just as many, and possibly more.

Although I'm only using images for this example, what matters for the comparison is the file size and quantity. Today's websites load several JavaScript, CSS, HTML and image files on every page. 44 file requests is probably fewer than most websites actually make, so a CDN could have an even greater impact on your website than we'll see in this comparison.

I'm using Safari's Web Inspector to view performance results. I've disabled caches and used shift + refresh 10-15 times (about every 2-3 seconds) for each test to get a decent average of total load time, latency and duration.

• 45 Total files (including HTML document)
• 561.13KB Total combined file size

#### Regular Web Host

Here are the performance results when hosted via my regular web host. Sorted by latency.

• 1.82-1.95 Seconds total load time
• 90ms Fastest latency (last test)
• 161ms Slowest latency (last test)
• ~65% of the images had a latency of less than 110ms

Sorted by duration.

• 92ms Fastest duration (last test)
• 396ms Slowest duration (last test)

#### Amazon S3

The exact same files were used for testing S3. Sorted by latency.

• 1.3-1.6 Seconds total load time
• 55ms Fastest latency (last test)
• 135ms Slowest latency (last test)
• ~90% of the images had a latency of less than 100ms

Sorted by duration.

• 56ms Fastest duration (last test)
• 279ms Slowest duration (last test)

S3 is faster than my regular web host but only marginally. If you didn't feel like messing around with a CDN, S3 is still a great option to give your website a decent speed boost. I still recommend using a CDN though and we'll see why in this next test.

#### Amazon CloudFront

The exact same files were used for testing CloudFront. Sorted by latency.

• 750-850ms Total load time
• 25ms Fastest latency (last test)
• 112ms Slowest latency (last test)
• ~85% of the images had a latency of less than 55ms.
• Only one file had a latency of more than 100ms.

Sorted by duration.

• 38ms Fastest duration (last test)
• 183ms Slowest duration (last test)

#### Comparison

Here's a quick breakdown of the performance comparison between my regular web host and the same files on Amazon's CloudFront service.

• 1.82-1.95 seconds vs 0.75-0.85 seconds total load time (~1.1 seconds faster)
• 90ms vs 25ms fastest latency (65ms faster)
• 161ms vs 112ms slowest latency (49ms faster)
• CloudFront: Only one file with latency greater than 100ms and 85% of the files with latency less than 55ms
• Regular Web Host: Only 65% of the files had a latency of less than 110ms

Duration comparison

• 92ms vs 38ms Fastest duration (54ms faster)
• 396ms vs 183ms Slowest duration (213ms faster)

50ms or even 100ms (0.1 seconds) doesn't sound like a very long time to wait, but repeat that for 30, 40, 50 or more files and you can see how it quickly adds up to seconds.
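A quick back-of-the-envelope calculation shows how per-file savings accumulate. Note this ignores parallel downloads, which would reduce the real-world total:

```python
# Rough estimate: ~50ms of latency saved per request, as in the tests above
per_file_saving = 0.050  # seconds

for n in (30, 40, 50):
    print("%d files: ~%.1f seconds saved" % (n, n * per_file_saving))
# 30 files: ~1.5 seconds saved
# 40 files: ~2.0 seconds saved
# 50 files: ~2.5 seconds saved
```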

#### Visual Comparison

Here's a quick video showing just how noticeable the difference in load time is. I've disabled caches and do a forced refresh (shift + refresh) to make sure images aren't being cached.

### Other Ways To Increase Performance

There are several other ways to increase website performance when using a CDN.

• Create different subdomains for different types of files to maximize parallel downloads. For example, load images from "images.jremick.com" and other files like scripts and CSS from "cdn.jremick.com". This allows more files to load in parallel, reducing the total load time.
• Gzip files like JavaScript and CSS
• Configure ETags

See Best Practices for Speeding Up Your Web Site for more.
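The subdomain split in the first tip could look something like this sketch (the extension list and hostnames are just examples):

```python
def asset_host(filename):
    # Hypothetical split: images from one subdomain, everything else from
    # another, so browsers can open more parallel connections per page.
    image_exts = (".png", ".jpg", ".jpeg", ".gif")
    if filename.lower().endswith(image_exts):
        return "images.jremick.com"
    return "cdn.jremick.com"

print(asset_host("photo.jpg"))  # images.jremick.com
print(asset_host("app.js"))     # cdn.jremick.com
```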

### Serving Gzipped Files From CloudFront

One of the options above for increasing performance even further was serving gzipped files. Unfortunately, CloudFront isn't able to automatically determine whether a visitor can accept gzipped files and serve up the correct version. Fortunately, all modern browsers support gzipped files these days.

#### Create & Upload Your Gzipped Files

To serve gzipped files from CloudFront, we can give our website some logic to serve up the right files, or we can set the Content-Encoding and Content-Type on a few specific files to keep things a little simpler. Gzip the files you want and rename them so they don't end in ".gz". For example, "filename.css.gz" would become "filename.css", or, to remind yourself that it's a gzipped file, "filename.gz.css". Now upload the gzipped file to the location you want in your S3 bucket (don't forget to set the ACL/permissions).

If you're not sure how to gzip a file, see http://www.gzip.org (OS X can do this in the Terminal).
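If you have Python handy, its standard library can gzip data too. This sketch uses made-up sample data just to demonstrate the kind of size reduction to expect on repetitive text like CSS:

```python
import gzip

# Repetitive sample standing in for a real stylesheet
css = ("body { margin: 0; padding: 0; }\n" * 200).encode("utf-8")
compressed = gzip.compress(css)

print("original: %d bytes, gzipped: %d bytes" % (len(css), len(compressed)))
# Text like CSS and JavaScript typically compresses to a fraction of its size
```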

#### Set Content-Encoding and Content-Type

We need to set the Content-Encoding and Content-Type (if it isn't already set) on our files so that when they're requested, the browser knows the content is gzipped and can decompress it properly. Otherwise it will look like this.

We can do this easily with Bucket Explorer. Once you've downloaded it, enter your AWS Access Key ID and Secret Access Key to log into your S3 account. Find the gzipped file you uploaded earlier, right click and select "Update MetaData".

As you can see, it already has the Content-Type set to text/css, so we don't need to set that (for JavaScript it would be text/javascript). We just need to add the right Content-Encoding. Click "Add" and, in the popup dialog, enter "Content-Encoding" in the Key field and "gzip" in the Value field. Click OK, then Save, and you're done! Now the browser will handle the file correctly.
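To summarize what Bucket Explorer is setting for us, these are the two pieces of metadata a gzipped asset needs (a minimal sketch, with the helper name made up for illustration):

```python
def gzip_metadata(content_type):
    # The two headers the browser needs to decode a gzipped asset:
    # its real type, plus the fact that it's gzip-compressed.
    return {"Content-Type": content_type, "Content-Encoding": "gzip"}

print(gzip_metadata("text/css"))
# {'Content-Type': 'text/css', 'Content-Encoding': 'gzip'}
```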

Gzipping a file can greatly reduce its size. For example, this test stylesheet was around 22KB and was reduced to approximately 5KB. For my blog, I've combined all my jQuery plugins with jQuery UI Tabs; after minification it was 26.49KB, and after being gzipped it was reduced to 8.17KB.

### Conclusion

There are a lot of ways to increase the performance of your website and in my opinion they're worth trying. If visitors are only 0.5 seconds or even 1 second away from leaving your website, a CDN could keep that from happening. Plus, most of us are speed freaks anyway so why not crank up the performance of your website if you can? Especially if it could save you money in the process.

If you have any questions, please let me know in the comments and I'll try to respond to them. Thanks!