My Journey To Managing My Own WordPress Server

TL;DR — I used to shy away from managing my own server, but with some experimentation I got more comfortable with it and now run one myself.

I wouldn’t call myself a sys admin. Sure I know my way around a Linux box pretty well, but there’s a lot I don’t know. I don’t spend my days managing servers, so I don’t read up on the latest trends or security news. I know of a few things that I don’t know, but really I’m most afraid of what I don’t know that I don’t know.

When I first set up WP App Store, I didn’t even consider hosting it myself. At the time, Stripe was not available in Canada, so I had to pass credit card information through the app. That meant I needed PCI-compliant web hosting. Even more reason not to host it myself!

After bouncing between a few different hosting solutions that I wasn’t happy with, I finally found out that Amazon EC2 could be set up to be PCI compliant, so I went to oDesk and hired a sys admin specializing in Amazon Web Services to set it up. I kept him on retainer to monitor the server and keep it up-to-date.

At the same time, I decided to challenge myself and set up an EC2 instance to host this blog. I dug into the server that the sys admin had set up and figured out what he had done, how he had set up the file system (EBS) properly, what scripts he was using to make EC2 snapshot backups, etc. And so I managed to set up a new EC2 instance, very similar to what had been set up for WP App Store: Varnish + Apache + MySQL. (Amazon Web Services actually allows you to use most of its services for free for a year, so it’s a great opportunity to learn.)
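
The snapshot backups, by the way, boil down to a tiny script run from cron. Here’s a rough sketch of the idea using the AWS command line tools (the volume ID is a placeholder, and this isn’t the exact script from that server):

    #!/bin/bash
    # Nightly EBS snapshot, run from cron.
    # VOLUME_ID is a placeholder -- use the ID of the EBS volume holding the site's data.
    VOLUME_ID="vol-12345678"

    aws ec2 create-snapshot \
        --volume-id "$VOLUME_ID" \
        --description "nightly backup $(date +%F)"

Drop something like that into /etc/cron.daily/ and you get point-in-time snapshots you can build a fresh volume from if the instance ever dies.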

I installed New Relic monitoring to keep an eye on things. And I did run into a problem: once every few weeks, Apache would crash and need to be restarted. After some research, I found out that Apache and MySQL had to be tuned for the EC2 micro instance’s low memory. Once that was done, no other problems. And the tiny amount of CPU and memory was enough for Varnish to serve over 12,000 page views when one of my blog posts made the front page of Hacker News last year.
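
The fix itself wasn’t complicated, just a matter of capping how much each process is allowed to grow. The values below are illustrative rather than my exact settings: limit the number of Apache workers in the prefork MPM and shrink MySQL’s buffers so the two together can’t exhaust the micro instance’s memory.

    # httpd.conf -- prefork MPM: keep the worker count small (illustrative values)
    <IfModule mpm_prefork_module>
        StartServers          2
        MinSpareServers       2
        MaxSpareServers       4
        MaxClients           10
        MaxRequestsPerChild 500
    </IfModule>

    # my.cnf -- shrink MySQL's buffers and connection limit (illustrative values)
    [mysqld]
    key_buffer_size         = 16M
    innodb_buffer_pool_size = 64M
    max_connections         = 30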

Shortly after I set up my blog on EC2, I faced the dilemma of choosing a hosting solution for my new ecommerce site, deliciousbrains.com. Since I had just started the experiment of hosting my blog on EC2, I wasn’t ready to host an ecommerce site myself. Stripe had launched in Canada, so the server wouldn’t need to be PCI compliant, but I still wasn’t comfortable with the idea. I decided to try AppFog, a cloud platform-as-a-service (PaaS). Unfortunately, each request was very slow. We’re talking upwards of 2 seconds to complete a page request. Just the HTML, not the images and other assets. It was brutal. For months I just put up with this, all the while asking AppFog for ways to tune performance, but they took a very long time to reply (25 days!) and were not helpful.

Then everything came to a head. I had had a conversation with John Turner where he described managing his own servers, how he had optimized his WordPress sites, and how important page request speed is for ecommerce conversion rates. Plus I had been managing my own server for my blog for nearly a year with very few problems. Plus I was very unhappy with AppFog. I felt it was time to manage my own server.

After a bit of research, I found that Linode was a better option than Amazon EC2 and set up a new server there. I really made an effort to learn more about Varnish and set up the config files nicely for multiple websites. I’m now hosting all my sites (deliciousbrains.com, bradt.ca, wpappstore.com, and bigsnowtinyconf.com) on that server and have been very happy with the results. Most pages are served from Varnish’s cache and consistently come in at under 200 ms. Varnish is truly incredible. A post from this blog made the front page of Hacker News earlier this month and the server didn’t even break a sweat. Varnish served up the cached page to 10k visitors in 4 hours with no noticeable decrease in performance.

What I’ve realized is that there’s a lot of value in managing my own server. I can tweak things at the server level specifically for my site. Managed WordPress hosts typically bypass the Varnish cache when a cookie is set (they do ignore some cookies, like Google Analytics). So if you’re running a shopping cart on your site, every request will likely bypass Varnish and be processed and served by Apache (much slower). Managing my own server means I can configure Varnish to not cache certain sections of my site, regardless of what cookies are set. I tell it I want everything cached except /my-account/*, /cart/*, and /checkout/*.
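
In Varnish’s VCL that’s only a few lines. Here’s a stripped-down sketch of the receive routine (Varnish 3 syntax; my real config has more going on): the account, cart and checkout pages get passed straight through to Apache, and everything else has its cookies stripped so it can be cached.

    sub vcl_recv {
        # These pages genuinely depend on the visitor's session -- never cache them.
        if (req.url ~ "^/(my-account|cart|checkout)(/|$|\?)") {
            return (pass);
        }

        # Everything else: drop cookies so Varnish will serve it from cache.
        unset req.http.Cookie;
        return (lookup);
    }

That cookie stripping is exactly the kind of decision a managed host can’t make for you, because they have no idea which of your cookies actually matter.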

Sure I’m spending more of my time managing the server. But I think that’s a good thing. I have complete control over its performance. And when you’re talking about ecommerce, this is extremely important.

I’m considering trying my own dedicated hardware next. Codero Smart Servers look very good. Less money for more power. And although I’m skeptical about it, I’d also like to try an Nginx + PHP-FPM configuration. I’ve heard it performs a lot better than Apache + mod_php, but I am concerned that it doesn’t obey the set_time_limit() PHP function, which results in more frequent timeouts. I’ve been seeing a lot of 503 errors out there, so I’m not convinced it’s better than Apache for overall reliability. I’m also considering trying CloudFlare or serving pages through Amazon CloudFront, which should result in another page speed boost. The journey continues!
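
(A footnote on those PHP-FPM timeouts: from what I can tell they’re controlled at the FPM and nginx level rather than by set_time_limit(). PHP-FPM has a request_terminate_timeout that the master process enforces on the wall clock, and nginx has fastcgi_read_timeout for how long it will wait on the upstream. A rough sketch of where those knobs live, with placeholder values and socket path:)

    ; PHP-FPM pool config (e.g. www.conf) -- wall-clock limit enforced by the
    ; FPM master, independent of set_time_limit(); placeholder value
    request_terminate_timeout = 300s

    # nginx server block -- how long nginx waits on the FPM upstream
    # before giving up; placeholder values
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_read_timeout 300s;
        include fastcgi_params;
    }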

Do you manage your own server? Would you like to? Would you like me to write more about how to set this up?

  • jason404

    Amazon CloudFront has transformed the speed of four WordPress e-commerce sites that I run, which have a lot of large images. It’s easy getting it set up with W3 Total Cache.

  • Ben May

    I have always managed my own servers, for the same reasons you’ve discovered.

    I’ve never actually used Linode, but I suspect they’re more like a VPS setup, rather than AWS, which is a much more confronting architecture platform. Linode is also geared at developers, not sys admins?

    I’ve done a few experiments with routing all your web traffic via CloudFront (since it now supports POST, etc.), but have had mixed success. Need to do more work on the cache headers that CF understands and honours. Also, SSL + CloudFront is crazy expensive.

    I’ve also only ever used the PHP-FPM/nginx stack; it seems to be the approach more WP people use (wp.com, etc.). You can set timeouts and all that stuff, but you can’t do it the same way you can with Apache. You should definitely give that a go.

    I have a Puppet boilerplate, so I can now pull that boilerplate, make changes to suit the build, and deploy within an hour. I don’t know why you’d want to go to dedicated / “tin can” hardware. All I see is cons: hardware failure + impossible to scale up quickly.

  • Thanks for the feedback and reminder that SSL + CloudFront is expensive. I knew there was a reason I’m still shying away from that. As for the dedicated hardware, take a closer look at Codero’s Smart Servers. I believe it’s virtualization but on dedicated hardware and comes with snapshot backups included. So if I understand correctly, you can fire up another Smart Server from a backup snapshot, just like a VPS. Sounds pretty awesome, but I haven’t tried it yet.

  • Ben May

    So essentially what you can do with AWS, and EBS snapshots, right? 🙂

    Each to their own.

    There are certain things AWS doesn’t do that I wish they did. They also don’t do support, which I know is a massive turn-off for many people.

  • jason404

    You can also get dedicated servers from AWS, if you need that for PCI compliance or something, but they still run as virtual machines, so they are still elastic and live-migratable.

  • jason404

    What do you wish AWS could do, but can’t? I can’t think of anything I want, and there are a lot of services I haven’t tried yet (or need to).

  • Ben May

    More flexibility with routing in ELBs would be great. Otherwise you need to have an extra 2 EC2s in 2 different AZs running nginx to do advanced / URL-based routing.

  • jason404

    Ah yeah, that is something I miss. If it had some of the features of OpenStack, that would be great.

    I used to use something called vCider, which was a distributed virtual switch. It linked cloud servers and even your LAN on the same Layer 2 network. It did not need a Cisco-class router and VPC to connect to your LAN.

    It was fantastic, and free up to 8 nodes, but then Cisco bought the startup with the intention of using the technology in OpenStack.

  • Actually, AWS “dedicated” is just a billing category. It has nothing to do with architecture or hardware, AFAIK. When I say “dedicated hardware”, I mean a slot on a rack holding physical server hardware dedicated to you. If PCI compliance and elasticity are things you value, AWS is a no-brainer.

  • jason404

    No, that’s wrong. AWS dedicated instances run on dedicated hardware servers that nobody else uses, until you move the instance to another server (dedicated or not), which you can still do. The instance runs within a virtual machine on the hardware server, so it still has all the benefits of virtualisation.

    Most of the extra cost only needs to be paid for the first instance in the same region.

    http://aws.amazon.com/dedicated-instances/

  • I’ve always managed my own servers. I’ve even built Apache and PHP from source, although these days I use binaries to save time.

  • Ah thanks, that’s interesting! I was thinking of Reserved Instances and didn’t realize they offered dedicated hardware. Looks like it’s only slightly more for dedicated hardware. I wonder if performance is better.

  • fuck, you have balls of steel. i’m still super-shy of doing this myself.

  • jason404

    Well it costs an additional 2 cents per hour per region (no matter how many instances in that region), so it’s not cheap if you only have a handful or less.

  • saltcod

    I’ve opened and closed a Linode account 3 (4?) times now. Same timeline every time:

    1. Install Ubuntu, LAMP, etc. Get things set up. FUN.
    2. Play around with config things, set up SSH passwordless login, add a few new sites, feel like a ninja. ULTRA FUN.
    3. Figure out why I’m unable to install plugins (permissions). NO FUN.
    4. Get notified that there are updates available. OKAY, I CAN HANDLE THIS.
    5. Restart apache on a production site. Get an error. Site down. PANIC.
    6. Start migrating sites off Linode.

    Happens every time. At the end of the day, though I find playing with it enjoyable, the extra stress and responsibility isn’t worth it for me!

  • Ryan Hellyer

    Amazon CloudFront is awesome for serving complete pages. The only catch is that you can’t easily refresh the cache. I’ve been considering trying out other CDN services for that, since others offer better (faster) cache refreshing systems.

  • Ryan Hellyer

    I don’t think W3 Total Cache supports using Cloudfront for serving pages (as mentioned in the post above), only for static assets like images, CSS and JS.

  • Nathaniel

    I’m a fan of Digital Ocean. I was using Linode before but migrated over to DO and it’s been great. Pricing is better too.

    Also, if you’re into Vagrant, you can set up Digital Ocean as a provider (as opposed to VirtualBox) and spin up new instances from the command line.
