Planning to Cycle the Oregon Coast

I’ve been talking to people for a while now about doing a trip to cycle the Oregon coast — starting from Astoria and working my way to the California border. I’ve been riding my bike a lot in order to begin training, but today I actually took the time to plan out what that trip will look like.

I found that there are lots of places to find information, many of them free. In particular, the Oregon Department of Transportation publishes a handy PDF for coast cyclists. Of particular interest is a table listing all of the state parks and the amenities they offer (Hot showers! Yurts!). More detailed information about the individual state parks along the way is available on the Oregon State Parks site.

While the above information is helpful, the best tool for actually creating my itinerary was Strava’s Route Builder. Dropping waypoints is easy, and a dynamic profile of the elevation changes is provided. After a half hour of tinkering I not only had a destination for each day, but I also have an idea of the kind of training I need to be doing.

Seeing as the worst day has an elevation gain of over 3,600 feet, I foresee laps up and down Mt. Tabor in my future!

The Cycle Schedule

Day 1 : Astoria to Nehalem Bay State Park

Haystack Rock on a partly cloudy day.

Nehalem Bay State Park is located just south of Manzanita and is sandwiched between the Nehalem River and the ocean. This day’s ride passes Cannon Beach, whose Haystack Rock is one of my favorite spots along the coast.

Day 2 : Nehalem Bay State Park to Cape Lookout State Park

A sunset in Oceanside, Oregon.

This stretch of the coast includes Tillamook, which everyone knows has ice cream to pair with your bourbon. Immediately after Tillamook comes the Three Capes Scenic Route, with steep climbs and amazing views.

Day 3 : Cape Lookout State Park to Beverly Beach State Park


A big climb up Cape Lookout comes first thing in the morning, with another equally steep climb up Cascade Head about halfway through the day. Beverly Beach State Park is past Lincoln City and Depoe Bay, but not quite to Newport.

Day 4 : Beverly Beach State Park to Jessie M. Honeyman Memorial State Park

This day sees us through Yachats and Florence, and at 60.6 miles it is the longest day of the trip.

Day 5 : Jessie M. Honeyman Memorial State Park to Sunset Bay State Park

And now we’re at the part of the coast that I don’t know much about — anything south of Florence is brand-new territory!

Day 6 : Sunset Bay State Park to Humbug Mountain State Park

Day 7 : Humbug Mountain State Park to California Border

Password Protecting a Directory Using NGINX

Coming from an Apache world, I was familiar with setting up a simple password protected directory using .htaccess rules.

However, in setting up a bare-bones LEMP server on DigitalOcean I wasn’t installing Apache, so I didn’t have access to the htpasswd utility. So, pulling together information from a number of different sources, I came up with this workflow:

Determine the Directory To Protect

Since you are reading this post, you probably already know this. However, for the sake of this walkthrough, let’s be thorough.

In a default LEMP stack on Ubuntu 14.04, the NGINX document root is found at:

/usr/share/nginx/html

You can verify this by taking a look at the sites that are enabled: open any files found in /etc/nginx/sites-enabled and look for lines that say root /usr/some/path;

For this post I’ll create a new folder and webpage under /usr/share/nginx/html/ to test and make sure things work correctly:

cd /usr/share/nginx/html
mkdir private
echo "A Protected Webpage" > private/index.html

Navigating to the URL (e.g. http://your-server/private/) should give you a white page with the text ‘A Protected Webpage’.

Create The Password

Since this is a LEMP stack we have PHP, so we have an easy way to generate the password to use on our protected directory:

php -r 'echo crypt("password", "salt");echo PHP_EOL;'

The resulting string of text is what we will use in our password file. It should go without saying: don’t use the word password as your password. As for the salt, it adds a bit of randomness to the resulting hash. For more info, check out the man page for crypt: man crypt

So, if we did in fact use the above values of ‘password’ and ‘salt’, the resulting string should be sa3tHJ3/KuYvI

We simply need to store this string in a file that NGINX can read. It is important to create this file outside of the document root. I prefer to put it next to the HTML root so that it is easily accessible:

cd /usr/share/nginx
nano .passwords

Now add the user and password to that .passwords file. For NGINX the syntax is username:passwordhash:comment, and the resulting line to paste is:

benjamin:sa3tHJ3/KuYvI:Protecting the private folder

Save and exit nano.

Update NGINX Server Conf to Actually Protect the Folder

Finally we need to let NGINX know which folder to protect and what file to check for valid user/password combinations. Edit the site’s configuration file in the sites-enabled folder. If it is an out-of-the-box setup, that will be the default file:

nano /etc/nginx/sites-enabled/default

Within the server block, you should see a location block that looks something like:

location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}

Directly below this block (but still within the server block) add another location rule for any URLs that match our private folder, and which points to our passwords file:

location ^~ /private/ {
    auth_basic "Restricted";
    auth_basic_user_file /usr/share/nginx/.passwords;
}

Save this file and exit nano.

Now restart NGINX:

service nginx restart

If all went well, visiting your protected folder in a browser should prompt you for your password!

Discovering Browser-sync

There’s a new tool that I’m loving as I explore various ways to streamline my multi-device development.

This tool is a Node module that synchronizes interactions between multiple browsers and devices, enabling multiple concurrent views of the same state of a website.

So as an example, we would start browser-sync in the root directory of our site. A new Node server listening on a specific port is created on the local network, allowing any device to access the page. Each device that connects to this temporary site syncs events such as scrolling, touches, and active links.

Additionally, when we start this server, we can set it to watch for changes to the underlying files so that all devices are updated upon saving.

It’s like having LiveReload on steroids.

See the project’s homepage for more in-depth documentation. But really, it’s a Node module, so installing and immediately using browser-sync in your local site directory looks like:

npm install -g browser-sync
browser-sync start --server --files "css/*.css"

Breaking down the second command:

browser-sync start
Starts browser-sync.
--server
Tells browser-sync to use its built-in server to serve the files. Alternatively we could point it at an existing server with --proxy "".
--files "css/*.css"
Tells browser-sync which files to watch and reload upon changes. In this case we are watching any CSS files found in the css directory.

Tools to Measure and Improve Battery Life on Thinkpads in Linux

Here are some tools to start measuring and improving battery life in Linux.

Measuring Power Performance


Working with Power Settings

The tool that helps set optimal battery settings here is TLP, which has a decent intro on MakeUseOf.

To summarize, install using apt-get:

sudo add-apt-repository ppa:linrunner/tlp
sudo apt-get update
sudo apt-get install tlp tlp-rdw

Optionally, for ThinkPads:

sudo apt-get install tp-smapi-dkms acpi-call-tools

Start the Service:

sudo tlp start

Using Sed to Update PHP Short Tags


There are a couple of Stack Overflow discussions of what constitutes PHP short tags, and why you might not want to use them.

For me, it boils down to portability. It is fairly easy to enable short tags in the php.ini file by setting:

short_open_tag = On

However, I’d just as soon reduce the troubleshooting that I have to do if I ever move to a new environment. This happened recently as I was importing an old WordPress site into my Varying Vagrant Vagrants machine (which has short tags disabled by default).

Using Sed

I could have used search and replace inside my editor, but why not use the power of sed to quickly replace the tags? Taken even further: why not create a bash script to iterate over all PHP files in a directory?

The Sed command

sed 's:<?=:<?php echo:g' index.php | sed 's:<?:<?php:g' | sed 's:<?phpphp:<?php:g'

At the heart of this are three sed ‘substitution’ commands, piped together.

The first replaces the echo short tag <?= with the verbose <?php echo. The next pipe replaces the basic opening short tag, while the last pipe reverts any tags that originally started out correctly, but were garbled by the second sed statement.

Also note the g at the end of each sed statement. This is the global replacement flag. This is necessary to ensure that sed will operate on all matches within a line of text, not just the first match.
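To see all three substitutions in action, here is the pipeline run on a hypothetical line containing every case: an echo short tag, a bare short tag, and an already-correct tag.

```shell
# Echo tag, bare short tag, and a correct tag on one line:
echo '<?= $title ?> and <? foo(); ?> and <?php bar(); ?>' \
  | sed 's:<?=:<?php echo:g' \
  | sed 's:<?:<?php:g' \
  | sed 's:<?phpphp:<?php:g'
# → <?php echo $title ?> and <?php foo(); ?> and <?php bar(); ?>
```

Note how the middle substitution temporarily mangles the correct tags into <?phpphp, and the final substitution puts them right again.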

But it Doesn’t Update the File

So the above command is good; there is only one problem: it only pipes the output to stdout. Worse, if you try to use our friend the greater-than sign to redirect the output back into the original file (index.php in the above example), it will completely wipe out that file. Yikes!

Of the solutions discussed on Stack Overflow, I particularly like writing the contents of the pipe to a temporary file, and then moving the temporary file to the original file’s location. So our new command, which now writes the changes back to the original file, looks like:

sed 's:<?=:<?php echo:g' index.php | sed 's:<?:<?php:g' | sed 's:<?phpphp:<?php:g' > index.php.tmp && mv index.php.tmp index.php

Script It!

And finally, because it would be tedious to write out that command for every file that we want to update, let’s make a script that will run our sed command for every PHP file:

#!/bin/bash
for f in *.php
do
    echo "Converting $f"
    sed 's:<?=:<?php echo:g' "$f" | sed 's:<?:<?php:g' | sed 's:<?phpphp:<?php:g' > "$f.tmp" && mv "$f.tmp" "$f"
done

Just create a new file, copy the above code to that file, make it executable, and run it!
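If you want to convince yourself it works before pointing it at real code, here is a quick sanity check in a scratch directory (the file names and contents are made up for the demo):

```shell
# Create two throwaway PHP files that use short tags...
demo=$(mktemp -d) && cd "$demo"
printf '<? echo "hi"; ?>\n' > a.php
printf '<?= $name ?>\n' > b.php

# ...and run the same loop the script performs.
for f in *.php
do
    sed 's:<?=:<?php echo:g' "$f" | sed 's:<?:<?php:g' | sed 's:<?phpphp:<?php:g' > "$f.tmp" && mv "$f.tmp" "$f"
done

cat a.php b.php
# → <?php echo "hi"; ?>
# → <?php echo $name ?>
```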

Using Vagrant for WordPress Development


Creating WordPress plugins and themes is a fairly easy process. However, creating a new WP site on a local machine can be fairly tedious. Add to this the individual quirks of servers and the setup differences between local and production environments, and you start to appreciate the benefits of a real sandboxed solution.

The general idea behind Vagrant is the same as the justification for using VirtualBox or VMware to run a development server on your local machine.

What is Vagrant?

Vagrant helps to build and automate virtual machines. Where it really shines is its ability to provision, or configure, the software that the server uses through simple configuration files.
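For a rough sense of what such a configuration file looks like, here is a minimal Vagrantfile sketch; the box name, IP address, and script path are illustrative, not taken from any particular project:

```ruby
# Minimal Vagrantfile sketch: pick a base box, give the VM an address,
# and hand provisioning off to a shell script.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                  # base box to build from
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.provision "shell", path: "provision.sh"  # provisioning script
end
```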

Vagrant uses base boxes, common virtual servers, as the building blocks for more specific virtual machines. If you are curious about all the different ways that people are using Vagrant, there are a number of good introductory articles out there.

Vagrant and WordPress

Because Vagrant can be used to quickly turn a basic virtual server into a specialized WordPress installation, we can use it to easily share the specific settings for our development machine with other developers, as well as mimic the properties of the production server.

Luckily, there are a couple of very good Vagrant projects that have already been setup with WordPress.

Varying Vagrant Vagrants

Jeremy Felt has a good article outlining why he started using Vagrant for WordPress development. Development is still progressing quickly, so to see the work currently being applied to the Vagrant boxes running WordPress, check out 10up’s Varying Vagrant Vagrants (VVV) repo on GitHub. The code is really well documented, and the documentation has a number of good jumping-off points for more learning.

In my mind, there is only one real drawback with VVV: its complexity. Even though the code is well documented, there is still a lot going on. There is certainly a lot of power under the hood, but for smaller, simpler tasks, like a quick test of a new plugin or theme, I found myself wanting a simple LAMP stack with only one site to choose from.


And then I found VagrantPress. This is a much simpler setup: just a LAMP stack with Git and WordPress.

Both of these projects are good for different reasons. In general, VVV is large and powerful, almost too much so, but really the right tool for most jobs. Still, there is something compelling about VagrantPress’s simplicity.

At the end of the day, both projects should be watched. Active development means that both will change and improve, making collaborative development much faster and much easier.

A Case for Virtual Machines

When Tinkering Gets Out of Hand

As a content management system, WordPress excels at tracking your content. By simply applying a new theme, the feeling and tone of the underlying content can be perceived in new and expressive ways.

So, what do you do when you’re ready to go beyond tweaking a few lines in the CSS stylesheet and really begin developing your own themes and plugins for WordPress?