Running your own Dynamic DNS Service (on Debian)

I used to have a static IP address on my home ADSL connection, but then I moved to BT Infinity, and they don’t provide that ability.  For whatever reason, my Infinity connection resets a few times a week, and it always results in a new IP address.

Since I wanted to be able to connect to a service on my home IP address, I signed up to dyn.com and used their free service for a while.  I set up a CNAME with my hosting provider (Gandi) so that I could use a single common hostname in my own domain, and point it at the dynamic DNS host and hence the dynamic IP address.

While this works fine, I’ve had a few e-mails from dyn.com where either the update process hasn’t been enough to prevent the ‘30 day account closure’ process, or more recently, a mail saying they’re changing that and you now need to log in on the website once every 30 days to keep your account.

I finally decided that since I run a couple of VPSs, and have good control over DNS via Gandi, I may as well run my own bind9 service and use the dynamic update feature to handle my own dynamic DNS needs.  Side note: I think Gandi do support DNS changes through their API, but I couldn’t get it working.  Also, I wanted something agnostic of my hosting provider in case I ever move DNS in future (I’m not planning to, since I like Gandi very much).

The basic elements of this are,

  1. a bind9 service running somewhere, which can host the domain and accept the updates.
  2. delegation of a subdomain to that bind9 service.  Since Gandi runs my top level domain for me, I need to create a subdomain and delegate to it, and then make dynamic updates into that subdomain.  I can still use CNAMEs in the top level domain to hide the subdomain if I wish (there’s an example just after this list).
  3. configuration of the bind9 service to accept secure updates.
  4. a script to do the updates.
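As an example of the CNAME trick in item 2, the parent example.com zone ends up with an entry something like this (the names are illustrative, not my real ones),

myhost 300 IN CNAME thehost.home.example.com.

A lookup of myhost.example.com then follows the CNAME into the delegated subdomain and picks up whatever dynamic A record is current.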

In the interests of not re-inventing the wheel, I copied most of the activity from this post.  But I’ll summarise it here in case that ever goes away.

Installing / Configuring bind9

You’ll need somewhere to run a DNS (bind9 in my case) service.  This can’t be on the machine with the dynamic IP address for obvious reasons.  If you already have a DNS service somewhere, you can use that, but for me, I installed it on one of my Debian VPS machines.  This is of course trivial with Debian (I don’t use sudo, so you’ll need to be running as root to execute these commands),

apt-get install bind9 bind9-doc

If the machine you’ve installed bind9 onto has a firewall, don’t forget to open port 53 (both TCP and UDP).  You now need to choose and configure your subdomain.  You’ll be creating a single zone, and allowing dynamic updates.
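On the firewall point, if you’re using plain iptables, a couple of rules along these lines are all that’s needed (a minimal sketch; adjust to fit whatever firewall scheme you already run),

# DNS needs both protocols; UDP for ordinary queries, TCP for larger
# responses and zone transfers
iptables -A INPUT -p udp --dport 53 -j ACCEPT
iptables -A INPUT -p tcp --dport 53 -j ACCEPT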

The default config for bind9 on Debian is in /etc/bind, and that includes zone files.  However, dynamically updated zones need a journal file, and the zone file itself needs to be writable by bind.  I didn’t even bother trying to put the file into /etc/bind, on the assumption bind won’t have write access, so instead, for dynamic zones, I decided to create them in /var/lib/bind.  I avoided /var/cache/bind because the cache directory, in theory, is for transient files that applications can recreate.  Since bind can’t recreate the zone file entirely, it’s not appropriate to store it there.

I added this section to /etc/bind/named.conf.local,

// Dynamic zone
zone "home.example.com" {
  type master;
  file "/var/lib/bind/home.example.com";
  update-policy {
    // allow hosts to update their own entry with a key named after them
    grant *.home.example.com self home.example.com.;
  };
};

This sets up the basic entry for the master zone on this DNS server.

Create Keys

So I’ll be honest, I’m following this section mostly by rote from the article I linked.  I’m pretty sure I understand it, but just so you know.  There are a few ways of trusting dynamic updates, but since you’ll likely be making them from a host with a changing IP address, the best way is to use a shared secret.  That secret is then held on the server and used by the client to identify itself.  The configuration above allows hosts in the subdomain to update their own entry, if they have a key (shared secret) that matches the one on the server.  This stage creates those keys.

This command creates two files.  One will be the server copy of the key file, and can contain multiple keys, the other will be a single file named after the host that we’re going to be updating, and needs to be moved to the host itself, for later use.

ddns-confgen -r /dev/urandom -q -a hmac-md5 -k thehost.home.example.com -s thehost.home.example.com. | tee -a /etc/bind/home.example.com.keys > /etc/bind/key.thehost.home.example.com

The files will both have the same content, and will look something like this,

key "thehost.home.example.com" {
  algorithm hmac-md5;
  secret "somesetofrandomcharacters";
};

You should move the file key.thehost.home.example.com to the host which is going to be doing the updating.  You should also change the permissions on the home.example.com.keys file,

chown root:bind /etc/bind/home.example.com.keys
chmod u=rw,g=r,o= /etc/bind/home.example.com.keys

You should now return to /etc/bind/named.conf.local and add this section (to use the new key you have created),

// DDNS keys
include "/etc/bind/home.example.com.keys";

With all that done, you’re ready to create the empty zone.

Creating the Empty Zone

The content of the zone file will vary, depending on what exactly you’re trying to achieve.  But this is the one I’m using.  This is created in /var/lib/bind/home.example.com,

$ORIGIN .
$TTL 300 ; 5 minutes
home.example.com IN SOA nameserver.example.com. root.example.com. (
    1 ; serial
    3600 ; refresh (1 hour)
    600 ; retry (10 minutes)
    604800 ; expire (1 week)
    300 ; minimum (5 minutes)
    )
NS nameserver.example.com.
$ORIGIN home.example.com.

In this case, nameserver.example.com is the hostname of the server you’ve installed bind9 onto.  You shouldn’t add any static entries to this zone unless you’re very careful, because it’s always possible they’ll get overwritten, although of course, there’s no technical reason to prevent it.

At this stage, you can recycle the bind9 instance (/etc/init.d/bind9 reload), and resolve any issues (I had plenty, thanks to terrible typos and a bad memory).
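Two tools that ship with bind will catch most typos before you reload; worth running every time you touch the config,

named-checkconf /etc/bind/named.conf
named-checkzone home.example.com /var/lib/bind/home.example.com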

Delegation

You can now test your nameserver to make sure it responds to queries about the home.example.com domain.  In order to properly integrate it though, you’ll need to delegate the zone to it, from the nameserver which handles example.com.  With Gandi, this was as simple as adding the necessary NS entry to the top level zone.  Obviously, I only have a single DNS server handling this dynamic zone, and that’s a risk, so you’ll need to set up some secondaries, but that’s outside the scope of this post.  Once you’ve done the delegation, you can try doing lookups from anywhere on the Internet, to ensure you can get (for example) the SOA for home.example.com.
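For the record, the delegation itself is just an NS record in the parent zone; something like this (with your real names substituted, and an arbitrary TTL),

home.example.com. 3600 IN NS nameserver.example.com.

dig will confirm both halves,

# ask your bind9 server directly first
dig +short SOA home.example.com @nameserver.example.com

# then, once the NS record is live, from anywhere via normal resolution
dig +short SOA home.example.com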

Making Updates

You’re now able to update the target nameserver, from your source host using the nsupdate command.  By telling it where your key is (-k filename), and then passing it commands you can make changes to the zone.  I’m using exactly the same format presented in the original article I linked above.

cat <<EOF | nsupdate -k /path/to/key.thehost.home.example.com
server nameserver.example.com
zone home.example.com.
update delete thehost.home.example.com.
update add thehost.home.example.com. 60 A 192.168.0.1
update add thehost.home.example.com. 60 TXT "Updated on $(date)"
send
EOF

Obviously, you can change the TTLs to something other than 60 if you prefer.
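You can confirm an update landed by querying the records straight back from the nameserver,

dig +short @nameserver.example.com thehost.home.example.com A
dig +short @nameserver.example.com thehost.home.example.com TXT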

Automating Updates

The last stage is automating updates, so that when your local IP address changes, you can update the relevant DNS server.  There are myriad ways of doing this.  I’ve opted for a simple shell script which I’ll run every couple of minutes via cron, and have it check and update DNS if required.  In my instance, my public IP address is behind a NAT router, so I can’t just look at a local interface, and so I’m using dig to get my IP address from the OpenDNS service.

This is my first stab at the script, and it’s absolutely a work in progress (it’s too noisy at the moment for example),

#!/bin/sh

# set some variables
host=thehost
zone=home.example.com
dnsserver=nameserver.example.com
keyfile=/home/bob/conf/key.$host.$zone

# get current external address
ext_ip=$(dig +short @resolver1.opendns.com myip.opendns.com)

# get last ip address from the DNS server
last_ip=$(dig +short @$dnsserver $host.$zone)

if [ -n "$ext_ip" ]; then
  if [ -n "$last_ip" ]; then
    if [ "$ext_ip" != "$last_ip" ]; then
      echo "IP addresses do not match (external=$ext_ip, last=$last_ip), sending an update"

# the heredoc stays in column one so the EOF terminator is recognised
cat <<EOF | nsupdate -k $keyfile
server $dnsserver
zone $zone.
update delete $host.$zone.
update add $host.$zone. 60 A $ext_ip
update add $host.$zone. 60 TXT "Updated on $(date)"
send
EOF

    else
      echo "success: IP addresses match (external=$ext_ip, last=$last_ip), nothing to do"
    fi
  else
    echo "fail: couldn't resolve last ip address from $dnsserver"
  fi
else
  echo "fail: couldn't resolve current external ip address from resolver1.opendns.com"
fi
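For completeness, the cron side is a single entry (in root’s crontab, or whichever user owns the key file; the script path here is made up),

# check every couple of minutes; log the (noisy) output somewhere for now
*/2 * * * * /home/bob/bin/dnsupdate.sh >> /home/bob/dnsupdate.log 2>&1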

Raspberry Pi sensors – Munin graphing plugin

I love Munin!  I’ve finally got one of the Raspberry Pis to be reasonably stable, so I’ve set up a munin-node on it.  The standard Linux sensord stuff doesn’t run on the ARM core, so I had assumed I wouldn’t be able to see any exciting temperature graphs, but I was wrong!

The Raspberry Pi Debian image includes a command called vcgencmd, which allows root to interrogate various settings and measurements.  That includes the temperature, clock frequencies and voltages across various components.
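If you’ve not seen it, it looks something like this (run as root; the exact output format may vary with firmware version),

/opt/vc/bin/vcgencmd measure_temp        # temp=42.8'C
/opt/vc/bin/vcgencmd measure_clock arm   # frequency(45)=700000000
/opt/vc/bin/vcgencmd measure_volts core  # volt=1.20V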

So I’ve knocked up a quick plugin for Munin which gathers that stuff and graphs it.  You can get it over at GitHub here.  The current code looks like this (but the GitHub copy will be most up-to-date),

#!/bin/bash
# -*- sh -*-

: << =cut

=head1 NAME

pisense_ - Wildcard-plugin to monitor Raspberry Pi sensors (temp, volts, clock speed)

=head1 CONFIGURATION

This plugin needs to be run as root for vcgencmd to work.

  [pisense_*]
  user root

=head2 ENVIRONMENT VARIABLES

This plugin does not use environment variables.

=head2 WILDCARD PLUGIN

This is a wildcard plugin.  To specify if you want temperature,
clock speed or volts, link this file to pisense_volt, pisense_temp
or pisense_clock.

For example,

  ln -s /usr/share/munin/plugins/pisense_ \
        /etc/munin/plugins/pisense_clock

will monitor the clock speeds of your pi.

=head1 BUGS

None known.

=head1 NOTES

This plugin is shamelessly based on the ip_ plugin (structure).

=head1 MAGIC MARKERS

#%# family=auto
#%# capabilities=autoconf suggest

=head1 AUTHOR

Tony (tony@darkstorm.co.uk).

=head1 LICENSE

It's yours, do with it what you like.

=cut

. $MUNIN_LIBDIR/plugins/plugin.sh

sensor=${0##*/pisense_}

if [[ "$1" == "autoconf" ]]; then
  if ! /opt/vc/bin/vcgencmd firmware >/dev/null 2>/dev/null; then
    echo "no (could not run /opt/vc/bin/vcgencmd as user $(whoami))"
    exit 0
  else
    echo yes
    exit 0
  fi
fi

# this is flawed, vcgencmd always returns with RC 0.  Needs expanding.
if [[ "$1" == "suggest" ]]; then
  if /opt/vc/bin/vcgencmd measure_temp >/dev/null 2>/dev/null; then
    echo temp
  fi
  if /opt/vc/bin/vcgencmd measure_volts >/dev/null 2>/dev/null; then
    echo volt
  fi
  if /opt/vc/bin/vcgencmd measure_clock core >/dev/null 2>/dev/null; then
    echo clock
  fi
  exit 0
fi

if [[ "$1" == "config" ]]; then
  if [[ "$sensor" == "temp" ]]; then
    echo "graph_title Raspberry Pi core temp"
    echo "graph_args --base 1000"
    echo "graph_vlabel degrees Celsius"
    echo "graph_category sensors"
    echo "temp.label Core Temperature"
    echo "temp.min 0"
  fi
  if [[ "$sensor" == "clock" ]]; then
    echo "graph_title Raspberry Pi clock frequencies"
    echo "graph_args --base 1000"
    echo "graph_vlabel hertz"
    echo "graph_category sensors"
    for clock in arm core h264 isp v3d uart pwm emmc pixel vec hdmi dpi
    do
      echo "clock$clock.label $clock clock frequency"
      echo "clock$clock.min 0"
      echo "clock$clock.type GAUGE"
    done
  fi
  if [[ "$sensor" == "volt" ]]; then
    echo "graph_title Raspberry Pi voltages"
    echo "graph_args --base 1000"
    echo "graph_vlabel volts"
    echo "graph_category sensors"
    for volt in core sdram_c sdram_i sdram_p
    do
      echo "volt$volt.label $volt voltage"
      echo "volt$volt.min 0"
      echo "volt$volt.type GAUGE"
    done
  fi
  exit 0
fi

if [[ "$sensor" == "temp" ]]; then
  # output looks like: temp=42.8'C
  temp=$(/opt/vc/bin/vcgencmd measure_temp | awk -F"=" '{print $2}' | awk -F"'" '{print $1}')
  echo "temp.value $temp"
fi
if [[ "$sensor" == "clock" ]]; then
  for clock in arm core h264 isp v3d uart pwm emmc pixel vec hdmi dpi
  do
    # output looks like: frequency(45)=700000000
    clockval=$(/opt/vc/bin/vcgencmd measure_clock $clock | awk -F"=" '{print $2}')
    echo "clock$clock.value $clockval"
  done
fi
if [[ "$sensor" == "volt" ]]; then
  for volt in core sdram_c sdram_i sdram_p
  do
    # output looks like: volt=1.20V
    voltage=$(/opt/vc/bin/vcgencmd measure_volts $volt | awk -F"=" '{print $2}' | tr -d "V")
    echo "volt$volt.value $voltage"
  done
fi
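Installing it is the usual munin wildcard plugin dance; roughly this, assuming the standard Debian munin-node layout,

cp pisense_ /usr/share/munin/plugins/
chmod +x /usr/share/munin/plugins/pisense_
ln -s /usr/share/munin/plugins/pisense_ /etc/munin/plugins/pisense_temp
ln -s /usr/share/munin/plugins/pisense_ /etc/munin/plugins/pisense_clock
ln -s /usr/share/munin/plugins/pisense_ /etc/munin/plugins/pisense_volt
/etc/init.d/munin-node restart

Don’t forget the ‘user root’ stanza from the CONFIGURATION section above has to go into a file under /etc/munin/plugin-conf.d/, or vcgencmd will fail.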

I’ll post some sample screenshots in a bit!

Debian Squeeze gdm / X server won’t start without a monitor

The little Atom based computer I was using as a Linux server in the house died.  Well, technically the fan died, which led to the rest of it dying.  It’s a proprietary case and motherboard, so the fan isn’t something I can just pick up and replace (it looks like a laptop fan, squeezed into the case), and I need something working faster than I could repair it.  I had a spare PC upstairs, which isn’t as quiet or as energy efficient as the Atom, but at least it works!  (This was after several days of Raspberry Pis failing and trashing SDHC cards, so I was already pretty pissed off with hardware, Linux and the whole building it yourself thing.)

Anyway, as it happens, Debian reminded me how exceedingly trivial it is to build a server, and since I had full backups of the Atom PC it didn’t take long to get everything back up and running.  I was also reminded how slow the Atom chips can be, the P4 I’ve replaced it with is a world apart in terms of speed for whatever reason.

The Atom machine was running Ubuntu, but it used to frustrate me when it wasn’t quite the same as Debian, and I wanted to go back to a basic Debian build.  Also, since the P4 is more stock than the Atom, I don’t need the bleeding edge drivers you get with Ubuntu.  All of this did leave me with one issue though.

When connected to a monitor, the Debian build works fine.  It works if you boot it with a monitor and then remove the monitor as well, but if you boot it without a monitor, it won’t start the X Server.  It tries about a million times [1] and then gives up.

I use Gnome under Debian, and because this machine is sitting physically in my house, I enable the autologon, and remote desktop control, so if I want, I can VNC in from my main machine.  I don’t usually need to do it, and I’m comfortable doing everything I want on that box from the command line, but every now and then it’s nice to use one or two GUI based apps.  Since Gnome supports this out of the box, I don’t feel the need to install VNC and start changing the config – it worked under Ubuntu, and it works under Debian if the monitor is there, I just needed it to work without the monitor.

I did a lot of reading around, there are plenty of suggestions about using VNC instead, some suggestions of modifying xorg.conf with some default display settings, and some other stuff.  I tried setting up an xorg.conf (new versions of X don’t use one by default, so you have to create one), but that didn’t seem to help.  More reading, and more playing around, and then I finally found the exact solution.

You can read the original page here.

Essentially, you need to add an entry to xorg.conf as I had been doing, but even then, X will probably refuse to start, because it detects modesetting drivers in the kernel and refuses to load the VESA driver.  Here’s the specific section from the error log if you get this,

(II) VESA: driver for VESA chipsets: vesa
(EE) VESA: Kernel modesetting driver in use, refusing to load
(WW) Falling back to old probe method for vesa
(EE) No devices detected.

So, as you can see from the document linked above, you need to disable the modesetting and then X will happily start.  This is the /etc/X11/xorg.conf config file I used (same as the original document),

Section "Device"
  Identifier "VNC Device"
  Driver     "vesa"
EndSection

Section "Screen"
  Identifier "VNC Screen"
  Device     "VNC Device"
  Monitor    "VNC Monitor"
  SubSection "Display"
    Modes "1280x1024"
  EndSubSection
EndSection

Section "Monitor"
  Identifier  "VNC Monitor"
  HorizSync   30-70
  VertRefresh 50-75
EndSection

I also used both the i915 modeset change (because I had an i915 config file already), and the Nvidia one, since the machine has an Nvidia card in it.  A quick reboot, and gdm and the X Server both started fine.  Very happy!
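For reference, the modeset changes are just module options dropped into /etc/modprobe.d (the file names are whatever you like),

# /etc/modprobe.d/i915-kms.conf
options i915 modeset=0

# /etc/modprobe.d/nouveau-kms.conf
options nouveau modeset=0

If the driver is loaded from the initramfs, you may also need an update-initramfs -u before the reboot for the change to stick.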

So the key isn’t just the xorg.conf above, which most people have posted about, you probably have to disable modesetting in the relevant graphics driver as well.

[1] okay, about half a million.

Simple Debian Squeeze LAMP Config

Want the instant gratification solution?  Scroll down to “All In One” below.

Several of the Linux (Debian Squeeze) / Apache2 / MySQL / PHP (LAMP) configs I’ve seen on the ‘net include complexity you just don’t need (like suEXEC or suPHP).  If you’re setting up a basic server (physical, VPS, Xen, OpenVZ, whatever) and you’re going to be the only person running it, then setting up a LAMP environment is trivial.

This post assumes your server boots and you can log in, and that it’s a basic Squeeze install with none of the relevant software pre-installed.  I hate sudo, so I’m going to assume you have either su’d to root (su - root) or that you’ve switched to root via sudo (sudo su - root).  If not, you’ll need to prefix all of these commands with sudo (after setting sudo up).

Although this little guide is for Debian, I think it’ll work unchanged on Ubuntu as well.  This guide also assumes that your server hosts a single website, so nothing fancy with any Apache2 VirtualHosts.

All of the following steps can be combined, but I like to install by stages so that I can test each element on its own, and not get overwhelmed.

One of the big mistakes people new to Debian or Ubuntu make is thinking they have to manually edit config files to get things working.  The Debian packagers spend a lot of time making Debian packages work properly with each other with the absolute minimum of manual effort.

Install Apache2

While Apache2 might be a bloated-warthog in the eyes of some system administrators, it’s ubiquitous, reliable, feature rich, well documented and well supported.  You can play with lighttpd, nginx and other options later, but this is LAMP, so let’s install the A.

The default Apache2 install on Debian will try and use the Worker MPM for Apache2.  We want the Prefork MPM when we use PHP5 later, so we’ll specify that straight away.

apt-get install apache2 apache2-mpm-prefork

And that’s it.  Assuming that runs successfully (errors are outside the scope of this walkthrough), you should be able to connect to the web server of your server on port 80 (iptables is also outside the scope of this, but a blank Debian install won’t have any firewalling in the way anyway).

If the install worked, connecting should return a page which says “It works!” and some information about this being the default web page.

Install PHP

Again, PHP gets bad press these days and maybe rightly so, but again it’s well used, well supported and well understood.  It’s probably easier to install it along with Apache2 in reality, because Debian does all the post config work at once then, but if you do it later, you are at least in control of the various stages.

We need to install two things, PHP5 and the Apache2 PHP module (on Debian, this package is libapache2-mod-php5).

apt-get install php5 libapache2-mod-php5

and then restart Apache2 (I use the following command, any method that reloads the Apache2 config file works)

/etc/init.d/apache2 restart

That’s it.  You don’t need to edit anything, enable anything, configure anything.  Debian handles all that for you.  The default install of Apache2 creates a /var/www directory which it uses as the root for your web site.  If you go to that directory and create a file called test.php, and put this into it,

<?php phpinfo(); ?>

You can then test the PHP5 install has worked by connecting to your web server and requesting the file test.php.  If it asks you to save it, something went wrong (or you didn’t restart Apache2), otherwise you should get a full list of the PHP settings.

NB: Debian installs Suhosin by default.  This should be okay, but it can cause issues with some features of WordPress and phpMyAdmin.

Install MySQL

The MySQL install is the most complex, because you’ll have to create a password!  Otherwise, it’s as simple as the above steps.  As well as MySQL we need to remember to install the PHP5 MySQL module, otherwise we won’t be able to interact with the database.

apt-get install mysql-server php5-mysql

After some packages are downloaded and installed, you’ll be asked to set a password for the MySQL root user.  The dialog says this is not mandatory.  I think it should be.  You should absolutely set this password.  It does not need to match the Linux root user password, and as in all cases, it should actually differ significantly.

Restart Apache2 for good measure, and you’re done.  MySQL will be started, the relevant libraries are installed, and Apache2 / PHP5 / MySQL can all communicate as required.

By default, MySQL won’t be listening on any external interfaces, which is a good thing, so only your website can communicate with it.  Some guides recommend installing phpMyAdmin at this point, and you can if you want, although I prefer not to.
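If you want to sanity check the install, a couple of quick commands will do it,

# prompts for the MySQL root password you just set, then prints a version string
mysql -u root -p -e "SELECT VERSION();"

# mysqld should be listening on 127.0.0.1:3306 only
netstat -ltn | grep 3306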

Permissions

In a default Debian install, Apache2 and PHP5 run as the www-data user.  By default, the permissions on /var/www are

drwxr-xr-x root root

That means the web server can’t create any files or directories in /var/www.  That’s a problem when installing things like WordPress which want to create their own config files or .htaccess files.  Because we’re not worried about multiple users on our server, or different customers, it’s safe to set the owner of /var/www to www-data:www-data, and do the same for all files in that directory.  This advice is only true for a server where you don’t mind all the websites running as the same user, but that’s the point of this example anyway!
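In practice that’s a one liner (don’t do this if you keep anything under /var/www that shouldn’t belong to the web server),

chown -R www-data:www-data /var/www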

All In One

The following command will install the whole thing in one go, and all you need to do is set the MySQL password.

apt-get install apache2 apache2-mpm-prefork php5 libapache2-mod-php5 mysql-server php5-mysql

Follow Up Steps

Later on you might want to install additional PHP modules (such as php5-curl, php5-gd, php5-mcrypt), and for sites which you expect to be busy on servers with not much memory you might want to look at using apache2-mpm-worker and FastCGI.

Exim4 (SMTP MTA) + Debian + Masquerading

I love Debian, and Exim4 seems to ‘just work’ for me most of the time, so I tend to use it for my MTA by preference.  Debconf handles the basic options for Exim4 pretty well, and usually I don’t need to mess with anything.

However, on one of my VPSs I wanted to do what I used to refer to as masquerading.  I use the term to refer to having an SMTP server automatically masquerade as a different host in outbound e-mail addresses.  So the server may be fred.example.net, but all outgoing mail comes from user@example.net.  It’s common if you want to handle the return mail via some other route – and for me I do.  My servers are in the darkstorm.co.uk domain, but I don’t want them handling mail for somehostname.darkstorm.co.uk, and I don’t want to have to configure every user with a different address, I just wanted a simple way to get Exim to masquerade.  Additionally, I only want external outbound mail re-writing; mail which is staying on the server should remain untouched.  This allows bob to mail fred on the server, and fred to reply without the mail suddenly going off the server, but if bob mails bill@example.org, then his address is re-written correctly.

I think I spent some time looking at this a couple of years ago, and had the same experience as recently – it’s a bit frustrating tracking down the best place to do it.  Firstly, Exim doesn’t have a masquerade option as such, and the manual doesn’t refer to masquerading in that way.  What it does have is an extensive rewriting section in the config file and support for doing that rewriting in various ways.

On top of this, the Debian configuration of Exim can be a little daunting at first, and how you achieve the configuration may depend on whether you’re using a split config or the combined config.

Anyway, enough rambling, you can get Exim to rewrite outgoing mail / masquerade by setting one macro.  This works on Debian 6 (Squeeze) with Exim4, but I assume it’ll work with Exim4 on any Debian installation.

Create (or edit)

/etc/exim4/exim4.conf.localmacros

Add the following line,

REMOTE_SMTP_HEADERS_REWRITE = *@hostname.example.net ${1}@example.net

Rebuild the Exim config (might not be essential but I do it every time anyway),

update-exim4.conf

and then recycle Exim (reload might work, but I tend to recycle stuff),

/etc/init.d/exim4 restart

That same macro is used for both the single monolithic config file and the split config file.  It tells Exim that for remote SMTP only, it should rewrite any header that matches the left part of the line with the replacement on the right.  The ${1} on the right matches the * on the left (multiple *s are matched with ${1}, ${2}, etc.).

You can supply multiple rules by separating them with colons, such as this,

REMOTE_SMTP_HEADERS_REWRITE = *@hostname.example.net ${1}@example.net : *@hostname ${1}@example.net : *@localhost ${1}@example.net

There are more flags you can provide to the rewrite rules, and you can place rewrites in other locations, but the above will achieve the basic desire of ignoring locally delivered mail, but rewriting all headers on outbound e-mail which match.

Full details of Exim’s rewrite support are here.  Details about using Exim4 with Debian can be found in Debian’s Exim4 readme (which you can read online here).

I’m sure there are other ways of achieving this, there’s certainly an option in the debconf config (dc_hide_mailname) which seems hopeful, but it didn’t seem to do anything for me (maybe it only works when you’re using a smart relay?)  Either way, this option does what I wanted, and hey, this is UNIX, there’s always more than one way to skin a cat.

Edit: just had a look at the various Debian package files, and it looks like dc_hide_mailname only works if you’re using a smart relay option for Exim.  If you’re using the full internet host option from Debconf, it never asks you if you want to hide the mailname, and ignores that option when you rebuild the config files.

Virtual Machines – taking the pain out of major upgrades

If your computers are physical machines, where each piece of hardware runs a single OS image, then upgrading that OS image puts your services at risk or makes them unavailable for a period of time.

Sure, you have a development and test environment, where you can prove the process, but those machines cost money.  So processes develop to either ensure you have a good backout, or you can make changes you know will work.

Virtual Machines have changed the game.  I have a couple of Linux (Debian) based VMs.  They’re piddly little things that run some websites and a news server.  They’re basically vanity VMs, I don’t need them.  I could get away with shared hosting, but I like having servers I can play with.  It keeps my UNIX skills sharp, and lets me learn new skills.

Debian have just released v6 (Squeeze).  Debian’s release schedule is slow, but very controlled, and it leads, hopefully, to very stable servers.  Rather than constantly updating packages like you might find with other Linux distributions, Debian restricts updates to security patches only, and then every few years a new major release is made.

This is excellent, but it does introduce a lot of change in one go when you move from one release of Debian to the next.  A lot of new features arrive, configuration files change in significant ways and you have to be careful with the upgrade process as a result.

For matrix (the VM that runs my news server), I took the plunge and ran through the upgrade.  It mostly worked fine, although services were out for a couple of hours.  I had to recompile some additional stuff not included in Debian, and had to learn a little bit about new options and features in some applications.  Because the service is down, you’re doing that kind of thing in a reasonably pressured environment.  But in the end, the upgrade was a success.

However, the tidy neat freak inside me knows that spread over that server are config files missing default options, or old copies of config files lying around that I need to clean up; legacy stuff that is supported but deprecated, sitting around just waiting to bite me in obscure ways later on.

So I decided to take a different approach with yoda (the server that runs most of the websites).  I don’t need any additional hardware to run another server, it’s a VM.  Gandi can provision one in about 8 minutes.  So, I ordered a new clean Debian 6 VM.  I set about installing the packages I needed, and making the config changes to support my web sites.

All told, that took about 4 hours.  That’s still less time than the effort required to do an upgrade.

I structure the data on the web server in such a way that it’s easy to migrate (after lessons learned moving from Gradwell to 1and1 and then finally to Gandi), so I can migrate an entire website from one server to another in about 5 minutes, plus the time it takes for the DNS changes to propagate.

Now I have a nice clean server, running a fresh copy of Debian Squeeze without any of the confusion or trouble that can come from upgrades.  I can migrate services across at my leisure, in a controlled way, and learn anything I need to about new features as I go (for example, I’ve switched away from Apache’s worker MPM and back to the prefork MPM).

Once the migration is done, I can shut down the old VM.  I only pay Gandi for the hours or days that I have the extra VM running.  There’s no risk to the services, if they fail on the new server I can just revert to providing them from the old.

Virtual Machines mean I don’t have to do upgrades in place, but equally I don’t have to have a lot of hardware assets knocking around just to support infrequent upgrades like this.

There are issues of course; one of the reasons I didn’t do this with matrix is that it has a lot of data present, and no trivial way to migrate it.  Additionally, other servers using matrix are likely to have cached IP details beyond the content of DNS, which makes it less easy to move to a new image.  But overall, I think the flexibility of VMs certainly brings another aspect to major upgrades.

Debian / IPv6 / ip6tables / arno-iptables-firewall

Gandi turned IPv6 on, on my virtual host and I’ve been playing catch up ever since.  I’d not spent much time looking at IPv6 other than a cursory glance and I sort of knew the basics.  But once they’d switched it on I had to put in a little bit of reading time.

Did I want the same hostname to resolve to both the IPv4 and IPv6 address, or did I want to use a different hostname for each?  What was I going to do about firewalls?  And a few other things.

Because the iptables documentation makes my brain bleed, I use an out-of-the-box firewall tool (arno-iptables-firewall) which I’ve found extremely useful.  However, the Debian stable version doesn’t support IPv6 configurations.

That left me with three choices.  Try and work out an ip6tables setup for myself, grab a different firewall product, or backport the latest version of arno-iptables-firewall to Debian Squeeze.  Backporting seemed like the most interesting option – so I did that.

Surprisingly it wasn’t as hard as I expected, although I did have to learn a bunch of Debian package management terminology at very short notice.  This post helped a ton.  Up until this point, IPv6 access to the server had been working fine, because there was nothing in the way 😉 A couple of connections with other servers had started using IPv6 and I wanted to retain those.

I checked the config for the firewall, and restarted it.  Everything seemed okay.  However, a few days later another sysadmin got in touch and told me they could no longer get to the server on its IPv6 address.  It turned out I could, but only from another server on the same network, and after a little digging and investigation it became clear the issue was routing.

Turning the firewall on and off didn’t fix it, but it seemed like rebooting got it working, and as soon as I started arno-iptables-firewall the problem came back.  So, I stopped using the firewall for IPv6 and everything was okay.  Until overnight, the problem came back on its own.

One of the key things about IPv6 is that it relies on ICMPv6 far more than IPv4 relied on ICMP.  One of the most important uses is Neighbor Discovery.

Although the arno-iptables-firewall setup was set to allow ICMP through, I had missed one critical setting.  Gandi uses IPv6 stateless autoconfiguration to provide IPv6 information to the host.  This means the host continues to check how to route traffic.  The missed config stopped this information from arriving at the host, and as a result, the essential route to the outside world expired from the routing table.
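You can actually watch this happen: the default route learned from router advertisements carries an expiry timer, and with the broken firewall config it simply counts down and vanishes.  The address below is made up, but the shape is right,

ip -6 route show
# e.g. default via fe80::1 dev eth0  proto kernel  metric 1024  expires 1776sec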

If you’re using arno-iptables-firewall v2.0.0a, and your server uses stateless autoconfiguration, make sure you set the following two options,

# Only disable this if you're NOT using forwarding (required for NAT etc.) for
# increased security.
# Note: If enabled and IPV6 enabled, local IPv6 autoconf will be disabled.
# -----------------------------------------------------------------------------
IP_FORWARDING=0
# (EXPERT SETTING!) Only disable this if IP_FORWARDING is disabled and
# you do not use autoconf to obtain your IPv6 address.
# Note: This is ignored if IP_FORWARDING is enabled. (IPv6 Only)
# -----------------------------------------------------------------------------
IPV6_AUTO_CONFIGURATION=1

By default, IP_FORWARDING will be set to 1, and that stops the IPV6_AUTO_CONFIGURATION setting from taking effect.  Once I switched IP_FORWARDING to 0, the route came back and everything has been fine since.

Printing – it’s a nightmare surely?

I hate printers.  In fact, I know a lot of PC owners hate printers.  For a long time they were really the bane of many home computer users.  Initially every application needed its own drivers for every printer type, then we got unified drivers but they were crap, and so on and so forth.  It’s gotten better over the years, but Windows printer drivers are still bulky and annoying.

I suspected the one big area of Ubuntu I’d have to bleed to get working was printing.  I’ve played with CUPS previously and an HP LaserJet 4L (a long time ago), and it worked but it wasn’t always ideal.  So I settled down today to spend three hours making Ubuntu drive my HP PhotoSmart C4585.

Holy crap was I wrong.

5 minutes.  Literally.  Googled for ‘HP PhotoSmart Linux’, found that HP have developed their own open source printer drivers.  That looked like a good sign, filled in a few fields on the website and it told me the drivers are already in Ubuntu.  That sounded good.  Did an apt-cache search hplip and apt-get install hplip only to discover the drivers were already installed.  So, opened System, Administration, Printing, told it to search for a printer, it found the PhotoSmart, installed the config, printed a test page.

I am literally gobsmacked.

It even happily drives the scanner as well (using XSane, also already installed).  The printer driver is less annoying than the Windows one (just hides away), and the only thing I’m missing is a display of how full the ink cartridges are, but the Windows one estimates that badly anyway.

So, well done HP, well done Ubuntu, and well done open source printing.  Now I have to find something else to do for 2 hours 55 minutes.

Flirting with Ubuntu (again!)

Anyone unlucky enough to have read anything in my blog before knows I’ve been a long-time Linux user.  I’ve had various Linux servers and now have a couple of Linux virtual machines on the ‘net hosting these pages.  I’ve flirted in the past with Linux based desktops, but for various reasons never made a solid effort to give up Windows.  Mostly that’s because there were a handful of things I wanted that I could still only really get from Windows.  Games primarily, and that’s still the case today.  Lord of the Rings Online might run on Linux under WINE, but since I have a valid XP license and my machine runs it quite happily already, why go to the bother?

However, the list of apps that I do need and only come with Windows has shrunk considerably.  I made the switch to OpenOffice a while back (both at home and work), and although the paragraph numbering pisses me off a great deal, I’m happy enough with the applications.  I don’t play any other PC games any more (other than Flash based stuff) because we got the PS3 and so that has removed a huge chunk of Windows reliance.  Just about anything else I do is either a web app (mail) or there are plenty of Linux apps that cover it (Usenet, browsing, etc.)

So I thought I’d make a solid effort to use Ubuntu and see how I really get on with it.  But I don’t really want a dual boot system until I know for sure I’m going to migrate my data to Linux and only boot into Windows to play LOTRO.  So I’m running Ubuntu in a VirtualBox VM, running Full Screen with Bridged Networking and ignoring Windows in the background.  The VM has ~1GB of memory and plenty of CPU (especially for Linux) so performance isn’t an issue.  The only question is really can I find the apps and a way of working that I’m comfortable with.

I’ve been setting this up for two days and already there’s been some pain.

  • Looks like NAT networking in VirtualBox 3.1.4 is hosed.  I started browsing and downloading various things yesterday and every now and then a web page wouldn’t load, and I’d need to click refresh a few times.  Then I installed a Usenet client (XPN, very nice) and it would randomly hang getting headers.  Took me a while to realise there was a problem, but since this is a Debian based distribution the investigation was trivial – sudo apt-get install wireshark; sudo wireshark.  Tracing the network traffic it was obvious the client was losing packets and there was a lot of bright red ACK’ing and re-ACK’ing going on.  I checked online and there were reports of VirtualBox NAT being broken a few sub-releases ago but being fixed now.  Well, it’s clearly not fixed, however Bridged networking seems (so far) to work fine.  Sadly, this caused me serious frustration yesterday and earlier today while I was trying to download and install various apps.
  • Finding a replacement for Twhirl (Twitter Client).  I could of course, still use Twhirl which is an Adobe AIR app and so runs under Linux.  However, support for Twhirl has been dropped and I hate the replacement (too big!).  So I scouted about and found Gwibber.  Sadly, it suffers from the major problem with a lot of open source apps, crap documentation.  Yes I know, it’s open source and so I can fix this myself, but it doesn’t help when you’re first trying to get it installed and working.  So, the current package is buggy, but I worked around that and got it running, then I couldn’t get any themes to work until I found they’d changed the theme system and none of the ones found by Google worked.  Then I found there was a theme package you could apt-get install and it was all okay.  But now in order to run it, I have to launch it twice from the menu, I’m sure I’ll get that worked out.
  • USB support – not critical, but I did manage to blue screen my entire machine today trying to get USB devices to show up inside the VM.  I might try again later, would be nice if I ever need to move data around (although I do have a shared folder, so I can leave stuff on the Windows partition).

Some things worked really well,

  • Pidgin, it’s just excellent.  The plugins are great, and GFire especially useful since I can hang out in the XFire channel with friends.
  • I loved apt-get the first time I used it, and I still love it now (even if it’s called something else ;))

Some things are okay, but could be better,

  • Picasa works under Linux, but only because it runs with the WINE libraries.  When I first ran it, I had some issues but that might be due to the network problems I was having at the same time.  Annoyingly, because it’s running in WINE it looks like a Windows XP app, which bugs me because if I’ve switched to Ubuntu I want it to look like a Gnome app.  But hell, at least it runs; Picasa was the one major app I would miss other than LOTRO.
  • After being a Windows desktop user for a very, very long time, a lot of the shortcut keys I’m used to (such as shift-num-pad-1 to select everything on a line) don’t work, and those are going to be the things that take me longest to get used to.

I’ve promised myself that if I’m just sitting at the computer, I’ll use the Ubuntu VM.  If I’m playing LOTRO I’ll close it down to free up resources, but return to it once I’m done.  I have a couple of other options.  Wubi looks very interesting, it installs Ubuntu into a single file under Windows, and adds a boot option for it on the Windows boot menu.  It installs like a Windows app, and you can uninstall it again afterwards.  When you boot into Ubuntu the Windows partition is mounted so you can share files.  The other option is a straight install and dual-boot into its own partition (but I’d need to do some partition shrinking to get there).  Until I know for certain I want to move, I’ll stick with the VM, since it gives me the quickest way to get into LOTRO and back out again.

I suppose the only question I haven’t answered is why I want to move?  Unlike some, I don’t hate Windows (although I still use XP so maybe that’ll come), and I think that Microsoft is no worse than many major software vendors.  I think I just like the idea of software being free and available for anyone to use, improve and share.  Certainly in the next 10 years the face of computing is going to change radically, and I’d rather the stuff I use be driven by the people who use it than the people who want to make money selling it to us.