Category Archives: Technology

Technology related things

Too Warm!

It’s pretty warm here atm.  I realised a few weeks ago that, since I record the temperature of a little computer in the house, we can see how the house temperature has changed over time.

The computer does very little and has roughly the same workload throughout the day, so these temperatures are purely a result of the ambient air temperature in the room.


Hard Disk

Data collected and graphed by Munin, using the ubiquitous rrdtool.
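For anyone curious how Munin gets the numbers: a plugin is just an executable that prints `field.value N` lines to stdout (plus a `config` mode describing the graph), and rrdtool handles the storage and graphing. Here's a minimal sketch in Python – the field name and the hard-coded reading are made up for illustration, not taken from my actual setup:

```python
#!/usr/bin/env python3
# Minimal sketch of a Munin-style plugin.  Munin runs the executable on a
# schedule and expects "fieldname.value N" lines on stdout; the stored data
# is then graphed via rrdtool.  The field name and hard-coded reading below
# are illustrative -- a real plugin would query hddtemp, smartctl or /sys.
import sys

def munin_output(readings):
    """Format a {field: value} dict the way Munin expects."""
    return "\n".join(f"{field}.value {value}" for field, value in sorted(readings.items()))

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "config":
        # "config" mode describes the graph to Munin.
        print("graph_title Hard disk temperature")
        print("graph_vlabel degrees C")
        print("hda.label /dev/hda")
    else:
        print(munin_output({"hda": 38}))
```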

Simple eggdrop script – random quotes

Simple eggdrop tcl script to return random movie quotes from a file. I haven’t fully tested this, it’s partially copied from a working script with some stuff removed that isn’t necessary.  Posted because someone was asking how it worked in the IRC channel.

#create the bind to allow the !movie (and aliases) to get a quote
bind pub - "!movie" myscript::quotes::movie
bind pub - "!film" myscript::quotes::movie
bind pub - "!quote" myscript::quotes::movie

namespace eval myscript {

variable version "0.4"

# file with the quotes, must be in the same directory as the script
variable quotefile "quotes.txt"

proc randline {file} {
  if {[catch {open scripts/$file r} fs]} {
    putlog "Failed to open scripts/$file"
    return ""
  } else {
    variable data [read -nonewline $fs]
    close $fs
    variable data [split $data \n]
    return [lindex $data [rand [llength $data]]]
  }
}

namespace eval quotes {
  proc movie {nick uhost hand chan text} {
    variable aquote [myscript::randline $myscript::quotefile]
    puthelp "privmsg $chan :$aquote"
    return 0
  }
}
}

putlog "Loaded: MyScript v$myscript::version"

1080p, HDTV and HD Ready make me sad

In the old days, when I was a boy, it was usually the case that if you bought a monitor that was larger than your current one (diagonally larger screen), it supported more pixels as well.  These days, it’s sad to see monitor vendors sticking to the flawed idea that somehow, 1080 pixels is the new one size fits all.

If you buy a bigger monitor, you don’t get more pixels, you just get bigger pixels.

This is because monitor vendors have bought into the HDTV size of 1920 x 1080.  Why would anyone want to use anything different?  I think it’s actually because monitor vendors realised they were being dumb.  I mean, people spent thousands of pounds buying larger and larger televisions in the old days, and they never got any increase in resolution.  If people would pay top dollar for huge TVs at the same resolution as 14″ portables, why the hell couldn’t they bring that business model to the LCD monitor market?

So they did.

There’s a good rant on this over here.

When I bought the LCDs we use at the moment, I got 5:4 ratio LCD monitors.  People probably laughed.  They’re 19″ displays.  That means (sorry to switch units) that the actual screen is ~30.5cm high and ~37.5cm wide, giving a ~48cm diagonal.  We were thinking of getting some new monitors, but I knew it wouldn’t be that easy, so I made sure I had the measurements.  These monitors run at 1280 x 1024.  A 19″ widescreen (16:9) might give 1920 x 1080, but it’s vertically much smaller than the monitors we have.  That’s okay, 21″ widescreen?  Still shorter.  22″?  Still shorter.  23″?  Still shorter.  I’d have to buy a 24″ monitor, running at just 56 more pixels high, to get roughly the same physical height as my existing monitor.  And the screen would be ~20 inches (~50cm) wide.

To get 56 more pixels (vertically).
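If you want to check the arithmetic, the physical height of a panel follows directly from its diagonal and aspect ratio. A quick sketch (assuming the quoted diagonals are exact):

```python
# Physical height of a display given its diagonal and aspect ratio.
# Confirms the claim above: a 24" 16:9 panel is roughly the same height
# as a 19" 5:4 panel, and only buys you 56 extra rows of pixels.
import math

def height_cm(diagonal_inches, ratio_w, ratio_h):
    """Height in cm of a ratio_w:ratio_h panel with the given diagonal."""
    return diagonal_inches * ratio_h / math.hypot(ratio_w, ratio_h) * 2.54

old = height_cm(19, 5, 4)    # ~30.1 cm tall, 1280 x 1024
new = height_cm(24, 16, 9)   # ~29.9 cm tall, 1920 x 1080
print(f'19" 5:4  height: {old:.1f} cm')
print(f'24" 16:9 height: {new:.1f} cm')
print(f'extra rows: {1080 - 1024}')
```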

And that’s it – you have to be specifically looking to find anything higher than 1080 vertical resolution and you pay for it.  And there’s no good reason for it.  If I want to watch movies, I do that on my television.  So we didn’t buy any new monitors.

I want a choice of monitors, with a choice of native resolutions, in a choice of ratios.

Debian, apache2, virtualhosts, FastCGI and PHP5

I’ve spent an amusing evening revisiting FastCGI under Apache2, in order to serve PHP5 content through Apache’s threaded MPM (worker).  I set this up ages ago on my previous web server and then forgot about it.

It was fine for a long time, but I hadn’t really customised it and to be frank, wasn’t really sure what it was doing.  I just know at the time it was very confusing reading a lot of conflicting stuff on the web.  But it worked.  Until recently, when I noticed the server was running out of memory and processes were being killed.  I didn’t really spend much time looking at the cause though.

When I moved to the new server, I thought I’d try out the prefork MPM again, as per my previous posts and it seemed okay.  However, it’s not okay (although I may do some more load testing if I get a chance).  So I quickly switched back to the worker MPM and FastCGI.

Which is where I started getting frustrated again – I wanted to understand better what’s going on with FastCGI and make sure I was handling it correctly.

If you search the web, there’s a lot of stuff, much of it from 2007 – 2009 with conflicting information and stuff you might or might not need to do.

So, first some caveats,

  1. this is Debian Squeeze, other distributions might be different.
  2. I run PHP5 under FastCGI and nothing else, so my config changes only affect PHP5.
  3. I’m guessing about most of this stuff – so if I’m wrong, please feel free to provide constructive comments.

Here’s what I learned.

Two FastCGI Modules?

Debian comes with two FastCGI modules: libapache2-mod-fcgid and libapache2-mod-fastcgi.  They are binary compatible, I’m led to believe, but fcgid is newer and works better with suexec.  So you should use libapache2-mod-fcgid unless you know you need libapache2-mod-fastcgi for some specific reason.  If you read examples talking about libapache2-mod-fastcgi, you can probably just use libapache2-mod-fcgid instead.

Don’t install them both at once – you can do, but there’s no point and it’ll only cause confusion.  You only need one.

Some Fcgid settings are per virtual host.

I run with a low memory setup, so I wanted the PHP5 processes to shut down after a while, rather than hang around.  I couldn’t work out why they weren’t going away, or why there were so many.  It looks to me like you get at least one PHP5 process per virtual host, sometimes more if the load is high (but remember, these are mostly vanity VPSs with low load).  The default for fcgid is to start with no processes, create as many as needed, and then drop back to three.  But it looks like, at least with the way I’ve got it configured (maybe all ways), that minimum is per virtual host.  I had to set FcgidMinProcessesPerClass to 0, so that on each virtual host, fcgid will close all the unused PHP5 processes after a while.
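To see why that default bites on a small VPS, some back-of-envelope arithmetic. The ~25MB per php5-cgi process and the seven virtual hosts are illustrative assumptions, not measurements from my server:

```python
# Why FcgidMinProcessesPerClass matters: the default minimum of 3 idle
# processes is kept *per virtual host*.  The per-process memory figure
# and the vhost count below are illustrative assumptions.
MB_PER_PHP_PROCESS = 25
VHOSTS = 7

def idle_php_memory_mb(min_per_class, vhosts=VHOSTS, mb_each=MB_PER_PHP_PROCESS):
    """Memory held by idle PHP5 processes once load has dropped off."""
    return min_per_class * vhosts * mb_each

print(idle_php_memory_mb(3))  # default minimum: 525 MB -- far over a 256 MB VPS
print(idle_php_memory_mb(0))  # FcgidMinProcessesPerClass 0: nothing held when idle
```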


You don’t need a wrapper script

Most of the articles online suggest you write a little wrapper to launch your PHP5 stuff via FastCGI.  I couldn’t remember doing that on the previous server and spent a while looking for my wrapper script – until I realised I’d never created one.  You don’t need a wrapper script, but you do need to tell your virtual host to run PHP5 code using the FastCGI module.  I have this in each of my virtualhost Apache2 config sections,

AddHandler fcgid-script .php
FCGIWrapper /usr/lib/cgi-bin/php5 .php
Options ExecCGI

You need to add ExecCGI to the Options directive to ensure the PHP pages can be run as CGI apps, and the AddHandler and FCGIWrapper lines tell Apache2 how to run the PHP.  The default wrapper is just the PHP5 CGI binary (as shown above).  You can put a shell script there and set some defaults, but you don’t have to – it ran fine for over a year on my other server without one.

You can set values in fcgid.conf

Because I’m only running PHP5 stuff via Fast CGI, I can happily put settings in Apache’s fcgid.conf file.  Some articles suggest creating a PHP specific one, and putting the wrapper script stuff above in that as well.  I’m sure that works, but so does the way I did it (there’s always more than one way!).  Here’s my fcgid.conf,

<IfModule mod_fcgid.c>
 AddHandler    fcgid-script .fcgi
 FcgidConnectTimeout 20
 FcgidIOTimeout              60
 FcgidMaxRequestsPerProcess  400
 FcgidIdleTimeout            60
 FcgidMinProcessesPerClass   0
</IfModule>

The two timeout entries ensure unused PHP5 processes are closed down.  FcgidMinProcessesPerClass is required as mentioned above, otherwise it defaults to 3 per virtualhost.  I’ve set FcgidMaxRequestsPerProcess to 400.  PHP will by default handle 500 requests and then shut down, and it can do that even if FastCGI has already made a connection, resulting in a 500 error at the client.  If you force FastCGI to stop PHP after fewer than 500 requests, you avoid that issue.  You can, if you want, write a PHP wrapper script and increase PHP’s max requests value, but you don’t have to.

There’s always another way

This is one way of setting it up, there’s always another way, and with Linux there’s usually another 10 ways.  I may do some more testing to narrow down some confusion I still have and see what the benefits of wrapper scripts may or may not be, and whether it’s worth moving some of the per-virtualhost config entries into the fcgid.conf file (like the handler bits).

Apache2 MPMs

A couple of hours ago I wrote a post about migrating web services to a Debian VM running Squeeze, from one which had been running Lenny.  I said I’d switched to the prefork MPM under Apache2.

Well, if you’re reading this on my site, you’re reading it via the worker MPM once again – only a couple of hours later.  It became obvious pretty quickly, once the site had real web pages and real users, that prefork was not going to cut it.  The VMs are small, only 256MB of memory, so I can’t run many Apache2 processes.  Although I tested a lot of accesses against PHP based pages using Apache’s ab tool, I missed testing both PHP content and the large amount of static content that goes with it (such as style sheets, javascript, images, etc.) at the same time.

Under those conditions, the server needed either so many Apache processes that it filled memory, or it reached the limits I had set and page loads took 20+ seconds.

So, I quickly switched back to the worker MPM and PHP running under Fast CGI, and the page loads are back down to 2 seconds or so on average.

I still have some work to do to make sure I don’t start too many PHP5 CGI processes, but at least the sites are usable again.

Virtual Machines – taking the pain out of major upgrades

If your computers are physical machines, where each piece of hardware runs a single OS image, then upgrading that OS image puts your services at risk or makes them unavailable for a period of time.

Sure, you have a development and test environment, where you can prove the process, but those machines cost money.  So processes develop to either ensure you have a good backout, or you can make changes you know will work.

Virtual Machines have changed the game.  I have a couple of Linux (Debian) based VMs.  They’re piddly little things that run some websites and a news server.  They’re basically vanity VMs, I don’t need them.  I could get away with shared hosting, but I like having servers I can play with.  It keeps my UNIX skills sharp, and lets me learn new skills.

Debian have just released v6 (Squeeze).  Debian’s release schedule is slow, but very controlled, and hopefully it leads to very stable servers.  Rather than constantly updating packages like you might find with other Linux distributions, Debian restricts updates to security patches only, and then every few years a new major release is made.

This is excellent, but it does introduce a lot of change in one go when you move from one release of Debian to the next.  A lot of new features arrive, configuration files change in significant ways and you have to be careful with the upgrade process as a result.

For matrix (the VM that runs my news server), I took the plunge and ran through the upgrade.  It mostly worked fine, although services were out for a couple of hours.  I had to recompile some additional stuff not included in Debian, and had to learn a little bit about new options and features in some applications.  Because the service is down, you’re doing that kind of thing in a reasonably pressured environment.  But in the end, the upgrade was a success.

However, the tidy neat freak inside me knows that spread over that server are config files missing default options, or old copies of config files lying around that I need to clean up; legacy stuff that is supported but deprecated, sitting around just waiting to bite me in obscure ways later on.

So I decided to take a different approach with yoda (the server that runs most of the websites).  I don’t need any additional hardware to run another server, it’s a VM.  Gandi can provision one in about 8 minutes.  So, I ordered a new clean Debian 6 VM.  I set about installing the packages I needed, and making the config changes to support my web sites.

All told, that took about 4 hours.  That’s still less time than the effort required to do an upgrade.

I structure the data on the web server in such a way that it’s easy to migrate (after lessons learned moving from Gradwell to 1and1 and then finally to Gandi), so I can migrate an entire website from one server to another in about 5 minutes, plus the time it takes for the DNS changes to propagate.

Now I have a nice clean server, running a fresh copy of Debian Squeeze without any of the confusion or trouble that can come from upgrades.  I can migrate services across at my leisure, in a controlled way, and learn anything I need to about new features as I go (for example, I’ve switched away from Apache’s worker MPM and back to the prefork MPM).

Once the migration is done, I can shut down the old VM.  I only pay Gandi for the hours or days that I have the extra VM running.  There’s no risk to the services, if they fail on the new server I can just revert to providing them from the old.

Virtual Machines mean I don’t have to do upgrades in place, but equally I don’t have to have a lot of hardware assets knocking around just to support infrequent upgrades like this.

There are issues of course, one of the reasons I didn’t do this with matrix is that it has a lot of data present, and no trivial way to migrate it.  Additionally, other servers using matrix are likely to have cached IP details beyond the content of DNS, which makes it harder to move to a new image.  But overall, I think the flexibility of VMs certainly brings another aspect to major upgrades.

You like to, move it!

Spent about two hours on Sunday playing Start the Party, a PS3 game which uses the Move controller (camera + motion sensitive controller).  We had friends visiting on Saturday for some Warhammer FRP and they stayed over.  Sunday, after breakfast, Grete convinced us all to give the game a shot (we’d played the demo, which was amusing, but not the full game with a bunch of people).

It’s pretty fun!  We played a 5 round and a 10 round game.  The rounds are either full ‘games’ or quick-fire mini-games, and there are bonus and joker rounds to keep people interested (more on bonus rounds in a sec).  The full games are stuff like painting in shapes (which eventually turn out to be pictures of something like a monkey), stabbing exploding coloured ‘things’ (hard to explain), shooting robots, cutting hair (hardest of all the games).  The quick-fire rounds are made up of things such as catching pizza toppings, whacking moles, bouncing balls into nets, finding bugs (creepiest of all the games).

All of the games involve you (as seen by the camera) standing in the middle of the action wielding a different implement (hammer, pick-axe, pizza, magnifying glass, harpoon spear, bug-squisher, etc.)

If someone falls behind in terms of scores, they get a free bonus round where they can make a few extra points – which I thought was a nice touch and demonstrated where the game is targeted – people having a laugh – not competing for the best score in the world.

I have to say I was pretty sceptical at first, it was Sunday morning, we were tired, and I hate enforced fun, but the game won me over.  Easy to get into, quick to play, light hearted and it kept us laughing for a couple of hours.

Custom Logwatch script for ngIRCd

So before I begin (or technically, just after I’ve begun), let me remind you that my perl skills are shockingly bad.  All my perl scripts are written in the same style as the script I was copying from at the time I wrote them.


I’ve recently set up an IRC server, partly to mess about with it and partly to consider using it to keep in touch with friends.  I’m acutely aware that it’s the kind of thing that gets attacked, so I’ve made sure ngIRCd (the daemon I chose) is logging everything, and then I started looking for a logwatch (homepage) script to monitor the logs and alert me of anything suspicious going on.

Sadly, I couldn’t find one, so I decided to do the only sensible thing and write my own, which is fine, but as you’ll see if you search the web for ‘writing custom logwatch scripts’, it’s sort of both easy and hard.  It’s easy once all the bits fall into place, but sometimes the terminology gets in the way.  So, here’s how I did it.


You absolutely need two files, one which describes which logs you’re going to handle, and another which is the script which does the handling.  You should name them in some way which makes sense (after the service you’re monitoring for example).  Once you put them in the right place, logwatch will execute your script and you’re away.  There are some optional files, if you want to do some logfile pre-processing (I think) but as I never used those, I can’t comment.

So, I want to monitor ngIRCd which on my server logs everything it does to /var/log/messages under the service name ngircd.  Here’s an example line,

Aug 13 08:01:54 hostname ngircd[10898]: User "bob!~ident@some.machine" registered (connection 8).

The first thing I did was create a file describing which logs to monitor and how to filter the data, and I stole various bits of information from the other files distributed with logwatch.  I called my file ngircd.conf and placed it in,

/etc/logwatch/conf/services/

That’s the default location on Debian.  Here’s the content of my file with some comments,

# set the title for the reports
Title = "ngIRCd"
# set the logfile to the messages log file *group*
LogFile = messages
# only return entries made by ngIRCd which reduces our effort in the script
*OnlyService = ngircd
# remove the date / time stamp, hostname, service name, etc.
*RemoveHeaders

The LogFile line is important and took me a little while to work out.  In the config file for your service, you describe the log file group that is used, which in turn tells logwatch which file in the logfiles directory structure describes the actual log files which are scanned.  So the line above tells logwatch (in the case of Debian) to use the log files described in /usr/share/logwatch/default.conf/logfiles/messages.conf.  That file handles the log file names, how to deal with date/time stamps, archived logs, etc.

If the log files for your new service don’t already have a matching log file group configuration file, you should create one in /etc/logwatch/conf/logfiles, using an example from /usr/share/logwatch/default.conf/logfiles.  Anyway, in my case, since I was using /var/log/messages which is already described in /usr/share/logwatch/default.conf/logfiles/messages.conf I didn’t need to create one.

Now that you’ve got the service configuration covered, you need a script, and it needs to be named after the config file (so if you call your config file foo.conf, then your script needs to be called foo).  You can write this script in any language that can read from STDIN and write to STDOUT, but like other folk before me I made the joyful error of sticking to perl.

You place this file in,

/etc/logwatch/scripts/services/
The important things to remember are,

  1. your script will receive the content of the appropriate logs via STDIN
  2. it should write output to STDOUT and should use the environment variable LOGWATCH_DETAIL_LEVEL to determine the detail level passed to the logwatch program
  3. the output should be tidy and should avoid being verbose
  4. if you’ve configured the service conf script correctly you won’t need to worry about parsing dates, stripping headers, or other rubbish.  This does depend on the log file in question though and the application.
  5. To keep in line with other scripts, you should capture everything you know you don’t care about and ignore it, process stuff you do care about, and report stuff you don’t recognise.
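Since any language that reads STDIN and writes STDOUT will do, the five points above can be sketched as a tiny Python service script. The regex is based on the sample ngircd “registered” line shown earlier; the field names are illustrative, and (like my perl below) it doesn’t yet vary its output by detail level:

```python
#!/usr/bin/env python3
# Skeleton logwatch service script: count the lines we recognise on stdin,
# report everything else.  The regex is based on the sample ngircd
# "registered" line shown earlier; output field names are illustrative.
import os
import re
import sys
from collections import Counter

# Rule 2: the detail level arrives via the environment (read but unused here).
DETAIL = int(os.environ.get("LOGWATCH_DETAIL_LEVEL", "0") or 0)
REGISTERED = re.compile(r'User "([^"]+)" registered')

def summarise(lines):
    users, unmatched = Counter(), Counter()
    for line in lines:
        line = line.rstrip("\n")
        m = REGISTERED.search(line)
        if m:
            users[m.group(1)] += 1   # rule 5: process what we care about
        else:
            unmatched[line] += 1     # rule 5: report what we don't recognise
    out = []
    if users:
        out.append("Registered users:")
        out.extend(f"   {u}: {n} time(s)" for u, n in users.most_common())
    if unmatched:
        out.append("**Unmatched Entries**")
        out.extend(f"   {l}: {n} Time(s)" for l, n in unmatched.most_common())
    return "\n".join(out)

if __name__ == "__main__" and not sys.stdin.isatty():
    print(summarise(sys.stdin))      # rules 1 and 3: stdin in, tidy stdout out
```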

The link below is the script I cobbled together to handle ngIRCd so far.  At the moment, I ignore my own advice and don’t check the detail level – I just wanted to get my data out initially.  I have no idea if the regexps are correct or efficient, but at present it displays what I care about.  Are those enough caveats?  I’m not looking for feedback on the quality of my perl!  I’m just trying to show how it can be done.


# ngircd

use Logwatch ':all';

my $Detail = $ENV{'LOGWATCH_DETAIL_LEVEL'} || 0;
my $Debug = $ENV{'LOGWATCH_DEBUG'} || 0;

my %FailedLogin = ();
my %FailedOpers = ();
my $FailedOpCommands;
my %TriedConnections = ();
my %GoodConnections = ();
my %GoodOper = () ;
my %BadOpCommands = ();
my %OtherList = ();

if ( $Debug >= 5 ) {
        print STDERR "\n\nDEBUG: Inside ngircd Filter \n\n";
        $DebugCounter = 1;
}

while (defined(my $ThisLine = <STDIN>)) {
   if ( $Debug >= 5 ) {
      print STDERR "DEBUG($DebugCounter): $ThisLine";
      $DebugCounter++;
   }

   if ( # We don't care about these
      ( $ThisLine =~ m/connection .* shutting down / ) or
      ( $ThisLine =~ m/^New TLSv1 connection using cipher/ ) or
      ( $ThisLine =~ m/^Now listening on/ ) or
      ( $ThisLine =~ m/^IO subsystem: epoll/ ) or
      ( $ThisLine =~ m/^Reading configuration from/ ) or
      ( $ThisLine =~ m/^ngircd .* started/ ) or
      ( $ThisLine =~ m/^Created pre-defined channel/ ) or
      ( $ThisLine =~ m/^Not running with changed root directory/ ) or
      ( $ThisLine =~ m/^Notice: Can't change working directory to/ ) or
      ( $ThisLine =~ m/^getnameinfo: Can't resolve address/ ) or
      ( $ThisLine =~ m/^Shutting down all listening sockets/ ) or
      ( $ThisLine =~ m/^ServerUID must not be 0, using/ ) or
      ( $ThisLine =~ m/^OpenSSL .* initialized/ ) or
      ( $ThisLine =~ m/^Configuration option .* not set/ ) or
      ( $ThisLine =~ m/^User .* unregistered/ ) or
      ( $ThisLine =~ m/^Server restarting NOW/ ) or
      ( $ThisLine =~ m/^Server going down NOW/ ) or
      ( $ThisLine =~ m/^Shutting down connection .* \(Got QUIT command\.\)/ ) or
      ( $ThisLine =~ m/^Connection .* with .* closed / ) or
      ( $ThisLine =~ m/^Running as user/ ) or
      ( $ThisLine =~ m/^Shutting down connection .* \(Server going down/ ) or
      ( $ThisLine =~ m/^Shutting down connection .* \(Socket closed/ ) or
      ( $ThisLine =~ m/^Shutting down connection .* \(Ping timeout/ ) or
      ( $ThisLine =~ m/is closing the connection/ ) or
      ( $ThisLine =~ m/^ngircd done/ ) or
      ( $ThisLine =~ m/^Client unregistered/ ) or
      ( $ThisLine =~ m/^Client .* unregistered/ ) or
      ( $ThisLine =~ m/^User .* changed nick/ )
   ) {
      # We don't care, do nothing
   } elsif ( my ($Host) = ($ThisLine =~ /Accepted connection .* from ([\d\.]+)/ )) {
      $TriedConnections{$Host}++;
   } elsif ( my ($User,$Connection) = ($ThisLine =~ /^User \"([^ ]+)!([^ ]+)\" registered /)) {
      $GoodConnections{"$User!$Connection"}++;
   } elsif ( my ($User,$Connection) = ($ThisLine =~ /^Got invalid OPER from \"([^ ]+)!([^ ]+)\": / )) {
      $FailedOpers{"$User!$Connection"}++;
   } elsif ( my ($User,$Connection) = ($ThisLine =~ /^No privileges: client \"([^ ]+)!([^ ]+)\", command / )) {
      $BadOpCommands{"$User!$Connection"}++;
   } elsif ( my ($Host) = ($ThisLine =~ /^Shutting down connection .* \(Bad password\) with ([^ ]*):/)) {
      $FailedLogin{$Host}++;
   } elsif ( my ($User,$Connection) = ($ThisLine =~ /^Got valid OPER from \"([^ ]+)!([^ ]+)\", user is an IRC operator now/ )) {
      $GoodOper{"$User!$Connection"}++;
   } else {
      # Report any unmatched entries...
      chomp($ThisLine);
      $OtherList{$ThisLine}++;
   }
}

if (keys %BadOpCommands) {
   print "\nIRCOp commands from regular users:\n";
   foreach my $key (keys %BadOpCommands) {
      my $totcount = $BadOpCommands{$key};
      my $plural = ($totcount > 1) ? "s" : "";
      print "   $key: $totcount time$plural\n";
   }
}

if (keys %FailedLogin) {
   print "\nFailed logins from:\n";
   foreach my $key (keys %FailedLogin) {
      my $totcount = $FailedLogin{$key};
      my $plural = ($totcount > 1) ? "s" : "";
      print "   $key: $totcount time$plural\n";
   }
}

if (keys %FailedOpers) {
   print "\nFailed attempts to become IRCOps from:\n";
   foreach my $key (keys %FailedOpers) {
      my $totcount = $FailedOpers{$key};
      my $plural = ($totcount > 1) ? "s" : "";
      print "   $key: $totcount time$plural\n";
   }
}

if (keys %GoodOper) {
   print "\nGood attempts to become IRCOps from:\n";
   foreach my $key (keys %GoodOper) {
      my $totcount = $GoodOper{$key};
      my $plural = ($totcount > 1) ? "s" : "";
      print "   $key: $totcount time$plural\n";
   }
}

if (keys %TriedConnections) {
   print "\nAttempted connections from:\n";
   foreach my $ip (sort SortIP keys %TriedConnections) {
      my $name = LookupIP($ip);
      my $totcount = $TriedConnections{$ip};
      my $plural = ($totcount > 1) ? "s" : "";
      print "   $name: $totcount time$plural\n";
   }
}

if (keys %GoodConnections) {
   print "\nGood connections from:\n";
   foreach my $key (keys %GoodConnections) {
      my $totcount = $GoodConnections{$key};
      my $plural = ($totcount > 1) ? "s" : "";
      print "   $key: $totcount time$plural\n";
   }
}

if (keys %OtherList) {
   print "\n**Unmatched Entries**\n";
   foreach my $line (sort {$OtherList{$b}<=>$OtherList{$a} } keys %OtherList) {
      print "   $line: $OtherList{$line} Time(s)\n";
   }
}


If I have the inclination, I plan to update this to display different levels of detail based on the logwatch detail option, format the output a little nicer, handle some different bits of information and split the input lines up into more fields.  But you know, now it does 90% of what I want, that might never happen.


To summarise,

  • Pick a name (based on the service you’re reporting on)
  • Create /etc/logwatch/conf/services/myname.conf and describe the log file group to use, and any other options
  • Create a script /etc/logwatch/scripts/services/myname in your favourite language and parse STDIN, sending useful information to STDOUT
  • Bingo

Hosting Changes

Hopefully no one’s noticed that I’ve moved my websites (including this one) to a new hosting provider (well, new for this site, but not new to me in general).  When I first decided to move away from shared hosting at Gradwell, I wanted to find a VPS solution to give me more control over the sites.  At the time, I was hosting an EverQuest guild website which got a reasonable amount of traffic, and so I chose a VPS solution that would definitely have enough oomph and bandwidth to deliver that.  Not a huge amount, but enough.  It was my first time in the VPS market and I wasn’t really sure which providers were best, what was good value, or what the different offerings amounted to.

The 1and1 service I went with was pretty good, but not what I would call cheap.  Over the 18 months or so I had the service it was pretty reliable – a few unexplained outages, and a couple of periods of downtime that were longer than I would have liked.  But the VPS was powerful, had plenty of memory and lots of network capacity, and easily delivered the 7 or so domains I hosted on it.

Not long after I moved the sites though, the EverQuest guild site dropped off dramatically in terms of load (lots of people left the game), and after 18 months it’s become apparent that the 5 or 6 vanity domains I host really don’t justify the cost and performance of the 1and1 VPS.

I looked again at shared hosting, because I’m a pretty good example of who should use it.  I tried tsohosts and while they’re excellent value and I have nothing bad to say about them, I really don’t get on with cPanel and the shared hosting mentality, especially after running my own VPS for so long.  I wanted to get into the config files and set things up ‘just so’, and after only 2 or 3 days fighting with cPanel I gave up and bought another VPS from Gandi.

I love the Gandi system (I already have another VPS from them hosting a usenet server), and although a single-share VPS is pretty low on resources, you can deliver quite a lot from a Linux machine with not much in the way of power these days, especially when what you’re delivering is vanity domains with almost no traffic.

So I’ve got a single share (256MB memory) VPS from Gandi and over the last couple of days I’ve moved everything over, and I’m pleased at how easy it was.  The 1and1 VPS was running CentOS but I’ve gone for Debian with Gandi (I prefer Debian) so not everything could be simply copied over, but the content (static and mysql data) was easy enough to transfer, so now it’s just a case of making sure the VPS is secure and managed properly.

If you’re looking for some domains or a VPS, I really can’t recommend Gandi enough, my only gripe is there’s no easy way to pay monthly (no direct debit facility, so you have to top up a pre-pay account when you remember), but it’s a small issue when the service is so excellent.

Dumb Software

I use TweetDeck as my desktop Twitter client.  I think it’s too big (i.e. it takes up too much screen space), but it’s the most reliable, feature-rich client there is, and it’s easy to use.

I used to be able to drag and drop images into the text input box and it would upload them to TwitPic, after the last update it stopped working.  It’s taken me some trial and error to work out why, and it’s stupid.  If I expand the window so that I can see around 1.8 columns instead of 1, then the text input box says ‘drag links and media here’, however, if I shrink the window so that it’s a more reasonable 1 column wide then it just says ‘drag links here’ and no longer accepts images.

Why the hell does the size of the input box matter?

#tweetdeck fail