Google blocking spam, instead of marking it as spam?

I have a number of virtual servers, and they run Logwatch.  They mail the daily Logwatch reports to a local user which forwards them to my gmail account.

Sometimes, Google marks them as spam because they contain spammy URLs, source domains, etc.  Recently however, they stopped arriving from one server, and judging by the rejection note, it appears Google has started blocking spam at source.

said: 550-5.7.1 [******] Our system has detected that    
this 550-5.7.1 message is likely unsolicited mail. To reduce the amount of    
spam sent 550-5.7.1 to Gmail, this message has been blocked. Please visit    
for 550 5.7.1 more information. 7si10895397qeh.110 - gsmtp (in reply to end    
of DATA command)

Google isn’t blocking all the mail from my server, and it sends quite a bit to various destinations, so this must just be because of the content of the message (which is a standard Logwatch-formatted text e-mail).

I guess it was inevitable, and maybe they’ve been doing it for some time, but now you can never be sure that your mail is arriving at Google, and you’re no longer sure you’re seeing everything even if you check your spam folder.

Command line updates

I’ve been looking for a client to create WordPress posts from the command line.  There are a few, but nothing self-contained or easy to install.  A python script that doesn’t seem to work on Debian, or a VIM script that has some weird prerequisites, etc.  So I finally decided just to see what it was like using links (a text-only web browser), and it turns out – it’s not half bad if you’re adding a basic update.

So all that work to go full circle, and end up with a console-based web browser.

Running your own Dynamic DNS Service (on Debian)

I used to have a static IP address on my home ADSL connection, but then I moved to BT Infinity, and they don’t provide that ability.  For whatever reason, my Infinity connection resets a few times a week, and it always results in a new IP address.

Since I wanted to be able to connect to a service on my home IP address, I signed up for a free dynamic DNS service and used it for a while, using a CNAME with my hosting provider (Gandi) so that I could use a single common host in my own domain and point it at the dynamic DNS host, and hence the dynamic IP address.

While this works fine, I’ve had a few e-mails from the provider where either the update process hasn’t been enough to prevent the ’30 day account closure’ process, or more recently, a mail saying they’re changing that and you now need to log in on the website once every 30 days to keep your account.

I finally decided that since I run a couple of VPSs, and have good control over DNS via Gandi, I may as well run my own bind9 service and use the dynamic update feature to handle my own dynamic DNS needs.  Side note: I think Gandi do support DNS changes through their API, but I couldn’t get it working.  Also, I wanted something agnostic of my hosting provider in case I ever move DNS in future (I’m not planning to, since I like Gandi very much).

The basic elements of this are,

  1. a bind9 service running somewhere, which can host the domain and accept the updates.
  2. delegation of a subdomain to that bind9 service.  Since Gandi runs my top level domain for me, I need to create a subdomain and delegate to it, and then make dynamic updates into that subdomain.  I can still use CNAMEs in the top level domain to hide the subdomain if I wish.
  3. configuration of the bind9 service to accept secure updates.
  4. a script to do the updates.

In the interests of not re-inventing the wheel, I copied most of the activity from this post.  But I’ll summarise it here in case that ever goes away.

Installing / Configuring bind9

You’ll need somewhere to run a DNS service (bind9 in my case).  This can’t be on the machine with the dynamic IP address, for obvious reasons.  If you already have a DNS service somewhere you can use that, but in my case I installed it on one of my Debian VPS machines.  This is of course trivial with Debian (I don’t use sudo, so you’ll need to be running as root to execute these commands),

apt-get install bind9 bind9-doc

If the machine you’ve installed bind9 onto has a firewall, don’t forget to open port 53 (both TCP and UDP).  You now need to choose and configure your subdomain.  You’ll be creating a single zone, and allowing dynamic updates.
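
As an illustration, for an iptables-based firewall the rules amount to the following fragment (shown in iptables-save format; adapt to whatever firewall tooling you actually run):

```
# allow inbound DNS queries on both transports
-A INPUT -p udp --dport 53 -j ACCEPT
-A INPUT -p tcp --dport 53 -j ACCEPT
```

UDP carries normal queries; TCP is needed for larger responses and zone transfers, so open both.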

The default config for bind9 on Debian is in /etc/bind, and that includes zone files.  However, dynamically updated zones need a journal file, and need to be modified by bind.  I didn’t even bother trying to put the file into /etc/bind, on the assumption bind won’t have write access, so instead, for dynamic zones, I decided to create them in /var/lib/bind.  I avoided /var/cache/bind because the cache directory, in theory, is for transient files that applications can recreate.  Since bind can’t recreate the zone file entirely, it’s not appropriate to store it there.

I added this section to /etc/bind/named.conf.local,

// Dynamic zone
zone "" {
  type master;
  file "/var/lib/bind/";
  update-policy {
    // allow hosts to update themselves with a key having their own name
    grant * self;
  };
};

This sets up the basic entry for the master zone on this DNS server.

Create Keys

So I’ll be honest, I’m following this section mostly by rote from the article I linked.  I’m pretty sure I understand it, but just so you know.  There are a few ways of trusting dynamic updates, but since you’ll likely be making them from a host with a changing IP address, the best way is to use a shared secret.  That secret is then held on the server and used by the client to identify itself.  The configuration above allows hosts in the subdomain to update their own entry, if they have a key (shared secret) that matches the one on the server.  This stage creates those keys.

This command creates two files.  One will be the server copy of the key file, and can contain multiple keys, the other will be a single file named after the host that we’re going to be updating, and needs to be moved to the host itself, for later use.

ddns-confgen -r /dev/urandom -q -a hmac-md5 -k -s | tee -a /etc/bind/ > /etc/bind/

The files will both have the same content, and will look something like this,

key "" {
  algorithm hmac-md5;
  secret "somesetofrandomcharacters";
};

You should move the single-key file to the host which is going to be doing the updating.  You should also change the permissions on the server’s copy of the key file,

chown root:bind /etc/bind/
chmod u=rw,g=r,o= /etc/bind/

You should now return to /etc/bind/named.conf.local and add this section (to use the new key you have created),

// DDNS keys
include "/etc/bind/";

With all that done, you’re ready to create the empty zone.

Creating the empty Zone

The content of the zone file will vary, depending on what exactly you’re trying to achieve.  But this is the one I’m using.  This is created in /var/lib/bind/,

$TTL 300 ; 5 minutes
  IN SOA (
    1      ; serial
    3600   ; refresh (1 hour)
    600    ; retry (10 minutes)
    604800 ; expire (1 week)
    300    ; minimum (5 minutes)
    )

In this case, the name server given in the SOA record is the hostname of the server you’ve installed bind9 onto.  Unless you’re very careful, you shouldn’t add any static entries to this zone, because it’s always possible they’ll get overwritten, although of course, there’s no technical reason to prevent it.

At this stage, you can recycle the bind9 instance (/etc/init.d/bind9 reload), and resolve any issues (I had plenty, thanks to terrible typos and a bad memory).


Delegation

You can now test your nameserver to make sure it responds to queries about the domain.  In order to properly integrate it though, you’ll need to delegate the zone to it from the nameserver which handles the parent domain.  With Gandi, this was as simple as adding the necessary NS entry to the top level zone.  Obviously, I only have a single DNS server handling this dynamic zone, and that’s a risk, so you’ll need to set up some secondaries, but that’s outside the scope of this post.  Once you’ve done the delegation, you can try doing lookups from anywhere on the Internet, to ensure you can get (for example) the SOA for the new subdomain.
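
As a sketch of that delegation (every name here is hypothetical; substitute your own domain, subdomain and the hostname of your bind9 server), the records added to the parent zone look something like:

```
; in the parent zone (example.com), hosted at Gandi:
; delegate the dynamic subdomain to the bind9 server
dyn     IN NS
; optional: a friendly alias in the parent zone for a dynamic host
home    IN CNAME
```

The NS record hands authority for the subdomain to your server; the CNAME is the trick mentioned earlier for keeping a stable name in the top level domain.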

Making Updates

You’re now able to update the target nameserver from your source host, using the nsupdate command.  By telling it where your key is (-k filename) and then passing it commands, you can make changes to the zone.  I’m using exactly the same format presented in the original article I linked above.

cat <<EOF | nsupdate -k /path/to/
update delete
update add 60 A
update add 60 TXT "Updated on $(date)"
send
EOF

Obviously, you can change the TTLs to something other than 60 if you prefer.

Automating Updates

The last stage is automating updates, so that when your local IP address changes, you can update the relevant DNS server.  There are myriad ways of doing this.  I’ve opted for a simple shell script which I’ll run every couple of minutes via cron, and have it check and update DNS if required.  In my case, my public IP address is behind a NAT router, so I can’t just look at a local interface, and so I’m using dig to get my IP address from the OpenDNS service.
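
The decision the script makes is simple enough to isolate as a sketch.  The dig endpoints shown in the comments are OpenDNS’s standard “what is my IP” service (myip.opendns.com via resolver1.opendns.com); the variable names match the script below, but this fragment is illustrative rather than the script itself:

```shell
#!/bin/sh
# Decide whether a DNS update is needed.  Returns success (0) only when
# both addresses were resolved and they differ.
needs_update() {
    ext_ip="$1"    # address the outside world sees
    last_ip="$2"   # address currently published in DNS
    [ -n "$ext_ip" ] && [ -n "$last_ip" ] && [ "$ext_ip" != "$last_ip" ]
}

# In the real script the inputs come from dig, along the lines of:
#   ext_ip=$(dig +short myip.opendns.com @resolver1.opendns.com)
#   last_ip=$(dig +short @"$dnsserver" "$host.$zone")
if needs_update "" ""; then
    echo "update required"
else
    echo "up to date"
fi
```

An empty result from either lookup means a resolution failure, and the safe behaviour is to do nothing rather than push a bogus record.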

This is my first stab at the script, and it’s absolutely a work in progress (it’s too noisy at the moment for example),

[sourcecode language="bash"]#!/bin/sh

# set some variables

# get current external address
ext_ip=`dig +short`

# get last ip address from the DNS server
last_ip=`dig +short @$dnsserver $host.$zone`

if [ ! -z "$ext_ip" ]; then
  if [ ! -z "$last_ip" ]; then
    if [ "$ext_ip" != "$last_ip" ]; then
      echo "IP addresses do not match (external=$ext_ip, last=$last_ip), sending an update"

cat <<EOF | nsupdate -k $keyfile
server $dnsserver
zone $zone.
update delete $host.$zone.
update add $host.$zone. 60 A $ext_ip
update add $host.$zone. 60 TXT "Updated on $(date)"
send
EOF

    else
      echo "success: IP addresses match (external=$ext_ip, last=$last_ip), nothing to do"
    fi
  else
    echo "fail: couldn’t resolve last ip address from $dnsserver"
  fi
else
  echo "fail: couldn’t resolve current external ip address from"
fi[/sourcecode]

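For completeness, the cron side is a single entry (the script path here is a placeholder); since the script is currently noisy, the output is discarded rather than mailed to the local user:

```
# m h dom mon dow  command -- check the external address every 2 minutes
*/2 * * * * /usr/local/bin/ >/dev/null 2>&1
```

Once the script only reports on actual changes or failures, dropping the redirect lets cron mail you when something interesting happens.
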
The BT / thing, part 2

So the issue with routing between BT and made The Register, and finally got a mention on the BT Status page.  Lo and behold, this evening, everything seems a lot better.

If only BT had taken the issue seriously on Friday, or maybe, you know, perhaps have even detected it on their own network with their own monitoring, and resolved it themselves, they would have had a bunch of people feeling much happier.

BT & broken

So the network routing between BT and has been broken since Friday.  The problem only really manifests at around 7pm when packet loss through that route climbs to unacceptable levels (like 80%+).  It varies all the way through to about midnight by which time it’s down to 1-10% loss and services are usable.

It’s affecting access to Twitter, EA games, Eve Online and anything else which goes via that route.  Access to other web services is entirely unaffected.  To anyone who knows how to use traceroute, it’s obvious where the problem lies.

Yet it’s been impossible to get BT to accept there is an issue, or to demonstrate they’re really investigating.  The @BTCare account on Twitter tells us they are, but offers no evidence.  The status page says something about ‘upgrades to the network’, but that only showed up at 8pm last night.  My fault report from last night has been marked as ‘cleared’, probably because the problem went away, as normal, at around midnight.

Come on BT, take some notice – do something.  These two threads show the extent of the problem,

There are plenty of traceroutes in them showing the issue between BT and – speak to each other, work it out, fix it permanently.  Everyone in those threads is fully expecting the issue to come back tonight.

Hashtag Rant

This is a hash -> #

This is a hashtag -> #hashtag

This is not a hashtag -> # <- it’s just a hash.

I’m sick of people on TV and/or radio saying, “Send us a tweet using @someone hashtag funnysaying”.  No.  It’s either, “Send us a tweet using @someone hash funnysaying” or “Send us a tweet to @someone with the hashtag funnysaying”.

Get it?

Hash -> #

Hashtag -> #plussomewords

Hands off my content!

I gave up one of my domains a few months ago, the one relating to David Gemmell.  I won’t repeat the domain here for reasons which will be obvious in a moment.  Last night, I wondered wistfully if anyone had picked the domain up.  A quick whois showed it had, and so I visited it in my browser to see what they were doing with it, hopeful it was being used to bring David’s fans together.

Sadly, rather than that, someone had basically taken my David Gemmell eulogy, and a brief bio, combined them as the only post on a WordPress install and stuck adverts between each of the paragraphs.

Maybe if it had been something less personal I would have simply ignored it, but that eulogy was very personal to me, despite me posting it to the web.  It was still mine.  There was no attribution on the post on the site in question, and although my eulogy finished with “I will miss you ….” the way it had been re-posted just to generate advertising revenue made it meaningless.

The whois entry didn’t give a contact e-mail address, so I tracked down the web host (same company as the domain registrar) and sent a polite e-mail to their support department, showing the original content on my blog, explaining that it was my copyright, and asking if they could please speak to the owner of the site.

In their defence, they replied in a few hours saying they would contact the owner, and this morning when I rechecked, my content had gone.  It had been replaced by a generic Eco Advice post, interspersed with adverts.  That same content is all over the web, including one site lovingly titled “ArticleSnatch”.

I sort of feel like writing back to the web host and saying the owner is doing it again, but this time they’re using generic content designed to convince search engines to send traffic their way, but frankly, I can’t be bothered.  Now they’re not stealing my personal content, I can’t work up the enthusiasm to say much, and I guess the text they have used doesn’t belong to anyone specific.

Funny old world the web these days.  A few years ago we were told you couldn’t run a site off ad revenue alone, and now some ‘enterprising’ individuals basically make a living delivering nothing of value with advertising thrown in.

NetFlix in the UK – going to be any good?

So, I know it’s early days, but NetFlix has just opened its doors in the UK and quite frankly, I thought they would have had a better line-up to open with.

For the most part, the TV selection falls into four categories – old stuff of varying quality (Cracker, Fawlty Towers, 12 episodes of Men Behaving Badly), huge amounts of Kids TV and Animated stuff (Moschops, Thomas and Friends, X-Men), TV comedy shows like Saturday Night Live, and a tiny amount of ‘new’ stuff (Dollhouse).

The movies don’t fare much better.  It’s not easy to list every movie, but here are the 2010-2011 movies on Netflix in various categories (this is every movie the interface returns, in that category for that year).

Action & Adventure 2010 & 2011

  • Blitz
  • Drive Angry
  • Delhi Belly
  • Thor: Tales of Asgard
  • Woochi
  • Shaolin
  • Red Hill
  • Fists of Rage
  • The Expendables
  • Little Big Soldier
  • Locked Down
  • Gotti the Mob Boss
  • Jackie Chan and the Kung Fu Kid
  • 71: Into the Fire
  • Nude Nuns With Big Guns
  • Rakht Charitra
  • Special Ops
  • Tees Maar Khan

Sci-Fi & Fantasy 2010 & 2011

  • Dead Space: Aftermath
  • Woochi
  • Area 51
  • Ticking Clock
  • The Disappearance of Haruhi Suzumiya
  • Hunter Prey

Thrillers 2010 & 2011

  • Blitz
  • Ticking Clock
  • Red Hill
  • Buried
  • My Soul to Take
  • The Disappearance of Haruhi Suzumiya
  • Winter in Wartime
  • Stone
  • Pimp
  • Raajneeti
  • House Under Siege

Maybe I’m missing something?  I mean, Blitz, which is a 2011 movie, shows up on the main page, but I can’t get it to show up on the basic lists, so perhaps I am missing something, but it still looks like a pretty lame opening gambit from NetFlix.  Unless they significantly improve the content very quickly and provide a more comprehensive way of listing movies (such as the simple LoveFilm A-Z view), I’ll be hard pushed to justify signing up.

Update: Aha, typically, after posting this I worked out what I was doing wrong.  You have to tell it to list things you’ve already seen in the summary lists.  I’ve added some items into the list, in italics, where they were missing first time.

Infinity update

BT Infinity has dropped down to 32Mbps download / 8Mbps upload, but it hasn’t disconnected now for about 4 days.  I assume therefore that it’s been settling to a speed the line quality can handle, and has landed at pretty close to the original BT estimate (34Mbps / 10Mbps).

I’m still happy with that speed, and I’m glad the line has stabilised and stopped disconnecting once a day!

BT Infinity – a few days in

Firstly, let’s make this very, very clear.  I pretty much knew what I was getting into when I decided to move to BT Infinity.  When I first picked an ADSL provider I chose Nildram.  I did so because they had a reputation for not touching your traffic.  They were a data carrier, they didn’t try and intercept traffic or ‘offer value add services’.

Over the years, I got moved to other more ‘consumer grade’ ADSL services.  I knew when I chose to move to BT with BT Infinity that I would be at the mercy of BT policy.  I don’t like it, but I wanted to move from TalkTalk (who are no better) and at least Infinity is better, faster technology.

So how’s the move been?

Installation

Installation was a dream, literally.  This is our house, it’s my network, and I’m not happy with people coming in and messing with it, so I always get a bit bristly.  My existing ADSL service stayed live until the BT engineer called from the cabinet, which is a street away.  He said, “I’m going to disconnect you and then come round”.  The line dropped, the phone line was working within 5 minutes, and he turned up 5 minutes after that.

My ADSL router was a fair distance from the master socket, connected via an rj11 cable.  Normally, ADSL providers hate you doing that claiming shocking performance reduction and instability, but it had been fine for years.  I knew that Infinity needed a cable modem (essentially), and the BT Home Hub.  I thought the cable modem had to be near the master socket, but the Home Hub could be further away, and I was ready for a ‘discussion’ with the engineer to make that happen.

Turns out, the cable modem sits on an rj11 cable to the socket – and the engineer was more than happy to place it exactly where my old ADSL router had been.  Win!  No cable changes required.  The Home Hub sits just in front of the modem.  This was a huge relief for me, I had visions of trying to run cabling everywhere and I was really pleased the engineer took the time to look at what I had and work with it.

Total time from engineer call to BT Infinity installed and working – 27 minutes.

He said it sometimes takes longer if there’s a lot of cabling to do – but I was pretty impressed.

Performance

I have to say, performance exceeds all my expectations, at present.  The line runs around 34-37Mbps download and 8-9Mbps upload consistently.  There’s some variation and I’m not sure if that’s the line negotiating a different speed, contention or just network throughput.  Either way – I’m super happy.

Reliability

I have a minor issue at the moment with reliability.  The connection is dropping once a day, late at night or early in the morning, for about a minute.  This might seem trivial, but it bugs the hell out of me, and it’s obvious when it’s happened for two reasons.  Firstly, I don’t run a ‘normal’ consumer style network config here; I’ve got a lot of stuff going on with permanent ’net connections, so I can see it’s dropped.  Secondly, because the IP address is also changing on each reset, TweetDeck is getting its knickers in a twist with SSL certificates and moaning.  This might be a bug in TweetDeck being exposed by the IP change, but it’s annoying nonetheless.

Things to be aware of

These were issues I was expecting, I’m listing them here in case you might not, or in case you run the kind of stuff I do.

  • Non-fixed IP address: I knew it would change, and I’m pleased to say the Home Hub has built-in support for dynamic DNS updates, which helps, but I hoped it would remain reasonably static for long periods.  That’s not the case at the moment because of the daily dropouts.  I’m surprised it changes every time it reconnects, but wonder if there’s something else at play, since the subnet is changing completely.  We’ll see how it works over time.
  • Outbound Mail: BT provide SMTP servers for your local mail clients, but you can only use them to relay mail with a from field set to your BT Internet e-mail address.  You can ‘pre-register’ a number of additional addresses via the BT Web Mail page if you want.  I knew that BT’s SMTP servers wouldn’t be as forgiving as the Nildram ones, so I’d already been planning options for this.  I send mail from a number of UNIX boxes here, only 3 or 4 a day, but the from address can vary quite a bit.  I’ve solved this by using my own mail relay on a VPS I run.  It might impact you if you want to keep using an old non-BT e-mail address with Outlook or Thunderbird, because you’ll need to pre-register that address before it’ll work.
  • DNS Hijacking: I wasn’t expecting this, but I’m not surprised it’s there.  It seems the BT DNS servers return ‘helpful’ addresses if the URL you type in can’t be found.  This can be opted out of, but I’m not sure if that’s per browser (is it a cookie?) or per connection?  I’ll just avoid this by not using the DNS servers presented from the BT Home Hub and instead using Google DNS.
  • Deep Packet Inspect / Traffic Shaping / Traffic Inspection: I expect that BT will implement one or all of these technologies over time, and that I will have to do something about them, but I’ll cross those bridges when I get to them.  Internet service to the home is changing all the time, and as more organisations deliver fibre to the home, I’ll be able to choose an ISP who just offers to carry my data and not mess with it.
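
On the outbound mail point, if your UNIX boxes run a standard MTA, pointing them at your own relay instead of BT’s servers is a small change.  Assuming Postfix (and a hypothetical relay hostname; the original doesn’t say which MTA is in use), it’s one line in main.cf:

```
# /etc/postfix/main.cf fragment -- forward all outbound mail via my own VPS
# relay rather than BT's SMTP servers (hostname is a placeholder)
relayhost = []
```

The square brackets tell Postfix to skip the MX lookup and deliver straight to that host; the relay itself then needs to be configured to accept mail from your home IP (or authenticated connections).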

Summary

I’m really pleased overall with BT Infinity.  The speed is higher presently than the estimate, it’s consistent at present, and the installation was significantly less complex than I thought it would be.  The issues aren’t unexpected, and for most home users won’t be a problem (the e-mail one is the one that will get most folk who don’t use webmail).