Compiling analog 6.0 on Cygwin

Again, mostly for my own use later.  I needed to run analog on my machine, and I didn’t want to download the Windows binary, because everything else I would be doing with the log files would be via Cygwin.  After a few unsuccessful attempts at compiling analog, I finally RTFM (read the flipping makefile) and made the following two changes in src/Makefile,

DEFS = -DHAVE_GD

LIBS = -lm -lz -ljpeg -lgd

That tells the analog makefile to use your pre-existing GD, JPEG and ZLIB libraries, rather than compiling the ones it comes with (it was those libraries which were giving me errors).  Once I’d done that, make clean and make worked fine and analog behaves as you would expect.
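In other words, once the Makefile is edited, a clean rebuild is all it takes.  As a rough sketch (assuming the source unpacked to analog-6.0/ and the Cygwin development packages for gd, jpeg and zlib are installed),

cd analog-6.0
# with src/Makefile edited as above to use the system libraries
make clean
make
# the link should now complete without the libpng errors shown below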

For reference, the errors I was getting before this change were,

libpng/pngwrite.o:pngwrite.c:(.text+0x1ec): undefined reference to `__imp__png_libpng_ver'
libpng/pngwrite.o:pngwrite.c:(.text+0x1f8): undefined reference to `__imp__png_libpng_ver'
libpng/pngwutil.o:pngwutil.c:(.text+0x45c): undefined reference to `__imp__png_IHDR'
libpng/pngwutil.o:pngwutil.c:(.text+0x6a5): undefined reference to `__imp__png_PLTE'
libpng/pngwutil.o:pngwutil.c:(.text+0x73d): undefined reference to `__imp__png_IDAT'
libpng/pngwutil.o:pngwutil.c:(.text+0x77e): undefined reference to `__imp__png_IEND'
/usr/lib/gcc/i686-pc-cygwin/4.5.3/../../../../i686-pc-cygwin/bin/ld: libpng/pngwutil.o: bad reloc address 0x12c in section `.rdata'
/usr/lib/gcc/i686-pc-cygwin/4.5.3/../../../../i686-pc-cygwin/bin/ld: final link failed: Invalid operation
collect2: ld returned 1 exit status
Makefile:76: recipe for target `analog' failed
make: *** [analog] Error 1

Compiling rrdtool on Cygwin

This post is half aide-mémoire and half public service announcement!  I use nmon at work to gather performance data, and I use a customised version of nmon2web to graph it.  nmon2web relies on rrdtool, and I do a lot of my UNIX stuff these days using Cygwin.  Rather than use the Windows rrdtool binaries, I wanted to compile the latest version to run directly under Cygwin.

I’ve done this a few times now, with different Cygwin installs on both Windows XP and Windows 7, and every time I have to fight against various compilation issues.  I’ll cut to the chase: I hack at options, compiler settings and configure flags until it works, and I turn off most of the additional stuff to get it to build.  But it does work.

Get the Pre-requisites

All of the libraries and pre-requisites you need can be installed directly within Cygwin.  You’ll obviously need the regular development stuff (gcc, make, etc.) and you’ll also need the various libraries used by rrdtool.  Rather than worry about what you do and don’t need, I just whack on everything.  This is what I’ve got installed for each of the pre-reqs.

  • cairo (libcairo-devel, libcairo2)
  • glib (glib, glib-devel, glib2, glib2-devel, libglib1.2-devel, libglib1.2_0, libglib2.0-devel, libglib2.0_0)
  • libpng (libpng, libpng12, libpng14, libpng14-devel)
  • libxml2 (libxml2, libxml2-devel)
  • pango (libpango1.0-devel, libpango1.0_0, pango, pango-devel)
  • zlib (zlib, zlib-devel)
  • fontconfig (fontconfig, libfontconfig-devel, libfontconfig1)
  • freetype (libfreetype-devel, libfreetype6)
  • expat (expat, libexpat1, libexpat1-devel)

I’m certain that’s overkill; I had some of those installed already and added a few extra libraries to get the compile working, but better safe than sorry!
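As an aside, if you ever need to rebuild the whole environment, Cygwin’s setup program can install packages non-interactively with its -q and -P flags.  This is a sketch only (run from wherever you keep setup.exe, with the package list trimmed or extended to suit),

# package list to taste; these are the devel packages from the list above
setup.exe -q -P libcairo-devel,libglib2.0-devel,libpng14-devel,libxml2-devel,libpango1.0-devel,zlib-devel,libfontconfig-devel,libfreetype-devel,libexpat1-devel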

configure options

The next step is working out what options to pass configure.  Some of these are required; on Windows 7 there are issues if you don’t use -no-undefined.

configure doesn’t seem to find the pango and cairo libraries under Cygwin unless I add these.

export CPPFLAGS="-I /usr/include/pango-1.0/pango/ -I /usr/include/cairo/cairo/"

and as I said, you need to prevent any undefined symbols in the libraries,

export LDFLAGS=-no-undefined

and then I basically turn off all the additional modules (perl, tcl, python, ruby) as well as mmap, which doesn’t seem to work well under Cygwin anyway.  --prefix here is optional; it defaults to /opt, but I prefer everything under /usr/local.

./configure --disable-mmap --prefix=/usr/local/ --disable-tcl --disable-perl --disable-ruby --disable-python
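Pulled together, the whole configure stage is,

export CPPFLAGS="-I /usr/include/pango-1.0/pango/ -I /usr/include/cairo/cairo/"
export LDFLAGS=-no-undefined
./configure --disable-mmap --prefix=/usr/local/ --disable-tcl --disable-perl --disable-ruby --disable-python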

Once that’s done, you can go for the compile.

Compiling

Run the usual make; you’ll get a whole bunch of warnings, such as,

warning: ‘optarg’ redeclared without dllimport attribute: previous dllimport ignored

No idea what they mean, but they don’t seem to break anything.

You’ll get a bunch of these,

*** Warning: linker path does not have real file for library -lstdc++.
*** I have the capability to make that library automatically link in when
*** you link to this library. But I can only do this if you have a
*** shared version of the library, which you do not appear to have
*** because I did check the linker path looking for a file starting
*** with libstdc++ and none of the candidates passed a file format test
*** using a file magic. Last file checked: /usr/lib/libpthread.a

because of the -no-undefined flag.  But again, doesn’t seem to break anything.

On Windows 7 (can’t remember if I got this on Windows XP) you’ll also get,

CCLD rrdupdate.exe
../libtool: line 8354: ./rrdupdate.exe: Permission denied
CC rrdcached-rrd_daemon.o

and you won’t be able to use rrdupdate.exe.  Running it gives the same permission denied error.  I’m not sure why, but I don’t use rrdupdate so it hasn’t been a big issue (this only appears to be a problem with 64-bit Windows 7).

Installing it all

Finally, after it all scrolls by, you can do a make install.

In /usr/local/bin you should end up with,

-rwxr-xr-x 1 User None 332K Jan 19 23:39 rrdcached.exe*
-rwxr-xr-x 1 User None 428K Jan 19 23:39 rrdcgi.exe*
-rwxr-xr-x 1 User None 641K Jan 19 23:39 rrdtool.exe*
-rwxr-xr-x 1 User None  14K Jan 19 23:39 rrdupdate.exe*

and rrdtool should work quite happily.
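It’s worth a quick smoke test at this point.  Something like the following (test.rrd is just a throwaway file) should create and inspect a minimal database without complaint,

rrdtool create test.rrd --step 300 DS:load:GAUGE:600:0:U RRA:AVERAGE:0.5:1:288
rrdtool info test.rrd    # dumps the structure of the new database
rm test.rrd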

Finished!

So, know a better way?  Know why some bits still don’t work?  Know a sure-fire way of fixing the warnings (if necessary), of getting rrdupdate to work, or of compiling the additional modules?  Let me know!

SSH tunnelling made easy (part three)

In the previous two parts of this series, I covered simple tunnels to access services you couldn’t reach, and tunnels which let you hop from one server to another on an otherwise unreachable network.  In this article I’ll cover a powerful feature of SSH, the ability to provide port forwarding via the SOCKS mechanism.

SOCKS is a standard method to allow clients to connect to services via a proxy server.  SSH can turn any computer you can connect to (over SSH) into a proxy server for you, and you alone (so it’s secure).

Example 3 – using SOCKS proxy to access multiple services on a network via a secure server

There are several different reasons why you may need to employ SSH to deliver a SOCKS proxy.  Two common reasons are if you’re connected to a public network you don’t trust (like a cafe Wi-Fi network), or if you want to get to a range of services inside a secured network to which you only have SSH access.

Since the process is identical in both cases, I won’t cover them separately.

The diagram below shows a shared workstation (maybe in a library) connected to a public Wi-Fi network.  You can’t trust the network, anyone could be intercepting unencrypted traffic on it.

There is however a server somewhere to which you have SSH access (and which, in theory, you control and so trust).  What you would like to do is browse several websites or connect to some other SOCKS-supporting services, without anyone on the public Wi-Fi being able to intercept that traffic.  If you were only connecting to a single service you could use simple tunnelling as per the previous two examples, but this time you want to browse a few websites, and it’s not sensible to try and create a tunnel for each one.  In this instance, you use SSH to set up a dynamic tunnel, which provides a SOCKS proxy.

The command is even easier.

ssh -D 127.0.0.1:9090 fred@shell.example.net

Similar to the previous commands, but you’ll notice there is no target destination, only a listening address and port.  The -D tells SSH to listen on 127.0.0.1 port 9090 (in this case) and operate as a SOCKS proxy, with the outbound connections made from the server you’ve connected to.

In PuTTY you would configure this as below,

Note that the destination address is left blank.

In order to use this tunnel, you need to do a little more work than previously.  Assuming we’re going to use it primarily for web browsing, you would need to tell your web client to use a SOCKS proxy.  In Firefox, you would configure it like this,

Now, when you try and browse anything in Firefox, it sends the requests to what it believes is a SOCKS proxy server (127.0.0.1, port 9090).  That’s really your SSH connection to shell.example.net.  At the other end, your SSH connection sends the data on to the correct web server, receives it, and passes it back to your workstation and into Firefox.

The net result (pun intended) is a diagram which looks like this.

So your browsing is secure as far as the public Wi-Fi is concerned.  SOCKS supports a number of different protocols, and different clients are configured in different ways, but as long as your tool supports SOCKS, you can point it at 127.0.0.1 port 9090 and it will work as above.
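If you want to test the proxy before reconfiguring a browser, curl speaks SOCKS directly.  A quick check might look like this (--socks5-hostname also pushes DNS lookups through the tunnel, which is what you want on an untrusted network),

# www.example.com is just a test URL; any site will do
curl --socks5-hostname 127.0.0.1:9090 http://www.example.com/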

SOCKS via SSH is extremely powerful.  Here’s a further diagram of another situation where you may want to use it.

Your company has a number of web servers internally which provide time recording, project planning and other information.  While working away from the office you need to access those services.  There are too many to set up individual tunnels.  There is an SSH server in the company’s control which can be reached from the Internet.  Using the -D option, you can turn that server into your own SOCKS proxy and browse to the company web servers to complete your work.

While not intended as a replacement for a VPN (mainly because it only really supports a subset of network protocols), this SOCKS implementation is very useful.

SSH tunnelling made easy (part two)

In part one of this set of posts, I covered using SSH tunnelling to access a service on a server, from a particular machine that can SSH to the target server, but not access the service directly (due to firewalls or sensible security reasons).  In this post, I’ll cover a three computer scenario.

Example 2 – three computers – can’t access third server directly

This situation covers a few different scenarios.  Perhaps you can SSH to a server in a DMZ (i.e. firewalled from all sides), and from there you can SSH to another server, or perhaps access a website on another server, but you can’t get directly to that server from your computer (you always have to use the middle hop).  Maybe you want to interrogate a web management GUI on a network switch which is connected to a network you’re not on, but you can SSH to a machine on the same network.  There are plenty of reasons why you might want to get at a specific service on Server 2, which you can’t access directly but can access from Server 1, which in turn you can SSH to from your local computer.

The process is identical to the steps followed in the first example, with the only significant difference being the details in the SSH command.  So let’s invent a couple of different scenarios.

Scenario 1 – remote MySQL access

In this example, your web server (www.example.net) provides web (port 80) and ssh (port 22) access to the outside world, so you can SSH to it.  In turn you have another server on the same network as your web server (mysql.example.net) which handles your MySQL database.  Because your sysadmin is sensible, mysql.example.net is behind a software firewall which blocks all remote access except for MySQL and SSH access from www.example.net.

So your workstation can’t SSH to mysql.example.net and hence you can’t use the simple example in the previous article.  You can SSH to www.example.net but you can’t run the GUI up on that computer.  So you need a way to tunnel through to the third machine.  I’ll show you the command first, and it will hopefully be obvious what’s going on.

ssh -L 127.0.0.1:3306:mysql.example.net:3306 fred@www.example.net

So as before, we open the tunnel by connecting to www.example.net as fred via SSH.  The tunnel we are creating starts on our local machine (127.0.0.1) on port 3306.  But this time, at the other end, traffic ejected from the tunnel is aimed at port 3306 on the machine mysql.example.net.  So rather than routing the traffic back into the machine we’d connected to via SSH, the SSH tunnel connects our local port with the second server’s port, using the middle server as a hop.  There’s nothing naughty going on here.  SSH is simply creating an outbound connection from www.example.net to mysql.example.net port 3306, and pushing into that connection the traffic it is collecting from your local machine.

Once the tunnel is in place, you would start up the MySQL GUI exactly the same as previously, filling in 127.0.0.1 as the ‘server’, and the correct credentials as held by mysql.example.net.  SSH will pick up the traffic, encrypt it, pass it over port 22 to www.example.net, decrypt it, and then pass it to port 3306 on mysql.example.net, and do the same in reverse.
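If you’d rather sanity-check the tunnel from the command line before firing up the GUI, the standard mysql client does the job.  A sketch, with dbuser and dbname standing in for your real MySQL account and schema,

# dbuser and dbname are placeholders for your real credentials
mysql --host=127.0.0.1 --port=3306 --user=dbuser --password dbname

Note the explicit 127.0.0.1; if you use localhost the mysql client may try a local socket rather than TCP, and miss the tunnel entirely.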

The only difference between this and the example in part one, is the destination for our tunnel.  Rather than telling SSH to talk back to the local address on the server we connect to, we simply tell it which server we want to connect to elsewhere in the network.  It’s no more complex than that.

Here’s the setup for PuTTY.

Scenario 2 – network switch GUI

Maybe you support a set of servers which you can SSH to, but which also have their own private network running from a switch that itself isn’t connected to the public network.  One day you need to use the web GUI on the switch (perhaps management have asked for a screenshot and they don’t understand why you sent them an ssh log file first time around) which runs over port 80.

So, we can ssh as user fred to, say, the server endor using ssh fred@endor.  We can’t connect to our network switch (192.168.0.1) from our own workstation, but we can from endor.  What we need to do is create a tunnel from our machine, which goes to endor, and then from endor into port 80 on the switch.  This time, we won’t use port 80 on our local machine (maybe we’re already running a local web server on port 80); we’ll use port 8000.  The command therefore is this,

ssh -L 127.0.0.1:8000:192.168.0.1:80 fred@endor

So, make SSH listen locally (127.0.0.1) on port 8000, anything it sees on that port should be sent over port 22 to endor, and from there, to port 80 on 192.168.0.1.  SSH will listen for return traffic and do the reverse operation.

This is how that looks in PuTTY.

Once we’ve connected to endor, and the tunnel is in place, we can start a web browser on our own local machine, and tell it to go to the URL,

http://127.0.0.1:8000

At that point, SSH will see the traffic and send it to the network switch, which responds, and the switch’s GUI appears in your browser.
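You can also test the tunnel without a browser at all; a quick HEAD request with curl (assuming you have it installed) confirms the switch is answering,

curl -I http://127.0.0.1:8000/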

Once again, this process works for all simple network protocols such as POP3, SMTP, etc.

SSH tunnelling made easy (part one)

SSH tunnelling is powerful and useful.  If you can get your head around networking and ports it’s pretty easy to set up, but it’s one of those things that either sticks or doesn’t, and it’s easier to work out when you’ve got a specific problem to solve by using it.  I personally use Cygwin under Windows and so my tunnelling is done using the command line OpenSSH client, however I used to use PuTTY which will do tunnelling as well, and there are plenty of other options.  If you’re already on a UNIX-like setup with OpenSSH then the same command line options are valid as for the Cygwin version.

I wanted to run through some simple examples, and then show how the tunnelling is configured to support them and what actually happens.  But first, a general statement.  SSH tunnelling allows you to make a connection from your local computer, to a service on another computer that your local computer can’t get to directly, via a computer you can get to over SSH.  That includes a two machine situation where you want to get to service X on a computer but can’t because of, say, a firewall, but you can SSH to the very same machine.  It also includes a three computer scenario where you hop from a middle computer to a computer it can access but you can’t.

Example 1 – two computers – can’t access service directly

So in this example, we have your local computer (your laptop for example, but this could be any computer you are logged on to), and a remote web server.  The web server has MySQL installed but the sensible sysadmin has ensured it’s only listening to local connections so that evil people can’t connect to it and do bad things.  You want to use a nice MySQL GUI you’ve got (say MySQL Query Browser) but can’t connect.

We assume for this example that you have a shell account on your web server with the username of fred.  What you need to achieve is to let software running on your workstation access a local port, which SSH then picks up, shoves across to the remote server, and dumps onto the local port at that end (i.e. a tunnel).  To keep things easy, we’ll use the same local port on our workstation that MySQL is listening on at the other end (3306), but you don’t have to.

In plain English then, we need to convince SSH to listen for stuff on our workstation arriving on port 3306, tunnel that across to our server, pass it to the local port 3306 over there, and bring back any traffic in the opposite direction.  To achieve that, SSH has to make a connection over its own regular port first, and then it sets things up.

The OpenSSH command line to achieve this is,

ssh -L 127.0.0.1:3306:127.0.0.1:3306 fred@www.example.net

That’s the long-hand version; you might see it written as,

ssh -L 3306:127.0.0.1:3306 fred@www.example.net

or

ssh -L 3306:localhost:3306 fred@www.example.net

They will all work and achieve the same thing, but the long-hand version, for me, is the easiest to take and apply elsewhere.  Reading it left to right, the -L argument breaks down as listen-address:listen-port:destination-host:destination-port, with the destination reached from the far end of the SSH connection.

Using PuTTY you would set up a normal SSH configuration to get to www.example.net, and then you would add the following to the Connection / SSH / Tunnels section,

and clicking Add makes it look like this,

You would then connect to the server using PuTTY.

Once all this has been configured, and you have connected to the remote computer and logged in over SSH normally, any traffic sent to 127.0.0.1:3306 (i.e. port 3306 on your own local computer) is spotted by SSH, tunnelled over to www.example.net and pushed out to 127.0.0.1:3306 from there (i.e. that server’s loopback network connection, onto port 3306 on which we hope, MySQL is listening).
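Before starting any applications, it’s worth confirming the local end of the tunnel is actually listening.  Under Cygwin the Windows netstat is on your path, so a quick check looks something like,

netstat -an | grep 3306
# expect a line showing 127.0.0.1:3306 in a LISTENING state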

From this point, you treat any application you run that wants to connect as if you were running the MySQL server locally, for example with Query Browser you would start it, and tell it to connect to the localhost on port 3306, and then fill in the credentials of the MySQL service running on your remote server.

This example covers all cases of trying to connect to simple services, running on remote servers where you can SSH to them, but not connect remotely to the service due to either a firewall or local configuration.

Maybe your server runs a POP3 service that you don’t want anyone connecting to remotely, and you want to encrypt all your traffic to and from it.  Configure the POP3 server to only listen to local connections and then use the following tunnel,

ssh -L 127.0.0.1:110:127.0.0.1:110 fred@www.example.net

Now you can point your local mail client at 127.0.0.1 port 110 to collect mail, and it will be tunnelled to the remote POP3 server in the background.
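You can prove that tunnel works before touching your mail client.  POP3 is a plain-text protocol, so telnet to the local end of the tunnel and the remote server should greet you (the exact banner varies by server),

telnet 127.0.0.1 110
# expect a greeting along the lines of: +OK POP3 server ready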

Cygwin terminals!

I use Cygwin both at work and at home to give me a quick-to-access unix-like environment, where I can use the tools that I use every day in my sysadmin role.  I find it a lot easier to wc -l something than load it into OpenOffice and do a word count.  I’m happier with awk and grep and find than with Windows GUI tools.

My only real pain was that the terminals under Cygwin were awkward.  The default terminal is terrible (essentially the windows command prompt).  I eventually configured rxvt and got it working pretty much to my liking, but the X-Windows style never really fit well with my other windows and some of the features annoyed me.

However, today I found the MinTTY package within Cygwin and I’m finally very pleased with my Cygwin terminal.  It looks and feels just like a PuTTY window (which it should, being based on the same code) which is handy since I use or used to use PuTTY all the time for remote access to the stuff I support.  It’s configurable (easy) and fast.  Very pleased, give MinTTY a shot if you’re a Cygwin user and didn’t know it was there.

Cygwin and rsync and all things nice

I wrote a little while ago that I was running Linux (Ubuntu in this case) inside a VirtualBox virtual machine, and it was all good.  Before that I’d played with lots of methods of getting my favourite unix utilities (like rsync) working under Windows.  I’ve used Cygwin, pre-compiled Windows versions, stripped-down Cygwin versions, second machines running Linux, and VMs.

One of the main drivers for getting those things working is to back up my websites, held on my hosting account.  I can ssh into my hosting account, and that means if I can get rsync going locally, I can use it with ssh to copy all changes to my local machine.  It’s efficient (rsync only copies changes) and it’s easy.  The pain is always finding a decent compliant version of rsync.

Anyway, I already said that when I started using the Linux VM I ported my script across to that, and along with the VirtualBox shared folders, I could back up my websites and they were visible under XP.  It wasn’t pretty but it worked, and it meant I had to start up the VM.  At the start that wasn’t a problem because I was using it quite a bit, but as the days went on and I stopped launching it, backups became less frequent.

And then today – random disaster.  I crashed the VirtualBox VM image, and after a couple of restarts it eventually stopped booting.  This wasn’t a great problem as I had snapshots of working images, so I just rolled back to one of those with two clicks.  Two clicks which took less time than the following thought took to get from one end of my brain to the other: ‘I made the snapshots weeks ago, and since then I’ve written a lot of scripts and downloaded a lot of files and you just erased them all, you idiot’.

So, I set about re-patching Ubuntu, restoring the various settings I’d lost, and making a few more snapshots.  But I needed a more permanent, reliable website backup solution.

Which means I’ve installed Cygwin again.  I know there are Windows binaries for rsync, and I know there are other apps which claim to do the same thing, but you can’t (in my view) beat the simplicity of Cygwin and the unix binaries.   Now I have a working cron daemon, ssh configured, rsync installed, and my little script which does all the work.  The rsync command is pretty simple,

rsync --recursive --links --safe-links --rsh=ssh --stats --human-readable me@mywebhost:/myhomedir/ /path/to/local/copy/

Then I just tar up the resulting files, compress them, make sure the filename has a date in it, and I can be confident I’ve got copies of everything I need.  Since most of my sites rely on mysql for their data, I also run some jobs on my webhost to mysqldump all the data into files three times a week, and I then back those files up locally.  I could mysqldump the content remotely, but it’s a hell of a lot quicker to do it on their system, compress them, and then rsync the compressed files.
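The tar-and-compress stage is a one-liner.  Something like this (paths as per the rsync command above) stamps the date into the filename,

# assumes the rsync destination above; adjust paths to taste
tar -czf website-backup-$(date +%Y-%m-%d).tar.gz /path/to/local/copy/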

Installing ssmtp lets me send mail from the Cygwin command line, so the script can send me a mail when it’s finished, and I’ll schedule it in cron to run once a week or something.  Much better.
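The cron side is a single crontab entry.  As a sketch, with backup-websites.sh standing in for whatever the script ends up being called, this runs it at 3am every Sunday,

# run the backup script (name and path assumed) at 03:00 every Sunday
0 3 * * 0 /home/me/bin/backup-websites.sh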

Plus, I get all the fun of vi, grep and awk 🙂