
Phun with PHEV

Well, that escalated quickly. We went from thinking in November/December that we needed a solution for carrying more kit around than would fit in the Panda, to driving away from Portsmouth on 11th March in a brand-new Mitsubishi Outlander PHEV, paid in full (mostly from some money I had sitting in Australia, hoping the $AUD would be worth something some day).

Our logic wasn’t entirely illogical – we recognised that if we did not get a clunky Transit van or similar, we needed at least an estate wagon or SUV. It did not take us long to realise that most SUVs on the market emphasise the ‘Sport’ rather than the ‘Utility’ in ‘Sport Utility Vehicle’, and were mostly no bigger than a family sedan (and provided no additional storage). In some cases they were smaller. The next step up – the Range Rovers, Honda CR-V, Land Rover Discovery, Mitsubishi Pajero/Shogun and similar – are eye-wateringly expensive, inconveniently large for getting around London, and on the whole have lousy mileage. The ideal would have been the SUV that is surely on Tesla’s roadmap, but that is still years away, unless Musk brings one back from Mars. Really, for our use cases, crossed with our desire to have an electric or hybrid vehicle that was not a Prius, the Outlander PHEV was the only good choice, despite it costing more than we had initially thought about paying.

So far our experience – driving back from Portsmouth, driving backwards and forwards to Coventry, and a little bit of noodling around Woolwich and Charlton – has been excellent. The car is capacious, comfortable, and quiet inside the cabin. Also, it is red.

The hybrid mechanism is quite a neat solution: the batteries have a range of 20-30 miles, and the rear wheels are driven entirely by electric motors. While coasting or braking, energy is recovered and fed back to the batteries. When there is insufficient charge, the petrol motor kicks in to run a generator that feeds the electrical system. When there is still not enough energy in the system, the petrol motor instead drives the front wheels directly, either to the exclusion of charging the battery, or while still diverting some energy to the electric motors. To give you some idea of how effective this is, over the whole 360-odd miles that we drove in the past few days, the petrol engine was active for well under 20% of the trip, mainly kicking in to pull us up the long rising slopes – going down the other side would then recover most of the energy spent going up.

The mechanism is going to require a slight adjustment to how I think about managing the efficiency of the vehicle. We were sort of able to ignore the efficiency of the Panda, because it was a 0.9 litre turbo engine in a car that we could pick up with one hand. As long as we were not carrying anything other than ourselves and the dog, it was costing us about £0.13/mile (which is what the PHEV has cost so far!), and we could drive it like a tiny sports car. Mitsubishi have made it clear – and made it visible – that the key metric in managing efficiency is energy.

To enable this, the car comes with enough computing power and instrumentation to take it to orbit and back. There’s an endless amount of stuff for the passenger to fiddle with on the central screen that provides the GPS display, and key metrics are echoed to the dashboard directly in front of the driver. So far I’ve found it useful to keep the energy-flow display up – it echoes nicely the ‘power’ meter that supplants the expected rev counter, and gives very good feedback on how my driving habits and technique are consuming, conserving or generating charge. And speaking of habits, I am definitely in love with the cruise control.

Despite the inability of most drivers on the motorway to maintain braking distance between vehicles, the cruise control worked quite nicely. It’s not something I’ve used before, and I found it quite eerie to have no sense of the acceleration and deceleration that accompanies manual maintenance of speed. It felt like the car was coasting all the time as it maintained a rock-steady speed, and it was directly observable that this mode conserved or generated energy far better than I could ever manage. For the first time, I am convinced that we’re very close to being able to eliminate manual control of the vehicle in most circumstances.

The only downside so far is charging the PHEV. We’d taken note of how many charging stations there were around us and along the motorways on ZapMap and similar, and thought no further about it, relaxed that there were plenty of options. That did not prove to be the case.

To begin with, it turns out that many charging stations on the maps are not working, and whoever had them installed in the first place generally does not find it worth the effort to keep them maintained. Googling turned up forums like this that reveal a common pattern: premises install chargers, advertise their existence, and then abandon responsibility for them. A good example is the two ‘Pod Point’ units in Woolwich Arsenal where we live. There is one literally outside our front door that some idiot backed into and took out of commission, and another further up the road that is not responding to the RFID card it requires. These were installed at some point in the past at the behest of Berkeley Homes by one company, and rebranded to Pod Point later. We’ve been endeavouring to get them fixed, but spent a week on a four-cornered quest between Pod Point, Berkeley Homes, Greenwich Council and Rendall and Rittner facilities management trying to find someone to take responsibility for getting it sorted: initially each player disavowed responsibility, or avowed they were waiting on one of the others, before eventually Pod Point were able to guarantee an engineer would get it sorted this week (and I will wax very wroth if that does not happen).

The other part of the charging station problem is that the ‘free’ units around are installed and/or run by a bewildering array of providers – ZapMap lists almost a dozen on their site, and many more in their app, as does the SpeakEV forum. Some of these networks have units that accept RFID cards from other networks, some have an app that works on some of their units but not others, and there is at least one (in the car park of Wickes at Charlton) for which nobody seems to know who to get access from. The absolutely infuriating thing we found on the first day, when we pulled in on the motorway to charge, is that the free service required an RFID card from the network – which could only be obtained by filling in an application form on their website, and discovering that it could be up to 10 working days before the card would be sent out.

This. Is. Insane.

As someone on the forum said:

“Can I ask a simple question… Why are there networks so stupid. As in Why insist on a charge card. It’s like going to buy petrol at BP and them saying, sorry you must join our member scheme or you can’t have petrol.

Giving we all carry debit cards isn’t it better to charge say 50p a go or even charge 20p kWh etc, rather than making you join a scheme and carry yet another “free” card. Is there something special about EV charging that means we need to be tracked. The only thing I can think is the cost of the electric is quite low so the cost of preceding card payments would be higher than running a cards scheme?

Even a coin operating parking meter style would be preferable to a charge card. Pay by the minute (though not £7.50 for half hour like chargemaster are doing on CCS).”

This feels very much like a market that is on the verge of transitioning to a single consistent access and payment model, and which will see the myriad suppliers whittled down to 2 or 3 competitors. As it stands, the charging points in place appear to be a mix of quasi-public venues that shopping centres, chain stores and local councils have thought it would be good PR to install, alongside a few providers (Ecotricity and Pod Point being prime examples) hoping to gain market dominance in specific geographical locations. And there are a few chancers who are hoping to make a quick and ugly pound out of this mess. POLAR have the cheek to be aiming for a monthly subscription for access plus quite expensive charging fees, including a £1.20 ‘administration’ fee each time you plug in.

So as it stands, I’ve tried to figure out which RFID cards I need to buy – mostly at £20 a pop – and have so far ordered the following:

  • Ecotricity
  • Source London
  • Pod Point
  • Charge Your Car
  • POLAR
  • Elektromotive

However I expect that for some trips I will need to check 10 business days out to see what other pissant regional scheme I have to sign up to.

Addendum:

This Telegraph article talks about the source of some of the chaos in London around all this:

Ownership of the sites is split between London boroughs, manufacturers of the equipment, private businesses and landlords of commercial property sites. The fundamental stumbling block is defining responsibility and finding funds for maintenance of broken charging points.

Source London says it currently has no jurisdiction to repair broken points, because maintenance is the responsibility of charger manufacturers, some of whom are not cooperating. But that view is disputed by the two main charger manufacturers, Chargemaster, with 647 sites, and Pod Point, with 276.

Pod Point chief executive Erik Fairbairn said: “As part of the purchase of Source London, Bolloré purchased a commitment to fund maintenance agreements of every charge point in the Source London network. We have not yet seen evidence of them doing that.”

Although TfL set up and sold off the network, it is unable or unwilling to clear up where exactly the responsibility lies. It simply said: “Enforcement of these [maintenance] contracts remains the responsibility of the individual consortium partners,” without clarifying whether this means boroughs, Source London or charge point manufacturers.

A man is not dead while his name is still spoken.

Something I have been meaning to do for quite a while is to take up the idea of keeping PTerry’s name alive by adding the X-Clacks-Overhead header to parts of this site. Even if it is only in the overhead:

GNU Terry Pratchett.
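
For the record, here is a minimal sketch of one way to do it – assuming the site is served by Apache with mod_headers available, which may or may not match your hosting:

# in the site configuration or .htaccess (requires mod_headers)
<IfModule mod_headers.c>
    Header set X-Clacks-Overhead "GNU Terry Pratchett"
</IfModule>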

Maven releases with Git

I’ve started to put various snippets of code up into GitHub, partly because they may be useful to other people, partly so that they are more accessible when I do not have my personal laptop with me. Yes, Virginia, I could put it all on a USB stick (and I probably will), but that poses another problem of keeping that content up to date. And I’m not keen on sticking my stick into random and unpredictably unhygienic places.

The model that I’m looking at is chosen because I’m comfortable and familiar with it, not necessarily because it’s the ‘best’ nor bleeding edge:

  • Version control is managed with Git, using the general semantics of pushing clean code to GitHub and in-progress code locally;
  • Code is modified through the Eclipse IDE;
  • Dependency management is done with Maven;
  • Builds, tests and code compliance checks are run via Maven – on-the-fly through the Eclipse IDE while code is fluid, from the command line when it washes up on islands of stability;
  • Maven collaborates with Git to prepare and tag a version for release;
  • Maven pushes to my personal Artifactory instance.

As I’ve written about before, in this world I’m keen on placing declarative road blocks on the build road to ensure that basic CheckStyle and code coverage expectations are met. I firmly believe that these kinds of checks are equivalent to spelling and grammar checkers for written natural language. They do not guarantee good or correct code, but they do assist in picking up silly mistakes and promoting consistent style.
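
By way of illustration, here is a cut-down sketch of the sort of road block I mean, added to the <build/> section of the pom.xml – the plugin versions and the coverage threshold here are illustrative rather than the exact ones I use:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-checkstyle-plugin</artifactId>
    <version>2.17</version>
    <executions>
        <execution>
            <!-- fail the build early if the style checks do not pass -->
            <phase>validate</phase>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>

<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.7.5.201505241946</version>
    <executions>
        <execution>
            <!-- wire the coverage agent into the test run -->
            <goals>
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <execution>
            <id>check</id>
            <goals>
                <goal>check</goal>
            </goals>
            <configuration>
                <!-- fail the build if line coverage drops below 80% -->
                <rules>
                    <rule>
                        <element>BUNDLE</element>
                        <limits>
                            <limit>
                                <counter>LINE</counter>
                                <value>COVEREDRATIO</value>
                                <minimum>0.80</minimum>
                            </limit>
                        </limits>
                    </rule>
                </rules>
            </configuration>
        </execution>
    </executions>
</plugin>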

One of the things I dislike about Maven is the poor and hard-to-find documentation around plugins. Even the ‘official’ core plugins are poorly documented, with a strong emphasis on the ‘what’ instead of the ‘how’ and ‘why’. Please, when writing documentation, don’t simply catalogue your API or interface: the result is like giving someone a dictionary when they want to learn to speak English.

As a result of the poor documentation, a lot of the time we need to rely on samizdat and hope to find a blog or similar written by somebody who has already figured out the documentation. Case in point here is this rather nice piece by Axel Fontaine on how to integrate Maven and Git. There are still a few missing links in that document, so let me try to fill in the blanks.

There are three key bits that need to go into your pom.xml to get this working. First you need to include an <scm/> section, which I like to put up at the top of the pom.xml along with the other general project metadata:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>net.parttimepolymath.cache</groupId>
    <artifactId>SimpleLRU</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>SimpleLRU</name>
    <description>Simple least-recently-used cache.</description>
    <url>https://github.com/TheBellman/simplelru</url>

    <scm>
        <connection>scm:git:git@github.com:TheBellman/simplelru.git</connection>
    </scm>

I can never remember this stuff, and rely on cloning it from project to project. It is rather calming, like a religious ceremony or meditation. The important part here is the <scm/> tag, which tells Maven where it can pull and push code from and to. You also need to tell Maven where it will store released artifacts, using the rather poorly named <distributionManagement/> segment. I usually put this just below the <scm/> tag:

<distributionManagement>
    <repository>
        <id>central</id>
        <name>ip-172-31-6-67-releases</name>
        <url>http://54.209.160.169:8081/artifactory/libs-release-local</url>
    </repository>

    <snapshotRepository>
        <id>snapshots</id>
        <name>ip-172-31-6-67-snapshots</name>
        <url>http://54.209.160.169:8081/artifactory/libs-snapshot-local</url>
    </snapshotRepository>
</distributionManagement>

By including a <snapshotRepository/> it is possible to share snapshot or beta builds using the Maven deploy operation, which I won’t cover here. One of the annoying things to trip over is access control to the destination repository, which needs to go into the settings.xml in the local user’s .m2 Maven directory:

<servers>
    <server>
        <username>robert</username>
        <password>...</password>
        <id>central</id>
    </server>
    <server>
        <username>robert</username>
        <password>...</password>
        <id>snapshots</id>
    </server>
</servers>

The documentation is opaque around this, and it is not obvious that the <id/> in the <repository/> for the distribution management is used to look up the login credentials in the <servers/> section of the settings.xml. While I appreciate the benefits of not wiring credentials into the pom.xml directly, it is easy for these two pieces of information to get out of synch, and easy to forget the existence of the settings.xml when your release falls over with cryptic errors because it can’t log in to Artifactory.

The final bit of wiring goes into the <plugins/> section:

<build>
  <plugins>
      <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>versions-maven-plugin</artifactId>
          <version>2.2</version>
      </plugin>

      <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-scm-plugin</artifactId>
          <version>1.9.4</version>
          <configuration>
              <connectionType>connection</connectionType>
              <tag>${project.artifactId}-${project.version}</tag>
          </configuration>
      </plugin>
  </plugins>
</build>

The first of these, versions-maven-plugin, is used during the release process to fiddle with the version of your released artifact, and maven-scm-plugin wires the release process back to the source code repository defined in <scm/>. Note that there are a couple of different ways to define the source code repository in <scm/> (such as <connection/> and <developerConnection/>), and the <connectionType/> in this configuration selects which of them is used. The documentation can more-or-less help you here.

Assuming that you have committed and are in the required branch, then the process becomes pretty simple:

  1. mvn versions:set (interactively set the release version)
  2. mvn deploy
  3. mvn scm:tag
  4. mvn versions:set (set the next snapshot version)
  5. commit and push to Git

The mvn versions:set as I’ve used it above is interactive, but if you have a look at the documentation you will find a variety of different automagic ways of using it without interaction – the article by Axel Fontaine for instance is a good explanation of how to wire this process into Jenkins/Hudson.
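
Put together non-interactively, the whole sequence might look something like this (the version numbers here are placeholders):

# 1. set the pom to the release version
mvn versions:set -DnewVersion=1.0 -DgenerateBackupPoms=false
# 2. build, test and push the release artifact to Artifactory
mvn deploy
# 3. tag the release in Git, using the tag configured in maven-scm-plugin
mvn scm:tag
# 4. move the pom on to the next snapshot version
mvn versions:set -DnewVersion=1.1-SNAPSHOT -DgenerateBackupPoms=false
# 5. preserve the new snapshot version in Git
git commit -am "Open 1.1-SNAPSHOT for development"
git push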

In the set of steps I’ve just outlined, steps 4 and 5 are post-release stages, where I can set the version in the pom.xml back to a snapshot, and get the new snapshot version preserved in Git. I recommend doing this at the time that the release is being done, rather than the next time that work is done on the project, for two reasons. First, this echoes the behaviour of the maven-release-plugin. Second, it reduces the chance of forgetting to set the version at a later time.

On “fencing”…

“So you do fencing then?”

Sigh. Yes, that question again. How best to explain what I actually do? Let us set a scene. It is important that you, the reader, try to place yourself into the first person view here, and enter this scene. After all it is 2016, and VR is The Next Big Thing. Immerse yourself.

So. You are in a public place, and you realise that the bloke across the room is spoiling for a fight. You get that gut-tightening feeling that comes from knowing that conflict is coming, a rumbling of thunder on the horizon that presages the storm. His mates are egging him on, priming him for the fight.

Oh crap. He’s got a rapier in his hand.

This is not a fantasy foil, a dainty needle languidly waved by some fool in a feathered hat. This is the real thing. A metre of sharpened steel, shaped to a needle point. Not a tool, or a self-defence aid, but a murder weapon. This person is coming toward you to murder you. They are going to take that metre of steel and try to shove it through your face, or your throat, or your chest, or your gut. This is happening.

Widen the view. You have a rapier too. Your hand is shaking but you hold it out in front of you, trying to hide behind it, pointing it across the space and screaming inside your head “keep the hell away from me!”. They are still coming, fast. In less than half a second they will be within distance to shove that giant skewer through you.

Tick. Tock. Think quick.

You have two choices. Stand there and be struck, or hit them before they hit you. Stick them with the pointy end. It’s the only way to stop them. They are going to kill you. Tick. Hesitate and you are dead. Tock. Can you really hit your opponent without being hit in turn? Tick. If you can reach them, they can reach you. Tock. Can you move aside?

Tick. Too late.

Widen the view. You’re wearing a fencing mask, and a padded jacket and plastron. The rapier has been blunted, with a rubber stop on the end. The bloke is your friend, and after this bout you are going to call it a day, and go for a pint and talk about technique and historical context and fitness and whether the images in Capo Ferro are intended to be lifelike or a Platonic ideal. Maybe two pints.

Real technique, counterfeit intent, safe (or at least safer) swords. This is HEMA rapier fencing, an attempt to explore or rediscover or recreate the physicality of a serious martial practice that has been obsolete for 300 years. We try to find the balance point between safety and fun, while entering the mindset of someone doing something awful in anger or self-defence. This play is a game requiring fitness, speed and stamina, but it’s also a test of intellect, mindfulness and perception.

It is possible to discern historical links between rapier play and modern sport fencing, but the two forms are not comparable, just as the swords in play are definitely not comparable. Measurements varied over the period when something identifiable as a rapier was in use, but you can loosely say that a rapier is a sword optimised for thrusting, with a narrow blade a little under a metre long and weighing around one kilogram. By comparison a modern foil is 90cm and weighs around 350 grams.

HEMA rapier play is not “better” than sports fencing, or vice versa. The two are not meaningfully comparable, as the technique, weapon and context are dramatically different. HEMA is also not re-enactment, or LARP, or theatrical stage combat… but none of this is exclusive. Some of the best HEMA rapier fencers are involved in re-enactment or LARP. The learnings and skills from all these very different contexts can inform and enrich other contexts. We are blind men standing around the elephant that is the historical context of the rapier, doing our best to perceive the whole beast from our disparate positions.


(Photo by David Rawlings, myself on the right fencing left handed with Joseph Sherlock)

CSS3 Oops.

Revising my resumé as part of an overall overhaul of my site, I realised that the presentation on mobile devices was not very good. Fortunately, since I last did anything major CSS3 has become widely implemented, so Media Queries are now an option for degrading gracefully onto smaller screens. To my pleasure it did (eventually) just work, but I’m embarrassed to say that I spent a good hour wondering why it was not working initially. It would have helped if I’d remembered that CSS files are read from the top down…
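
For the record, the trap looks something like this (the selectors are invented for illustration). Because later rules of equal specificity win, the media query block has to come after the base rules it overrides:

/* base rule: two columns on a desktop-sized screen */
.content {
    float: left;
    width: 60%;
}

/* this must appear AFTER the base rule – if it sits above it in the file,
   the base rule silently wins and nothing appears to happen */
@media screen and (max-width: 480px) {
    .content {
        float: none;
        width: 100%;
    }
}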

On a side note, I’m quite disappointed in the behaviour of the Safari ‘responsive design mode’. While it does allow quick switching of window size, as far as I can tell it does not register as a mobile device from the point of view of CSS, apart from tinkering with the user agent string. I’m hoping to find a better way of designing against mobile, because it’s definitely suboptimal to push changes to a server just so that I can test them on the phone.

Robots. They are coming to take your content.

I am in the process of revising my site, and discovered that for whatever reason I had an empty robots.txt file present. I know it is only a voluntary ‘standard’, but as far as I know all the major players do respect it. As the overwhelming proportion of users use a search engine that respects the standard, it does form a useful way of shaping what shows up in the general public eye.

I can never remember the syntax though, so for your reference and my recollection – http://www.robotstxt.org

Addendum: I was not familiar with the semi-standard for site maps so I’ve added that as well to see what the effect will be.
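
For example, a minimal robots.txt that hides one directory and advertises a site map looks like this – the path and URL are placeholders rather than my actual layout:

# keep everyone out of the drafts directory
User-agent: *
Disallow: /drafts/

# the semi-standard site map extension
Sitemap: http://www.example.com/sitemap.xml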

The Australian Government Spill

A very quick primer for people outside Australia who may not know who the players are in the current demolition of the Australian Federal Government.

To begin with, here is the current Prime Minister, Tony Abbott:

[Image: Sideshow Bob]

Here is his challenger, and most probable next Prime Minister, Malcolm Turnbull:

[Image: Mr Burns]

And the leader of the opposition, who I think is called Bill Shorten:

[Image: Ralph Wiggum]

(Mobile) Weapons of Choice

Like any other code-worrier, I have a ton of applications on my (i)Phone, ranging from “things that look shiny but are useless”, through “things that I use once a year”, up to “indispensable and every-day”. Out of interest I’ve tried to work out which apps are the ones that fall into the latter category – apps that are essential to getting my work done, and which contribute strongly to the sense of never being out of the office.

First and obviously, mobile Safari and Mail. Nothing really interesting to say there, apart from mentioning that the relationship between Apple’s Mail and Google Mail using IMAP remains flaky and annoying, lending an aggravating persistence to messages you just want to erase forever. I find at worst I have to log into the web interface to really purge the stream of mail from servers and services and mail lists – all the things that are absolutely read-once.

The Google Authenticator app for two-factor authentication works really well, particularly where you have a friendly system administrator who knows how to get a QR code up on screen that you can scan. A very handy undocumented feature: touching any token copies it to the clipboard, ready for pasting into another app.

The PagerDuty mobile app is superb, and Just Works. It perfectly fits the use case of being annoying at 3:00 AM and providing big friendly buttons that can be stabbed with a thumb while you are trying to wake up. They have obviously thought very hard about the use cases, and the features that are up front emphasise “are there any issues?” and “acknowledge this alert”. 10 out of 10.

I would also call out the AWS Console app. I actually find this somewhat simpler to use than the web interface for being able to quickly scan system metrics and statuses. The information delivered through this is super rich, and there are handy management features available (such as modifying DynamoDB provisioning) when you’re away from the keyboard. It’s got bullet-proof two-factor authentication, which fits nicely with the Authenticator. It’s a relatively painless cycle to jump into 1Password for the login password, paste into the AWS Console, flip out to the Authenticator, and flip back to paste the authentication token. The authentication lasts a reasonable amount of time too, so there’s not the pain of having to do the dance several times an hour.

1Password is indispensable. I can entirely rely on it being a secure repository for anything I need to hold securely, and because its dataset is distributed across all my devices, I am comfortable updating passwords frequently. On an older phone I did find it a bit annoying how frequently I would need to unlock it, but on my current phone (and I am guessing all future phones) I can use my thumbprint.

Slack has similarly thought well about the use case of their service on the phone – the experience mirrors the web and desktop interfaces nicely, and the delivered UI makes it pleasant to use (much more so than Skype by the way). I would not like to hold an enormous conversation across Slack on the phone, but for on-the-go messaging it’s best of breed.

Finally, I could not live without Things from Cultured Code. I’m not a subscriber to the GTD religion, but can see that the shape of the app is closely aligned with those ideas. For me it’s trivially easy to create new to-do items, and categorise existing items into “do today”, “do soon” and “God knows when I can do this”. I swing like a pendulum between being calmed by the ability to just focus on one or two immediate tasks, and freaking out at the length of the backlog of things not done. I’ve been using Things in various incarnations since it was an early beta, and love it to bits.

Actually, not finally. Three lesser stars. Agile Cards I only use every two weeks during sprint planning, but it just nicely does what it says on the box. There are dozens of these apps, this is the one that I have. Not indispensable, but handy.

Evernote should be indispensable, and I wish it was, but I cannot quite get comfortable with it. I need to school myself to use it more on both desktop and mobile, as I think it should work as a general “dump stuff to remember here”: when I remember to use it, the snippets that I drop in there are useful, but I often find that I remember stuff by trying to keep the browser tab open, or pushing pages to the Safari Reading List, rather than tossing the bookmark or a snippet into Evernote. The same problem exists with Apple Notes – it’s a very handy place to drop small snippets of text and reminders, and it synchs everywhere, but it tends to be write-only.

ORM?

It’s rather annoying that in 2015 the ORM (Object-Relational-Mapping) problem is still tedious to deal with. While in general terms it is a solved problem – JPA and Hibernate and similar frameworks do the heavy lifting of doing the SQL queries for you and getting stuff in and out of the JDBC transport objects – there does not seem to be any way to remove the grinding grunt work of making a bunch of beans to transport things from the data layer up to the “display” layer. It remains an annoying fact that database tables tend to be wide, so you wind up with beans that have potentially dozens of attributes, and even with the best aid of the IDE you end up fiddling with a brain-numbing set of getters, setters, hash and equals methods and more-or-less identical tests.
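
To make the complaint concrete, here is a sketch of the sort of transport bean I mean – a hypothetical table, trimmed to two columns where the real thing would have dozens:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "customer")
public class Customer {

    @Id
    private Long id;

    @Column(name = "given_name")
    private String givenName;

    // ...now multiply the following by every column in the table, then add
    // equals(), hashCode(), toString() and a fistful of near-identical tests.
    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getGivenName() {
        return givenName;
    }

    public void setGivenName(String givenName) {
        this.givenName = givenName;
    }
}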

I would love to suggest an alternative – or build an alternative – but this remains a space where it feels like for non-trivial use there are enough niggling edge cases that the best tool is a human brain.

Doing More With Less (Part 1 of N)

In recent weeks I have been massively overhauling the monitoring and alerting infrastructure. Most of the low-level box checks are easily handled by CloudWatch, and some of the more sophisticated trip-wires can be handled by looking for patterns in our logs, collated by LogStash and exported to Loggly. In either case, I have trip wires handing off to PagerDuty to do the actual alerting. This appeals to my preference for strong separation of concerns – LogStash/Loggly are good at collating logs, CloudWatch is good at triggering events off metrics, and PagerDuty knows how to navigate escalation paths and send outgoing messages to whichever poor benighted bastard – generally and almost always me – has to be woken at 1:00 AM.

One hole in the new scheme was a simple reachability test for some of our web end points. These are mostly simple enough that a positive response is a reliable indicator that the service is working, so sophisticated monitoring is not needed (yet). I looked around at the various offerings akin to Pingdom, and wondered if there was a cheaper way of doing it. Half an hour with the (excellent) API documentation from PagerDuty, and I’ve got a series of tiny shell scripts being executed via RunDeck.

#!/bin/bash
# Probe the status endpoint; anything other than an HTTP 200 raises a PagerDuty alert.
if [ "$(curl -sL -w "%{http_code}\\n" "http://some.host.com/api/status" -o /dev/null)" -ne 200 ]
then
    echo "Service not responding, raising PagerDuty alert"

    # Fire a trigger event at the PagerDuty generic events API.
    curl -H "Content-type: application/json" -X POST \
        -d '{
          "service_key": "66c69479d8b4a00c609245f656d443f1",
          "event_type": "trigger",
          "description": "Service on http://some.host.com/api/status is not responding with HTTP 200",
          "client": "Infra RunDeck",
          "client_url": "http://our.rundeck.com"
        }' https://events.pagerduty.com/generic/2010-04-15/create_event.json
fi

This weekend I hope to replace the remaining staff with a series of cunning shell scripts. Meanwhile the above script saves us potentially hundreds of pounds a year in monitoring costs.