London Marathon

So the good news that came through my letterbox on Friday is that I now have a balloted place in the London Marathon 2016. I only started running a few years ago, with Karly and me both doing the wonderful Couch to 5K (C25K) programme from the NHS. That’s not to say that before then I was a couch potato, but I hadn’t found a way to ‘get into’ these longer runs.

I’m in the marathon

Well, thanks to #C25K, Karly is now a regular runner and has already completed a half marathon, whilst my longest ‘competitive’ distance has been a couple of 10k races. The good news is that we’ve both been training for a half marathon in two weeks’ time, so our base level of fitness should mean the marathon doesn’t present an insurmountable challenge.

Karly and I had a pact: if one of us got a ballot place, we would both fundraise so the other could enter via a charity place. There are so many worthy causes that we could run for, but until we can get that place locked in we’ll just focus on our training.

I’ll try to keep writing about the training experience, and will attempt to use technology to keep us motivated. Karly’s already got the Garmin Forerunner 225, and my Forerunner 920XT is now on order.

Linked Data

A few weeks ago I wrote about how I’d linked up my EDF Energy Monitor to save minute-by-minute electricity usage in my home. It’s been a useful little project, and has saved me a fair bit of cash by making sure that we stayed on top of our electricity consumption.

I recently came across – a community of like-minded people who share their data through ‘topics’ on the site – and publish any unstructured data there for others to consume. Whilst my electricity usage isn’t necessarily the most informative of datasets, it does let others see the data structures of the EDF Energy Monitor and write their software accordingly.
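As a flavour of what publishing a reading to a ‘topic’ might look like, here’s a minimal Python sketch. The endpoint, topic name and payload shape are all hypothetical stand-ins rather than the site’s actual API:

```python
import requests  # assumes the requests library is installed

# Hypothetical endpoint and topic name -- substitute the real service's API.
TOPIC_URL = "https://example-data-hub.example/topics/ElectricityUsage"


def publish_reading(watts, temperature_c, timestamp):
    """Post one raw reading to the topic as JSON for others to consume."""
    payload = {
        "timestamp": timestamp,        # ISO 8601 string, e.g. "2015-10-09T07:31:00Z"
        "watts": watts,                # instantaneous consumption from the monitor
        "temperature_c": temperature_c,
    }
    response = requests.post(TOPIC_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()
```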

This has led me to a bit of angst over a question I believe the advocates of NoSQL have already wrestled with: is my data, if unstructured, actually useful?

In my view, Tim Berners-Lee gave a bit of an underwhelming talk on taking the web to the next level. Underwhelming in the sense that it’s simple, yet it seemed to contain some top-down logic implying that implementation would require a documented structure for each type of data. The three rules are:

1) We need to use a URI for a single product/event.
2) If I look up a URI, I will get data back in a standard format.
3) When I get that information, it has relationships to other data, based on their own URIs.

Taking my ElectricityUsage topic on as an example, I’ll work out how closely I’m following TBL’s vision.

1) Yes. The device itself has a unique URI, and using the API it’s possible to generate a unique URI for each event (albeit using query strings).

2) Yes. The data should be returned in a standard format. This is the bit that confused me. I thought a standard format would be a prescriptive format that explained the structure of the response. However, I now realise that this would be fairly impossible to police and manage. Therefore, as long as the format of the data is a standard one (e.g. JSON or XML), the structure is a completely different beast.

3) No – this is the bit that I need to work on. For my original app, the output of the sensor (XML) was saved to a MySQL database table, and I then wrote another API to get data out of that table and into JSON for my Angular JavaScript app. I think the question is whether, to be more useful to others with the same hardware, I should upload the raw XML to – so that the data structures are a de facto standard, rather than my own parsing of the XML. That said, the raw XML also doesn’t contain any URIs to other relational data, so making this change would still leave point 3 unmet.
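To edge closer to point 3, one option is to embed URIs in the JSON my API returns. Here’s a rough sketch of what that conversion could look like; the XML element names and the URI scheme are assumptions from my own setup rather than any published standard:

```python
import json
import xml.etree.ElementTree as ET

# Assumed shape of one reading from the monitor; element names may differ
# between firmware versions, so treat this as illustrative only.
SAMPLE_XML = "<msg><time>07:31:00</time><tmpr>18.4</tmpr><ch1><watts>00345</watts></ch1></msg>"

BASE_URI = "https://example.com/electricity"  # hypothetical base URI for my readings


def xml_to_linked_json(raw_xml, reading_id):
    """Convert one raw XML reading into JSON that links out to related URIs."""
    root = ET.fromstring(raw_xml)
    return json.dumps({
        "@id": "{0}/readings/{1}".format(BASE_URI, reading_id),          # rule 1: a URI per event
        "watts": int(root.findtext("ch1/watts")),
        "temperature_c": float(root.findtext("tmpr")),
        "time": root.findtext("time"),
        "device": "{0}/devices/edf-monitor".format(BASE_URI),            # rule 3: link to the device
        "previous": "{0}/readings/{1}".format(BASE_URI, reading_id - 1), # rule 3: link to related data
    })


print(xml_to_linked_json(SAMPLE_XML, 42))
```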

Please take time to watch the talk, and let me know your views on how this impacts the Internet of Things.

Klik n play

As a child, I remember having a neighbour come over to ours and start showing me and my brother this new ‘game’ he’d just created on his computer. He wanted to test it out on our computer as it was a bit faster than his, being a 486 rather than his 386, with a copious 4MB of RAM.

The game he’d created was a simple platform game made in a program called ‘Klik n Play’. Later evolving into The Games Factory, Klik n Play turned out to be my first introduction to programming, and a really obvious way of showing how complex the relationships become when you add yet another item to your game.

My brother couldn’t get enough of it, even leading to a conversation with Dad about whether he could buy the pro licence to get his game released commercially. Unfortunately for him, the sensible decision was made and the licence didn’t get bought. It was pretty obvious in the early 90s that games programming was taking off in a big way, but the industry had just released Doom, and though The Games Factory was good at platform games, our little cottage platform games probably wouldn’t cut it commercially.

Moving forward to why I find this relevant today: the mobile apps market. With the wonderful Ionic Framework and Google’s Places API, I was able to create a simple ‘find the nearest pub’ app in a couple of hours. Sure, it’s not the highest-fidelity app you’ll ever see, but thanks to open source and open data it functionally does what I need it to do.
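The app itself is Ionic/JavaScript, but the data call behind it really is that simple. As a rough illustration, here’s a Python sketch of the sort of Places Nearby Search request that does the heavy lifting; the parameters (and the use of the ‘bar’ type, since there’s no dedicated ‘pub’ type) are my assumptions, so check Google’s documentation:

```python
import requests  # assumes the requests library is installed

PLACES_URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"


def nearest_pubs(lat, lng, api_key, radius_m=1500):
    """Ask the Places Nearby Search API for pubs/bars around a point."""
    params = {
        "location": "{0},{1}".format(lat, lng),
        "radius": radius_m,
        "type": "bar",      # no dedicated 'pub' type, so 'bar' plus a keyword
        "keyword": "pub",
        "key": api_key,
    }
    data = requests.get(PLACES_URL, params=params, timeout=10).json()
    return [(p["name"], p.get("vicinity", "")) for p in data.get("results", [])]


# Usage (hypothetical coordinates and key):
# for name, address in nearest_pubs(51.5014, -0.1419, "MY_API_KEY"):
#     print(name, "-", address)
```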

So here’s the rub. A whole ecosystem of digital agencies has built up around app creation and the central role of individual ownership of these apps. However, the real value that the holders of these apps can offer the world is not the app itself (the app being just a prescribed interface onto whatever data the company wants to make public), but opening up the data via simple APIs in order to realise Tim Berners-Lee’s vision of proper Linked Data.

It’s not always a simple proposition, as consideration needs to be given to how competitors could use your data. But whether others build on that data is an indicator of how valuable it is, and if your business model is to sit on data and not release it, your business probably needs to rethink the model.

python-social-auth + Office365

I’ve been playing with a little side-project over the last couple of days, and wanted to try to enable Office365 login for a third-party site. Does Office365 offer OAuth services?

If they don’t, it looks like a little project I might pick up myself: writing the appropriate backend to tie in with python-social-auth.

EDIT: It looks like Office365 requires you to log in with a (paid) dev account and hit:

The process looks the same as the LiveOAuth2 backend, so that is possibly 99% of the way towards implementing Office365 connectivity as well. Hopefully I’ll be able to register a dev account tomorrow and check this out.
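To give an idea of the shape of the work, here’s a very rough sketch of what an Office365 backend modelled on the shipped LiveOAuth2 backend might look like; the endpoint URLs and profile fields are assumptions I’d need to verify against Microsoft’s documentation:

```python
# Rough sketch of a custom python-social-auth backend, modelled on the
# bundled LiveOAuth2 backend. The Microsoft endpoint URLs and the shape of
# the profile response are assumptions, not verified against the docs.
from social.backends.oauth import BaseOAuth2


class Office365OAuth2(BaseOAuth2):
    """Hypothetical Office365 / Azure AD OAuth2 backend."""
    name = 'office365'
    AUTHORIZATION_URL = 'https://login.microsoftonline.com/common/oauth2/authorize'
    ACCESS_TOKEN_URL = 'https://login.microsoftonline.com/common/oauth2/token'
    ACCESS_TOKEN_METHOD = 'POST'

    def get_user_details(self, response):
        """Map the provider's profile response onto python-social-auth's fields."""
        return {
            'username': response.get('userPrincipalName', ''),
            'email': response.get('mail') or response.get('userPrincipalName', ''),
            'first_name': response.get('givenName', ''),
            'last_name': response.get('surname', ''),
        }

    def user_data(self, access_token, *args, **kwargs):
        """Fetch the signed-in user's profile (the endpoint here is an assumption)."""
        return self.get_json(
            'https://graph.microsoft.com/v1.0/me',
            headers={'Authorization': 'Bearer {0}'.format(access_token)},
        )
```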

Bike Upgrade

After taking a tumble on my bike the other week, I decided to look at the options for upgrading my brake calipers. I’ve got a Trek 1.5 H2 2012, bought as Olympic fever was gripping Britain, and ridden far more miles than any other bike I have owned before.

The options on the Trek 1.5 are limited, because it requires ‘long reach’ brake calipers (which leave plenty of room for mudguards). I ended up settling on the Shimano R650s: not too expensive at ~£28 each, and they arrived within a couple of days from the wonderful Wiggle.


Needless to say, they were a cinch to fit, and after a few minor adjustments to the brake shoes and centering of the brakes, they have made a massive difference.

With my previous stock brakes, I’d replaced the shoes a few times and maintained them by checking whether they were centered, but the consistency of braking with the new calipers is much closer to the brakes on my Dad’s Madone 3.1 with its SRAM setup. I can ‘feel’ the road a lot better; I even tried locking them up on the wet road on my way into work this morning, and was able to feel my way far better than before.

That said, it may all be psychological, and the old calipers may simply have worn from thousands of miles of wear and tear. Do let me know if you’ve performed a similar upgrade, or have recommendations on what I should upgrade next. By all accounts, the wheels look like the next upgrade, but that’s probably a birthday present to myself in March, once the weather has improved.

The power of data to reduce my electricity bills

Over the past 18 months, I’ve kept a record, pretty much every minute, of my electricity usage. I bought an EDF energy monitor, then used the data cable to link it up to my home PC to catalogue the data it produces.
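For anyone curious about the cataloguing step, here’s a stripped-down sketch of the kind of loop involved. The device path, baud rate and XML element names are assumptions based on my own monitor, and I’ve used SQLite here to keep the example self-contained (my real setup writes to MySQL):

```python
# Minimal sketch of cataloguing readings from the monitor's data cable.
# The device path, baud rate and XML element names are assumptions based on
# my own setup -- check the documentation for your monitor.
import sqlite3
import xml.etree.ElementTree as ET

import serial  # pyserial

DB = sqlite3.connect("electricity.db")
DB.execute("CREATE TABLE IF NOT EXISTS readings "
           "(ts DATETIME DEFAULT CURRENT_TIMESTAMP, watts INTEGER, raw_xml TEXT)")

with serial.Serial("/dev/ttyUSB0", 57600, timeout=90) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line.startswith("<msg>"):
            continue  # skip partial or non-reading messages
        watts = int(ET.fromstring(line).findtext("ch1/watts", default="0"))
        DB.execute("INSERT INTO readings (watts, raw_xml) VALUES (?, ?)", (watts, line))
        DB.commit()
```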

Though there’s an overhead in using some electricity to find out what my electricity consumption figures are, the results have been quite exciting. I’ve yet to really dig deep into the statistics, but I have been using Google Charts to play around with the numbers and work out which combinations use the least electricity in my house.


I have the advantage that the entire house runs off electricity, and with only a few hours of downtime in the last 18 months, I’ve now got a useful dataset to play with. I’m now writing a simple API to pull the data out in a way that makes sense to me, and will be consuming that data with Google Charts and D3.js to do year-on-year comparisons and more granular analysis.
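The API doesn’t need to be anything clever. Here’s a minimal sketch of the sort of endpoint I have in mind, using Flask with a hypothetical daily-totals query; the route and field names aren’t final, and the table matches the cataloguing sketch above:

```python
# Minimal sketch of a read-only endpoint for the charts. Flask and the
# daily-totals query are illustrative choices, not the final design.
import sqlite3

from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/api/daily-usage/<int:year>")
def daily_usage(year):
    """Return total recorded watts per day for one year, for year-on-year charts."""
    db = sqlite3.connect("electricity.db")
    rows = db.execute(
        "SELECT date(ts) AS day, SUM(watts) AS total_watts "
        "FROM readings WHERE strftime('%Y', ts) = ? "
        "GROUP BY day ORDER BY day",
        (str(year),),
    ).fetchall()
    db.close()
    return jsonify({
        "year": year,
        "days": [{"day": day, "total_watts": total} for day, total in rows],
    })


if __name__ == "__main__":
    app.run(debug=True)
```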

So far this year though, even with the simple graph I have here at , we’ve managed to save approximately 15% on our electricity bills by fine-tuning the water heater and underfloor heating to behave in the most energy-efficient way. I’d love to take this further and start developing on top of this.

If anyone’s interested in helping to consume the data, please get in touch and I’ll give you access to a simple API. If you also have an EDF energy monitor, I’ll pass on instructions to get the program set up to start cataloguing your own home electricity usage.