Linked Data

March 1st, 2015 by andylockran

A few weeks ago I wrote about how I’d linked up my EDF Energy Monitor to save my home’s minute-by-minute electricity usage.  It’s been a useful little project, and has saved me a fair bit of cash by making sure we stayed on top of our electricity consumption.

I recently came across a community of like-minded people who share their data through ‘topics’ on the site, publishing unstructured data for others to consume.  Whilst my electricity usage isn’t necessarily the most informative of datasets, it does enable others to see the data structures of the EDF Energy Monitor and write their software accordingly.

This has led me to a bit of angst over a question I believe the advocates of NoSQL have already grappled with: is my data, if unstructured, actually useful?

In my view, Tim Berners-Lee gave a bit of an underwhelming talk on taking the web to the next level.  Underwhelming in the sense that it’s simple, yet it seemed to contain some top-down logic: implementing it would require a documented structure for each type of data.  The three rules are:

1) We need to use a URI for a single product/event.
2) If I look up a URI, I will get data back in a standard format.
3) When I get that information, it has relationships to other data, expressed via their own URIs.

Taking my ElectricityUsage topic as an example, I’ll work out how far I am compatible with TBL’s vision.

1) Yes. The device itself has a unique URI, and using the API it’s possible to generate a unique URI for each event (albeit using query strings).

2) Yes. The data should be returned in a standard format.  This is the bit that confused me: I thought a standard format would be a prescriptive format that explained the structure of the response.  However, I now realise that this would be fairly impossible to police and manage.  Therefore, as long as the format of the data is a standard one (e.g. JSON, XML), the structure itself is a completely different beast.

3) No – this is the bit that I need to work on.  For my original app, the output of the sensor (XML) was saved to a MySQL database table, and I then wrote another API to get data out of that table and into JSON for my Angular JavaScript app.  To be more useful to others with the same hardware, I should probably upload the raw XML instead, so that the data structures follow a de facto standard rather than my own parsing of the XML.  That said, the raw XML also doesn’t contain any URIs to other related data, so making this change would still leave point 3 unmet.
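To make rule 3 concrete, here’s a minimal sketch, loosely in the spirit of JSON-LD, of what a single reading could look like once it carries URIs to related data. All URIs and field names here are hypothetical, not anything my app or the monitor actually emits:

```python
def to_linked_reading(device_id, timestamp, watts):
    """Wrap a raw sensor reading in a linked-data-style structure."""
    base = "http://example.com/devices"  # hypothetical base URI
    return {
        # Rule 1: a unique URI for this single event.
        "@id": f"{base}/{device_id}/readings/{timestamp}",
        "watts": watts,
        "recordedAt": timestamp,
        # Rule 3: relationships to other data, expressed as URIs.
        "device": f"{base}/{device_id}",
        "previous": f"{base}/{device_id}/readings?before={timestamp}",
    }

reading = to_linked_reading("edf-monitor-1", "2015-03-01T12:00:00Z", 420)
print(reading["@id"])
```

A consumer that receives this JSON can follow the `device` or `previous` URIs to discover related data, which is exactly the property the raw XML lacks.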

Please take time to watch the talk, and let me know your views on how this impacts the internet of things.

Klik n play

February 26th, 2015 by andylockran

As a child, I remember a neighbour coming over to ours and showing me and my brother this new ‘game’ he’d just created on his computer. He wanted to test it out on our computer, as it was a bit faster than his: a 486 rather than his 386, with a copious 4MB of RAM.

The game he’d created was a simple platform game built in a program called ‘Klik n Play’. Later evolving into The Games Factory, Klik n Play turned out to be my first introduction to programming, and a really obvious way of showing how complex the relationships can become when you add yet another item to your game.

My brother couldn’t get enough of it, even leading to a conversation with Dad about whether he could buy the pro licence to get his game released commercially. Unfortunately for him, the sensible decision was made and the licence didn’t get bought. It was pretty obvious in the early 90s that games programming was taking off in a big way, and though The Games Factory was good at platform games, Doom had just been released and our little cottage platform games probably wouldn’t cut it commercially.

Moving forward to why I find this relevant today: the mobile apps market.  With the wonderful Ionic Framework and Google’s Places API, I was able to create a simple ‘find the nearest pub’ app in a couple of hours. Sure, it’s not the highest-fidelity app you’ll ever see – but thanks to open source and open data, functionally it does what I need it to do.
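For the curious, the core logic of such an app is tiny. Here’s a sketch of the distance-sort at its heart – the pub names and coordinates below are made up, and a real app would pull the list from the Places API rather than hard-coding it:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    to_rad = math.radians
    dlat = to_rad(lat2 - lat1)
    dlon = to_rad(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(to_rad(lat1)) * math.cos(to_rad(lat2))
         * math.sin(dlon / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearest(me, places):
    """Return the place closest to `me` (a (lat, lon) tuple)."""
    return min(places, key=lambda p: haversine_km(*me, *p["loc"]))

# Invented sample data standing in for a Places API response.
pubs = [
    {"name": "The Crown", "loc": (51.5155, -0.0922)},
    {"name": "The Anchor", "loc": (51.5074, -0.0999)},
]
print(nearest((51.5080, -0.1000), pubs)["name"])  # The Anchor
```

Everything else in the app is plumbing: fetching the list, rendering it, and handing the winner to a map view.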

So here’s the rub. A whole ecosystem of digital agencies has built up around app creation and the central role of individual ownership of these apps.  However, the real value that the holders of these apps can offer the world is not the app itself (a prescribed interface onto the data that said company wants to make public) – but opening up the data via simple APIs, in order to realise Tim Berners-Lee’s vision of proper LinkedData.

It’s not always a simple proposition, as consideration needs to be given to how competitors could use your data. But whether others build on your data is an indicator of how valuable it is; and if your business model is to sit on data and not release it, your business probably needs to rethink the model.

python-social-auth + Office365

January 7th, 2015 by andylockran

I’ve been playing with a little side project over the last couple of days, and wanted to try to enable Office365 login for a third-party site. Does Office365 offer OAuth services?

If they don’t, it looks like a little project I might pick up myself: writing the appropriate backend to tie in with python-social-auth.

EDIT: It looks like Office365 requires you to log in with a (paid) dev account and hit:

The process looks the same as the LiveOAuth2 backend, so that is possibly 99% of the way towards implementing Office365 connectivity too.  Hopefully I’ll be able to register a dev account tomorrow and check this out.
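For reference, the first leg of any OAuth2 authorization-code flow – which any such backend has to implement – looks roughly like this. The endpoint and client values below are placeholders for illustration, not real Office365 values:

```python
from urllib.parse import urlencode

def authorization_url(authorize_endpoint, client_id, redirect_uri, scope):
    """Build the URL the user is redirected to in the first OAuth2 leg."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",  # ask for an authorization code
        "scope": scope,
    }
    return f"{authorize_endpoint}?{urlencode(params)}"

url = authorization_url(
    "https://login.example.com/oauth2/authorize",  # placeholder endpoint
    "my-client-id",
    "https://myapp.example/callback",
    "openid email",
)
print(url)
```

The provider then redirects back to `redirect_uri` with a `code`, which the backend exchanges server-side for an access token – the part python-social-auth’s base OAuth2 machinery handles for you once the endpoints are configured.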

Bike Upgrade

November 26th, 2014 by andylockran

After taking a tumble on my bike the other week, I decided to look at the options for upgrading my brake calipers. I’ve got a Trek 1.5 H2 2012, bought as Olympic fever was gripping Britain, and ridden far more miles than any other bike I have owned before.

The options on the Trek 1.5 are limited, because it requires ‘long reach’ brake calipers (which do leave plenty of room for mudguards). I ended up settling on the Shimano R650s – not too expensive at ~£28 each, and they arrived within a couple of days from the wonderful Wiggle.


Needless to say, they were a cinch to fit, and after making a few minor adjustments to the brake shoes and centering the brakes, they have made a massive difference.

With my previous stock brakes, I’d replaced the shoes a few times and maintained the brakes by checking that they were centered – but the consistency of braking with the new calipers is much more like the brakes on my Dad’s Madone 3.1 with its SRAM setup. I can ‘feel’ the road a lot better, and even attempted a lock-up on my way into work this morning on a wet road – but was able to feel my way far better than before.

That said, it may all be psychological, and the old calipers may simply have worn from thousands of miles of wear and tear. Do let me know if you’ve performed a similar upgrade, or have recommendations on what I should upgrade next. By all accounts, the wheels will be the next upgrade – but that’s probably a birthday present to myself in March, once the weather has improved.

The power of data to reduce my electricity bills

November 24th, 2014 by andylockran

Over the past 18 months, I’ve kept a record, pretty much every minute, of my electricity usage. I bought an EDF energy monitor, then used the data cable to link it up to my home PC to catalogue the data it’s been producing.

Though there’s an overhead in using some electricity to find out what my electricity consumption figures are, the results have been quite exciting. I’ve yet to really dig deep into the statistics, but I have been using Google Charts to play around with the stats and work out which combinations use the least electricity in my house.


I have the advantage that the entire house runs off electricity, and with only a few hours of downtime in the last 18 months, I’ve now got a useful dataset to play with. I’m now writing a simple API to pull the data out in a way that makes sense to me, and will be consuming that data with Google Charts and D3JS to do year-on-year comparisons & more granular data analysis.
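Since aggregation is the core of the API I’m writing, here’s a minimal sketch of the kind of roll-up it will do: turning minutely wattage readings into daily kWh totals ready for a charting library. The sample readings below are invented for illustration:

```python
from collections import defaultdict

def daily_kwh(readings):
    """readings: iterable of (ISO timestamp, watts) taken once a minute.

    A one-minute reading at W watts contributes W/60 watt-hours,
    i.e. W/60/1000 kWh, to that day's total.
    """
    totals = defaultdict(float)
    for timestamp, watts in readings:
        day = timestamp[:10]              # 'YYYY-MM-DD'
        totals[day] += watts / 60 / 1000  # watt-minutes -> kWh
    return dict(totals)

# Invented sample readings, one per minute.
sample = [
    ("2014-11-24T17:54:00", 600),
    ("2014-11-24T17:55:00", 1200),
    ("2014-11-25T08:00:00", 3000),
]
print(daily_kwh(sample))
```

The same loop, keyed on month or hour-of-day instead of date, gives the year-on-year and time-of-day comparisons mentioned above.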

So far this year though, even with the simple graph I have here, we’ve managed to save approx. 15% on our electricity bills by fine-tuning the water heater and underfloor heating to behave in the most energy-efficient way. I’d love to take this further and start developing on top of it.

If anyone’s interested in helping consume the data, please get in touch and I’ll give you access to a simple API. If you also have an EDF energy meter, I’ll pass on instructions to get the program set up to start cataloguing your own home electricity usage.

Critique this poster.

October 31st, 2014 by andylockran


In the comments, please help find as many reasons or anomalies as you can that show why this is poor advice (and then please reply and critique the critiquing).

Centralising HealthData Storage

October 28th, 2014 by andylockran

Today the guys over at Google released their latest parry against Apple’s dominance by launching Google Fit.  I’m glad they’ve made this move to combat the perceived potential monopoly of Apple’s HealthKit, as it seems that many people are getting excited about the benefits of a centralised record of a person’s physical activities.

My favourite fitness app – Strava

As you’ll see from the little widget on the side of the blog, I am a big fan of Strava.  Yes, it’s got a bit of a bad reputation for male ‘KOMs’ (Kings of the Mountains) trying to better each other on dangerous stretches of road, and is a little macho in its purpose – but the real benefit for me is being able to chart my progress over the time I’ve been using it.  Only in the last few weeks have my commutes started becoming ‘achievement free’ – meaning either that I’m no longer improving my performance on a weekly basis simply by chugging away on my commute, or that I’ve hit the ‘safe’ limit for speeding around central London through rush hour.

Strava & Veloviewer – a match made in heaven.

It’s neat, though, because I came across a tool called Veloviewer, which can interact with the Strava API, pull out all my ride data, and then provide better analysis and graphing of my progress.  Want to see my performance on a certain segment over the past two years? Sure.  It even starts by mirroring the privacy settings of what I’ve kept on Strava.  Perfect.

Twitter & Twitpic – the lesson.

However, they’re developing against a moving target.  There’s nothing stopping Strava from seeing how Veloviewer uses the Strava API, copying its efforts, and then closing the API down*.  Twitter’s API during its boom years was a major incentive for companies like TweetDeck, Twitpic and Instagram to grab onto Twitter’s shirt-tails and use the API to boost their own revenues; however, when Twitter decided to expand and compete directly with them, it was easy to shut down the API and make the more advanced functionality a ‘premium’ service.

So, Google Fit.

What Google Fit does is bring that fitness data together under one, independent* source – which should allow a greater level of trust and compatibility between apps like Strava and Veloviewer.  Garmin recently saw the light at the end of the tunnel and opened up their ‘Garmin Connect’ software to sync data in and out of some key mobile fitness apps – but separating the data from the devices has got to be the ultimate aim in usability.  Sites like Veloviewer can leverage the Google Fit REST API, whereas Android apps can use the Android interface to get data in and out of the datastore.
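To illustrate the kind of thing a REST consumer like Veloviewer would do with such a datastore, here’s a small sketch. The response shape below is invented for illustration only – it is not the real Google Fit API schema:

```python
def total_steps(dataset):
    """Sum the values across a list of fitness data points."""
    return sum(point["value"] for point in dataset["points"])

# Stand-in for JSON fetched from a Fit-style REST endpoint
# (hypothetical field names, not the real API's).
response = {
    "dataType": "step_count",
    "points": [
        {"start": "2014-10-28T08:00:00Z", "value": 4200},
        {"start": "2014-10-28T12:00:00Z", "value": 3100},
    ],
}
print(total_steps(response))  # 7300
```

The point is the architecture, not the arithmetic: any third party that can fetch the data points can build its own analysis on top, without the device vendor’s blessing.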

What’s the problem?

The problem is that it’s another piece of fairly personal data that I’m giving to Google.  Now, Google give me a load of stuff for ‘free’ – and in return I get neat functionality.  Want to track whether my girlfriend is going to beat me home to make dinner? Yep; check.  I can do it.  The problem I’ve got is the ‘visibility’ of what Google store on me.

When Google Latitude first came out, it was immediately obvious where the data points were gathered from, and though the interfaces were pretty ugly, you could navigate around and quite easily see things like your location history.  The difference in the most modern iteration is that the ‘gateway’ for signing yourself up to this data collection exercise is no longer contextualised.  You buy an Android phone and turn on location services (otherwise people actually have to use orienteering skills to use Google Maps, rather than just follow the blue dot!), and boom!  Now, Google did react quite positively to concerns about this, so when you google ‘Location History’ you can find the relevant link – but many people are still freaked out by this.

So, when an interface like Google Fit comes along, it scares me a bit.



It’s so bloody simple that it doesn’t really tell me what’s being stored.  At the moment I’ll put this down to ‘freshness’, and hopefully Google will take the same path they did with location history – but seeing my data stash (and being able to import/export from it too) is key functionality that I expect to be developed, either by a third party using the Google API or by Google themselves.

The positive is that they’ve published a paragraph called ‘Responsible use of Google Fit’ – so at least they’re giving it plenty of thought from the off.

I look forward to seeing the direction this takes, and hopefully more apps like Veloviewer will be able to grow based on today’s announcement.  It’ll be interesting to see how Google manage to compete with Apple’s HealthKit – with Apple clearly taking the initiative on NFC (despite being years late to the party) by proactively curating a network of corporate agreements to get their payment systems integrated with the new Apple Pay, versus Google passively waiting for contactless, and then NFC, to take off organically.

*I have no idea what the incentives of the Strava developers are re: veloviewer, but at the moment it’s all looking rosy from the outside.

Cycling – The joy of braking

October 27th, 2014 by andylockran

Cycling around the lanes of East Sussex just south of Tunbridge Wells last weekend, I had a bit of a tumble. It wasn’t a wild crash, but due to the unfortunate mix of a newly serviced bike and a few wet leaves on the ground, my confidence in the grip of my tyres and the efficacy of my brakes turned out to be a little too high. I was cycling from Brightling to Burwash when my overconfidence led to a 10-metre skid, which left me dangling upside down in a hedge after flying straight over the top of my handlebars. Luckily there were no cars coming the other way, and three lovely motorists stopped to make sure I was OK.

I’ve loved my Trek 1.5 since getting it as a birthday present back in March 2012. A sleek white racer, it was the first ‘road bike’ I’ve owned. I did once borrow my Granddad’s Dawes road bike for a few months back in 2003, but technology appears to have improved massively since the 70s. I love the flat, I love going up hills, but sometimes going downhill can feel a little... hairy.

About 14 months ago I was training for the Etape Pennines, up north of Buxton, Derbyshire. I’d spent quite a bit of my youth near Edale and Mam Tor, and with my sister at university in Buxton, I figured it would be a great place to get some hill training in before the etape. It was – but cycling down Winnats Pass left me feeling pretty uncomfortable about descending at speed. Whenever I applied the brakes, it simply felt like the bike was squirming and bending underneath me; not wanting to follow the direction I was guiding it in, but behaving like an angry bull trying to throw its rider. It was not a pleasant experience.

A few weeks later I was doing the BHF’s Heart of England ride with my family, and asked my Dad if I could borrow his Madone 3.1 for it. A carbon-fibre bike, it was definitely lighter than mine, but I didn’t expect the braking to be so much more comfortable. The Madone comes with a mix of 105 and SRAM components, and it was an absolute dream to ride. Gone was the horrible binary braking I’d experienced on my 1.5; suddenly I felt in control and far more confident about how the bike would behave at speed downhill.

I begrudgingly gave the Madone back to Dad once I’d finished the etape (3 months later, but who’s counting...) – and since then I have been looking around at other bikes to work out whether it’s time to take the plunge and upgrade from the 1.5, or whether some small componentry upgrades would ‘fix’ my problem. It’s taken a while, but for now I’ve decided to upgrade the non-branded brake calipers on the bike to some simple Tiagra ones. I have no frame of reference as to the difference they may make – but I hope to feed back what I find out.

Technical Sales

January 11th, 2014 by andylockran

Disclaimer: Forgive me for the ramble.  I started this blog post on quite a separate topic, and in doing so found myself on quite a different train of thought.  If you do end up staying with me to the end, I’d really appreciate your comments, so I can start to formulate a more cohesive account of what I’m trying to say.

One of the most challenging jobs, I reckon, is technical sales. I’ve been involved in quite a few software businesses where the salesperson has to have a full grasp not just of the company’s immediate offering, but of how it may strategically evolve to become the right choice for the customer.

Ironically, there is a parallel here with the car industry. Companies like Lucas used to develop car headlamps (their premium models branded “King of the Road”) and other specialist car parts. I have a vague recollection of being on holiday in the west of France with my family when the Vauxhall Frontera was relaunched with a fundamentally different rear suspension system, dramatically improving the car. From the AA:

“Semi-elliptic rear leaf springs gave these early models an unrefined ride, but things changed for the better in April 1995 with the introduction of a coil-sprung rear axle, plus better brakes and improved rear door opening with lifting glass.”

The real crux here is that when there is a major leap forward in technology or innovation, there is a transitional period where the customers who don’t understand the technology need to have a technical salesperson explain the benefits. Once the technology is bedded in, there is less of a need for the technical sales as the technology has become, if not ubiquitous, at least understood in the realm in which it is used.

For digital communications, this same phenomenon appears to have occurred. In the late 90s and early 00s digital was about having an online presence; it then became the focus of communications experts to align the online presence with the offline marketing plans. Following on from this we’ve seen the social media boom, and now marketing departments are starting to understand what social networks are, their advantages and drawbacks, and aligning all their messaging accordingly.

It seems to me that the next jump is going to be another technical one. With the imminent introduction of browser-to-browser communication, what are the innovations or restrictions that such technology may carry? Having recently read (and commented on) @Documentally’s piece on the ‘Perfect Prison’, what could the internet look like in another 5 years?

In the last 200 years, the majority of the Western world has been fortunate. We’ve been able to align the progress of time with what feels like improvement. This story, perpetuated by the media and by ideas such as “Moore’s Law”, has made us believe that through the simple passage of time things will get better. However, could the real story be that, as a society, we are starting to regress?

I am a huge fan of Hans Rosling, though his talks on the wealth of nations in comparison to life expectancy over the last 200 years align more with the first story than the latter.  In them, however, you’ll see that anomalies which don’t match the overall story are ignored.  In the disclaimer there’s also the admission that, due to the sheer volume and scarcity of data, some of it has been ‘normalised’ and ‘interpolated’ to enable its use in the chart.

One of the most intriguing articles I read at the end of 2013 (it was actually published in Feb 2012) was a piece against TED.  I’ve always had a soft spot for TED.  I used to spend afternoons at college with a friend pinging each other TED talks.  They were a ‘cool’ glimpse into what would be possible in the future: intelligent role models taking time to share their ideas in a way that we could easily digest.  The most intriguing part of the criticism, for me, was the following paragraph:

At TED, “everyone is Steve Jobs” and every idea is treated like an iPad. The conferences have come to resemble religious meetings and the TED talks techno-spiritual sermons, pushing an evangelical, cultish attitude toward “the new ideas that will change the world.” Everything becomes “magical” and “inspirational.” In just the top-ten most-viewed TED talks, we get the messages of “inspiration,” “astonishment,” “insight,” “mathmagic” and the “thrilling potential of SixthSense technology”! The ideas most popular are those that pander to a metaphysical, magical portrayal of the role of technology in the world.

Technology is what we make of it.  As a technologist myself, I’m sometimes held in awe by my friends when they come round and see that I’ve got my heating system graphing hourly electricity usage, and that I can set my alarm in the morning not to bother turning the hot water on, because I know I’ll be showering at work after cycling in.  This isn’t mystical, nor is it a ‘great leap forward’ – it’s actually using five-year-old technology in a way that the original inventor did not intend.

Behind all the technology we’re currently using is an inventor who has set the technology up and is manipulating it in some way.  Sometimes it’s obvious, and we are fully aware of the manipulation and carry on; other times it’s more subtle.  I think the big change over the next few years will be algorithms that are used not simply to manipulate, but to identify where this manipulation fails and find ways of making it work.  We will be made redundant from our roles as technical salespeople, as people come to think they understand how the technology works and can make the decisions themselves.  But with the oversimplification of technologies so that ‘everyone can understand them’ comes a price.  The price, in this case, is the freedom to choose.

Hooking the user

January 10th, 2014 by andylockran

Last year, I directed the concept development of a rowing app that has gone on to gain a reasonably sized organic following. Its success was very much unpredicted, but post-rationalising, and having read Nir Eyal’s ‘Hooked’ (on the Hook Model), I noticed that we had ended up incorporating quite a few design patterns from the book.

Hopefully we’ll get the budget to develop the app further, as there are plenty more ideas from the book that we could incorporate. The slideshow below is a nice summary of the concepts, but I recommend supporting Nir and getting the book on Amazon.