Evil geniuses and world domination are 2 of our goals... we also like Dr Who

Archive for the ‘ Programming ’ Category

Selfietorium version 1 enclosure


Version 1 selfietorium enclosure presented by titaniumshed R&D engineer Alan Hingley

Titaniumshed has now produced an initial version of the selfietorium enclosure.

Making the magic work


You’re a wizard, Harry… if only you could just get continuous integration to work

I’ve been focusing my efforts on the selfietorium lately and in particular how to combine all the various support systems with GitHub, where the source is stored. This blog post details making the magic work: to get Continuous Integration, build, package and release uploads working.

Continuous Integration is the foundation from which the other support services will hang.  There’s no point in performing code analysis on code that doesn’t build or pass its tests.  So, let’s get started.

In previous projects – like Snail Tales – we have created Jenkins installs and created build scripts for all of this to work.  For the selfietorium project we are using Travis-CI.

Selfietorium is a Raspberry Pi based Python project, and there is great support in Travis-CI for Python – some languages such as C# are not 100% supported, so Travis-CI may not be suitable for all uses.  Before you start looking at using Travis-CI for your solution, you should probably check that your language is supported by taking a look at the getting started page in the Travis-CI docs.

Techies amongst you might be thinking

Mike – what are you going to build?  Python is an interpreted language – there is no compiler for Python

And that’s true enough.  I aim to use the Travis-CI build system to run my unit tests (when I write some) and package my Python code into a Debian .deb file to allow easy installation onto a Raspberry Pi.

So let’s get cracking

To start with, you’ll need an account on Travis-CI.  Travis-CI uses GitHub for authentication, so that’s not too difficult to set up – just sign in with your GitHub account.

Now that you have an account, what do you do next?  There are a couple of things you need to do to make your project build: create your project within Travis-CI, and create a .travis.yml file.

The .travis.yml file contains all of the steps to build and process your project, and it can be somewhat complicated.  What is amazingly simple though is setting up a GitHub repository to build.  Travis-CI presents me with all of the repositories that are capable of being built.  From here I picked the TitaniumBunker/Selfietorium repository, and that was pretty much it.

Picking which repository to build is probably the simplest part of this set up.

Once your repository is set up it needs to be configured – the docs are an absolute must here. There is no IDE to manage your configuration – all that stands between build success and multiple frustrating build failures is you and your ability to write a decent .travis.yml file.

Nothing will build until you next push something to your GitHub repository.  Push something to your repository and Travis-CI will spring into life, and potentially fail with an error, probably looking something like this:

Worker information
hostname: ip-10-12-2-57:94955ffd-d111-46f9-ae1e-934bb94a5b20
version: v2.5.0-8-g19ea9c2 https://github.com/travis-ci/worker/tree/19ea9c20425c78100500c7cc935892b47024922c
instance: ad8e75d:travis:ruby
startup: 653.84368ms
Could not find .travis.yml, using standard configuration.
Build system information
Build language: ruby
Build group: stable
Build dist: precise
Build id: 185930222
Job id: 185930223
travis-build version: 7cac7d393
Build image provisioning date and time
Thu Feb  5 15:09:33 UTC 2015
Operating System Details
Distributor ID:	Ubuntu
Description:	Ubuntu 12.04.5 LTS
Release:	12.04
Codename:	precise

There’s a lot of cruft in there – but the lines that are interesting are:

  • version – The version line hints that the Travis-CI worker code is on GitHub.  It is.
  • Could not find .travis.yml, using standard configuration. – The build fails to find a .travis.yml file and defaults to building a Ruby project.
  • Description: Ubuntu 12.04.5 LTS – the build workers seem to be Ubuntu based.
  • Cookbooks Version a68419e – Travis cookbooks are used with chef to set up workers

.travis.yml file

The .travis.yml file is effectively a script that runs through the build lifecycle. The Customizing the Build page says that a build is made up of 2 main steps:

  1. install: install any dependencies required
  2. script: run the build script

These 2 sections are essential for any .travis.yml file. There can be more than just these 2 sections, and the Customizing the Build page details a whole bunch of extra steps that can be added to your .travis.yml file.

The .travis.yml file for selfietorium looks like this:

dist: trusty
sudo: required

addons:
  sonarqube:
    token:
      secure: '$SONARQUBE_API_KEY'

language: python

python:
 - "2.7"

before_install:
  - sudo apt-get -qq update
  - sudo apt-get install -y build-essential devscripts ubuntu-dev-tools debhelper dh-make diffutils patch cdbs
  - sudo apt-get install -y dh-python python-all python-setuptools python3-all python3-setuptools
  - sudo apt-get install -y python-cairo python-lxml python-rsvg python-twitter

install: true

script:
   - sonar-scanner
   - sudo dpkg-buildpackage -us -uc

before_deploy:
  cp ../python-selfietorium_1_all.deb python-selfietorium_1_all.deb

deploy:
  provider: releases
  api_key: '$GITHUB_API_KEY'
  file: 'python-selfietorium_1_all.deb'
  skip_cleanup: true
  on:
    branch: master
    tags: true

So let’s break down what this script does.

dist: trusty
sudo: required

addons:
  sonarqube:
    token:
      secure: '$SONARQUBE_API_KEY'

language: python

python:
 - "2.7"

This section is the prerequisites section. It tells Travis-CI that the worker that is going to run this script should be an Ubuntu 14.04 LTS (Trusty Tahr) based machine.  Travis-CI will build on either a virtual machine environment (with sudo enabled), or a container – which is, I believe, based on Docker.  The issue with Docker is that while it takes seconds to provision a container-based environment, it currently doesn’t have sudo available to it, meaning that performing activities using sudo (for example, installing build dependencies) is not possible in a container-based environment.  The Travis blog does state that:

If you require sudo, for instance to install Ubuntu packages, a workaround is to use precompiled binaries, uploading them to S3 and downloading them as part of your build, installing them into a non-root directory.

Now I still have some work to do around dependency resolution – I think it is possible to trim the number of dependencies right down. At the moment the build system installs all of the runtime dependencies which might potentially be overkill for the packaging – however they still might be needed for unit testing. Further work is required to look into that. If these dependencies can be removed, then the build could potentially be done in a container, speeding up the whole process. I can almost hear the other fellow techies…

But Mike, why don’t you use Python distutils, and use pypi to install your dependencies?

A fair question.  Using pypi would mean that I could potentially install the dependencies without needing sudo access – the issue is that python-rsvg doesn’t seem to be available on pypi, and only seems to be available as a Linux package.
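One way to start that dependency audit is simply to probe which modules can be imported at all, and therefore which must come from distro packages – a minimal sketch (the module names are just illustrative):

```python
import importlib

def has_module(name):
    """Return True if the named module can be imported in this environment."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

# rsvg comes from the python-rsvg distro package rather than PyPI;
# lxml and cairo are examples of modules that do have PyPI releases
for module in ("rsvg", "lxml", "cairo"):
    print(module, "available" if has_module(module) else "missing")
```

Running this on a fresh virtualenv after a pip install of the candidate packages would show exactly which dependencies still force an apt-get.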

In this section I’m also telling Travis-CI that I would like to use SonarQube to perform analysis on the solution [1] and that the solution language is Python 2.7.  I think the general opinion of developers out there is:

Use Python 3 – because Python 2 is the old way, and Python 3 is the newest

I’d like to use the new and shiny Python 3, but I fear that there may be libraries that I am using that have no Python 3 implementation – and that fear has led me back into the warm embrace that is Python 2.7.  I plan to perform an audit and determine whether the project can be ported to Python 3.
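A cheap first step in that audit would be to write the Python 2.7 code in a forward-compatible style, so a later port is mostly mechanical – a small sketch:

```python
# These __future__ imports make Python 2.7 behave like Python 3 for the
# big incompatibilities: print as a function, true division, and unicode
# string literals.  They are harmless no-ops under Python 3 itself.
from __future__ import print_function, division, unicode_literals

print("7 / 2 =", 7 / 2)  # true division under both interpreters: 3.5
```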

before_install:
  - sudo apt-get -qq update
  - sudo apt-get install -y build-essential devscripts ubuntu-dev-tools debhelper dh-make diffutils patch cdbs
  - sudo apt-get install -y dh-python python-all python-setuptools python3-all python3-setuptools
  - sudo apt-get install -y python-cairo python-lxml python-rsvg python-twitter

In this section I am installing the various Linux packages required to perform the build.  These are standard commands for installing packages onto an environment.

install: true

And here the install step deliberately does nothing – true is a shell no-op that always succeeds.  As I said at the top of this article, there is no compile step for a Python program.

script:
   - sonar-scanner
   - sudo dpkg-buildpackage -us -uc

Right about here is where I would be running any unit tests – but I don’t have any yet.  This script then sends the code to SonarQube – a topic for a future post – and then calls dpkg-buildpackage to create the binary package.  At the end of this step we have a deb file that could potentially be deployed.

before_deploy:
  cp ../python-selfietorium_1_all.deb python-selfietorium_1_all.deb

dpkg-buildpackage writes the .deb into the parent directory, so before I deploy it I copy the generated file into the current working directory.

deploy:
  provider: releases
  api_key: '$GITHUB_API_KEY'
  file: 'python-selfietorium_1_all.deb'
  skip_cleanup: true
  on:
    branch: master
    tags: true

Finally, we deploy the file.  The provider: releases line tells Travis-CI to use the GitHub Releases provider to push the build artifact to a GitHub release.

It uses a secret API key to gain access to the project releases.  The file is the name of the generated file, and skip_cleanup prevents Travis-CI from resetting the working directory and deleting all changes made during the build.  The on section controls when files are deployed: with this setting, only tagged commits on the master branch trigger the deployment.  GitHub releases are actually tags, so creating a release creates a tag on a branch.  For selfietorium we create releases on the master branch.  The release deployment then pushes the build artifact to GitHub, effectively offering it as the binary file for that release tag.

Keeping Secrets.

In order for Travis-CI to upload your build artifact, it needs to identify itself, and to do that we create a Personal Access Token.  Using this token, the GitHub Releases provider can communicate with GitHub as if it were you.  We can’t just add the GitHub token to our .travis.yml file.  Well, I suppose we could, but then we shouldn’t be surprised if other files start appearing in our releases.  The .travis.yml file is publicly available on GitHub – so we need a way of storing the token securely, and injecting it into the script during execution.  Travis-CI offers the ability to store environment variables within a project.  These variables are hidden when the log files are produced if you clear the ‘Display value in build log’ check box.  To use that variable in your .travis.yml file you’d refer to it by name, like this:

  api_key: '$GITHUB_API_KEY'


Travis-CI Environment Variables

Grabbing a Status badge

Within the Travis-CI project settings screen, clicking the status badge offers the ability to generate suitable markdown for GitHub.

Adding a spiffy status badge to your GitHub ReadMe.md markdown could not be easier

Quick Recap.

So what we’ve done so far is:

  • Configured Travis-CI to build when we push to the repository.
    • Eventually this will allow for unit tests to be run – but at the moment there are no unit tests for selfietorium.
  • Configured Travis-CI to package the application into a .deb file when a release is created.
    • Releases are effectively tags within a git repository.
  • Configured Travis-CI to deploy our build artifact back to GitHub using a Personal Access Token.
    • The Personal Access Token is securely stored in a Travis-CI environment variable.
  • We’ve created some spiffy markdown for a status badge that we can incorporate into our repository markdown.

Debian file built using Travis-CI and deployed to GitHub

Here’s what it looks like when you run the installer under Ubuntu:

Selfietorium installation through Ubuntu Software Centre

In a forthcoming, as-yet-unwritten post, I’ll document how to set up the packaging folder so that selfietorium is installable and executable on a Linux system.  It will probably borrow quite heavily from this page I wrote about content packaging.

Useful tools

Travis WebLint – check your Travis configuration for obvious errors.


  • [1] I’ll return to that in a later post when I’ll talk about Continuous Inspection

WCF… Windows Calamity Framework? something like that


I’m currently doing research and revision for my 70-487 exam – Developing Microsoft Azure and Web Services.  I was doing some reading and I encountered some information about hosting WCF Data Services and OData.  It wasn’t something I had encountered so far – so it has been an interesting and exciting prospect to look into.

I found a wonderful step by step tutorial into creating and hosting a WCF Data Service on MSDN, so I went through it.

First problem: I need some data.  I’m quite getting into LocalDB at the moment – I’m thinking about the possibility of creating a developer database through migration scripts – so that a developer could clone a GitHub repo and run the project.  The database would be automatically created for them, and populated with sample data (if appropriate) – meaning that developers could run this project without a dependency on a database server or fancy storage like that.

I managed to get the SQL scripts for Northwind from CodePlex – but with almost everything that Microsoft does these days being on GitHub, going back to CodePlex seemed old and outdated.

I followed through the instructions and ran the project.

OData seems to be very similar to REST – except that the URLs used are representative of the entity structure (rather than hiding behind controllers), and query-like operations can be passed through to the server – giving maximum flexibility in terms of usage.  So – anyway… I ran my project from Visual Studio 2015.
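To illustrate that URL-as-entity-structure idea, here is a hypothetical sketch of the kinds of URLs involved (the service address is from the tutorial – the port will differ per machine – and ‘ALFKI’ is a classic Northwind customer key):

```python
# Base address of the WCF Data Service from the MSDN walkthrough
base = "http://localhost:50739/NorthwindCustomers.svc"

customers = base + "/Customers"              # the whole entity set
one_customer = base + "/Customers('ALFKI')"  # one entity, addressed by key
# Query operations are passed straight through to the server as options
germans = base + "/Customers?$filter=Country eq 'Germany'&$top=10"

print(customers)
print(one_customer)
print(germans)
```

Compare that with a typical REST API, where the route shape is whatever the controller author chose; here the entity model itself dictates the addressing scheme.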

Metadata from my WCF Data Service

Yay – I have metadata about my service.  The next step was to view the contents of the Customers service by subtly altering the URL to add Customers to the end.

And then this happened:

WCF Data Service 2

Customers can’t be downloaded?

A pop-up from Edge saying that Customers couldn’t be downloaded?  That can’t be right.  Let’s have a look at the same thing in Chrome.
WCF Data Service 3

Loading the Same OData service on Chrome reveals lots of lovely data.

So what’s going on here – my service works fine under Chrome but fails under Edge?  Only Edge?  How about IE?
WCF Data Service 4

Accessing the OData service in IE does present data – which IE interprets as an RSS feed.

It does seem to show the data under Internet Explorer – so it just appears to be the Edge browser which is causing the problem.  Next up – let’s open the Network tab and see what’s shaking.
WCF Data Service 5

Requesting the OData Service is stuck at Pending

So – notice that the result is still pending?  In comparison, navigating to http://localhost:50739/NorthwindCustomers.svc/ returns the following:
WCF Data Service 6

Accessing the OData Metadata on MS Edge does seem to return (response code 200)

Currently I’m working under the theory that Edge just doesn’t understand an element of the communication received.  Given that the same service is being used for all browsers, the issue must come down to how Edge interprets some header received from the server.
Next up: I’ll record the headers that are returned from the service and see if I can determine a difference between the browsers – until I learn more, though, I’ll have to work under the assumption that Edge just won’t work with this stuff.
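Once those headers are captured, comparing them is simple enough – a minimal sketch with made-up header values (real captures would come from each browser’s network tab):

```python
def header_diff(a, b):
    """Return {header: (value_in_a, value_in_b)} for every header that differs."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in sorted(keys) if a.get(k) != b.get(k)}

# Illustrative captures only - not what the Northwind service actually sends
chrome = {"Content-Type": "application/atom+xml;charset=utf-8",
          "DataServiceVersion": "2.0;"}
edge = {"Content-Type": "application/atom+xml;charset=utf-8"}
print(header_diff(chrome, edge))
```

Any header that shows up in the diff is a candidate for why one browser renders the feed and the other refuses it.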

Unit testing Javascript


In previous posts I talked about unit testing an MVC controller for a home project I am working on.  A couple of days ago I started trying to get JavaScript unit testing implemented within the build system so that I could unit test any JavaScript that I included in my project.  
Read more..

Unit Testing your controllers


Thomas thought that testing the controller would be a good thing.

I’m working on a home project at the moment – it’s an MVC/Entity Framework based project, and I have been stumped for the past 3 weeks on something – How can I test it?

Read more..

Project: Snail Tales – DONE!


Yes, finally, after nearly 3 years of development and work, the Snail Tales project is finished. I had actually finished it late last year but decided to get Christmas and New Year out of the way before releasing it to Snail Tales.

Here’s the finished film:

I will be collating all the character and background files and creating a public repository for them.

On paper it seems an awfully long time to make a piece of animation. But as well as the games I made as part of my job, I moved house, got engaged, and had to learn how to use Synfig and get S-Cargo and the continuous integration system working.

I recorded my presentation at OggCamp late last year – I will upload that shortly.  In the meantime, here’s the presentation I did the year before, detailing how Synfig Stage and continuous integration will work:

Will the real OWIN please stand up!

Not OWIN, rather sportswear and beat poetry enthusiast Mr Slim Shady

I’ve recently become singularly interested in unit testing and continuous integration.  I’ve set up a project and I now take great delight in the system automatically building and telling me that my tests were no good, and that the project is unstable.  It’s good because it drives you to make sure that your code changes don’t break existing functionality – and that’s only possible if there are repeatable and technically inexpensive tests that can be executed when code is checked in.

This project is a .NET MVC based project – but also has a rather interesting REST based interface, allowing potential integration from any number of clients.  I have tests for the controller, and I can sort of test the REST interface (by calling the controller in a Nunit test) – however I recently started thinking that I don’t really have a way to test that REST interface from a Javascript client perspective in the same way that I would expect to test it from a controller perspective.

I suppose what I wanted to do was to start a web service process when I start my testing, and tear it down at the end of testing.  The point here is that testing can occur before deployment, protecting the environment from errors or changes in functionality that now break unit tests.
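As an aside, that same stand-up-then-tear-down pattern can be sketched in Python with nothing but the standard library – not OWIN, obviously, just the shape of the idea (the /api/values route and “ok” body are placeholders):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class FakeApi(BaseHTTPRequestHandler):
    def do_GET(self):
        # Stand-in for a real controller action
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, fmt, *args):
        pass  # keep test output quiet

# Port 0 asks the OS for any free port, so parallel tests never collide
server = HTTPServer(("127.0.0.1", 0), FakeApi)
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()
try:
    url = "http://127.0.0.1:%d/api/values" % server.server_address[1]
    body = urlopen(url).read()
finally:
    server.shutdown()  # tear the service down at the end of testing

print(body)
```

The test owns the whole lifetime of the service, which is exactly the property I’m after: nothing has to be deployed anywhere before the REST surface can be exercised.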

Cassini might be a possibility, and another option might be OWIN.

OWIN (Open Web Interface for .NET) – and in particular Microsoft.Owin.Testing.TestServer allows a web server to be instantiated in code.  This means that a unit test can create an instance of a web application and then perform an activity against that executing web site.

As part of my experiments, I wrote some tests for a project I am working on – the App-Utility-Store.

Web Services

I wrote a simple Values controller API which exposes a simple REST interface.  My idea was to test this using a simple REST client.  To accomplish this I wrote a test class which first stands up an OWIN server, then performs a call against that server.


        [Test]
        public void testRestCall()
        {
            const int port = 8086;
            using (WebApp.Start<APIHost>("http://localhost:" + port))
            {
                var client = new HttpClient { BaseAddress = new Uri("http://localhost:" + port) };
                var response = client.GetAsync("/api/Values").Result;
                var body = response.Content.ReadAsStringAsync().Result;
                Assert.IsTrue(response.IsSuccessStatusCode);
            }
        }

So what is this code doing?

This code creates a web server running on port 8086, and it uses a class called APIHost to configure it.

    public class APIHost
    {
        public void Configuration(IAppBuilder app)
        {
            // Configure Web API for self-host.
            var config = new HttpConfiguration();
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { controller = "API", id = RouteParameter.Optional }
            );
            app.UseWebApi(config);
        }
    }


Once the server is up we create an HttpClient and perform a GET request to the Values controller.  Debugging the API controller confirmed that the website was indeed firing.  Adding OWIN testing to a project can be done through NuGet’s Microsoft.Owin.Testing package.

I hate to say I told you so… but….


Well – like the title says: I hate to say I told you so, but I did so tell you so.  Way back in the Volkswagen problems post, I suggested that the ground work was probably under way to attempt to divert responsibility for the software issue away from the board and towards software engineers.

Artur’s first point about software group size could – if I were more cynical – be an attempt to create a narrative around this.  Something along the lines of “It was a few rogue programmers that released this code”


Michael Horn, Volkswagen’s US boss, said to US Congress:

My understanding is that it was a couple of software engineers who put these in

I’m really concerned with Volkswagen – with the quality of their processes. According to Michael Horn, 3 people were able to get software onto millions of cars worldwide with no quality or compliance checks? 3 people?

The assertion that the board had no knowledge of this seems to suggest that the board had no idea of what was going on in their own company – so are they actually admitting that the board was incompetent?  This seems like deflection – particularly if the reports from CNBC that the board were informed in 2007 and 2011 by Bosch, and their own technicians are to be believed.

A worst-case scenario for Volkswagen would be a steady drip of new revelations. And, indeed, new reports published by several German newspapers, including the weekend Frankfurter Allgemeine Sonntagszeitung, indicate the Volkswagen AG supervisory board was warned of the diesel cheating scam by both a key supplier and some of the company’s own engineers.

A letter dated 2007 shows that the automotive mega-supplier Bosch pointed to illegal modifications to its control software, the reports said.

And VW’s own technicians flagged the issue for the automaker’s board in 2011, they said.

I also think that it’s troubling that the potential fix for this is the installation of a urea treatment tank (on certain models).  So I think this was driven by a manufacturing hardware decision – it’s certainly cheaper to manufacture the same car for the European and US markets – and to get it through the tests a software patch was needed.  The blame will therefore fall on the last person involved – which will be the software department, rather than the originator of this scheme.

I think the point I’m trying to make here is that there is more than software at fault – so with that in mind I’m going to suggest that Volkswagen start moving away from cars, and instead work on public transport infrastructure.  Here’s a Bus design idea that I really think that Volkswagen should attempt to implement

Suggestion for the new Volkswagen Bus

At least there wouldn’t be the amount of carnage that I suspect there will be when Volkswagen start throwing people under the bus.

Jenkins Upload class


I have been working on the Java code for uploading a video to YouTube, and I have the following video demonstrating it in action:



So.. Google service accounts can’t access YouTube? oAuth-ful


Previously I have been working on using Jenkins to build a video file, and decided that I would need to investigate the ability to push the resulting video file from the build process to YouTube, allowing the continuous build process to make the results available for viewing.  A quick trip to the Google Developer Console led to a page detailing the YouTube Data API.  Looking at the opening paragraph, it certainly seems to offer the ability we’re after.


Add YouTube features to your application, including the ability to upload videos, create and manage playlists, and more.


So – let’s go through uploading a video via a script. A page discussing the upload video functionality can be found here, and the code can be downloaded from GitHub.  My first thoughts were to implement this as a Python script – after all, it’s the same mechanism that we use to build the film in the first instance – so let’s give it a whirl.

Installing the Client Library

I’m developing on Ubuntu, so I’ve become accustomed to installing most of my applications with apt-get, and I’ve written in the past about the benefits of something like the Software Centre.  So I was a bit disappointed to see that the instructions offered no option to install the library from the Software Centre – especially considering that Ubuntu is/was Google’s desktop of choice.  Anyway, the preferred option was to use pip, so I’d better install pip.

sudo apt-get install python-pip

With that installed I was able to carry on looking at the Python samples, but to do that I’d need to satisfy the other dependencies for the Client Library – primarily a Google account, and setting up a project.  I already had a Google account – in fact I had a couple of accounts – so the first of those requirements was already fulfilled, and to be honest I don’t think that creating a Google account requires a write-up here, but if you need one there’s a video here.

Creating a Google Account / Application

The sample code page says that the samples use the Google APIs Client Library for Python.  Creating a project or script that interacts with a Google API requires a developer to create a credential for that application within the Google Developers Console.  This means that Google has the opportunity to see which application is sending the request to Google services, and to provide a monetization capability.  Requests to the Google services are limited, and large-scale users will end up burning through their daily allowance.  This allowance is not insubstantial – the YouTube API allows 50,000,000 units/day, limited to 3,000 requests/second/user.  Not all requests are priced equally:

  • A simple read operation costs 1 unit
  • A write operation costs approximately 50 units
  • A video upload costs approximately 1,600 units

These charges are approximate, as the pricing is based on the number of ‘units’ returned – a search could return a number of units per item.

Google suggest that the following operations would be achievable within the 50,000,000 units/day threshold.

  • 1,000,000 read operations that each return two resource parts.
  • 50,000 write operations and 450,000 additional read operations that each retrieve two resource parts.
  • 2000 video uploads, 7000 write operations, and 200,000 read operations that each retrieve three resource parts.
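Plugging the approximate unit costs above into Google’s last scenario – treating a three-resource-part read as roughly 3 units, which is my assumption rather than a published figure:

```python
QUOTA = 50_000_000  # units/day

cost = (
    2_000 * 1_600    # video uploads at ~1,600 units each
    + 7_000 * 50     # write operations at ~50 units each
    + 200_000 * 3    # reads retrieving three resource parts (assumed ~3 units)
)
print(cost, "units -", "within quota" if cost <= QUOTA else "over quota")
```

That lands well under the daily quota, which suggests the real per-part read pricing is higher than my flat assumption – the point stands either way: uploads dominate the bill.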

Google supports a number of different authentication styles, and there are 2 main types: Public API access and OAuth.  On the face of it the best option seems to be Public API access, as it allows a service to communicate with the server without user interaction – but service accounts are not permitted to log into YouTube, so I’ll have to use an OAuth account.  The way that OAuth accounts work is as follows:

  • The application loads data from client_secrets.json – which allows the client application to identify itself to the Google authentication services – so Google now knows which application is calling.
  • The user is presented with a browser – either directly, by launching a URL, or by being instructed on the command line to visit a particular site.
  • The user then confirms that the application is allowed to access their YouTube account.
  • Google sends back an authorisation token.
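At the end of that flow the authorisation token is usually cached to disk, so the browser dance only happens once per user – a minimal sketch (the filename and token fields here are placeholders, not the library’s real storage format):

```python
import json
import os

TOKEN_PATH = "youtube-oauth2.json"  # arbitrary cache location

def save_token(token, path=TOKEN_PATH):
    """Persist the authorisation token that Google sends back."""
    with open(path, "w") as fh:
        json.dump(token, fh)

def load_token(path=TOKEN_PATH):
    """Return the cached token, or None if the user must authorise again."""
    if not os.path.exists(path):
        return None
    with open(path) as fh:
        return json.load(fh)

# Dummy values for illustration only
save_token({"access_token": "dummy-access", "refresh_token": "dummy-refresh"})
print(load_token())
```

It is exactly this cached file (along with client_secrets.json) that causes the distribution headache described below: whoever holds it can act as the account.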

This is all well and good for services that have a user front end – what I need is to do this in a system that runs on a back end, and potentially on a system that isn’t necessarily the one running the code (for example, through a client web browser).  There are difficulties related to storing and distributing these secrets in the current S-Cargo project.  Putting the client_secrets.json into the project would be difficult, as any application would then be able to masquerade as the Jenkins Video Upload application.  Storing the OAuth token would also be an issue, as anyone would theoretically be able to upload to my YouTube account.  Ideally I would have placeholders into which your YouTube OAuth files could be copied – but that could prove problematic.  Pulling the latest code from GitHub would build, but wouldn’t deploy to YouTube without replacing these placeholders with real data.  If the upload returned a fail status code, then the jobs would always fail.  If the placeholders were replaced from GitHub (and they might be), then it would make setting up a new project more difficult.

What needs to happen is that the deployment needs to be separated from the build process.  This could be accomplished through creating a separate build job – a deployment job and running that on the basis of a successful build – however I made the decision that it might be better to create a Jenkins plugin.

You can find my current efforts here.