Titaniumshed has now produced an initial version of the selfietorium enclosure.
I’ve been focusing my efforts on the selfietorium lately and in particular how to combine all the various support systems with GitHub, where the source is stored. This blog post details making the magic work: to get Continuous Integration, build, package and release uploads working.
Continuous Integration is the foundation from which the other support services will hang. There’s no point in performing code analysis on code that doesn’t build or pass its tests. So, let’s get started.
Selfietorium is a Raspberry Pi based Python project, and there is great support in Travis-CI for Python. Some languages, such as C#, are not 100% supported, so Travis-CI may not be suitable for all uses. Before you start looking at using Travis-CI for your solution, you should check that your language is supported by taking a look at the getting started page in the Travis-CI docs.
Techies amongst you might be thinking
Mike – what are you going to build? Python is an interpreted language – there is no compiler for Python
And that’s true enough. I aim to use the Travis-CI build system to run my unit tests (when I write some) and package my Python code into a Debian .deb file to allow easy installation onto a Raspberry Pi.
So let’s get cracking
To start with, you’ll need an account on Travis-CI. Travis-CI uses GitHub for authentication, so that’s not too difficult to set up – just sign in with your GitHub account.
Now that you have an account, what next? There are two things you need to do to make your project build: create your project within Travis-CI, and create a .travis.yml file.
The .travis.yml file contains all of the steps to build and process your project, and it can be somewhat complicated. What is amazingly simple though is setting up a GitHub repository to build. Travis-CI presents me with all of the repositories that are capable of being built. From here I picked the TitaniumBunker/Selfietorium repository, and that was pretty much it.
Once your repository is set up it needs to be configured – the docs are an absolute must here. There is no IDE to manage your configuration – all that stands between build success and multiple frustrating build failures is you and your ability to write a decent .travis.yml file.
Nothing will build until you next push something to your GitHub repository. Push something to your repository and Travis-CI will spring into life, and potentially fail with an error, probably looking something like this:
```
Worker information
hostname: ip-10-12-2-57:94955ffd-d111-46f9-ae1e-934bb94a5b20
version: v2.5.0-8-g19ea9c2 https://github.com/travis-ci/worker/tree/19ea9c20425c78100500c7cc935892b47024922c
instance: ad8e75d:travis:ruby
startup: 653.84368ms
Could not find .travis.yml, using standard configuration.
Build system information
Build language: ruby
Build group: stable
Build dist: precise
Build id: 185930222
Job id: 185930223
travis-build version: 7cac7d393
Build image provisioning date and time
Thu Feb 5 15:09:33 UTC 2015
Operating System Details
Distributor ID: Ubuntu
Description:    Ubuntu 12.04.5 LTS
Release:        12.04
Codename:       precise
...
```
There’s a lot of cruft in there – but the lines that are interesting are:
- version – The version line hints that the Travis-CI worker code is on GitHub. It is.
- Could not find .travis.yml, using standard configuration. – The build fails to find a .travis.yml file and defaults to building a Ruby project.
- Description: Ubuntu 12.04.5 LTS – the build workers seem to be Ubuntu based.
- Cookbooks Version a68419e – Travis cookbooks are used with Chef to set up the workers.
The .travis.yml file is effectively a script that is executed over the build lifecycle. The Customizing the Build page says that a build is made up of two main steps:
- install: install any dependencies required
- script: run the build script
These two sections are essential for any .travis.yml file. There can be more than just these two, and the Customizing the Build page details a whole bunch of extra steps that can be added to your .travis.yml file.
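As a sketch only (not the file this project uses – that follows below), a minimal Python .travis.yml exercising just those two phases might look like:

```yaml
language: python
python:
  - "2.7"
install:
  - pip install -r requirements.txt   # hypothetical requirements file
script:
  - python -m unittest discover
```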
```yaml
dist: trusty
sudo: required
addons:
  sonarqube:
    token:
      secure: '$SONARQUBE_API_KEY'
language: python
python:
  - "2.7"
before_install:
  - sudo apt-get -qq update
  - sudo apt-get install -y build-essential devscripts ubuntu-dev-tools debhelper dh-make diffutils patch cdbs
  - sudo apt-get install -y dh-python python-all python-setuptools python3-all python3-setuptools
  - sudo apt-get install -y python-cairo python-lxml python-rsvg python-twitter
install: true
script:
  - sonar-scanner
  - sudo dpkg-buildpackage -us -uc
before_deploy: cp ../python-selfietorium_1_all.deb python-selfietorium_1_all.deb
deploy:
  provider: releases
  api_key: '$GITHUB_API_KEY'
  file: 'python-selfietorium_1_all.deb'
  skip_cleanup: true
  on:
    branch: master
    tags: true
```
So let’s break down what this script does.
```yaml
dist: trusty
sudo: required
addons:
  sonarqube:
    token:
      secure: '$SONARQUBE_API_KEY'
language: python
python:
  - "2.7"
```
This section is the pre-requisites section. It tells Travis-CI that the worker that runs this script should be an Ubuntu 14.04 LTS (Trusty Tahr) based machine. Travis-CI will build on either a virtual machine environment (with sudo enabled) or a container, which I believe is based on Docker. The issue with Docker is that while it takes seconds to provision a container-based environment, it currently doesn't have sudo available, meaning that activities requiring sudo (for example, installing build dependencies) are not possible in a container-based environment. The Travis blog does state that:
If you require sudo, for instance to install Ubuntu packages, a workaround is to use precompiled binaries, uploading them to S3 and downloading them as part of your build, installing them into a non-root directory.
Now I still have some work to do around dependency resolution – I think it is possible to trim the number of dependencies right down. At the moment the build system installs all of the runtime dependencies which might potentially be overkill for the packaging – however they still might be needed for unit testing. Further work is required to look into that. If these dependencies can be removed, then the build could potentially be done in a container, speeding up the whole process. I can almost hear the other fellow techies…
But Mike, why don’t you use Python distutils, and use pypi to install your dependencies?
A fair question. Using PyPI would mean that I could potentially install the dependencies without needing sudo access – the issue is that python-rsvg doesn't seem to be available on PyPI, and only seems to be available as a Linux package.
In this section I'm also telling Travis-CI that I would like to use SonarQube to perform analysis on the solution, and that the solution language is Python 2.7. I think the general opinion of developers out there is:
Use Python 3 – because Python 2 is the old way, and Python 3 is the newest
I’d like to use the new and shiny Python 3, but I fear that there may be libraries that I am using that have no Python 3 implementation – and that fear has led me back into the warm embrace that is Python 2.7. I plan to perform an audit and determine whether the project can be ported to Python 3.
```yaml
before_install:
  - sudo apt-get -qq update
  - sudo apt-get install -y build-essential devscripts ubuntu-dev-tools debhelper dh-make diffutils patch cdbs
  - sudo apt-get install -y dh-python python-all python-setuptools python3-all python3-setuptools
  - sudo apt-get install -y python-cairo python-lxml python-rsvg python-twitter
```
In this section I am installing the various Linux packages required to perform the build. These are standard commands for installing packages onto an environment.
And here the install step does nothing (install: true). As I said at the top of this article, there is no compile step for Python programs.
```yaml
script:
  - sonar-scanner
  - sudo dpkg-buildpackage -us -uc
```
Right about here is where I would be running any unit tests – but I don’t have any yet. This script then sends the code to SonarQube – a topic for a future post – and then calls dpkg-buildpackage to create the binary package. At the end of this step we have a deb file that could potentially be deployed.
```yaml
before_deploy: cp ../python-selfietorium_1_all.deb python-selfietorium_1_all.deb
```
Before I deploy the deb file I need it in the current working directory, so I copy the generated deb file from the parent directory (where dpkg-buildpackage places it) into the working directory.
```yaml
deploy:
  provider: releases
  api_key: '$GITHUB_API_KEY'
  file: 'python-selfietorium_1_all.deb'
  skip_cleanup: true
  on:
    branch: master
    tags: true
```
It uses a secret API key to gain access to the project's releases. file is the name of the generated file, and skip_cleanup prevents Travis-CI from resetting the working directory and deleting all changes made during the build. The on section controls when files are deployed: with these settings, only tagged commits on the master branch trigger the deployment. GitHub releases are actually tags – creating a release creates a tag on a branch – and for selfietorium we create releases on the master branch. The release deployment then pushes the build artifact to GitHub, attaching it as the binary file for that release tag.
In order for Travis-CI to upload your build artifact, it needs to identify itself, and to do that we create a Personal Access Token. Using this token, the GitHub Releases provider can communicate with GitHub as if it were you. We can't just add the GitHub token to our .travis.yml file – well, I suppose we could, but then we shouldn't be surprised if other files start appearing in our releases, because the .travis.yml file is publicly available on GitHub. So we need a way of storing the token securely and injecting it into the script during execution. Travis-CI offers the ability to store environment variables within a project; these variables are hidden from the produced log files if you clear the 'display value in build log' check box. To use such a variable in your .travis.yml file you'd refer to it like this:
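A sketch – assuming a secure environment variable named GITHUB_API_KEY has been defined in the Travis-CI project settings – the variable is referenced by name, shell-style, wherever the value is needed:

```yaml
deploy:
  provider: releases
  # Injected from the Travis-CI project settings at build time –
  # the value never appears in the repository or (if hidden) the logs.
  api_key: '$GITHUB_API_KEY'
```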
Grabbing a Status badge
Within the Travis-CI project settings screen, clicking the status badge offers the ability to generate suitable markdown for GitHub.
So what we’ve done so far is:
- Configured Travis-CI to build when we push to the repository.
- Eventually this will allow for unit tests to be run – but at the moment there are no unit tests for selfietorium.
- Configured Travis-CI to package the application into a .deb file when a release is created.
- Releases are effectively tags within a git repository.
- Configured Travis-CI to deploy our build artifact back to GitHub using a Personal Access Token.
- Personal Access Token is securely stored in a Travis-CI environment variable.
- We’ve created some spiffy markdown for a status badge that we can incorporate into our repository markdown.
In a forthcoming, as-yet-unwritten post, I'll document how to set up the packaging folder so that selfietorium is installable and executable on a Linux system. It will probably borrow quite heavily from this page I wrote about content packaging.
Travis WebLint – check your Travis configuration for obvious errors.
I'll return to that in a later post, when I talk about Continuous Inspection.
I'm currently doing research and revision for my 70-487 exam – Developing Microsoft Azure and Web Services. I was doing some reading and I encountered some information about hosting WCF Data Services and OData. It wasn't something I had encountered so far – so it has been an interesting and exciting prospect to look into.
I found a wonderful step-by-step tutorial on creating and hosting a WCF Data Service on MSDN, so I went through it.
First problem: I need some data. I'm getting quite into LocalDB at the moment – I'm thinking about the possibility of creating a developer database through migration scripts, so that a developer could clone a GitHub repo and run the project. The database would be automatically created for them, and populated with sample data (if appropriate) – meaning that developers could run this project without a dependency on a database server or fancy storage like that.
I followed through the instructions and ran the project.
OData seems to be very similar to REST – except that the URLs used are representative of the entity structure (rather than hiding behind controllers), and query-like operations can be passed through to the server – giving maximum flexibility in terms of usage. So – anyway… I ran my project from Visual Studio 2015.
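As a sketch of those URL conventions – the service root and entity set names here are hypothetical – an OData consumer just composes query options onto the entity path:

```python
from urllib.parse import urlencode

# Hypothetical service root for a WCF Data Service
SERVICE_ROOT = "http://localhost:12345/NorthwindCustomers.svc"

def odata_url(entity_set, **options):
    """Build an OData query URL: entity sets are path segments,
    query operations ($filter, $top, ...) are query-string options."""
    query = urlencode({f"${key}": value for key, value in options.items()}, safe="$")
    return f"{SERVICE_ROOT}/{entity_set}" + (f"?{query}" if query else "")

# The entity set itself...
print(odata_url("Customers"))
# ...and a query evaluated server-side: customers in London, first 5 only
print(odata_url("Customers", filter="City eq 'London'", top=5))
```

The point of the second URL is that the filtering and paging happen on the server, not in the client after the fact.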
Yay – I have metadata about my service. The next step was to view the contents of the Customers service by subtly altering the URL to add Customers to the end.
And then this happened:
I'm working on a home project at the moment – it's an MVC/Entity Framework based project, and I have been stumped for the past 3 weeks on something: how can I test it?
Yes, finally, after nearly 3 years of development and work, the Snail Tales project is finished. I had actually finished it late last year, but decided to get Christmas and New Year out of the way before releasing it to Snail Tales.
Here’s the finished film:
I will be collating all the character and background files and creating a public repository for them.
On paper it seems an awfully long time to make a piece of animation. But as well as the games I made as part of my job, I moved house, got engaged, had to learn how to use Synfig, and got S-Cargo and the continuous integration system working.
I recorded my presentation at OggCamp late last year – I will upload that shortly. In the meantime, here’s the presentation I did the year before, detailing how Synfig Stage and continuous integration will work:
I've recently become singularly interested in unit testing and continuous integration. I've recently set up a project and I now take great delight in the system automatically building and telling me that my tests were no good, and that the project is unstable. It's good because it drives you to make sure that your code changes don't break existing functionality – and that's only possible if there are repeatable and technically inexpensive tests that can be executed when code is checked in.
I suppose what I wanted to do was to start a web service process when I start my testing, and tear it down at the end of testing. The point here is that testing can occur before deployment, protecting the environment from errors or changes in functionality that now break unit tests.
Cassini might be a possibility, and another option might be OWIN.
OWIN (Open Web Interface for .NET) – and in particular Microsoft.Owin.Testing.TestServer allows a web server to be instantiated in code. This means that a unit test can create an instance of a web application and then perform an activity against that executing web site.
As part of my experiments, I wrote some tests for a project I am working on – the App-Utility-Store.
I wrote a Values Controller API which exposes a simple REST interface. My idea was to test this using a simple REST client, so I wrote a test class which first stands up an OWIN server, then performs a call against that server.
So what is this code doing?
This code creates a web server running on port 8086, and it uses a class called APIHost to configure it.
Once the server is up we create an HttpClient and perform a GET request to the Values Controller. Debugging the API controller confirmed that the website was indeed firing. Adding OWIN testing to a project can be done through NuGet's Microsoft.Owin.Testing package.
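The code above is C#, but the pattern itself is language-agnostic: stand up an in-process server, act as the client, assert on the response. A rough Python analogue of the same idea – using only the standard library's wsgiref rather than OWIN, and a hypothetical stand-in for the Values controller – might look like this:

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

def values_app(environ, start_response):
    """A stand-in for the Values controller: GET /api/values returns a fixed body."""
    if environ["PATH_INFO"] == "/api/values":
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b'["value1","value2"]']
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

def test_values_controller():
    # Stand the server up on an OS-assigned free port...
    server = make_server("127.0.0.1", 0, values_app)
    port = server.server_address[1]
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    try:
        # ...then act as the client, as the OWIN TestServer example does.
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/api/values") as resp:
            body = resp.read()
        assert resp.status == 200
        assert body == b'["value1","value2"]'
    finally:
        server.shutdown()

test_values_controller()
print("in-process server test passed")
```

The appeal in both cases is the same: the web stack is exercised end to end without deploying anything, so the test can run on every check-in.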
Well – like the title says: I hate to say I told you so, but I did so tell you so. Way back in the Volkswagen problems post, I suggested that the groundwork was probably under way to divert responsibility for the software issue away from the board and towards software engineers.
Artur’s first point about software group size could – if I were more cynical – be an attempt to create a narrative around this. Something along the lines of “It was a few rogue programmers that released this code”
My understanding is that it was a couple of software engineers who put these in
I'm really concerned with Volkswagen – with the quality of their processes. According to Michael Horn, 3 people were able to get software onto millions of cars worldwide with no quality or compliance checks? 3 people?
The assertion that the board had no knowledge of this seems to suggest that the board had no idea of what was going on in their own company – so are they actually admitting that the board was incompetent? This seems like deflection – particularly if the reports from CNBC that the board were informed in 2007 and 2011 by Bosch, and their own technicians are to be believed.
A worst-case scenario for Volkswagen would be a steady drip of new revelations. And, indeed, new reports published by several German newspapers, including the weekend Frankfurter Allgemeine Sonntagszeitung, indicate the Volkswagen AG supervisory board was warned of the diesel cheating scam by both a key supplier and some of the company's own engineers.
A letter dated 2007 shows that the automotive mega-supplier Bosch pointed to illegal modifications to its control software, the reports said.
And VW’s own technicians flagged the issue for the automaker’s board in 2011, they said.
I also think that it's troubling that the potential fix for this is the installation of a urea treatment tank (on certain models). So I think the decision was made as a manufacturing hardware decision – it's certainly cheaper to manufacture the same car for European and US markets – and to get it through the tests a software patch was needed. The decision will therefore be blamed on the last people involved – which will be the software department, rather than the originator of this scheme.
I think the point I’m trying to make here is that there is more than software at fault – so with that in mind I’m going to suggest that Volkswagen start moving away from cars, and instead work on public transport infrastructure. Here’s a Bus design idea that I really think that Volkswagen should attempt to implement
At least there wouldn’t be the amount of carnage that I suspect there will be when Volkswagen start throwing people under the bus.
I have been working on the Java code for uploading a video to YouTube, and I have the following video demonstrating it in action:
Previously I had been working on using Jenkins to build a video file, and decided that I would need to investigate pushing the resulting video file from the build process to YouTube, allowing the continuous build process to make the results available for viewing. A quick trip to the Google Developer Console led to a page detailing the YouTube Data API. Looking at the opening paragraph, it certainly seems to offer the ability we're after.
Add YouTube features to your application, including the ability to upload videos, create and manage playlists, and more.
So – let's go through uploading a video via a script. A page discussing the upload video functionality can be found here, and the code can be downloaded from GitHub. My first thought was to implement this as a Python script – after all, it's the same mechanism that we use to build the film in the first instance – so let's give it a whirl.
Installing the Client Library
I'm developing on Ubuntu, so I've become accustomed to apt-get installing most of my applications, and I've written in the past about the benefits of something like the software centre. So I was a bit disappointed to see that the instructions offered no option to install the library from the software centre – especially considering that Ubuntu is/was Google's desktop of choice. Anyway, the preferred option was to use pip, so I'd better install pip.
sudo apt-get install python-pip
With that installed I was able to carry on looking at the Python samples, but to do that I'd need to satisfy the other dependencies for the client library – primarily a Google account, and setting up a project. I already had a Google account – in fact I had a couple of accounts – so the first part of those requirements was already fulfilled, and to be honest I don't think that creating a Google account requires a write-up here, but if you need to there's a video here.
Creating a Google Account / Application
The sample code page says that the samples use the Google APIs Client Library for Python, so I needed that. Creating a project or script that interacts with a Google API requires a developer to create a credential for that application within the Google Developers Console. This means that Google has the opportunity to see which application is sending requests to its services, and to provide a monetisation capability. Requests to the Google services are limited, and large-scale users will end up burning through their daily allowance. That allowance is not insubstantial – the YouTube API allows 50,000,000 units/day, limited to 3,000 requests/second/user. Not all requests are priced equally:
- A simple read operation costs 1 unit
- A write operation costs approximately 50 units
- A video upload costs approximately 1,600 units
These charges are approximate, as the pricing is based on the number of 'units' returned – a search result could return a number of units per item.
Google suggest that the following operations would be achievable within the 50,000,000 units/day threshold.
- 1,000,000 read operations that each return two resource parts.
- 50,000 write operations and 450,000 additional read operations that each retrieve two resource parts.
- 2000 video uploads, 7000 write operations, and 200,000 read operations that each retrieve three resource parts.
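With the approximate costs above, budgeting a day's usage is simple arithmetic. Here's a sketch checking that last scenario's workload – the per-read cost is my assumption (roughly 3 units for a read retrieving three resource parts; the real pricing depends on the units actually returned):

```python
# Approximate unit costs from the list above; the read cost is an
# assumption, since Google prices reads by the units returned.
UPLOAD_COST = 1600
WRITE_COST = 50
READ_COST_THREE_PARTS = 3

DAILY_QUOTA = 50_000_000

total = (2000 * UPLOAD_COST                   # 2000 video uploads
         + 7000 * WRITE_COST                  # 7000 write operations
         + 200_000 * READ_COST_THREE_PARTS)   # 200,000 three-part reads

print(total)                 # total units consumed
print(total <= DAILY_QUOTA)  # does the workload fit the daily allowance?
```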
Google supports a number of different authentication styles, and there are two main types: Public API access and OAuth. On the face of it the best option seems to be Public API access, as it allows a service to communicate with the server without user interaction – but service accounts are not permitted to log into YouTube, so I'll have to use an OAuth account. The way that OAuth accounts work is as follows:
- The application loads data from client_secrets.json, which allows the client application to identify itself against the Google authentication services – Google now knows which application is calling.
- The user is presented with a browser – either directly, by launching a URL, or by being instructed on the command line to visit a particular site.
- The user then confirms that the application is allowed to access their YouTube account.
- Google sends back an authorisation token.
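For reference, client_secrets.json is a small JSON file downloaded from the Google Developers Console; it has roughly this shape (all values here are placeholders, not real credentials):

```json
{
  "installed": {
    "client_id": "1234567890-example.apps.googleusercontent.com",
    "client_secret": "PLACEHOLDER_SECRET",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token",
    "redirect_uris": ["urn:ietf:wg:oauth:2.0:oob", "http://localhost"]
  }
}
```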
This is all well and good for services that have a user front end – what I need is to do this in a system that runs on a back end, possibly on a system that isn't the one running the code (for example, behind a client web browser). There are difficulties related to storing and distributing these secrets in the current S-Cargo project. Putting the client secrets into the repository would be unwise, as any application would then be able to masquerade as the Jenkins Video Upload application. Storing the OAuth token would also be an issue, as anyone would theoretically be able to upload to my YouTube account. Ideally I would have placeholders into which your own YouTube OAuth files could be copied – but that could prove problematic. Pulling the latest code from GitHub would build, but wouldn't deploy to YouTube without those placeholders being replaced with real data, so if the upload returned a failure status code the jobs would always fail. And if the placeholders were overwritten on a pull from GitHub (and they might be), it would make setting up a new project more difficult.
What needs to happen is that the deployment needs to be separated from the build process. This could be accomplished by creating a separate deployment job and running it on the basis of a successful build – however, I decided it might be better to create a Jenkins plugin.
You can find my current efforts here.