Who needs to log stuff anyway…I wonder what that other process is?
Hi everyone, I know it's been a while since I last posted, but there are reasons – I will post more, promise.
In the meantime, I was going through some of the draft blog posts I had started but never finished, and came across this one, so I dusted it off and finished it. I don't know whether the proposed legislation has since been implemented or altered. In any case, here it is:
On 16th March 2016, the Chancellor of the Exchequer, George Osborne, unveiled a new budget. Chief among the new policies was a levy on sugary soft drinks – the nation is in the grip of an obesity epidemic, and fizzy, sugary drinks have been identified as the culprit. According to the press, Jamie Oliver did a little victory dance outside the Houses of Parliament as it was announced.
At (real) work, we're planning on using SonarQube to measure code statistics – it's a tool that will tell you whether your variable names match coding standards, whether your code is duplicated, whether it has unused references, and so on. I found out that SonarSource hosts an instance of SonarQube that can be used to analyse open source projects. As selfietorium is an open source project, I signed up.
For those looking for a more private solution, it is possible to run your own SonarQube server – that might be a topic for a future post – but for now I'm going to set up SonarQube.com to analyse selfietorium.
Logging on to SonarQube
SonarQube uses GitHub authentication, so connecting is easy. Once you are logged in you'll need to create a security token: SonarQube works by having code pushed to it – in the previous blog post we used Travis CI to push the code to SonarQube – and to make that happen we need a security token.
To create a security token :
- Once you have authenticated with SonarQube, click your name in the top right of the page and select "My Account".
- From here, click "Security".
- Click Generate to create a token, giving it a suitable name. Once your token is generated, make a note of it – you are going to need it later.
Back in Travis CI, the sonar-scanner line instructs a plug-in to push the code to SonarQube, using the configuration section and a new file that needs to be added to the project root:
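That file is sonar-project.properties. As a minimal sketch – the project key, name, and paths below are illustrative, not copied from the selfietorium repository – it might look like:

```properties
# sonar-project.properties -- minimal scanner configuration (illustrative values)
sonar.projectKey=titaniumbunker:selfietorium
sonar.projectName=Selfietorium
sonar.projectVersion=1.0
# Analyse everything from the project root
sonar.sources=.
```

The scanner picks this file up automatically from the directory it is run in.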
We can now use the token we got from SonarQube to tell Travis-CI what to authenticate itself as. This is stored in the environment variables section, using the techniques I touched on in a previous post, in particular the section on "Keeping Secrets".
Next time you build your project, it will be pushed to SonarQube (along with sonar-project.properties), and analysis will be performed against the code.
SonarQube is a great tool, but it doesn’t give us what we really want – a nice graphic we can add to our project read me – after all, that’s what’s important right?
Like SonarQube, Codacy uses GitHub for authentication. To set up a project for analysis, it's just a case of clicking your project and clicking go – a much simpler setup than SonarQube. Getting that all-important badge is also a breeze: click the Settings button on the dashboard.
From here you can generate markup for different documentation systems, including HTML, rst, and Markdown. Just copy the Markdown, paste it into the appropriate document in your GitHub repository, and you'll get a badge rewarding you for making the code better.
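The generated Markdown is just an image wrapped in a link. With placeholder identifiers – the token, user, and project names below are stand-ins, not real values – it has roughly this shape:

```markdown
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/<project-token>)](https://www.codacy.com/app/<user>/<project>)
```

Paste that into README.md and GitHub renders the current grade as a clickable badge.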
It has been brought to my attention that the initial version of this post incorrectly identified the Titanium Bunker department responsible for the development of the version 1 selfietorium enclosure as Titanium Shed. This was obviously incorrect, as Titanium Shed was the original project code for the department now known as Titanium Workshop. Accordingly, the credit for this work should go to Titanium Workshop.
Titanium Workshop has now produced an initial version of the selfietorium enclosure.
I’ve been focusing my efforts on the selfietorium lately and in particular how to combine all the various support systems with GitHub, where the source is stored. This blog post details making the magic work: to get Continuous Integration, build, package and release uploads working.
Continuous Integration is the foundation on which the other support services hang. There's no point performing code analysis on code that doesn't build or pass its tests. So, let's get started.
Selfietorium is a Raspberry Pi based Python project, and there is great support in Travis-CI for Python – some languages such as C# are not 100% supported, so Travis-CI may not be suitable for all uses. Before you start using Travis-CI for your solution, check that your language is supported by taking a look at the getting started page in the Travis-CI docs.
Techies amongst you might be thinking
Mike – what are you going to build? Python is an interpreted language – there is no compiler for Python
And that’s true enough. I aim to use the Travis-CI build system to run my unit tests (when I write some) and package my Python code into a Debian .deb file to allow easy installation onto a Raspberry Pi.
So let’s get cracking
To start with, you’ll need an account on Travis-CI. Travis-CI uses GitHub for authentication, so that’s not too difficult to set up – just sign in with your GitHub account.
Now that you have an account, what next? There are two things you need to do to make your project build: create your project within Travis-CI, and create a .travis.yml file.
The .travis.yml file contains all of the steps to build and process your project, and it can be somewhat complicated. What is amazingly simple though is setting up a GitHub repository to build. Travis-CI presents me with all of the repositories that are capable of being built. From here I picked the TitaniumBunker/Selfietorium repository, and that was pretty much it.
Once your repository is set up it needs to be configured – the docs are an absolute must here. There is no IDE to manage your configuration – all that stands between build success and multiple frustrating build failures is you and your ability to write a decent .travis.yml file.
Nothing will build until you next push something to your GitHub repository. Push something to your repository and Travis-CI will spring into life, and potentially fail with an error, probably looking something like this:
```
Worker information
hostname: ip-10-12-2-57:94955ffd-d111-46f9-ae1e-934bb94a5b20
version: v2.5.0-8-g19ea9c2 https://github.com/travis-ci/worker/tree/19ea9c20425c78100500c7cc935892b47024922c
instance: ad8e75d:travis:ruby
startup: 653.84368ms
Could not find .travis.yml, using standard configuration.
Build system information
Build language: ruby
Build group: stable
Build dist: precise
Build id: 185930222
Job id: 185930223
travis-build version: 7cac7d393
Build image provisioning date and time
Thu Feb 5 15:09:33 UTC 2015
Operating System Details
Distributor ID: Ubuntu
Description: Ubuntu 12.04.5 LTS
Release: 12.04
Codename: precise
...
```
There’s a lot of cruft in there – but the lines that are interesting are:
- version – The version line hints that the Travis-CI worker code is on GitHub. It is.
- Could not find .travis.yml, using standard configuration. – The build fails to find a .travis.yml file and defaults to building a Ruby project.
- Description: Ubuntu 12.04.5 LTS – the build workers seem to be Ubuntu based.
- Cookbooks Version a68419e – Travis cookbooks are used with chef to set up workers
The .travis.yml file is effectively a script that executes as the build life cycle runs. The Customizing the Build page says that a build is made up of two main steps:
- install: install any dependencies required
- script: run the build script
These two sections are essential for any .travis.yml file. There can be more than just these two, and the Customizing the Build page details a whole bunch of extra steps that can be added to your .travis.yml file.
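Before diving into the full selfietorium configuration below, it helps to see the smallest useful shape of the file. This is a sketch, not taken from the repository:

```yaml
language: python
python:
  - "2.7"
install: true                         # nothing to install for a pure-Python build
script: python -m unittest discover   # run whatever tests exist
```

The real file is considerably fuller, because it also has to handle SonarQube analysis, Debian packaging, and deployment.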
```yaml
dist: trusty
sudo: required
addons:
  sonarqube:
    token:
      secure: '$SONARQUBE_API_KEY'
language: python
python:
  - "2.7"
before_install:
  - sudo apt-get -qq update
  - sudo apt-get install -y build-essential devscripts ubuntu-dev-tools debhelper dh-make diffutils patch cdbs
  - sudo apt-get install -y dh-python python-all python-setuptools python3-all python3-setuptools
  - sudo apt-get install -y python-cairo python-lxml python-rsvg python-twitter
install: true
script:
  - sonar-scanner
  - sudo dpkg-buildpackage -us -uc
before_deploy: cp ../python-selfietorium_1_all.deb python-selfietorium_1_all.deb
deploy:
  provider: releases
  api_key: '$GITHUB_API_KEY'
  file: 'python-selfietorium_1_all.deb'
  skip_cleanup: true
  on:
    branch: master
    tags: true
```
So let’s break down what this script does.
```yaml
dist: trusty
sudo: required
addons:
  sonarqube:
    token:
      secure: '$SONARQUBE_API_KEY'
language: python
python:
  - "2.7"
```
This is the prerequisites section. It tells Travis-CI that the worker running this script should be an Ubuntu 14.04 LTS (Trusty Tahr) machine. Travis-CI will build in either a virtual machine environment (with sudo enabled) or a container – which is, I believe, based on Docker. The issue with the container environment is that while it takes seconds to provision, it currently doesn't have sudo available, meaning that activities requiring sudo (for example, installing build dependencies) are not possible there. The Travis blog does state that:
If you require sudo, for instance to install Ubuntu packages, a workaround is to use precompiled binaries, uploading them to S3 and downloading them as part of your build, installing them into a non-root directory.
Now I still have some work to do around dependency resolution – I think it is possible to trim the number of dependencies right down. At the moment the build system installs all of the runtime dependencies, which is potentially overkill for packaging – however, they might still be needed for unit testing, so further work is required there. If these dependencies can be removed, the build could potentially run in a container, speeding up the whole process. I can almost hear the other fellow techies…
But Mike, why don’t you use Python distutils, and use pypi to install your dependencies?
A fair question. Using PyPI would mean that I could potentially install the dependencies without needing sudo access – the issue is that python-rsvg doesn't seem to be available on PyPI, and only seems to be available as a Linux package.
In this section I'm also telling Travis-CI that I would like SonarQube to perform analysis on the solution, and that the solution language is Python 2.7. I think the general opinion of developers out there is:
Use Python 3 – because Python 2 is the old way, and Python 3 is the newest
I'd like to use the new and shiny Python 3, but I fear that some of the libraries I am using have no Python 3 implementation – and that fear has led me back into the warm embrace of Python 2.7. I plan to perform an audit and determine whether the project can be ported to Python 3.
```yaml
before_install:
  - sudo apt-get -qq update
  - sudo apt-get install -y build-essential devscripts ubuntu-dev-tools debhelper dh-make diffutils patch cdbs
  - sudo apt-get install -y dh-python python-all python-setuptools python3-all python3-setuptools
  - sudo apt-get install -y python-cairo python-lxml python-rsvg python-twitter
```
In this section I am installing the various Linux packages required to perform the build. These are standard commands for installing packages onto an environment.
The install step itself – install: true – does nothing here. As I said at the top of this article, there is no build for Python programs.
```yaml
script:
  - sonar-scanner
  - sudo dpkg-buildpackage -us -uc
```
Right about here is where I would run any unit tests – but I don't have any yet. This step sends the code to SonarQube – a topic for a future post – and then calls dpkg-buildpackage to create the binary package. At the end of this step we have a .deb file that can be deployed.
```yaml
before_deploy: cp ../python-selfietorium_1_all.deb python-selfietorium_1_all.deb
```
Before I deploy the .deb file, I need to copy it from the parent directory, where dpkg-buildpackage left it, into the current working directory.
```yaml
deploy:
  provider: releases
  api_key: '$GITHUB_API_KEY'
  file: 'python-selfietorium_1_all.deb'
  skip_cleanup: true
  on:
    branch: master
    tags: true
```
The deploy section uses a secret API key to gain access to the project's releases. file is the name of the generated artifact, and skip_cleanup prevents Travis-CI from resetting the working directory and deleting all changes made during the build. The on section controls when files are deployed: with this setting, only tagged commits on the master branch trigger the deployment. GitHub releases are actually tags – creating a release creates a tag on a branch, and for selfietorium we create releases on the master branch. The release deployment then pushes the build artifact to GitHub, uploading it as the binary file for that release tag.
In order for Travis-CI to upload your build artifact, it needs to identify itself, and to do that we create a Personal Access Token. Using this token, the GitHub Releases provider can communicate with GitHub as if it were you. We can't just add the GitHub token to our .travis.yml file – well, we could, but then we shouldn't be surprised if other files start appearing in our releases, because the .travis.yml file is publicly available on GitHub. So we need a way of storing the token securely and injecting it into the script during execution. Travis-CI can store environment variables within a project, and these values are hidden in the build logs if you clear the "display value in build log" checkbox. To use such a variable in your .travis.yml file you'd refer to it like this:
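With the secret held in a Travis-CI environment variable, the file itself only ever names the variable. A sketch of the relevant fragment, matching the deploy section shown earlier:

```yaml
deploy:
  provider: releases
  api_key: '$GITHUB_API_KEY'   # resolved at build time from the Travis-CI environment variable
```

The log output shows the variable name, not the token value, so the repository stays safe to publish.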
Grabbing a Status badge
Within the Travis-CI project settings screen, clicking the status badge offers the ability to generate suitable markdown for GitHub.
So what we've done so far is:
- Configured Travis-CI to build when we push to the repository.
  - Eventually this will allow unit tests to be run – but at the moment there are no unit tests for selfietorium.
- Configured Travis-CI to package the application into a .deb file when a release is created.
  - Releases are effectively tags within a git repository.
- Configured Travis-CI to deploy our build artifact back to GitHub using a Personal Access Token.
  - The Personal Access Token is securely stored in a Travis-CI environment variable.
- Created some spiffy markdown for a status badge that we can incorporate into our repository markdown.
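Since a release is, underneath, just an annotated git tag, the tagged deployment can be understood from the command line. A sketch, run here against a throwaway local repository for illustration:

```shell
# A GitHub release is built on a git tag: creating and pushing a tag
# on master is what ultimately triggers the tagged deploy.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial commit"
git tag -a v1.0 -m "First release"
git tag -l
# In a real project you would then run: git push origin v1.0
```

Creating the release through the GitHub web UI does the tagging for you; the effect on the repository is the same.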
In a forthcoming, as-yet-unwritten post, I'll document how to set up the packaging folder so that selfietorium is installable and executable on a Linux system. It will probably borrow quite heavily from a page I wrote about content packaging.
- Travis WebLint – check your Travis configuration for obvious errors.
- I'll return to that in a later post, when I'll talk about Continuous Inspection.
I know it's sad times for Staples UK – I spent many a happy hour in Staples, refreshing my manila folders for my family research – but I can't help thinking it's a little early for it all to start to fall apart.
My office chair – actually bought from Staples only a few years ago – is starting to look its age, and I thought about replacing it. So I clicked the "See all Deals" button under "Big Chair Event" and was presented with a list of manager and executive chairs.
Now I’m not really a manager type – I like to get my hands dirty (in as much as I don’t like to get my hands dirty – that’s why I work with code) so I was thinking about a mesh chair. So I clicked on Mesh Seating :
Also missing are Draughtsman Chairs. Interestingly, I can find a mesh seating section: http://www.staples.co.uk/mesh-seating/cbk/670.html
So what’s happening?
Well – comparing the draughtsman, mesh seating, and ergonomic chairs links against the working links, the culprit seems to be: cm_sp.
For example – here is the failing Mesh Seating link :
And a slightly modified (and now working) mesh seating link :
The highlighted Na-_-Na looks suspiciously like Not Applicable, or potentially “NaN” truncated to fit.
Thanks to Stuart Baldwin for pointing this one out: searching for anything on fightingknives.info breaks the site, returning the message:
A potentially dangerous Request.Path value was detected from the client (&).
Looking at the favicon, it appears to be a DotNetNuke site – wow… that's old – so old that I think this was originally running on the .NET 2 framework.
Anyway – the reason for this is the search URL that the site navigates to when searching:
From the stack trace it seems that this site is now running under .NET Framework v4, and changes in v4 extended request validation from .aspx requests only to all requests.
To ‘fix’ this the site owner can add :
<httpRuntime requestValidationMode="2.0" />
to their web.config file to prevent this from happening – or alter their application pool to use an older .NET framework (should be fine in version 2, and may be fine in 3 and 3.5). I say 'fix' because really they should be looking to update to a newer version, or rewriting their search facility so that potentially dangerous characters aren't passed into their own requests.
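For context, the httpRuntime element sits inside system.web. A minimal sketch of the relevant part of web.config, with the surrounding sections omitted:

```xml
<configuration>
  <system.web>
    <!-- Relax request validation back to the 2.0 behaviour (.aspx requests only) -->
    <httpRuntime requestValidationMode="2.0" />
  </system.web>
</configuration>
```

This is a configuration-level workaround, not a security fix – it simply restores the pre-v4 validation scope.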
My good lady wife and I are currently holidaying on the island of Madeira, and we're having a great time. While out for an evening stroll we spotted these wonderful balancing stones – which I photographed this morning.
And it got me thinking about application architecture. Take this pile of stones.
At first glance it looks pretty cool, right? And it certainly is a great feat of engineering. But it's pretty hard to replace the top layer: put on a layer with a different weight distribution and the whole stack becomes unstable. And the lower down the stack you attempt to replace a layer, the greater the difficulty, as that layer and every layer above it is affected by the change.
From a software point of view what does it mean then?
Well, each layer is built depending on the layer(s) below it. In software terms it would be like the business layer opening and holding a SQL connection and transaction, then making multiple data layer calls using that connection and transaction: the business layer has knowledge of, and a dependency on, the data layer. A better approach would be to work with IDbConnection and IDbTransaction abstractions – but what about a web service layer?
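A rough sketch of the idea in Python (the names here are illustrative, not from the selfietorium code): the business layer depends only on a connect() abstraction, so the concrete data layer can be swapped without restacking every layer above it.

```python
import sqlite3

class SqliteFactory(object):
    """Concrete data layer detail; could be swapped for any other backend."""
    def connect(self):
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE IF NOT EXISTS photos (id INTEGER)")
        return conn

def count_photos(factory):
    """Business layer: knows only that factory has connect(),
    never which database actually sits underneath."""
    conn = factory.connect()
    try:
        return conn.execute("SELECT COUNT(*) FROM photos").fetchone()[0]
    finally:
        conn.close()

print(count_photos(SqliteFactory()))  # prints 0 for an empty table
```

Replacing SQLite with another store only means writing a new factory; count_photos – and everything stacked on top of it – is untouched.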
I’m not an architecture expert, so this is something I’ll have to think about, but I think it might make an interesting article for the internal newsletter at work.