Archive for the ‘ Open Source ’ Category
Yes, finally, after nearly three years of development and work, the Snail Tales project is finished. I had actually finished it late last year, but decided to get Christmas and New Year out of the way before releasing Snail Tales.
Here’s the finished film:
I will be collating all the character and background files and creating a public repository for them.
On paper it seems an awfully long time to make a piece of animation. But as well as the games I made as part of my job, I moved house, got engaged, had to learn how to use Synfig, and had to get S-Cargo and the continuous integration system working.
I recorded my presentation at OggCamp late last year – I will upload that shortly. In the meantime, here’s the presentation I did the year before, detailing how Synfig Stage and continuous integration will work:
This morning I read this on the Twitter feed of Scott Hanselman:
This was exciting because I was a Windows Live Writer user – so I decided to give it a whirl, and after a few attempts managed to write this blog post using it.
I’ll try to write up some more about how the application works, and the issues I had installing it (which weren’t many, but which I’d suggest are potentially confusing for people wanting to use this software to write blog posts).
mwha ha ha
I’m sure it hasn’t escaped your attention that Volkswagen has been caught doing something underhand and sneaky. Volkswagen is accused of implementing software within a diesel car’s engine management computer to detect the presence of emissions testing equipment, and modifying the flow of fuel through the engine to attain lower emissions ratings and therefore pass the emissions test. The effect of this is that Volkswagen had an unfair advantage over other diesel manufacturers, while the real-world emissions of these cars are actually up to 40 times higher than under test conditions.
The fallout from this scandal has forced the Chief Executive, Martin Winterkorn, to resign and the share price to plummet, and leaves Volkswagen with its reputation in tatters and facing a potential $18 billion fine.
Artur Fischer, joint CEO of the Berlin Stock Exchange, was interviewed on BBC Radio 4 and had the following to say about the scandal, and interestingly about software:
“But I really like your listeners to remember that software changes can be done by small groups of people and can be deployed in millions and the real question I have, from a distance is, How about software quality assurance? How about compliance? How big was that problem inside the company? and for that to analyse you need to have a fresh start”
Overall I’d agree with Artur’s first point, that software changes can be made by small groups of people; however, the rest of his statement left me feeling uncomfortable. The point about software group size could, if I were more cynical, be an attempt to create a narrative along the lines of “it was a few rogue programmers that released this code”, and the “fresh start” he talks about could be an attempt to prevent too much scrutiny of the processes around software development. “Fresh start” was also a phrase used by the outgoing Martin Winterkorn. I’m not sure what analysis you can do if you implement a fresh start, and again, cynically, it may look like an attempt to bury other systemic failures within the VW group.
It’s a fact of life that software is more and more prevalent in the things we buy and consume today, and with the Internet of Things materialising around us, I think we need to be conscious of the issues that can arise from software lurking in things we may not traditionally associate with running software.
At OggCamp a few years ago I heard Karen Sandler talk about the pacemaker she has fitted, and the issues she has struggled with around bugs in medical devices that are implanted into your body, like pacemakers and insulin pumps: how these can be hacked or manipulated, and how the code for these devices is unavailable.
We place a huge amount of trust in our cars, and underpinning this trust is code. How can we be sure that the code in my car won’t detect a test condition and lower the fuel consumption? That could leave me without power while driving, and therefore potentially in danger.
So how do we mitigate the issue that software is going to be ever present in more and more things?
Well, for some devices, like My Friend Cayla or garage door openers, security researchers have done the work to identify issues. Some manufacturers may be able to issue patches to affected devices; I’m less sure how a patch could be distributed to my car, or a pacemaker. The EFF believe that the Volkswagen emissions issue could have been uncovered if there had been access to the source code, and I’m betting that Martin Winterkorn is probably wishing their software had been accessible through some mechanism.
Title: Villain – Wikipedia, the free encyclopedia
Source: https://en.wikipedia.org/wiki/Villain#/media/File:Villainc.svg
License: Attribution-ShareAlike 3.0 Unported
Previously I have been working on using Jenkins to build a video file, and decided I would need to investigate the ability to push the resulting video file from the build process to YouTube, allowing the continuous build process to make the results available for viewing. A quick trip to the Google Developer Console led to a page detailing the YouTube Data API. Looking at the opening paragraph, it certainly seems to offer the ability we’re after.
Add YouTube features to your application, including the ability to upload videos, create and manage playlists, and more.
So, let’s go through uploading a video by script. A page discussing the upload video functionality can be found here, and the code can be downloaded from GitHub. My first thought was to implement this as a Python script; after all, it’s the same mechanism we use to build the film in the first instance, so let’s give it a whirl.
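Before diving into the setup, here’s roughly what such a script boils down to. This is a minimal sketch, assuming the google-api-python-client library is installed and that an authorised `youtube` service object is already in hand; the helper names and the file path are my own illustrations, not part of Google’s samples:

```python
def build_video_metadata(title, description, tags, category_id="22"):
    """Build the request body the videos.insert endpoint expects."""
    return {
        "snippet": {
            "title": title,
            "description": description,
            "tags": tags,
            "categoryId": category_id,
        },
        # "unlisted" keeps nightly CI builds off the public channel page.
        "status": {"privacyStatus": "unlisted"},
    }


def upload_video(youtube, path, metadata):
    """Upload a file via an already-authorised service object."""
    # MediaFileUpload handles the (optionally resumable) file transfer.
    from googleapiclient.http import MediaFileUpload
    request = youtube.videos().insert(
        part="snippet,status",
        body=metadata,
        media_body=MediaFileUpload(path, resumable=True),
    )
    response = request.execute()
    return response["id"]  # the new video's YouTube ID


metadata = build_video_metadata(
    "Nightly build", "Automated render from Jenkins", ["animation", "ci"])
print(metadata["snippet"]["title"])  # prints "Nightly build"
```

In a Jenkins context you’d call `upload_video` with the rendered file from the build workspace, but as we’ll see below, getting the authorised service object is where the real trouble starts.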
Installing the Client Library
I’m developing on Ubuntu, so I’ve become accustomed to apt-get installing most of my applications, and I’ve written in the past about the benefits of something like the software centre. So I was a bit disappointed to see that the instructions offered no option to install the library from the software centre, especially considering that Ubuntu is/was Google’s desktop of choice. Anyway, the preferred option was to use pip, so I’d better install pip:
sudo apt-get install python-pip
With that installed I was able to carry on looking at the Python samples, but to do that I’d need to satisfy the other dependencies for the Client Library: primarily a Google account, and setting up a project. I already had a Google account (in fact I had a couple), so the first part of those requirements was already fulfilled, and to be honest I don’t think creating a Google account requires a write-up here, but if you need one there’s a video here.
Creating a Google Account / Application
The sample code page says that the samples use the Google APIs Client Library for Python, so these samples needed that. Creating a project or script that interacts with a Google API requires a developer to create a credential for that application within the Google Developers Console. This means Google has the opportunity to see which application is sending requests to Google services, and to provide a monetisation capability. Requests to Google services are limited, and large-scale users will end up burning through their daily allowance. This allowance is not insubstantial: the YouTube API allows 50,000,000 units/day, limited to 3,000 requests/second/user. Not all requests are priced equally:
- A simple read operation costs 1 unit
- A write operation costs approximately 50 units
- A video upload costs approximately 1,600 units
These charges are approximate, as the pricing is based on the number of ‘units’ returned; a search could return a number of units per item.
Google suggest that the following operations would be achievable within the 50,000,000 units/day threshold:
- 1,000,000 read operations that each return two resource parts.
- 50,000 write operations and 450,000 additional read operations that each retrieve two resource parts.
- 2,000 video uploads, 7,000 write operations, and 200,000 read operations that each retrieve three resource parts.
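Just to sanity-check the arithmetic, here is a quick sketch of those three scenarios under an assumed cost model. The per-part surcharge of 2 units is my own reading of the pricing and may not match Google’s exact figures; the point is only that all three scenarios land comfortably within the daily budget:

```python
# Assumed costs: base read 1 unit, write 50, upload 1,600,
# plus roughly 2 units per resource part returned (my assumption).
READ, WRITE, UPLOAD, PART = 1, 50, 1600, 2
DAILY_BUDGET = 50_000_000


def read_cost(parts):
    """Cost of one read operation returning `parts` resource parts."""
    return READ + PART * parts


scenarios = {
    "reads only": 1_000_000 * read_cost(2),
    "writes + reads": 50_000 * WRITE + 450_000 * read_cost(2),
    "uploads + writes + reads": 2_000 * UPLOAD + 7_000 * WRITE
                                + 200_000 * read_cost(3),
}

for name, units in scenarios.items():
    print(f"{name}: {units:,} units (within budget: {units <= DAILY_BUDGET})")
```

Under these assumptions each scenario costs roughly 5,000,000 units, so there is plenty of headroom for a nightly build that uploads one video.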
Google supports a number of different authentication styles, with two main types: public API access and OAuth. On the face of it the best option seems to be public API access via a service account, as it allows a service to communicate with the server without user interaction, but service accounts are not permitted to log into YouTube, so I’ll have to use an OAuth account. The OAuth flow works as follows:
- The application loads data from client_secrets.json, which allows the client application to identify itself to the Google authentication services; Google now knows which application is calling.
- The user is presented with a browser, either directly by launching a URL, or by instructing the user on the command line to visit a particular site.
- The user then confirms that the application is allowed to access their YouTube account.
- Google sends back an authorisation token.
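Translated into the Python of the era’s samples, the flow above looks roughly like this. This is a sketch using the (since deprecated) oauth2client library; the file names are the conventions from Google’s sample code, not requirements:

```python
# The YouTube upload scope from Google's sample code.
YOUTUBE_UPLOAD_SCOPE = "https://www.googleapis.com/auth/youtube.upload"


def get_authenticated_service():
    """Run the OAuth dance (or reuse a cached token) and return a service."""
    import httplib2
    from googleapiclient.discovery import build
    from oauth2client.client import flow_from_clientsecrets
    from oauth2client.file import Storage
    from oauth2client.tools import run_flow

    # Step 1: identify this application to Google via client_secrets.json.
    flow = flow_from_clientsecrets("client_secrets.json",
                                   scope=YOUTUBE_UPLOAD_SCOPE)

    # Steps 2-4: reuse a cached token if we have one; otherwise run_flow
    # sends the user to a browser to grant access, and Google returns an
    # authorisation token which is cached in the storage file.
    storage = Storage("upload_video.py-oauth2.json")
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage)

    return build("youtube", "v3",
                 http=credentials.authorize(httplib2.Http()))
```

Note that `run_flow` is exactly the interactive step that causes trouble on a headless build server, which is the problem discussed below.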
This is all well and good for services that have a user front end; what I need is to do this in a system that runs on a back end, and possibly on a system that isn’t the one running the code (for example, through a client web browser). There are difficulties around storing and distributing these secrets in the current S-Cargo project. Putting client_secrets.json into the project would be a problem, as any application would then be able to masquerade as the Jenkins Video Upload application. Storing the OAuth token would also be an issue, as anyone would theoretically be able to upload to my YouTube account. Ideally I would have placeholders into which your own YouTube OAuth files could be copied, but that could prove problematic. Pulling the latest code from GitHub would build, but wouldn’t deploy to YouTube without replacing these placeholders with real data, and if the upload returned a failure status code, the jobs would always fail. And if the placeholders were accidentally replaced in GitHub (and they might be), it would make setting up a new project more difficult.
What needs to happen is for deployment to be separated from the build process. This could be accomplished by creating a separate deployment job and running it on the basis of a successful build; however, I decided it might be better to create a Jenkins plugin.
You can find my current efforts here.
Well, the BBC have recently announced a new initiative to get children to code, so take a second to think: how would you accomplish this?
The BBC have made their own small computer called the micro:bit, which comes with a number of sensors built in, can be run from a couple of batteries, and, more importantly, will be given away to year 7 children.
So far this all sounds good. So how does one code for this device?
Well, all you have to do is attach the micro:bit to your iPad, Android tablet or PC and use the IDE app to write code, before publishing the finished code to the micro:bit.
Sorry, say that bit again?
Well, all you have to do is attach the micro:bit to your iPad, Android tablet or PC and use the IDE app to write code, before publishing the finished code to the micro:bit.
That’s right: in order to teach children how to code, you connect this device up to another computer to publish some code for it.
So what we actually have here is not a computer, but something more akin to an Arduino?
Here’s the problem I have with this. If the whole concept of the micro:bit is that it is something children can learn to program with, then the concept is flawed. It’s flawed because in order to use the micro:bit you must have another computer to program it on. So little Billy, who doesn’t have a smartphone, and whose single parent mother is working hard to put food on the table, not iPads in their hands, will be somewhat disadvantaged.
Oh, but he could develop at school, right? Well, when I was at school there was one BBC Micro for a class of children. And that assumes the infrastructure is there, that it is available for Billy to use after hours, and that is really only an hour or two, Monday to Friday.
Yeah, but don’t kids get given iPads at school these days?
Do they? I don’t see much evidence of cash-strapped LEAs doing this. It’s entirely possible that LEAs with bigger budgets might, but that can lead to a two-tiered education for our children.
But let’s assume for a second that your LEA has bottomless pockets and has rolled out iPads to all students. Are we sure we want to teach kids to program only within an Apple ecosystem? Or, for that matter, with an IDE and platform developed by Microsoft? But more on that later.
How about smartphones? Loads of kids have smartphones, right?
Sure, a lot of kids do have smartphones, and it’s something schools are grappling with: some teachers will tell you that smartphones offer too much of a distraction, while others love the concept of BYOD (bring your own device) in a classroom environment.
It’s true that smartphone uptake amongst year 7 is probably very high, but I would think it’s more likely that year 7 smartphone usage involves apps like Crossy Road, Angry Birds and Snapchat. I very much doubt that your average year 7 will happily whip out their phone and start coding for the micro:bit. How many of you have written more than a text on a mobile?
The problem I have with BYOD in the classroom is that there is no standard platform, which means that some of the kids with zippier, newer phones will have an advantage over kids with a slower phone or an older platform. That is assuming your platform is supported in the first place.
This programme has the same flaw as 3D TV: you need an accompanying piece of not necessarily commonplace technology to use it. The Raspberry Pi costs £25 and requires a monitor, a keyboard and a mouse. The monitor can be a TV, and the mouse and keyboard can be obtained relatively cheaply, say £5 bought online, so you can be computing for £30.
So the cost of entry with the Raspberry Pi is £30. What’s the cost of entry for the micro:bit? What’s the cheapest computer I could get to program it with? Surprise: it’s a Raspberry Pi, so the cost of entry to use the micro:bit is £30 too!
So, discounting the micro:bit, I can already be programming in Scratch, Python or Java on a Raspberry Pi. With the micro:bit there will be a web-based IDE, which hasn’t been publicised much, though word is that there will be a drag-and-drop solution that will then download code to the micro:bit.
Another problem: this is a free giveaway to year 7 pupils FOR ONE YEAR ONLY!
Which means that should the programme be deemed a failure, the micro:bit will disappear faster than the crowd at an opening night party for a Broadway play when the first bad review comes in.
Should it be a success, it becomes a purchase for either the school or parents to take care of, and right now we still don’t know the price. If the micro:bit costs £10, then the initial outlay to get a development platform is £40!
Remember when I mentioned that Microsoft are behind the hardware and software? Here’s another point to consider. The main selling point of the micro:bit is that it is a way of doing the “Internet of Things” in a way that school children can understand. The problem is that there is already a hardware and software platform that does this, called Arduino. It has already been used in numerous projects, and both the hardware and the software are open source.
The micro:bit currently isn’t, although this will happen, just not yet.
This means there’s now yet another platform offering IoT functionality, further muddying the water. I am sure that industry professionals will continue to use existing platforms, which seem to be mostly Arduino, meaning that unless there are follow-up classes for pupils to learn about these other platforms, they will enter industry unable to make simple IoT projects, which kind of defeats the object of the micro:bit in the first place, right?
Right now, apart from the board, there are scant details on how this will all work. I don’t want to be a negative Nelly about this, but the Raspberry Pi is an easier sell than a small piece of circuit board.
I had a chat with Mike, and this is what he said:
Mike’s Prediction:
I predict that, unfortunately, the micro:bit will be a massive failure. Children who are interested in coding will already be working with technologies such as the Pi. Those with little interest in embedded computing will do the minimum required to pass the course, and it will then sit in a drawer. I think the official programming language will do little, as there will be little to no commercial uptake of the micro:bit: technology companies won’t see the first practitioners reach the job market for a few years, and when they do, you can almost guarantee that the embedded computing platforms of tomorrow won’t be the micro:bit.

I think that to improve adoption there needs to be a more engaged attitude from pupils, and in my opinion most students today care about Angry Birds, Facebook and not much more. I also believe this project will need a wide variety of projects that can be done using the technology. Ideally, these projects should support and be supported by other subject areas. For example: how about combining the embedded micro:bit with a drama course to provide automatic sound and lighting cues? Now, this is a silly example, in that the computer you use to program the micro:bit is more powerful than the micro:bit itself, but the idea that you can trigger events from a simple interface to play sound effects or run lighting from a small box might be a project that gets the principles across to pupils. However, I think such joined-up thinking, combining multiple disciplines, will be difficult for schools to implement, and I therefore predict that it will become a boring and inaccessible technology failure.
Chuck Norris is watching you build code – careful now!
For the last few days I have been playing with the Jenkins continuous integration server and Python, and I have reached the following conclusion: writing Python code without an effective IDE makes the job of software development harder than it needs to be. I’ve been developing a lot on Ubuntu lately too, so I’ve found the joy that is the Wingware IDE.
So – I think a bit of a recap is in order.
For those of you in the know, for the past couple of years I have been working on an animated short film. It’s a long process to make a short animation, with lots of assets to keep track of. I use a production chart to keep tabs on everything; here is a snapshot of the production chart as it stands:
Now, the eagle-eyed amongst you will notice that it’s a spreadsheet. In the past I have tutted and rolled my eyes when people complained that when they used a spreadsheet to catalogue their DVD collection, they couldn’t scroll per pixel; it would snap to the nearest cell. And then I patiently explain that a spreadsheet is not designed to catalogue a collection of DVDs. A spreadsheet is really good at totalling columns of numbers and/or applying formulae to them. A DVD catalogue is best done with a database.
Yes, I know I should use a database to store the production chart. It is a more effective way to store this information: each scene is a record that can have a series of fields applied to it, and we could poll the database for complete scenes and get an accurate percentage of how much of the film is animated, rendered, needs work, etc.
The thing is, I am, to my own surprise, a little bit old school. I learnt to break down sound using a mixing desk and large sheets of paper, jogging through soundtracks, listening for the pops and whistles and decoding them into the phonemes that made up the characters’ speech. And this is a digital equivalent of the old school way of creating a production chart: a digital analogue of an analogue... er, analogue.
Today, kids examine waveforms or use software tools to provide easier breakdowns, and whilst I like those tools and use them a fair bit, sometimes I think that younger animators, fresh into the field, are lacking some of these old school skills.
Part of my old school curmudgeonliness is the creation of dope sheets and production charts. There was something exciting about transferring your sound breakdown to a dope sheet ready to animate; it was a prelude to the storm of creation that leads to the initial pencil tests. I loved the way the production charts would fill up with checks and notes, becoming fuller as the deadline approached.
Working on this project has been great fun. The biggest problem has been scheduling the time to make the animation and learning to use the software. Part of that has been learning some of the limitations of the software, and creating new software tools to allow me to work with it the way I want to. I was using Synfig Stage last night and it struck me that I have talked a lot about it at OggCamp and other tech shows without really showing it. When I started using it, it worked straight away (more or less), so showing it off never felt important. I suppose I should make a video demonstrating the tool and the problem it solves.
The priority, though, is the film. Right now, with about 16 scenes left to animate, there’s a definite feeling it’s starting to come together as a film, and part of me will be glad to get it finished and move on to the next thing. Part of me also misses my old school beginnings, and I hope that maybe one day in the future I will do a proper old school 2D short using an actual pencil on real paper.
In the previous post I talked about setting up Jenkins; I have also been able to apply the same instructions to an Azure-hosted Linux server.
The instructions are identical, except that the ports need to be connected.
Configure ports for Jenkins
Using the endpoint configuration I was able to map port 8080 to port 80. I then needed to set up security so that anonymous users could see the job status but would be unable to run or add jobs.
I created a user and set up the security like this:
Configure Jenkins security
I’m going to start this blog post by saying: I’m not an animator. I know nothing about animation. Well, that’s not strictly true – I did learn something about animation from this tutorial video:
I have been working for a long time on a short piece of animation. It’s been made longer by my insistence on using purely open source software to make it.
I decided to use Synfig Studio and set about creating characters over a year ago.
Of course, real life got in the way of this. There was a house move, and of course there’s been my work on Elite Dangerous and Zoo Tycoon, as well as the work I continue to do for Hoo on Who. But here are some early tests and work-in-progress shots for the short.
In the original storyboard the opening shot was much shorter and static, but I have decided to make it a longer shot to better establish the kingdom. In this early test there are no peasants; I will add a couple of peasants to the fields and the village.
The scarecrow I added on a whim. I had watched The Wizard of Oz a couple of days previously, and, trying to think what I could put in the environment to help set the scene, I thought it would be fun to animate a scarecrow. I built and rigged a scarecrow and animated him making a little wave and turning his head to look to camera.
A lot of his animation is currently masked by the tree. I might put the tree on the first hill so it is well out of the way.
In this shot, the cat detective is being telephoned by the Queen, who wants him to come and find her stolen money. I wanted to give some contrast to the cat detective’s office, so I designed it to look more noir-ish and forced the perspective. I always thought it could be funny to have the cat play with a ball of wool while taking the call; the idea to put up the framed picture of a ball of wool came as I was designing the background. Originally there was going to be a Newton’s cradle in the foreground, but the scene was busy enough and I didn’t want to pull focus from the cat.
In this shot, the cat detective is looking through his magnifying glass. I used two copies of the cat detective, and used the shape of the magnifying glass to mask out the larger cat.
This shot is tricky and is in the process of being re-animated. The cat detective is climbing a spiral staircase, following the dragon prints. The background was a 3D model I rigged and animated in 3DS Max; still frames were rendered out, and the cat detective was then animated to look like he is climbing the stairs. I changed the camera angle to better show the dragon prints, so the cat detective needs re-animating.
Using open source software has not been easy. There are certain things I like to do when animating that Synfig forces you to abandon. To combat this, Mike and I wrote Synfig Stage. This software will allow us to compile new scenes with copies of the existing characters in the production. Expect a video of it in action soon.
Synfig also has a very steep learning curve, but as with all things, the first time you try anything it will always be hard. Subsequent attempts are quicker, and some of the newer characters have taken less time to rig, even where their rigs are more complex. I hope to have a few more shots done soon, in which case there might be other posts about this (which is why it’s optimistically called part 1).