Previously I have been working on using Jenkins to build a video file, and decided that I would need to investigate pushing the resulting video file from the build process to YouTube, allowing the continuous build process to make the results available for viewing. A quick trip to the Google Developer Console led to a page detailing the YouTube Data API. Looking at the opening paragraph – it certainly seems to offer the ability we’re after.
Add YouTube features to your application, including the ability to upload videos, create and manage playlists, and more.
So – let’s go through uploading a video from a script. A page discussing the upload video functionality can be found here, and the code can be downloaded from GitHub. My first thoughts were to implement this as a Python script – after all, it’s the same mechanism that we use to build the film in the first instance – so let’s give it a whirl.
Installing the Client Library
I’m developing on Ubuntu, so I’ve become accustomed to apt-get installing most of my applications, and I’ve written in the past about the benefits of something like the Software Centre. So I was a bit disappointed to see that the instructions offered no option to install the library from the Software Centre – especially considering that Ubuntu is/was Google’s desktop of choice. Anyway, the preferred option was to use pip, so I’d better install it:
sudo apt-get install python-pip
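With pip in place, the client library itself can then be installed – at the time these samples were written, the Python client library was published on PyPI as google-api-python-client:

```shell
# Install the Google APIs Client Library for Python via pip.
sudo pip install --upgrade google-api-python-client
```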
With that installed I was able to carry on looking at the Python samples, but to do that I’d need to satisfy the other dependencies for the Client Library – primarily a Google account, and setting up a project. I already had a Google account – in fact I had a couple of accounts – so the first part of those requirements was already fulfilled, and to be honest I don’t think that creating a Google account requires a write-up here, but if you need to there’s a video here.
Creating a Google Account / Application
The sample code page says that the samples use the Google APIs Client Library for Python, so that needed to be installed. Creating a project or script that interacts with a Google API requires a developer to create a credential for that application within the Google Developers Console. This means that Google have the opportunity to see which application is sending the request to Google services, and to provide a monetization capability. Requests to the Google services are limited, and large-scale users will end up burning through their daily allowance. This allowance is not insubstantial – the YouTube API allows 50,000,000 units/day, limited to 3,000 requests/second/user. Not all requests are priced equally:
- A simple read operation costs 1 unit
- A write operation costs approximately 50 units
- A video upload costs approximately 1,600 units
These charges are approximate, as the pricing is based on the number of ‘units’ returned – a search result could return a number of units per item.
Google suggest that the following operations would be achievable within the 50,000,000 units/day threshold.
- 1,000,000 read operations that each return two resource parts.
- 50,000 write operations and 450,000 additional read operations that each retrieve two resource parts.
- 2,000 video uploads, 7,000 write operations, and 200,000 read operations that each retrieve three resource parts.
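Those approximate costs make it easy to sanity-check a planned workload against the daily allowance. Here is a quick back-of-the-envelope helper, using only the unit prices quoted above – bear in mind the real costs also vary with the number of resource parts each request returns, so treat the result as a ballpark figure:

```python
# Rough quota estimator using the approximate per-operation costs
# quoted in the YouTube Data API documentation of the time.
DAILY_QUOTA = 50_000_000

COSTS = {"read": 1, "write": 50, "upload": 1600}

def estimated_units(reads=0, writes=0, uploads=0):
    """Approximate daily unit consumption for a given mix of operations."""
    return (reads * COSTS["read"]
            + writes * COSTS["write"]
            + uploads * COSTS["upload"])

def within_daily_quota(reads=0, writes=0, uploads=0):
    """True if the estimated consumption fits inside the daily allowance."""
    return estimated_units(reads, writes, uploads) <= DAILY_QUOTA
```

For example, Google’s third scenario (2,000 uploads, 7,000 writes, 200,000 reads) comes out at well under the 50,000,000-unit ceiling even before per-part costs are added.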
Google support a number of different authentication styles, of which there are two main types: Public API (simple API key) access and OAuth. On the face of it the best option seems to be Public API access, as it allows a service to communicate with the server without user interaction, but service accounts are not permitted to log into YouTube – so I’ll have to use an OAuth account. The way that OAuth accounts work is as follows:
- The application loads data from client_secrets.json, which allows the client application to identify itself against the Google authentication services – Google now knows which application is being called.
- The user is presented with a browser – either directly, by launching a URL, or by instructing the user through the command line to visit a particular site.
- The user then confirms that the application is allowed to access their YouTube account.
- Google send back an authorisation token.
This is all well and good for services that have a user front end – what I need is to do this in a system that runs on a back end, and then on a system that isn’t necessarily the system that runs the code (for example – through a client web browser). There are difficulties related to storing and distributing these secrets in the current scargo project. Putting the client_secrets file into the project would be difficult, as any application would be able to masquerade as the Jenkins Video Upload application. Storing the OAuth token would also be an issue, as anyone would theoretically be able to upload to my YouTube account. Ideally I would have placeholders into which your YouTube OAuth files could be copied – but that could prove problematic. Pulling the latest code from GitHub would build, but wouldn’t deploy to the YouTube server without replacing these placeholders with real data. If the upload returned a fail status code, then the jobs would always fail. If the placeholders were replaced from GitHub (and they might be), then it would make setting up a new project more difficult.
What needs to happen is that the deployment needs to be separated from the build process. This could be accomplished by creating a separate deployment job and running it on the basis of a successful build – however, I made the decision that it might be better to create a Jenkins plugin.
You can find my current efforts here.
In previous articles I mentioned the possibility that we could end up losing assets placed on the internet where we are reliant on a third party maintaining them – here is a practical example of an asset that hasn’t even lasted five years.
This is an article on Stack Overflow about copying projects within a VS2010 post-build event, asked in 2012. The content was originally hosted on Imgur, but has now been replaced. Consider that the content on Imgur could now be no longer safe for work, and I think this highlights the issues that we can have in relying on a third party to store our content.
Wrong Graphic in Question
You might have read Mike’s last entry about the preservation of culture here. He does raise interesting points, and whilst I agree with him that cultural artifacts need to be preserved, I somewhat disagree that culture is defined by artifacts.
In my view culture is an all-encompassing set of tools we use to consume cultural artifacts and interact with other members of our cultural group. Having a great play or book is fine, but if you don’t understand the language spoken, or even the concepts of reading and writing, then it becomes meaningless squiggles on a page.
How about this for an example of loss of cultural knowledge?
Well – a while back I was looking at how I could extend the hosting of deb files onto a WordPress site. My idea was to create a plugin that would allow the server to automatically extract the latest version of a file stored in a deb file. The idea was that if you were an author publishing your book, you would publish your book once – to Launchpad – and access the binary EPUB file from your WordPress website.
I’ve been working on the plugin off and on, and today I am sorry to say that I’ve not made much progress on it. The plugin currently pulls back the latest deb file from an address you specify in the shortcode, and I can create a copy of that file. Deb files are Unix ar archives under the covers, and unfortunately I have been unable to extract specific files from the archive using PHP.
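For what it’s worth, the ar format itself is simple enough to walk by hand – each member is a 60-byte text header followed by its payload. Here is a minimal Python sketch (the plugin would need the same logic ported to PHP; it also ignores GNU long-filename extension entries, so it’s a starting point rather than a complete extractor):

```python
# Minimal ar-archive reader: a .deb is an ar archive whose members are
# debian-binary, control.tar.*, and data.tar.*.
AR_MAGIC = b"!<arch>\n"

def list_ar_members(data):
    """Return a list of (name, payload) tuples from an ar archive."""
    if not data.startswith(AR_MAGIC):
        raise ValueError("not an ar archive")
    members = []
    offset = len(AR_MAGIC)
    while offset + 60 <= len(data):
        header = data[offset:offset + 60]
        name = header[0:16].decode("ascii").rstrip(" /")    # 16-byte name field
        size = int(header[48:58].decode("ascii").strip())   # 10-byte decimal size
        payload = data[offset + 60:offset + 60 + size]
        members.append((name, payload))
        offset += 60 + size + (size % 2)                    # payloads are 2-byte aligned
    return members
```

Running this over the bytes of a downloaded deb would give you the inner tarballs, which could then be unpacked to reach the EPUB.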
So it looks, for the immediate term, like the ability to link your published e-book to your WordPress site isn’t going to be possible – unless anyone out there can offer any guidance?
I am not disheartened – perhaps I need to look at other options for publishing your book from Launchpad to your website, but for the moment debsplorer will have to remain incomplete.
Way back, in this post, I looked at the possibility of distributing albums using the Ubuntu packaging process. Let’s assume that you have recorded all of your tracks using Ardour. You should be able to apt-get source ALBUM to retrieve the Ardour source files for the album, and when you check in the files the package is rebuilt and the binary packages are recompiled. So – how do you recompile an album?
Well – I think we need to qualify what an album is. An album is a collection of songs – but it could be argued that we should be supporting a single song rather than a whole album. This would allow users to create a mix-tape album by creating a metapackage that refers to the other album tracks. So – our smallest element of work is a song, and a song is an Ardour project. Therefore altering a song should rebuild that song. But how do you run Ardour on a server that has no sound card, and can you be sure that the referenced plugins will be available?
- Configure Ardour to run with OSC enabled
- Configure Ardour to work against a Dummy sound card
- Configure Ardour to work with a virtual X11 buffer
- Once this infrastructure is set up, use OSC to issue an export command, exporting the current session to the file in question and filling in all metadata (although this is probably saved in the session).
RESULT: an invisible Ardour session, and an exported MP3 (or Ogg) file.
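As a sketch, the headless setup above might look something like this – the JACK dummy backend invocation, the Xvfb geometry, and the ardour3 binary name and session argument are all assumptions to verify against your Ardour version:

```shell
# 1. Start JACK against its dummy backend -- no physical sound card needed.
jackd -d dummy &

# 2. Start a virtual X11 framebuffer and point Ardour's GUI at it.
Xvfb :99 -screen 0 1024x768x24 &
export DISPLAY=:99

# 3. Open the song's session; OSC must already be enabled in Ardour's
#    preferences so the export can then be triggered remotely.
ardour3 /path/to/song-session &
```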
I’ll update this blog when I find out how successful that could be.
I have been working this evening on the sharing process for ebooks. Tonight I managed to get a book to upload to Launchpad and to make it available. Currently it is a nonsense collection of pages examining aspects of page layouts, plus a copy of a one-star review of Stephen King’s follow-up to The Shining – Doctor Sleep.
There is still work to do on the process – currently a link needs to be made between the installation location (/usr/share/books) and the current user’s directory – but I have some scripts that should accomplish this, developed as part of the content packaging for Severed Fifth’s Nightmares by Design album.
There were a few dependencies that I had forgotten about, but after a couple of retries the process ran successfully.
Build time for the book took 2 minutes.
test2 ebook – as seen in the Ubuntu Software Centre
If you want to install the book yourself, add ppa:computa-mike/testbook to your Ubuntu system’s software sources.
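On a recent Ubuntu, that normally means something like the following – note that test2 is a guess at the binary package name based on the screenshot caption, so check the PPA page for the real one:

```shell
# Add the PPA, refresh the package index, and install the book package.
sudo add-apt-repository ppa:computa-mike/testbook
sudo apt-get update
sudo apt-get install test2
```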
There’s a guide to adding a PPA to your software sources here.
The raring deb file can be found here.
I have also found a very interesting Python library for EPUB, and I seem to be replicating much of the work that Aleksandar Erkalović is doing with this library – so I plan to replace my library with his, adding any missing features back to the main library (if at all possible). I think this would be the best approach, as it reduces duplicated effort.
I have been experimenting with the functionality of the Ubuntu ebook template. The idea is that if you are writing your book using the template, the same content can be published as an ebook and an audio book. The new command to accomplish this is: quickly read.
Want to help?
Want to try Ubuntu Quickly for ebooks? Then pop over to https://launchpad.net/quickly-ubuntu-ebook and get involved.
Here’s a first video demonstrating the ubuntu-ebook template for quickly.