Archive for the ‘Art’ Category
Having previously created the landscape of LA and placed the model of the Chinese Theatre for a flythrough for the intro of Pied Pipers’ production of Singin’ in the Rain, the next thing I did was to make some rain.
Makin’ it Rain
To make the rain effect, I created a simple Particle Flow system and altered its settings so the particles fell like rain. I then built a simple raindrop and made the particle system reference this one raindrop. I turned off random rotation so the raindrops head straight down. Particle systems are expensive in terms of redraw and update speed, so I will hide the system until I need to render it.
A single raindrop
I then placed a camera in the scene, set up some controls for it and, after a bit of trial and error, made the beginnings of what could be an interesting camera move. Here’s roughly what I had:
I then went on a download spree over on 3D Warehouse, looking at the collections of models in LA and Hollywood and checking against the map to find buildings next to the Chinese Theatre. Then I got some more generic buildings and some cars.
Changin’ the Format
Here’s the next problem: 3D Warehouse has files in SketchUp format, and if you remember from last time, I don’t have SketchUp. But 3D Warehouse also has the option to download in Google Earth’s model format – .KMZ.
I renamed each of the .kmz files to .zip before extracting them to new folders. Each folder has a Collada file and a folder of supporting files. Luckily, 3DS Max will import .dae files.
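The rename-and-extract step can be scripted, too. A .kmz is just an ordinary zip archive, so Python’s `zipfile` can open it directly without renaming anything first – a minimal sketch (the folder layout is an assumption on my part):

```python
import zipfile
from pathlib import Path

def extract_kmz(folder):
    """Extract every .kmz in `folder` into its own subfolder.

    A .kmz is a plain zip archive, so zipfile reads it as-is --
    no need to rename it to .zip first.
    """
    out_dirs = []
    for kmz in Path(folder).glob("*.kmz"):
        dest = kmz.with_suffix("")  # e.g. el_capitan.kmz -> el_capitan/
        with zipfile.ZipFile(kmz) as zf:
            zf.extractall(dest)
        out_dirs.append(dest)
    return out_dirs
```

Each extracted folder should then contain the Collada (.dae) file ready for import.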
Except when it doesn’t: these files simply crashed 3DS Max. So I tried a different approach. I fired up a copy of Blender and imported the .dae file. It loaded, so I exported out to .fbx.
El Capitan Theater in Blender
After a little trial and error, I managed to get a model into 3DS Max! Behold, the El Capitan Theatre!
The same El Capitan Theater model in 3DS Max
Hmmm. Looks a little strange. Let’s take a look at the 3D model in the browser again:
The El Capitan model in 3D Warehouse
Yeah, it looks like the .dae import welds or deletes faces and/or verts. It seems the Collada importer is not very well supported in Blender, so back to the drawing board. This time I imported the .dae into Maya, took it straight back out to .fbx, and brought that into 3DS Max.
El Capitan Theatre, converted via Maya
Ok, that’s looking good. Let’s go ahead and put some buildings in there!
Buildin’ the town
I dropped some generic-looking buildings onto Hollywood Boulevard, built some very simple buildings (literally cubes with texture maps on them) and made a starry night sky dome.
Then I found a Model T Ford on 3D Warehouse. I downloaded it, got rid of the open bonnet, and merged the objects to make 4 new objects: the car body, the back wheels, and 2 front wheels.
I then modelled 2 low-poly cones and fitted them to the headlamps. These cones were given a gradient in their diffuse and opacity channels to make them fade out, in effect faking the light cone you see from headlamps in fog.
Model T Ford with headlamps
I then parented all this to a dummy, made copies of the dummy and animated them driving down Hollywood Boulevard.
Paintin’ the town
The opening is in black and white, so I decided I would have to tweak all the textures. I made a Photoshop action to convert an image to grayscale and ran it as a batch on a copy of the textures (if I ever want to swap them back, it should be easy).
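The same conversion could be scripted outside Photoshop. Standard grayscale conversion weights the channels by perceived brightness, roughly 0.299 R + 0.587 G + 0.114 B (the ITU-R BT.601 luma weights, which Photoshop’s conversion approximates). A minimal sketch on raw RGB tuples – an image library would just apply the same formula per pixel:

```python
def to_gray(rgb):
    """Convert an (R, G, B) tuple (0-255) to a single grey value
    using the BT.601 luma weights."""
    r, g, b = rgb
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def grayscale_pixels(pixels):
    """Batch-convert a list of RGB tuples to (g, g, g) grey tuples."""
    return [(g, g, g) for g in map(to_gray, pixels)]
```

Note that pure red comes out much darker than pure green – exactly why a naive (R+G+B)/3 average looks wrong on colourful textures.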
The only building I am going to worry about is Grauman’s Chinese Theatre itself. Its current textures are promoting the film ‘300’, so I need to change them to promote the film in the script.
Lightin’ the lights
Lighting this set was hard, mainly because I suck at lighting. The first thing I did was make a large blue spotlight toward the top of the skydome. This represents the moonlight.
I created some large omni lights at the edges of the planes and set them to white with -1 intensity, which sucked the colour out of the scene, darkening the edges of the map. When the camera’s moving, you hopefully won’t notice the square edges of the landscape.
I then stuck a series of lights on each car (in hindsight, I should have lit the cars first, then animated them): a couple of low-intensity omni lights in front of the car and a free spot to represent the headlights.
I found a streetlamp model on 3D Warehouse and created a couple of lights: an omni light for the glow and a spot pointing downwards. These were parented to a dummy, and I was then able to mass-duplicate 20 of them down Hollywood Boulevard.
A streetlamp lit rigged and ready to duplicate
A lot of the buildings in my section of Hollywood Boulevard have lit signs. I will make those textures self-illuminated.
Next up: balancing textures and animating the camera move.
I have been asked to create a series of video projections for The Pied Pipers Musical Theatre Group’s upcoming production of Singin’ in the Rain. Since I won the NODA national poster competition I have been pretty busy. At the moment, everyone wants me to design backgrounds that are to be projected for 3 different shows.
Singin’ in the Rain is first – after getting some information from BAWDS, I was able to work out a frame size.
First challenge: the director wanted a fly-through of 1920s LA, ending at Grauman’s Chinese Theatre.
Well it looks like SketchUp Make could solve this problem. I could use it to generate terrain and place buildings accurately – or so I thought.
Thing is, right now, I don’t use Windows at home. I have an ancient Mac mini, so I tried installing SketchUp Make on that. The latest version installed, only to tell me on starting that the OS was the wrong version and so it wouldn’t run – thanks, Apple.
There’s no Linux version, so I tried to install SketchUp under WINE. This installed and ran (sort of), but there was no 3D view. So in desperation, I contacted Mike, who had a Windows laptop and was able to extract chunks of landscape, the Hollywood sign and the Chinese Theatre to a number of .3ds files.
First up – let’s make the land.
Opening up a copy of 3DS Max one lunchtime, I was able to import the landscape. As part of that, Mike had very accurately placed the Hollywood sign for me. I also had 30 or so chunks of land.
Terrain Data extracted from SketchUp
Opening Photoshop, I loaded the textures into a large file and, using the 3D view as a guide, arranged the textures together:
Then I went through each layer, cropping out the footer display to create one large texture that could be mapped onto all the planes at the same time.
Texture when cleaned up
Back in Max, I combined all the landscape planes together, then made another plane above the land with roughly the same density as the merged plane object. I then applied a Conform space warp to the object, and suddenly my new plane was fitting snugly over the Hollywood hills.
Combining the landscape planes
I now had a single surface with no coplanar polygons. I made a snapshot of this mesh and deleted the faces and vertices that hadn’t been affected by the conform operation.
Single surface generated from the terrain data
Slap the texture on and voilà!
Texturing the landscape
It’s not 100%, but it’s a lot quicker than manually editing and welding thousands of verts and performing countless STL checks.
A quick render:
Rendering the Hollywood hills
And I was ready to swap out the HOLLYWOOD sign for the earlier HOLLYWOODLAND sign. I fudged it a bit, but it still looks pretty good:
Next up, I will add the Chinese Theatre, and then populate the rest of LA with generic buildings.
Yes, finally, after nearly 3 years of development and work, the Snail Tales project is finished. I had actually finished it late last year but decided to get Christmas and New Year out of the way before releasing it to Snail Tales.
Here’s the finished film:
I will be collating all the character and background files and creating a public repository for them.
On paper, it seems an awfully long time to make a piece of animation. But as well as the games I made as part of my job, I moved house, got engaged, had to learn how to use Synfig, and got S-Cargo and the continuous integration system working.
I recorded my presentation at OggCamp late last year – I will upload that shortly. In the meantime, here’s the presentation I did the year before, detailing how Synfig Stage and continuous integration will work:
Last weekend, Miss Vicki and I ventured forth to Leeds for the NODA AGM. It turned out that a poster I designed was in the running for the Thomson Leng trophy in the NODA national poster design competition. I ended up winning first place, so I thought it might be a good idea to write about the poster and how I made it.
I was asked by the director to design a poster for his production of Communicating Doors. He asked that it be cartoony and a little comic-book-like.
I had long admired the artwork by Adrian Salmon on the Big Finish Bernice Summerfield audio CDs. Here’s an example of one:
I love the use of the black line and the fill colour – in this case, blue. I thought it was a cool approach to colour that I would try to apply to this poster. I spent a couple of hours noodling in Krita
and came up with a rough colour study
The character proportions weren’t that great, so I set about drawing a better layout on 12-field animation paper. I photographed it and imported it into my computer.
I inked and coloured the artwork in Krita, blocking off the bottom of the poster where the show information was going. The director wanted to add that information there himself.
Next up: typography. I saved a flattened version of the poster from Krita and used it as a template to create the curved text for the title of the play. I picked a number of fonts I thought would work well for the title and ran them past the director. We both decided that #3 was the font.
I removed the template and saved off a PNG of the page before loading it into the layered Krita document and adjusting its position a little.
Finally, I sent the artwork off to the director for final approval before he added the show information to the bottom of the poster.
Here’s the Final poster:
Finally here’s a picture of the Thomson Leng trophy.
I really enjoyed working on the poster. Thinking back, it was one of the last things I drew on my laptop. It used to hang and crash Krita a lot, but now that I’ve upgraded the RAM, it might be time to revisit drawing on my laptop. For the last couple of productions for Waterbeach Community Players, I have used photo montages for poster designs; hopefully for the next one I can draw again!
OK, here’s where we bust out some more shots from the Snail Tales project. At the moment, there are fewer than 10 scenes of animation left to do, mainly group shots.
The arm is a bit broken in this one, but I am happy with the rest of the animation.
In this shot, the Dragon makes his appearance. The narration said the knights went off to get help; I decided it would be funny to have them run off terrified.
In the next shot, we see the Cat Detective face off against the HUGE dragon!
Here’s the shot inside the safe of the Queen reacting, all shocked. I will add some eyes to her.
I have been working on the Java code for uploading a video to YouTube, and I have the following video demonstrating it in action:
Previously, I had been working on using Jenkins to build a video file, and decided I would need to investigate pushing the resulting video file from the build process to YouTube, allowing the continuous build process to make the results available for viewing. A quick trip to the Google Developer Console led to a page detailing the YouTube Data API. Looking at the opening paragraph, it certainly seems to offer the ability we’re after:
Add YouTube features to your application, including the ability to upload videos, create and manage playlists, and more.
So – let’s go through uploading a video from a script. A page discussing the upload video functionality can be found here, and the code can be downloaded from GitHub. My first thought was to implement this as a Python script – after all, it’s the same mechanism we use to build the film in the first instance – so let’s give it a whirl.
Installing the Client Library
I’m developing on Ubuntu, so I’ve become accustomed to apt-get installing most of my applications, and I’ve written in the past about the benefits of something like Software Centre. So I was a bit disappointed to see nothing in the instructions about installing the library from Software Centre – especially considering that Ubuntu is/was Google’s desktop of choice. Anyway, the preferred option is to use pip, so I’d better install pip.
sudo apt-get install python-pip
With that installed, I was able to carry on looking at the Python samples, but to do that I’d need to satisfy the other dependencies for the client library – primarily a Google account and a project. I already had a Google account – in fact, I had a couple of accounts – so the first part of those requirements was already fulfilled, and to be honest, I don’t think creating a Google account needs a write-up here, but if you need one, there’s a video here.
Creating a Google Account / Application
The sample code page says that the samples use the Google APIs Client Library for Python, so these samples needed that. Creating a project or script that interacts with a Google API requires a developer to create a credential for that application within the Google Developers Console. This means Google has the opportunity to see which application is sending the request to Google services, and to provide a monetisation capability. Requests to Google services are limited, and large-scale users will end up burning through their daily allowance. This allowance is not insubstantial – the YouTube API allows 50,000,000 units/day, limited to 3,000 requests/second/user. Not all requests are priced equally:
- A simple read operation costs 1 unit
- A write operation costs approximately 50 units
- A video upload costs approximately 1,600 units
These charges are approximate, as the pricing is based on the number of ‘units’ returned – a search result could return a number of units per item.
Google suggests that the following operations would be achievable within the 50,000,000 units/day threshold:
- 1,000,000 read operations that each return two resource parts.
- 50,000 write operations and 450,000 additional read operations that each retrieve two resource parts.
- 2000 video uploads, 7000 write operations, and 200,000 read operations that each retrieve three resource parts.
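Using the approximate per-operation costs above, a back-of-the-envelope check is easy to script (a sketch only – the real pricing also scales with which resource parts each call returns):

```python
# Approximate unit costs from the YouTube Data API quota discussion above.
READ_COST = 1        # per resource part returned (approximate)
WRITE_COST = 50      # approximate
UPLOAD_COST = 1600   # approximate
DAILY_QUOTA = 50_000_000

def quota_used(reads=0, read_parts=1, writes=0, uploads=0):
    """Rough daily quota estimate from the approximate unit costs."""
    return reads * read_parts * READ_COST + writes * WRITE_COST + uploads * UPLOAD_COST

# Google's third example: 2,000 uploads, 7,000 writes, and
# 200,000 reads that each retrieve three resource parts.
used = quota_used(reads=200_000, read_parts=3, writes=7_000, uploads=2_000)
```

With these rough numbers the uploads dominate – 2,000 uploads alone cost 3,200,000 units, an order of magnitude more than the reads and writes combined.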
Google supports a number of different authentication styles, and there are 2 main types: public API keys and OAuth. On the face of it, the best option seems to be a public API key, as it allows a service to communicate with the server without user interaction – but service accounts are not permitted to log in to YouTube, so I’ll have to use an OAuth account. The way OAuth accounts work is as follows:
- The application loads data from client_secrets.json, which allows the client application to identify itself to the Google authentication services – Google now knows which application is calling.
- The user is presented with a browser – either directly, by launching a URL, or by instructing the user through the command line to visit a particular site.
- The user then confirms that the application is allowed to access their YouTube account.
- Google sends back an authorisation token.
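For reference, the client_secrets.json for an installed application follows this general shape (the values here are placeholders, not real credentials):

```json
{
  "installed": {
    "client_id": "YOUR_CLIENT_ID.apps.googleusercontent.com",
    "client_secret": "YOUR_CLIENT_SECRET",
    "redirect_uris": ["http://localhost"],
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token"
  }
}
```

It’s this file – and the stored token that results from the consent step – that causes the distribution headaches discussed next.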
This is all well and good for services that have a user front end – what I need is to do this in a system that runs on a back end, and possibly on a system that isn’t the one running the code (for example, via a client web browser). There are difficulties around storing and distributing these secrets in the current S-Cargo project. Putting client_secrets.json into the project would be a problem, as any application would be able to masquerade as the Jenkins video upload application. Storing the OAuth token would also be an issue, as anyone would theoretically be able to upload to my YouTube account. Ideally I would have placeholders into which your own YouTube OAuth files could be copied – but that could prove problematic. Pulling the latest code from GitHub would build, but wouldn’t deploy to the YouTube server without replacing these placeholders with real data; if the upload returned a failure status code, the jobs would always fail. And if real values were ever pushed to GitHub in place of the placeholders (and they might be), it would make setting up a new project more difficult.
What needs to happen is for deployment to be separated from the build process. This could be accomplished by creating a separate deployment job and running it on the back of a successful build – however, I decided it might be better to create a Jenkins plugin.
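As an illustration of the separate-job idea (not the plugin I actually went with), a modern Jenkins Pipeline sketch would look something like this – the stage names, script names and credential ID are all invented, and the key point is that the secrets live in Jenkins’ own credential store rather than in the repository:

```groovy
// Sketch only: splitting the build from the deployment.
// The deploy stage only runs if the build stage succeeded.
pipeline {
    agent any
    stages {
        stage('Build video') {
            steps {
                sh './build_film.sh'  // hypothetical build script
            }
        }
        stage('Deploy to YouTube') {
            steps {
                // Credentials come from Jenkins, never from the repo
                withCredentials([file(credentialsId: 'youtube-oauth',
                                      variable: 'SECRETS')]) {
                    sh 'python upload_video.py --secrets "$SECRETS"'
                }
            }
        }
    }
}
```

This sidesteps the masquerading problem entirely: a fresh clone of the repository builds fine, and only a Jenkins instance holding the credential can deploy.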
You can find my current efforts here.
Chuck Norris is watching you build code – careful now!
For the last few days, I have been playing with the Jenkins continuous integration server and Python, and I have reached the following conclusion: writing Python code without an effective IDE makes the job of software development harder than it needs to be. I’ve been developing a lot on Ubuntu lately, so I’ve found the joy that is the Wingware IDE.
So – I think a bit of a recap is in order.
For those of you in the know, for the past couple of years I have been working on an animated short film. It’s a long process to make a short animation, with lots of assets to keep track of, so I use a production chart to keep tabs on everything. Here is a snapshot of the production chart as it stands:
Now, the eagle-eyed amongst you will notice that it’s a spreadsheet. In the past, I have tutted and rolled my eyes when people complained that when they use a spreadsheet to catalogue their DVD collection, they can’t per-pixel scroll – it snaps to the nearest cell. And then I’d patiently explain that a spreadsheet is not designed to catalogue a collection of DVDs. A spreadsheet is really good at totalling columns of numbers and/or applying formulae to them. A DVD catalogue is best done with a database.
Yes, I know I should use a database to store the production chart. It is a more effective way to store this information. Each scene is a record that can have a series of fields applied to it. We could poll the database for complete scenes and get an accurate percentage of how much of the film is animated, or rendered, or needs work, etc.
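As a sketch of what that polling would look like – the table and field names here are my own invention, assuming one record per scene with a status field – SQLite does it in a couple of lines:

```python
import sqlite3

# In-memory database for illustration; a real chart would live on disk.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scenes (name TEXT, status TEXT)")
con.executemany(
    "INSERT INTO scenes VALUES (?, ?)",
    [("sc01", "rendered"), ("sc02", "animated"),
     ("sc03", "todo"), ("sc04", "rendered")],
)

def percent_with_status(con, status):
    """Percentage of scenes whose status matches, polled from the table."""
    (matching,) = con.execute(
        "SELECT COUNT(*) FROM scenes WHERE status = ?", (status,)
    ).fetchone()
    (total,) = con.execute("SELECT COUNT(*) FROM scenes").fetchone()
    return 100.0 * matching / total
```

With the sample rows above, `percent_with_status(con, "rendered")` gives 50.0 – the kind of at-a-glance progress figure a spreadsheet makes you compute by hand.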
Thing is, I am, to my own surprise, a little bit old school. I learnt to break down sound using a mixing desk and large sheets of paper, jogging through soundtracks, listening for the pops and whistles and decoding them into the phonemes that made up the characters’ speech. And this is a digital equivalent of the old-school way of creating a production chart – it’s a digital analogue of an analogue, er... analogue.
Today, kids examine waveforms or use software tools that provide easier breakdowns, and whilst I like those tools and use them a fair bit, sometimes I think that younger animators, fresh into the field, are lacking some of these old-school skills.
Part of my old-school curmudgeonliness is the creation of dope sheets and production charts. There was something exciting about transferring your sound breakdown to a dope sheet ready to animate; it was a prelude to the storm of creation that leads to the initial pencil tests. I loved the way the production charts would fill up with checks and notes, becoming fuller as the deadline approached.
Working on this project has been great fun. The biggest problem has been scheduling the time to make the animation and learning to use the software. Part of that has been learning some of the limitations of the software, and creating new software tools that let me work with it the way I want to. I was using Synfig Stage last night, and it struck me that I have talked a lot about it at OggCamp and other tech shows without really showing it. When I started using it, it worked straight away (more or less), so showing it working didn’t feel important – because it actually was working. I suppose I should make a video demonstrating the tool and the problem it solves.
Priority, though, is the film. Right now, with about 16 scenes left to animate, there’s a definite feeling it’s starting to come together as a film, and part of me will be glad to get it finished and move on to the next thing. Part of me also misses my old-school beginnings, and I hope that maybe one day in the future I will do a proper old-school 2D short, using an actual pencil on real paper.
Well, it’s that time of year again, when we try to act all enthusiastic about a new year, make resolutions to join a gym and try to get fit (I give it 3 months!), and when we here in the bunker take a second to reflect on the year that was, and see which of us predicted the most, er, stuff.
So let’s go through the predictions shall we?