Nuke Process

Before starting this project I had never really used Nuke before. I had downloaded it over the summer and started to play around with it, but nothing more than that. Because I didn't know much about Nuke, I really valued the lessons we received from Clement, from Escape Studios, as he gave very well structured and detailed lectures, right from the basics up to more advanced techniques like 3D camera tracking and plate cleaning. Alongside these lessons I also attended one of the Ravensbourne short courses on Nuke and compositing. There were three sessions of increasing difficulty, taught by Alex, a past Ravensbourne tutor. These sessions really helped me consolidate what I had learnt from Clement, boosting my confidence within Nuke and allowing me more creative freedom.

The first thing I needed to do in Nuke was to track the camera for shot 5. This was our handheld shot, which we included so that we could have a chance to practise camera tracking. We didn't note down the metadata from the camera whilst we were shooting, as I knew that the Canon 5D records data like f-stop and focal length into the image file. I was then able to extract this data using a piece of software called ExifTool; it's a simple tool that pulls all the metadata from a shot and displays it in the command prompt. I needed data like the sensor size and focal length so that the camera track would be as accurate as possible.
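As a rough illustration of why the sensor size matters: together with the focal length it determines the camera's field of view, which is what the tracking solver really needs. Here is a quick Python sketch (the function name is mine, and I'm assuming the 5D's full-frame 36mm sensor width):

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view in degrees from focal length and sensor width."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Wide end of the 24-105mm zoom
print(round(horizontal_fov(24.0), 1))  # ~73.7 degrees
```

At the 24mm end of the zoom this gives roughly a 74° horizontal field of view, which is why getting the focal length wrong throws the solve off so badly.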

As well as inputting the correct metadata, I also needed to correct the lens distortion. This could be done easily using the lens distortion node. I knew how much to undistort by looking up the distortion factor of the specific lens we used: the Canon 24-105mm has a distortion of 0.015, so we applied the opposite, -0.015. After this the footage was ready to be tracked.
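For illustration, a one-parameter radial model captures the idea of undistortion: each point is scaled towards or away from the optical centre by an amount that grows with its distance from it. This is only a sketch of the concept, not the exact model Nuke's lens distortion node uses:

```python
def radial_distort(x, y, k):
    """Apply a simple one-parameter radial distortion r' = r * (1 + k * r^2)
    to normalised image coordinates (x, y) centred on the optical axis."""
    r2 = x * x + y * y
    scale = 1.0 + k * r2
    return x * scale, y * scale

# Undistorting with k = -0.015 pulls edge points slightly towards the centre,
# countering the lens's barrel distortion of +0.015. The centre is unaffected.
print(radial_distort(1.0, 0.0, -0.015))
print(radial_distort(0.0, 0.0, -0.015))
```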


Once the camera was tracked, I exported it to Maya using the WriteGeo node, saving it as either an .abc (Alembic) or a .fbx file. These files can then be imported into Maya and lined up to the geometry in our scene.

Through our own mistake, we didn't take any measurements of the distance between the camera and the position of the crane when we were filming. So, by a small stroke of genius, we were able to use Google Earth to measure the distance quite accurately, using its built-in ruler.

After everything was rendered out of Maya (which overall took more than a week of pretty much solid rendering!) we split our shots between us. I was to do shots 4 and 5, and Henry was doing shot 7. Because shot 4 contained no live footage and consisted mainly of reflections, it didn't require much compositing, only colour grading and a few alterations.

First off I applied a simple grade to boost the blacks and whites. I then used a chroma key to extract the greens from the leaves and create an alpha, which I used as a mask to drive the next grade nodes, where I adjusted the black, white and gamma levels to give it the yellow/brown hue. I then used the depth pass, which I rendered out in my .exr file, to drive my ZDefocus node. This node lets me use a depth pass to get a depth of field effect like you would get from a real camera lens.
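The idea behind a depth-driven defocus can be sketched very simply: the blur size for each sample grows with its distance from the focal plane, clamped to a maximum. This is a toy model of the concept, not ZDefocus's actual internals:

```python
def defocus_size(depth, focus_depth, scale=1.0, max_size=20.0):
    """Simplified depth-to-blur mapping: the further a sample is from the
    focal plane, the larger its blur kernel, clamped to max_size."""
    return min(max_size, scale * abs(depth - focus_depth))

# Samples on the focal plane stay sharp; distant samples blur more.
print(defocus_size(5.0, 5.0))    # 0.0 - in focus
print(defocus_size(12.0, 5.0))   # 7.0 - softening
print(defocus_size(100.0, 5.0))  # 20.0 - clamped at max_size
```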

Next was shot 5. This was the shot where I really wanted to showcase a wide range of skills. First off, we were having some problems with using a 'hold out' to create the alpha for where the crane touches the water; it wasn't rendering out the alpha correctly, so we had to fix it in Nuke. To create the alpha I used a Keyer node to key out the darker portion at the bottom, caused by the refractions. I then merged this keyed alpha with the top portion of the original alpha to create a composite alpha.

Then I added in the water marks along the bottom of the crane where it meets the water, to get a more realistic paper effect. I used a Tracker node to track the motion of the crane, then applied this tracking data to the transform of my rotoscope, which was masking out the grade node, giving the effect of water marks. I also had to animate the mask on the neck so that it would follow it as it dipped into the water.

To give the crane the nice rim light effect on its neck, I shuffled out the specular pass and increased its brightness. I then animated a Roto node to follow the neck as it moved, and merged the result over the crane. As well as extracting the specular pass, I shuffled out the ambient occlusion and multiply-merged it over the original crane; this made my shadows much more pronounced, giving the model more depth.
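The merge maths here is simple enough to show directly. Multiply darkens the beauty wherever the AO pass is dark; for brightening with a pass like the specular, a screen merge is one common choice (I used a straight merge over above, so the screen here is just for illustration):

```python
def multiply(a, b):
    """'multiply' merge: darkens A by B - how AO deepens the crevices."""
    return a * b

def screen(a, b):
    """'screen' merge: brightens without clipping - one common way to
    layer a bright pass such as specular over the beauty."""
    return 1.0 - (1.0 - a) * (1.0 - b)

print(multiply(0.8, 0.5))  # 0.4 - AO pulls the value down
print(screen(0.8, 0.5))    # 0.9 - highlights lift the value
```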


The next step was to grade the crane to fit the colour of the back plate. After this, I again used the depth pass on the ZDefocus to adjust the focus so that it is slightly blurred, following suit with the back plate. I did this for all of the elements of this scene, including the water and the lily pads.


We had some people walking around at the back of our scene and we wanted to remove them, in case they drew attention away from the rest of the scene. I first used a Tracker node to track the motion of the areas I wanted removed. Then, using the RotoPaint tool, I painted out the people with other areas of the scene. The tracking data was then copied into the transform tab so that the paint-out follows the motion of the camera.

I was rather pleased with the way the colour grade went on this shot. Shooting with the 5D produced some really crisp footage, which was a pleasure to work with. Here I layered multiple grade and colour nodes to give myself full control over the look of the scene. I started off by just boosting some of the levels. Then I extracted the greens, using a greenscreen keyer, so that I could mask out the leaves and trees and apply the grades that give them the yellow/brown hue.

Working with particles has always been something I have enjoyed, whether in Maya, Houdini or After Effects. For this reason I wanted to experiment with particles in Nuke. It has a solid 3D system built in, which meant I could create particles with proper depth in the scene, no problem. Here I am using the particle emitter node to simulate the particles. Connected to this are: a sphere node to give the particles physical geometry; a cube node to emit and contain the particles; then a series of effect nodes, like wind and turbulence, to give the particles some natural movement.
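To show the idea of forces like wind and turbulence acting on emitted particles, here is a tiny self-contained sketch (nothing to do with Nuke's actual nodes, just the concept): a constant wind force plus a random jitter, integrated step by step.

```python
import random

def simulate(n=100, steps=50, dt=0.04, wind=(0.5, 0.0, 0.0),
             turbulence=0.2, seed=1):
    """Toy particle sim: emit n particles in a unit cube, then push them
    each step with a constant wind force plus random turbulence."""
    rng = random.Random(seed)
    pos = [[rng.random(), rng.random(), rng.random()] for _ in range(n)]
    vel = [[0.0, 0.0, 0.0] for _ in range(n)]
    for _ in range(steps):
        for p, v in zip(pos, vel):
            for axis in range(3):
                jitter = rng.uniform(-turbulence, turbulence)
                v[axis] += (wind[axis] + jitter) * dt   # accumulate forces
                p[axis] += v[axis] * dt                 # Euler integration
    return pos

drifted = simulate()
# The cloud drifts downwind along x while the turbulence keeps it lively
print(sum(p[0] for p in drifted) / len(drifted))
```

The turbulence averages out to zero across the cloud, so the particles drift with the wind overall while each one wanders individually, which is exactly the "natural movement" the wind and turbulence nodes give you.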


For the rest of the shots I applied a grade similar to shot 5's to shots 1-3, just extracting the greens and making them orange. On shot 3 there was not much green, so instead I extracted the blue and increased the saturation a bit, which really made the blue pop, an effect Henry and I both really liked.


Colour Grading

When I'm looking for inspiration on colour, I like to look at a couple of websites. Movies in Color is a great website which gives you the general colour palette of a movie, where you can clearly see how the colour affects the mood of the piece. The second website creates an average colour for every shot in a film and then puts all the colours into one image, so you can scan through the film and find the shot with the specific colours you are looking for.

When browsing Movies in Color I came across Fantastic Mr. Fox, and I really liked the monochromatic colour scheme of the movie. It gave it a very warm, autumnal feeling, which was similar to the theme we were going for in our project. I wanted to steer away from the traditional colour spectrums we see in movies, where complementary colours are overused and can become quite cliché. The deep oranges are really powerful and complement the orange fur of the fox brilliantly.


We started filming in October, right at the start of the autumn season. Some of the trees were just starting to turn, so this gave us a reference to grade the rest of the scene against, matching these autumn oranges. I was able to do this by first using the greenscreen keyer in Nuke to create a mask just for the grass and green leaves. Then, using a grade node, I adjusted the red and green channels to give them an orange/yellow, almost mustard, hue.
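The key-and-regrade step can be sketched in a few lines: find pixels whose hue reads as green, and pull that hue towards orange while leaving saturation and brightness alone. The hue range and target here are my own rough picks for illustration, not the exact values from my Nuke script:

```python
import colorsys

def shift_green_to_orange(r, g, b, target_hue=30 / 360.0):
    """If a pixel reads as green (hue roughly 70-170 degrees), pull its hue
    towards an autumnal orange, keeping saturation and value unchanged."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if 70 / 360.0 <= h <= 170 / 360.0:
        h = target_hue
    return colorsys.hsv_to_rgb(h, s, v)

print(shift_green_to_orange(0.2, 0.6, 0.1))  # leafy green becomes orange
print(shift_green_to_orange(0.1, 0.1, 0.8))  # blue sky is left alone
```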

Here is an example of the grade we have put on our scene. In the first image all the different nodes are layered on top of each other, so we can see how the colour is built up, with the final grade on one side of the image and the original plate on the other. First off, I adjusted the black, white and gamma levels to bring the image to life, as the original plate is very desaturated. The next step was keying out the green of the leaves, grass and reflections and turning them into an autumnal orange; this took a few layers to capture all of the green effectively. After this I keyed out the lightest points using a luminance key, which let me boost the highlights, making it look sunnier. The final step was to apply the overall grade; this is what gave me the monochromatic look I was after.

We chose this grade because it reflected the time of year we were shooting in, early autumn. This time of year is, in my opinion, one of the most beautiful seasons; there is so much depth of colour in all the orange and brown hues. It also complements the soft, pale yellow paper of the crane. Having these soft, almost desaturated colours really evokes the sense of peace and tranquillity we were trying to achieve in this shot.

From my research I have found that each colour represents a different emotion. According to an article I found on Creative Bloq, oranges give a feeling of excitement without the severity that red gives, and yellows represent happiness and friendliness. They also state that brown gives an 'outdoorsy' feel and can be used to represent reliability and sturdiness, which is echoed in the firmness of the trees in the scene. They also talk about the vibrancy of a colour being vital to the emotion it gives: bright colours read as energetic, while darker shades are seen as more relaxing and immersive, which is precisely the mood we were aiming for.



Here is the 360° panorama image I created for the IBL in our scene. It was quite a challenge to put together, for a few reasons. First, we were unable to take our images in the exact location of our origami crane, because we weren't allowed to shoot on a tripod, let alone try and put one in the water. Secondly, we had to take the images on the little concrete bridge that crossed the pond. This gave us a large grey area along the bottom of the image, which would have been undesirable for the IBL. It took a little bit of matte painting, taking aspects of various photos and careful use of the clone tool, to remove the bridge and give the illusion that it was shot on top of the water. Lastly, because most of the environment was organic material like trees, bushes and flowers, which move around in the wind and tend to be quite complex, it was hard to line everything up perfectly at the seams. I used PTGui, a piece of image-stitching software, which speeds up the stitching process drastically, allowing me to make easy adjustments to the positioning of each HDR image. I was then able to touch it up in Photoshop.

Here are the four HDR images I created by converting three bracketed exposures in Adobe Bridge, using the Merge to HDR tool. You can see here how much of the bottom of the image I had to replace: all of the bridge and the foam. The sky was also too dark, so I replaced it with some bluer patches.
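Merge-to-HDR tools generally combine brackets by trusting each exposure's mid-tones most and recovering radiance by dividing pixel values by exposure time. Adobe's exact algorithm isn't public, so this is just the textbook idea in sketch form:

```python
def merge_brackets(samples):
    """Estimate scene radiance for one pixel from (value, exposure_time)
    bracket samples, weighting mid-range values most (hat function)."""
    num = den = 0.0
    for value, exposure in samples:
        w = 1.0 - abs(2.0 * value - 1.0)   # trust mid-tones, not clipped ends
        num += w * (value / exposure)
        den += w
    return num / den if den else 0.0

# Three brackets of the same pixel at 1/4s, 1s and 4s
print(merge_brackets([(0.1, 0.25), (0.4, 1.0), (0.9, 4.0)]))  # ~0.37
```

The weighting is why a dark bridge or blown-out sky in one bracket barely contributes: nearly black or nearly white samples get almost zero weight.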

Because the IBL was captured in a different place to where the CG was going to sit, we were unable to use it to drive the reflections on our water, which was a bit of a shame, as the reflections would no longer be true to our environment. We ended up having to use the back plate for our reflections; although it looks good, it wasn't perfect, but there was no other choice.


Modelling Development

I started off the modelling process by first understanding the anatomy of the origami crane. I learnt how to fold my own origami cranes out of paper by following some instructions from the internet. This helped me see the proportions of each part of the body, and also understand where all the folds would be visible. Moreover, it gave me a physical reference that I could constantly study whilst modelling.

For a little bit of fun, and also to get an idea of the scale of our crane in the real world, we made a giant origami swan. This was made from large rolls of paper, taped together and folded the exact same way as its miniature cousin. We would have liked it to be a bit larger after folding, but that would have been even more challenging to fold than it already was, and would have required more space than the library could offer!

Whilst modelling I went through a few different variations of how the crane would look. I found it quite challenging at first to get the shapes I wanted, but on my third attempt, after I had thinned the model to make it look more paper-like, I was much more pleased with my results; from there it was just a case of refining to get the model I wanted. I used some images of my paper crane models from different angles as reference, which helped me get a sense of proportion.

Click these links to view the models in 3D

I have been using Substance Painter to create the diffuse texture to apply to my subsurface shader, and to create a normal map to add some more details such as folds and creases.


Down below are the extra textures I have created to apply to the subsurface layers. I felt it was necessary to use custom textures on the subsurface layers as well as the diffuse; it gave me a lot more control over the look and the detail of the shader.

We can see more clearly how these two textures affect the shader by having a look at the test render below. The darker texture is the subsurface colour; this is what adds the dark, almost marble-like veins that run throughout the paper. I gave it this effect so the surface looked a bit more interesting and tactile, instead of being really smooth and Lambert-like. The second texture is the scatter colour, which specifies what colour will be scattered in the subsurface; although it may look a bit grey on WordPress, it is actually more yellow/orange. This is what gives me the warm orange glow that comes through the thinner parts of the model, like the wings.



Here is the normal map, made in Substance Painter, used to create the organic texture on our lily pads. I also used the subsurface shader on the lily pads, since it gives a nice translucent effect around the thin edges, and increased the specularity, which really makes the little veins pop. It was an artistic choice to make the specular colour orange rather than a more realistic white; the orange fits well with our autumn colour scheme.

Rigging: Painted Weights

After modelling the crane I sent it over to Henry for him to rig. Seeing as he would be animating it, it made sense for him to rig the model so he would understand how it all moves. There was an issue with the skinning, causing some parts of the model to move undesirably; this was easily fixed by repainting the skin weights, which are essentially a map that controls the influence of a joint on its surrounding geometry.

Rendering and IBL in Vray

Here are the settings I used when rendering the Audi car in V-Ray.

This first image shows the setup of the IBL using V-Ray's dome light, along with rectangle lights. The global illumination settings are similar to those of Mental Ray, and it takes multiple small increases to the subdivisions to balance lighting quality, noise and render times.


The next image shows how to create render passes. It is much simpler in V-Ray than in Mental Ray and only requires a few clicks. You select the desired render passes on the left and add them to the right; each pass is then written into the multi-channel EXR file. Some passes, such as Fresnel and ambient occlusion, are not in the default render passes, so you have to create a custom pass using the Extra Tex pass. This lets you apply a shader to a pass to create the one you want. In this example I am showing how to set up the ambient occlusion pass.

Here are the render passes. In order they are: beauty, reflection, refraction, Fresnel, shadow, specular and ambient occlusion.

In V-Ray, to smooth a mesh for rendering it needs to be subdivided at render time, not by pressing 3 or applying the 'smooth mesh' tool. This gives better results compared to Maya's subdivision method.


Ignoring the music that accompanies the hour-long video, it is actually a brilliant source of reference for how the water should react to similarly sized birds on a relatively calm surface. It also serves as reference for the animation of the crane and how the water will react when the crane pokes its beak into the water.

Here are some progress videos of the water simulation I have been running. Bifröst is very particular about scale: all of its calculations are computed at real-world scale, where 1 unit is equal to 1 metre. Maya's default, however, is 1 unit equal to 1 cm, and even if you change this, Bifröst works independently of Maya's units. Because of this, interesting physics and interactions can occur, as seen in image 1, where there isn't enough interaction with the water, and in image 2, where there is too much. In the final image I have the scale set properly. Because the scene is built at cm scale, we have to adjust the gravity and liquid density so that the physics are relative to that scale.

  • 1 voxel = 1 Maya unit
  • 1 voxel = 1 m at Bifröst's real-world scale
  • Gravity = 9.8 m/s²
  • Liquid density = 1000 kg/m³

Convert to cm:

  • Gravity at real-world scale is 9.8 m/s². As I am working at cm scale in Maya, I need to adjust the gravity and density accordingly. There are 100 cm in 1 m, so: 9.8 × 100 = 980 cm/s².
  • The liquid density defaults to 1000 kg/m³, which is approximately the density of water at room temperature. Because the length unit in a density is cubed, I need to divide by 100³: 1000 ÷ 1,000,000 = 0.001.

With these adjustments the water now reacts accurately to any motion or force.



The vorticity attribute accumulates rotation magnitude (the curl of the velocity field) within voxels. The vorticity channel is not physically accurate, but can be used to simulate churning when viewing and rendering. I disabled it, as it was giving me some annoying effects of the water churning for too long; it is not really noticeable when the water is quite still anyway, and it would just cause longer sim times.

Bifrost Liquid Material

Here are my liquid material tests. I've been adjusting the droplet reveal factor, surface radius and smoothing within the Bifröst meshing tab of the liquidShape node. By reducing the droplet radius and increasing the smoothing, we get a much smoother surface (see the last image). With higher values, the water is rendered in more detail and picks up more ripples. This isn't desirable for our water, as we want it to have strong reflections to match the still, reflective water in the park.


By enabling the Bifröst meshing attribute, Bifröst converts the voxels into a dynamic mesh; this greatly improves both the render quality and the render times.

  • Droplet reveal: creates and preserves the detailing of the mesh
  • Surface radius: increases surface detail
  • Droplet radius: increases the size of the droplets when they separate from the mesh
  • Kernel factor: adjusts the width of the surface smoothing kernel
  • Smoothing: smooths the mesh
  • Resolution factor: increases the resolution, but can cause long delays if set above 2-3

One problem I did encounter was that we wanted our water to look quite still and reflective to imitate the water of the pond. To achieve this effect I needed relatively low settings on the surface radius and droplet radius, and high settings on the kernel factor. This really smoothed out the water surface, but at the cost of making the water look quite globular, which had a knock-on effect of the ripples not being as prominent as I had hoped. However, they are still noticeable, and having the surface match the live plate was much more important. You can see this in the images above, where the left image has high droplet and surface radii and the right has low settings.


Here is a list of content I found to help me troubleshoot the problems I encountered, and that helped me learn and understand Bifröst's interface.

The first link is from the Autodesk website; although it covers the whole interface, it lacks any description of how to use each attribute.

This video helped me understand the basic concepts of how to set up the scene correctly. It also shows how to set up a guided simulation, whereby the water is driven by some geometry. I decided not to use a guided sim, however, because it gave the surface too much motion and wasn't reflective enough.

Here is a document I found that gives a clearer description of what each attribute does and how it functions. Although most of it seems to be right, it looks like someone's notes from a lecture, so I take each note with a pinch of scepticism.

Crane Model and Lily Models

Here is my crane model as a 3D preview; some polygons look odd because this website does not support subdivisions.

Again, this looks very low poly, but when V-Ray's subdivision is applied it looks much smoother and more lily-like. Modelling in this fashion is beneficial for render times, as it is best to keep the poly count as low as possible.

Vray and Sub Surface Scattering

For our origami crane to look as realistic as possible, we need to apply a shader that can mimic the properties of real paper. This is one of the reasons I wanted to use V-Ray to render out our assets: the capabilities of its shader set are far more advanced than those of Mental Ray, and it is a far more widely used renderer in the industry, so it would be ideal to learn it as soon as possible.

During our recent visit to the Bournemouth FX festival, there were a lot of talks about V-Ray. In a talk from Random42, a medical animation studio where V-Ray is their main renderer, they spoke about a shader called VRayFastSSS2, V-Ray's subsurface scattering shader. Its main purpose is rendering translucent materials like skin or marble. One of the benefits of the SSS2 shader is that it lets light through a material based on the thickness of the object, which is perfect for our crane model, as some parts, like the wings, are thin and need to let more light through than the body, which is dense with paper folds.
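The thickness-dependent transmission can be pictured with a simple Beer-Lambert style falloff, where light drops off exponentially with the distance it travels through the paper. VRayFastSSS2 uses a proper scattering model rather than this, so treat it purely as an illustration of why thin wings glow while the folded body stays opaque:

```python
import math

def transmitted_fraction(thickness_mm, scatter_depth_mm):
    """Beer-Lambert style falloff: fraction of light making it through a
    slab, as a rough model of thickness-dependent translucency."""
    return math.exp(-thickness_mm / scatter_depth_mm)

print(round(transmitted_fraction(0.1, 1.0), 3))  # thin wing: most light through
print(round(transmitted_fraction(5.0, 1.0), 3))  # folded body: almost none
```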

From these examples we can clearly see how light is scattered around the object; by increasing the scale parameter we can adjust the depth to which light is scattered, giving a more translucent effect. You do lose some definition from the lack of shadows, though, so there is a balancing act in finding the sweet spot.

Here are some of my experiments using the SSS2 shader on our crane model. In the first image I was having some difficulty with the V-Ray subdivisions causing the shader to react oddly to the geometry. In the second image I've solved the problem and increased the amount of scattered light; we can see its effect clearly on the wing, along its fold line. The third and fourth images use the same shader, but the fourth has improved UVs so the noise bump map affects it properly. I still need to add a normal map for some finer details, and a diffuse layer to apply a uniform texture.


Here are the settings I have at the moment. The overall colour will be changed to the diffuse channel I export from Substance Painter, and I should change the index of refraction to more closely mimic the IOR of paper; at the moment it is set to 1.3, which suits most water-based materials.


The Chaos Group website is a fantastic source for every piece of information you could want about V-Ray. The documentation is thorough, precise and easy to navigate. This source has vastly helped me improve my ability to texture, shade and render in V-Ray. There are even tutorials on some of the more general concepts of V-Ray.