
I, like the majority of the class, really enjoyed the sound of the Beano brief. I hadn't put my own idea forward because I wanted to focus my work on the areas of industry that interest me and the skills I want to improve. I am working on a couple of Beano projects.

The first and biggest is Dorota's Thingymeblob project, with Tomas and Sam. Dorota, Tomas and I have worked well together in the past, and with Sam animating I think we can produce a very polished final piece. For this project we are trying to distribute the roles effectively and use proper workflows between all the different stages. I will be creating the dynamics, texturing/shading, cameras, lighting, rendering and compositing. These are the areas where I feel I can offer the best work and also where I want to improve.

Here is the plan I have put together for our group Beano project. It details all of our roles and, if we stick to it, should have us finished a week before hand-in, to allow for any changes and mistakes.


Shader Work


Here are the first tests I have created. I am using the same VrayFastSSS material as I have in previous projects; the scattered light it creates has a nice aesthetic on a lot of objects. I am doing some colour tests to find the best base colour for the Blob. We intend her to be able to change colour depending on her mood, so I will do more tests once I've settled on the basic shader.

Dorota wanted the blob to be more transparent, so I changed the scatter properties to refractive and increased the scale so that it became translucent. The bubbles inside are created using MASH, Maya's new procedural effects toolkit. In the first instance the spheres inside are shaded with a glass material with inverted normals so they look like bubbles. Dorota wanted it to be more like a lava lamp, so I've applied a light material so that they glow inside the body.
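
MASH does the distribution procedurally inside Maya, but the underlying idea is just scattering points through a volume. Here is a minimal pure-Python stand-in (outside Maya, purely illustrative), using rejection sampling inside a sphere in place of the actual blob mesh:

```python
import random

def scatter_in_sphere(n, radius, seed=0):
    """Scatter n random points inside a sphere by rejection sampling --
    a toy stand-in for what a MASH distribute setup does in a mesh volume."""
    rng = random.Random(seed)
    points = []
    while len(points) < n:
        # Sample a point in the bounding cube, keep it only if it
        # falls inside the sphere.
        p = tuple(rng.uniform(-radius, radius) for _ in range(3))
        if sum(c * c for c in p) <= radius * radius:
            points.append(p)
    return points

bubbles = scatter_in_sphere(50, radius=1.0)
print(len(bubbles))  # 50 -- every point lies inside the unit sphere
```

Each point would then carry a bubble sphere; MASH also handles this on animated meshes, which is what made it the right tool here.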

I had been struggling for a long time to create a shader with bubbles built into the refraction, but my attempts were not successful. Whilst researching methods to distribute objects throughout a mesh, I came across this tutorial on MASH, which is exactly what I needed, and there are no problems using animated meshes either.


Here is the shader on the updated mesh. We have the different colours for all the shades she is going to take on during the animation. I still need to add a bump map to the model to get some really nice liquidy drips coming off her body.

Dorota's instruction was to make Simon look cute, like the children in Pixar, DreamWorks and Universal movies. It's all in the eyes; that's where we see most of a character's emotion. She gave me some reference files to work from and I did my own research using tutorials.




For the smoke simulation I have been using Phoenix FD, a fluid dynamics plug-in for Maya that I bought to prepare myself for working in the industry. I have been using this project, as well as both VFX projects and Dev's industry practice unit, to further my training with Phoenix FD.

Here is a quick video of the workflow used to create a smoke simulation. This is an example of how I made the higher-resolution sim; you can see me referring to the original simulation for the attributes.

Here is the higher-resolution sim. It is not quite as high as I would like, but we couldn't wait for a higher-resolution version, and it did come out a bit noisy in the renders.




Lighting this project was quite challenging: because of the high detail of all the assets in the scene and the number of lights, even low-resolution tests took a very long time to render, making it difficult to make changes. Having all shots contained in one scene had its drawbacks and benefits. It made the lighting easier, as I didn't have to recreate lights for each scene, just add them where appropriate, but it is also the reason everything became much slower.

Below is a series of my lighting tests, trying to find the correct mood for the scene. For some objects I would create the lighting in its own scene and reference it into the main scene. This allowed me to update the textures and lighting quickly without waiting for all the geometry to load in the larger scene.



I really appreciated being able to use the render farm this term. It was helpful to get to grips with it before the start of my third year, when I will be using it a lot more, hopefully problem-free. Having the render farm is what made this project possible; the renders would have taken too long on our own computers, or would not have been high enough quality to submit to The Beano. Due to unforeseen circumstances we were unable to produce all of our shots on the farm, so we had to find a workaround. For shots that were cancelled midway through, we decided to shorten their length. We had to cut one shot and render the others at a lower resolution using our own accounts on the farm. The simulation scene needed to be rendered on my own machine because the farm does not support the Phoenix FD plug-in. It was a shame some shots were rendered at a lower quality; in my opinion it is very noticeable. Shots 8, 11 and 12 are the lower-quality shots.

Below are the final renders for each shot.



It was my intention to render everything out from the scene individually using render layers, giving me full control over the lighting and compositing. However, I was unable to do this with the tight deadline. In the end we produced beauty renders with reflection, GI, AO and depth passes. This gave me some control over the look when compositing, but not to the level of detail I would have liked with more time. The depth pass is handy when compositing, allowing me to add depth of field to my renders in post rather than rendering it straight into the shot, which increases render times.
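
The idea behind driving depth of field from a depth pass can be shown with a small sketch: each pixel's depth sample maps to a blur radius that is zero at the focal plane and grows the further the sample sits from it. The names and the falloff here are my own illustration, not Nuke's exact formula:

```python
def blur_radius(depth, focus, strength=40.0, max_blur=16.0):
    """Map a depth-pass sample to a defocus blur radius in pixels.

    Zero blur at the focal plane; blur grows with the difference of
    inverse depths (a thin-lens style falloff), clamped so the far
    background doesn't blur without bound.
    """
    return min(max_blur, strength * abs(1.0 / depth - 1.0 / focus))

print(blur_radius(5.0, focus=5.0))  # 0.0 -- sample sits on the focal plane
print(blur_radius(2.5, focus=5.0))  # foreground sample, noticeably soft
```

Doing this in comp means the focal plane and blur strength stay tweakable per shot, instead of being baked into an hours-long render.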

To save time I used a grading workflow I have used before. It is very simple, consisting of a Grade node to boost the lights and darks, and a colour-grade node to boost the intensity of specific colours, namely the red and the blue. I then applied the specular and reflection passes over the top to make them 'pop', livening up the scene. The final step is adding in the depth of field using the depth pass and the ZDefocus node. I also found a neat trick online for upscaling 1080p footage to 4K without too much quality loss, using a TVIScale node. I found this to be effective scaling from 720p to 4K and then reformatting to 1080p; obviously the quality is not the same, but it was better than simply reformatting 720p to 1080p.
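
The grade itself boils down to simple per-channel arithmetic. This is a hedged sketch of the kind of lift/gain/gamma mapping a Grade node applies — my own simplified version, not Nuke's exact implementation:

```python
def grade(value, lift=0.0, gain=1.0, gamma=1.0):
    """Minimal per-channel grade: lift raises the blacks, gain scales
    the whites, and gamma bends the midtones."""
    v = lift + value * (gain - lift)
    return max(0.0, v) ** (1.0 / gamma)

print(grade(0.0, lift=0.05))  # blacks lifted to 0.05
print(grade(1.0, gain=1.2))   # whites boosted to 1.2
```

Running the same function per channel with different gains is effectively what the colour-grade step does to push the reds and blues.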

Here is a comparison between the different scaling methods. You can clearly see the fidelity of the TVIScale node, which adds far more detail than the other methods.
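
The gap between a plain reformat and a smarter upscale shows up even in a toy 1D example. TVIScale does something far more sophisticated, but the baseline it is beating looks like this (my own illustration):

```python
def upscale_nearest(samples, factor):
    # Repeat each sample: blocky, like a reformat with no filtering.
    return [samples[i // factor] for i in range(len(samples) * factor)]

def upscale_linear(samples, factor):
    # Interpolate between neighbours: smoother gradients, fewer stair-steps.
    out = []
    for i in range(len(samples) * factor):
        pos = i / factor
        lo = min(int(pos), len(samples) - 1)
        hi = min(lo + 1, len(samples) - 1)
        t = pos - lo
        out.append(samples[lo] * (1 - t) + samples[hi] * t)
    return out

ramp = [0.0, 1.0]
print(upscale_nearest(ramp, 2))  # [0.0, 0.0, 1.0, 1.0]
print(upscale_linear(ramp, 2))   # [0.0, 0.5, 1.0, 1.0]
```

Nearest-neighbour duplicates pixels, linear at least invents plausible in-between values, and edge-aware upscalers go further still, which is where the extra perceived detail comes from.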


For reference, here is my Nuke tree and a close-up to show my workflow.


The 'Beano' brief was a project I was very excited about, not only because of the prospect of money at the end, but also for the industry connections it could bring and the opportunity to work alongside industry professionals giving professional and constructive feedback. At the start of term my plan was to assist multiple groups with the lighting, texturing/shading and rendering of their 3D projects. I planned to work on Dorota's project as well as Ren and Finn's 3D project; however, due to some technical issues, Ren and Finn were unable to produce animation ready for rendering by the deadline, which gave me more time to work on my other project. I will still be producing some beauty renders for them for the next presentations at the Beano headquarters on the 20th of June. Dorota and I have consistently worked well together on previous projects, developing a strong personal and professional relationship, which is key to our success. Over the course of the term I became more involved in the project, taking on the role of producer: organising people's workloads and deadlines, problem-solving and making sure the project was finished on time. As we were the only group to have a finished product by the 6th of June, I feel I fulfilled this role successfully.


2nd Year Film Project ‘Rushes’ VFX


“Rushes is an experimental thriller observing the instant nature of a bad decision and the ripples and effects it can have on other people’s lives. Looking at the mechanical nature of time from 5 different perspectives this stylistic short takes inspirations from High Rise, and Vantage Point. Set in a confused world, somewhere between the 70s and the future there is a strong push on colour and distortion through art deco themes.”


I was approached by Tara Trangmar, a third-year DFP student, to do the VFX for her FMP. She is the producer on the Vanitati project and had heard I was keen on creating liquid simulations.

There are four different shots that require VFX. There are two screen replacements I need to create and composite in Nuke, and two tricky water simulations: the first requires a custom mesh of a duvet and a guided simulation to trickle down it; the second requires the liquid to emit from a moving person and react with the scene around them. What makes this tricky is that the camera is also moving.

I chose to help on this project because it was an ideal opportunity to practice some more complex simulation and compositing techniques. This is a solo project for me, so I am able to practice efficient workflows in case I ever work freelance.

Shot 1

I am having a lot of issues with this shot. There is tonnes of motion blur as the actor starts to move, which is making it impossible to track the phone screen; it doesn't help that the tracking markers are not crosses or small dots. There is a solution, however: the video is longer than the shot, and there are about 10 seconds at the start where the actor isn't walking, which is trackable. I'm just waiting to hear back from Tara on whether it is OK to move ahead with this change.

Shot 3

The only reference I was given for this shot was the short storyboard and that she wanted it to be 'Kubrick-esque'. This was the scene from The Shining she gave me as reference.

Because this was the only reference, in my first tests the blood was very powerful. After showing it to Tara she said it was too much, so next time I am going to tone it down and make it more realistic.

First of all I needed to track the camera in Nuke; that's what all the locators are in the first video. With these markers as a guide, I made a basic model of the room for the liquid to react with. The first video was made using Bifrost, but I have switched to Phoenix FD as it is easier to simulate smaller-scale liquids and it renders better with Vray. I am using a similar method of using a body rig as a collider for the actor and animating it to match-move with him. This is working OK, but there is a little bit of sliding where it doesn't completely match his movement. I should be able to fix this in Nuke, though, using the tracking markers provided.


Shot 4

This shot requires the screen to be replaced with a custom calling screen and for there to be blood splatter over the whole thing.

First of all I am planning out what needs to be done. The tracking and screen replacement will be done in Nuke, and the blood will most likely be done using RealFlow.


First I need to create the video that the screen will be replaced with. I have used a combination of these Android templates and custom images in After Effects to make a calling screen.


Below is a breakdown of the call-screen replacement. I'm tracking the green markers and using the data to drive the roto paint that removes them. The same tracking data is used to hold the screen and cracked screen onto the phone.
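
The core of driving a patch from tracking data is per-frame offsets: wherever the marker moves, the overlay's anchor follows. A minimal sketch of that idea (the track values here are made up for illustration):

```python
def pin_overlay(track, anchor, frame):
    """Return the (x, y) translation that keeps an overlay's anchor
    point locked onto the tracked marker for a given frame."""
    mx, my = track[frame]
    ax, ay = anchor
    return (mx - ax, my - ay)

# Hypothetical 2D track: marker position in pixels per frame.
track = {1: (100.0, 200.0), 2: (104.0, 198.0)}
print(pin_overlay(track, anchor=(100.0, 200.0), frame=2))  # (4.0, -2.0)
```

A real screen replacement uses four such correspondences to corner-pin the whole quad, but each corner obeys the same follow-the-marker rule.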


Here is a quick slap comp of a still frame of the fluid simulation, testing whether my camera track is sticking to the plate properly.




I took on this project because I wanted to test myself by producing the full VFX pipeline. This project has required me to do modelling, rigging, texturing, lighting, rendering, dynamics and compositing, and it has really helped me improve my dynamics abilities, with varied shots for me to practice on. In hindsight it was too much work to take on; I think it would have been possible to do just one VFX project. However, I have time over summer to fully complete each shot.

I had a lot of issues with these shots, mostly to do with tracking markers and a lack of reference. I put the blame on myself, because I was not able to attend the shoot, where I would have been able to better direct what I needed; however, it fell on our last deadline so I could not make it. This made it a good opportunity to work with footage that wasn't perfect, forcing me to find effective ways to solve the problems, even if that means 'boshing' it till it works. The method doesn't matter as long as it works.

Submission and Evaluation

RSA Submission 


This was my section of the submission.




This has probably been the unit with the highest workload so far, but that is probably my own fault for taking on too much work, even when advised against it at the beginning of the term. But how will I improve without increasing my workload? Overall it has been a good unit; I've enjoyed finally getting to learn some real animation skills, and I have seen my animating improve over the duration of the unit.

We wanted to take this unit as an opportunity to re-enter the RSA after we sadly didn't win last year. We had a point to prove, and hopefully, with a year to improve our skills, we have produced something that has a possibility of winning. We tried to take on board what they mentioned last year about the work being too confusing, so this time we tried to produce something that had a clear story and purpose. I think we achieved that by creating a very visually appealing piece with a short story that an audience could follow. We wanted to make something a bit different compared to other videos, which all follow a similar motion graphics / infographic style. We wanted to make something that really stood out, which is what I think drew them to our piece last year.

The RSA entry was heavily influenced by Pixar, with all our characters taking reference from previous Pixar characters, and the whole piece has a very Toy Story-esque feel to it. We did this because we know how iconic Pixar's look is and that it is an effective way to create emotional stories that audiences can empathise with. Empathy was a big part of this topic, and creating animations that audiences can empathise with is key to making an impactful video. I think I have achieved this, especially with the grandfather/grandchild relationship of Walter and Cody.

I decided to do this project as a group because the workload would have been far too much for one person. I worked well with Dorota on this project last year, and it was both of our intentions to enter again this year. We asked Emma to work with us, as we thought we would be able to get more done with three of us. Sadly, towards the end of the project Emma was having some personal issues, which meant she was unable to complete a lot of the work she had been set. This set our work back a lot, because it meant mine and Dorota's workloads increased, and sadly it caused us, in my opinion, to produce a product that was not up to my standards. This increased workload took a toll on the other jump and modelling projects. I was unable to fully complete either of them: firstly because the RSA hand-in was earlier, meaning I had to leave the other projects until after it, and secondly because, with the issues with Emma, I was extremely stressed from working and not sleeping for a week and had lost a lot of my motivation.

Another problem we had was that, because we were using Vray, the other two were not familiar with how to use it. This meant a lot of the time I should have been working on the animation was spent setting up the scenes for the other two to render. Towards the end Dorota got the hang of Vray and was able to render without my input. However, because of their unfamiliarity with the program, some of our shots came out looking slightly different. This was also an issue when Tomas rendered a couple of the scenes for us. I tried to match the colours in Nuke as much as possible, but it is still noticeable in my opinion.

On a good note, I was extremely pleased with how my renders came out. Lighting and rendering is becoming another area I am really interested in, aside from dynamics; I really enjoy the outcomes I am producing and see myself improving all the time. I think the depth of field really sells the toy-like nature of the piece: close up to the toys it has a large depth of field, and when it's far out in the room it has a low depth of field. I also feel my animation skills have greatly improved over the course of the unit.

I am disappointed in myself for not completing the jump or having the time to create the IK/FK switch on my arm rig. I know I am capable of completing both to a high standard, but the workload was just too much. I now know for the future not to take on too much, and to really focus on refining something fully before trying to take on anything else.

In conclusion, this project has been rather up and down; there have been some things I have been really pleased about and others that have really dampened my mood for the whole unit. I still intend to pursue improving my animation skills, and I intend to finish all the projects to a higher standard over the Easter break, as I think all of them would be fantastic additions to my showreel.




Modelling Development

I started the modelling process by first understanding the anatomy of the origami crane. I learnt how to build my own origami cranes out of paper by following some instructions from the internet. This helped me see the proportions of each part of the body, and also to understand where all the folds would be visible. Moreover, it gave me a physical reference that I could constantly study whilst modelling.

For a little bit of fun, and also to get an idea of the scale of our crane at world scale, we made a giant origami swan. This was made from large rolls of paper, taped together and folded the exact same way as its miniature cousin. We would have liked it to be a bit larger after folding, but it would have been a lot more challenging to fold than it already was, and it required more space than the library could offer!

Whilst modelling I went through a few different variations of how the crane would look. I found it quite challenging at first to get the shapes I wanted, but on my third attempt, after I had thinned the model to make it look more paper-like, I was much happier with my results; it was then just a case of refining to get the model I wanted. I used images of my paper cranes from different angles as reference, which helped me get a sense of proportion.

Click these links to view the models in 3D

I have been using Substance Painter to create the diffuse texture to apply to my sub-surface shader, and to create a normal map to add some more details such as folds and creases.


Below are the extra textures I have created to apply to the sub-surface layer. I felt it was necessary to use custom textures on the sub-surface layers as well as the diffuse; it gave me a lot more control over the look and the detail of the shader.

We can see more clearly how these two textures are affecting the shader by looking at the test render below. The darker texture is the sub-surface colour; this is what adds the dark, almost marble-like veins that run throughout the paper. I gave it this effect so the surface looked a bit more interesting and tactile, instead of being really smooth and Lambert-like. The second texture is the scatter colour, which specifies what colour will be scattered in the sub-surface; although it may look a bit grey on WordPress, it is actually more yellow/orange. This is what gives the warm orange glow that comes through the thinner parts of the model, like the wings.



Here is the normal map, used to create the organic texture on our lily pads, made in Substance Painter. I have also used the sub-surface shader on the lily pads, since it gives a nice translucent effect around the thin edges. I have also increased the specularity which really makes the little veins pop. It was an artistic choice to make the specular colour orange and not a more realistic white. This orange would fit well with our autumn colour scheme.

Rigging: Painted Weights

After modelling the crane I sent it over to Henry for him to rig. Seeing as he would be animating it, I thought it made sense for him to rig the model so he would understand how it all moves. There was an issue with the skinning, causing some parts of the model to move undesirably; this is easily fixed by repainting the skin weights, which are essentially a per-vertex map that controls the influence of each joint on its surrounding geometry.
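
A small sketch of the rule painted weights have to obey: every vertex's joint weights must sum to one, or the mesh stretches or shrinks as the joints move. This is my own illustration of the normalisation step (joint names are made up), not Maya's code:

```python
def normalize_weights(weights, eps=1e-9):
    """Rescale a vertex's joint weights so they sum to exactly 1."""
    total = sum(weights.values())
    if total < eps:
        raise ValueError("vertex has no joint influence")
    return {joint: w / total for joint, w in weights.items()}

# After painting, this vertex's weights only sum to 0.9 -- renormalise.
w = normalize_weights({"neck_jnt": 0.6, "wing_L_jnt": 0.3})
print(round(sum(w.values()), 9))  # 1.0
```

Maya keeps weights normalised for you when painting, which is why fixing bad deformation is usually a matter of repainting influence rather than editing numbers by hand.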

Rendering and IBL in Vray

Here are the settings I used when rendering the Audi car in Vray.

This first image shows the setup for the IBL using Vray's dome light, along with rectangle lights. The global illumination settings are similar to those in mental ray, and it takes multiple small increases of the subdivisions to improve lighting quality and reduce noise while keeping render times in check.
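
The reason quality climbs so slowly with each subdivision bump is basic Monte Carlo behaviour: noise falls roughly with the square root of the sample count. A quick back-of-envelope sketch:

```python
def noise_ratio(samples_before, samples_after):
    """Relative noise after raising the sample count: Monte Carlo error
    shrinks as 1/sqrt(samples), so big sample jumps buy small gains."""
    return (samples_before / samples_after) ** 0.5

print(noise_ratio(64, 256))  # 0.5 -- quadruple the samples, halve the noise
```

This is why small incremental increases are the sensible workflow: halving the remaining noise costs four times the samples, and render time scales with the samples, not the visual improvement.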


The next image shows how to create render passes. It is much simpler in Vray than in mental ray and only requires a few clicks. You select the desired render passes on the left and add them to the right; each pass is then written to the multi-channel EXR file. Some passes, such as Fresnel and ambient occlusion, are not in the default render passes, so you have to create a custom pass using the extra tex pass. This lets you apply a shader to any pass to create the pass you want; in this example I am showing how to set up the ambient occlusion pass.

Here are the render passes. In order they are: beauty, reflection, refraction, Fresnel, shadow, specular and ambient occlusion.
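
The point of rendering passes like these is that they recombine into the beauty in comp, which is what gives per-pass control. A hedged sketch of the additive recombination — pass names and values here are illustrative, and real Vray passes don't always sum quite this cleanly:

```python
def combine_passes(passes, order=("diffuse", "reflection", "refraction", "specular")):
    """Additively recombine per-pixel render passes into a beauty-style
    value -- grading one pass before the sum is what gives the control."""
    return [sum(px) for px in zip(*(passes[name] for name in order))]

# One-channel, three-pixel toy example.
passes = {
    "diffuse":    [0.5, 0.2, 0.0],
    "reflection": [0.1, 0.0, 0.3],
    "refraction": [0.0, 0.1, 0.2],
    "specular":   [0.2, 0.0, 0.1],
}
print([round(v, 6) for v in combine_passes(passes)])  # [0.8, 0.3, 0.6]
```

Boosting, say, the reflection list before summing brightens only the reflections, which is exactly the trick used later to make the speculars 'pop'.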

In Vray, to smooth a mesh for rendering it needs to be subdivided at the render stage, not by pressing 3 or applying the 'smooth mesh' tool. This gives better results than Maya's subdivision method.

Idea #3

As a group we liked the idea of having something that is time-warped. Either a time lapse or slow motion would look really effective with our CG. We thought we could incorporate slow motion into our insect idea, with some water droplets, but for this idea we suggested a time lapse of something decaying, like a statue or building.

I have put together a mood board of different ideas to do with decay, and a possible location of Parliament Square, as we know there are many iconic statues in the area.


With this idea I would hope to use some particle and fracture simulations to create the decaying effect, and it would give me an opportunity to continue practicing Houdini.


Below is my first storyboard. This was a quick sketch I used to later create my concept/storyboard piece, also shown below. The first storyboard was me quickly getting my initial ideas down, which later helped me lay out my second storyboard in Photoshop. The second storyboard was more for expressing the mood and colour of my scene than for showing the camera angles.


My third storyboard, although similar to my first, is a better representation of the camera angles and detail I intend to use in my scene, and is more in line with my animatic. Currently I feel there are not enough shots within my storyboard to really tell my story in a clear manner, so I will be creating another, more detailed storyboard in the future.


Asset Creation: Cave base model

Here is the model of my cave, created in ZBrush. This is my first time using the software and I have found it quite challenging, but I am slowly picking it up, and I am pleased with the final outcome. The main issue I have been having is with the texturing and UV maps of the model: when applying a texture in ZBrush it looks OK but seems very pixelated, and there is also stretching at some points; this is when using the spherical UV option. I am also having issues when I try to create the normals within Maya using transfer maps, from the high-poly to the low-poly model. The next issue is also in Maya: when I try to unwrap the UVs it is very messy, which could be due to the still relatively high poly count, even on my low-poly model.


Here is my first animatic, based off my third storyboard. It is currently not as long as I wanted it to be and doesn't contain enough detail to convey my story as I want it to; however, it does give a good example of the setting in the first few seconds. I'm still undecided on the concept of having it all in one shot; although it may look nice, I'm unsure whether or not it is appropriate for this piece. It was created by attaching the camera to a motion path and parenting the aim to certain aspects of the scene. I have done this to create as smooth a camera movement as possible, but at some points it is still not how I want it to be; this will improve with further refinement of the technique, which is still quite new to me. I will be creating another animatic with separate shots, some of which can be seen in my blocking renders.
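
Parenting the aim to parts of the scene means the camera's orientation is recomputed every frame from its position and the aim target. The orientation part reduces to a normalised direction vector, sketched here outside Maya (purely illustrative):

```python
import math

def aim_direction(cam_pos, target):
    """Unit vector from the camera to its aim target -- the direction an
    aim setup keeps the camera pointing as it rides the motion path."""
    d = [t - c for c, t in zip(cam_pos, target)]
    length = math.sqrt(sum(x * x for x in d))
    return [x / length for x in d]

# Camera at (0, 2, 5) aiming at a point straight ahead at the same height.
print(aim_direction((0.0, 2.0, 5.0), (0.0, 2.0, 0.0)))  # [0.0, 0.0, -1.0]
```

Because the direction is re-solved every frame, a fast-moving aim target can still produce whips in the camera, which is the jerkiness that further refinement of the path and aim animation smooths out.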


What industry roles interest me?

After a bit of searching, a few roles have started to stand out to me: character animator, effects animator, animator, VFX artist and concept artist. I plan to look into all of these positions to gain a better understanding and see which interest me enough to look deeper into.

Before starting the course, being a visual effects artist was a field that interested me greatly, and it still does today. My interest started at college, when Double Negative came in and gave a presentation on working within the VFX industry. They spoke about all the different pathways within the company, from concept artists to compositors, and the process of becoming a member of the team at DNeg. Most people start out as a runner: delivering post, grabbing coffees, printing, and generally being an all-round helper. Currently on the DNeg website they are offering positions as a 2D or 3D runner, and from being a runner you are promoted to your field of interest, usually after 12 months.