System Integration stuff; I need to get this blog hooked to Facebook; female singer.

As some people know, I've been involved in high-volume storage for about 30 years … lots of multi-petabyte systems.  I generally like shared-nothing storage architectures, but for pudls I decided to put RAID drives on each server.  That was causing all sorts of problems keeping things in sync, and driving me a bit crazy (since I had literally random terabytes of data on 3 different servers).

I recently moved to OneBlox (the company is Exablox), which is a networked shared-nothing architecture.  The initial couple of days were a struggle, but my ring is working really well right now.  I've only filled about 25% of the drive capacity in the ring, and did that with total crap disk drives (with Exablox you put in whatever drives you want and they go into a shared-nothing array).  I'm now going to put in another 25% of the capacity with decent drives.  In a few more weeks I will fill up the entire ring with high-quality drives.  I'll publish some numbers when I get things fully working.

I btw really need to get my blog and previs area hooked to Facebook.  All my previs and story development stuff is actually available to anyone, but I really haven’t explained how to get it.

Another thing: now that I have basically four main scenes in the painting, I'll need a female singer who kind of counterposes the Painter's songs, to sing inside the painting for the female Model.  Possibly an adult female and a child female singer for two sections, btw.

Changing the height of the painted worlds

In the scene where the Painter looks into the dark painting in the shed, I had the Model standing at around the same height inside the painting.  Now I have her down about 4 feet, with the Painter looking down and handing the cup down to her.

On the painting side of the shed scene it's really dark, with reflective puddles of water and a center section where there is a fountain well.  I have been going back and forth, but decided to put a girl at the well to fill the cup of water for the Model.

In a later scene, where the Painter is looking toward the bed and the painting of China, I'll now have both the woman and the child looking up in the darkness at the light coming in (from the Painter's viewpoint).  As the light rays hit, they will lighten the golden “autumn” mountains.  I have been getting my maya/vue/plantfactory process down for the inside-the-painting scenes, and need to get the RL painting textures onto that mountain and foliage (vue-created landscape and plant factory-created trees).

I got a growing tree animated using both the vue xstream animation and the plant factory node based creation.  That basically took me all day, but it also really helped me create some great links into the second reel (the china gold scenes).

Growing the Plants …

For the female Model who is inside the paintings, the world is created through VUE xStream and PlantFactory.  Both programs are fully integrated into Maya, so it's not like I need to learn two completely different packages; it's more like plugins inside Maya.  Still, I need to program a bunch of the plant growth and world population of the plants.  The first main scene that uses this is when the Model takes a cup from the Painter and goes to a stream (inside the painting), gets some water, and returns the cup to the Painter with clean water in it.  (The child has drunk poison by now and has wandered off, so the healing cup goes to waste.)

Anyway, getting this scene going should be pretty interesting.

Inside the paintings with VUE xStream

In the scenes where actors entered the paintings, I was having problems creating those worlds.  In my “gold” china painting, that world wasn't coming together; neither was my painting of the sunset, or even the black painting.  I've finally decided to use VUE along with maya to generate the worlds.  I was doing some pre-viz work in second life just to test some ideas out, but was finding that coding it all directly in maya was just too much time as I moved from previz to full shot development for those scenes.  I need to quickly come up to speed on VUE this weekend.

Also, btw, I committed to having the Model only inside the paintings.  This really adds a lot … it's one of those obvious plot points I should have figured out long ago.  I was having the Model go in and out of the paintings; now that she is stuck in the paintings, with the Painter looking back in time through the paintings, it creates a lot more tension that wasn't there before.  Also, from a vfx standpoint, it just makes a bunch of things a lot easier.

“Observing the World Changes the World” is something I want the audience to take away from my film, but I didn't have a good mechanism for pulling that together.  Now I think I'm on track on that element of the story spine.

Where’s the chicken mesh …

hmmm … where did I put that chicken mesh?  Also trying to decide for a shot if I should have the chicken fly in from the left or the right; seems like the performance capture for the chicken was fly-in from the left … so left it is.  (I will have to recompose though, since the chicken would be going in “behind” the Painter.  The nice thing in this shot is the Painter is full CG, so I don't need to roto the Painter's pants.)

Trying to find it with nuke studio, but that program is going so so SLOW today.  I think for some problem I was having with studio I flushed the caches, so it must be rebuilding them … ok, take a few hours' break.  I'll continue the search with my old windows folder approach.

Turning all my computers and disk arrays on, aside from helping me find “the mesh”, will heat up the room.  It's around 59 degrees.

Why my renders were going slow; doing a lot of performance capture today …

I figured out why a bunch of my renders were going slow.  Fortunately, last week's lesson in the class I'm taking, which I had not done yet, goes over exactly what I did wrong.  Victor Perez is one of my teachers at FXPHD, and I just learn so much critical stuff from him.  LMAO … he was just showing something similar to what I was doing and said “that, my friends, is wrong”.

Will be doing a lot of performance capture today in the Xsens suit.  Mainly doing the snake, dog and chicken.

Was three classes behind on Victor's classes; still working on those. k … finished two classes … need a nap. Also redid a roto and re-ran a composite with the speed improvement completed. hmmm … wasn't very smooth … my cheat on water is to motion blur it, which takes some CPU/GPU time … but my brain can only handle so much keyframe animation.

Had an idea of peeling my skin off (when I'm performing “People”) and turning it into a snake, rather than turning into a puddle then a snake; going to be freaking weird.  Also I had to break it up; it just took too long taking off the Xsens suit while I was recording.  Fun stuff, got me winded a bit.

Disk array didn't start up this morning

Going crazy; my disk array on windows didn't start this morning … rebooting … what a way to start off, since I was really trying to finish off C010.  I back everything up every day, but actually don't have the backup of all of yesterday … I'll do that after I get the raid working … but it's these stupid things that take all my time.  I have a second disk array that is also acting up and that I have a response from support on.  I should have gotten high-end disk array stuff ;(

arghh … it's up, doing backup right now.  Ok … I'll take a 45-minute coffee break while the backup is going. Getting file drop-offs every now and then on a transfer … arghhh.

On another note, I just got my updated licenses from the foundry and need to install them … I'm really bad at installing floating licenses.

ok … got the batch render in maya working ok … back to black.

hmmm … lol … with all the switching around, I roto'ed a big section in the wrong format when I was clean-plating for the maya render.  The maya render worked out nicely though. Finishing up a major composite, one of the first times where I lined up Xsens performance capture with cloth.  I might get competent one of these days.

Hmmm …

Getting my junk together and hunkering down to get a rough cut of reel 1.  Working on P029_C010 mostly today … I need to finish that by tonight.  I'm using my new re-entrant workflow to finish that off, so it gives me a workable method for finishing the reel.

ok … pulling the lidar into P029_C010 now; actually need to review the best lidar for that scene in the obj lidar directory … done

ok … adding the camera to P029_C010 … but just did some business stuff and need to go on a walk.  Also just sent out the camera as an asset.  Kicked off a deep imaging run, no idea how it's going to turn out. Files are getting a bit big; going to prune out most of the deep stuff. Ok … re-rendering P029_C010 … have some reasonable sizes.  Going to bring in People/Snake next and render that.  I also need to decide if I render the chicken; I don't have the performance capture of the chicken … crap … need that.  Arghh … messed up on a camera … was double rendering.

4K workflow is giving me some nice 2k distribution options

Since I went 4k widescreen (4096×1716) for a kids' film … realistically the vast majority of viewing environments will be either normal HD or things like an iPad. The more “kid” formats (either 1080p or 4:3) clip the sides of the widescreen ratio, since kids hate black bars on the screen and generally want to see faces bigger. If I had gone non-4k with an 858-pixel height, it would look really icky upscaling 858 to the 2048-high panel on the new iPad or the 1536-high panel on the current iPad. Being 1716 high instead is giving me some options in distribution discussions which I really hadn't thought of when I started production.
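The arithmetic behind that, using my own figures as a sanity check: cropping the sides of the 1716-high master to 4:3 lands on a frame that scales *down* to the iPad panel on both axes, while an 858-high master would have needed a big upscale.

```python
# Sanity-check the 4k-master vs 858-high-master arithmetic from the note
# above (my own figures, not anything official).
SRC_W, SRC_H = 4096, 1716          # 2.39:1 master
IPAD_W, IPAD_H = 2048, 1536        # retina iPad panel, 4:3

# 4:3 crop at full master height: clip the sides, keep every vertical pixel
crop_w = SRC_H * 4 // 3
print(crop_w, SRC_H)               # 2288 x 1716

# Both axes then scale DOWN to the iPad panel by the exact same factor,
# so no upscaling and no black bars.
scale_w = crop_w / IPAD_W
scale_h = SRC_H / IPAD_H
print(round(scale_w, 4), round(scale_h, 4))   # both 1.1172

# An 858-high master would instead have had to upscale ~1.79x:
print(round(IPAD_H / 858, 2))      # 1.79
```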


Shot in kitchen with egg and the frying pan

R1. everything on the roto is NOT working, so mask out the pants and do an overlay with the Xsens suit for the painter

  • Xsens suit take for the painter on that comp is … ? (C012 is July28_2015-20; C010 is july28_2015-018.MVN … obviously the dates on the recordings are wrong, and the comments in the file are wrong too; it says C011, while -19 is actually C011)
  • motion builder isn't reducing the key count from 240 fps to 24 fps after doing the filter. I'll just keep it at 24 fps; it probably wouldn't hurt if I do the particle release at 24 fps, if I'm doing a water simulation btw.
  • got the performance capture and my model into maya; need to bring in the lidar tomorrow so I can aim the camera properly
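As a stand-in for the key reduction MotionBuilder isn't doing, a naive decimation (keep every Nth sample) is easy to sketch. This is my own placeholder, not MotionBuilder's filter, and a real resample should low-pass filter first:

```python
# Naive 240 fps -> 24 fps key decimation: keep every 10th sample.
# Placeholder for MotionBuilder's misbehaving key reducer; a proper
# resample would filter before dropping samples.
def decimate(keys, src_fps=240, dst_fps=24):
    step = src_fps // dst_fps          # 10 source keys per output key
    return keys[::step]

keys = list(range(240))                # one second of 240 fps keys
out = decimate(keys)
print(len(out))                        # 24 keys for the same second
```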

Re-entrant Workflows

I've been slowly moving to re-entrant workflows.  My definition of a re-entrant workflow is one where the story, editing, compositing, production photography, vfx, sound recording, music … are all done together and merged daily (i.e. in one day, all those activities occur and the results are merged within some complex package).

The more I study re-entrant workflows, the more I find them actually used by the movies I like.  These re-entrant workflows aren't really explained well; if you read books, they normally break everything out by discipline in a parallel and linear manner.  The films I really like that show re-entrant workflows and have also published how the film was made are “Lord of the Rings”, “Inside Out”, “Mars” and “Avatar”.  More films are done with re-entrant workflows, but these are the ones that I like and that have a lot of documentation on how they were actually made.

Also, I think films that are heavy on either animation or performance capture have to be done this way to be effective, and that's what my film is all about.  I think I was referencing other workflows that were not performance-capture oriented, and thus was having a lot of workflow “collisions” with things just not coming together.


Pretty late … I'm having problems again with nuke studio; it seems as I do more complex things there's embedded stuff that just straight out breaks.  I think I'll do the major editing through studio, but actually start the composites in nukex directly.  Nuke studio is just driving me crazy with “weirdnesses”.  Also, since I'm moving the compositing to windows, I have had to remap all the composite files. Thinking more about this, I might as well bite the bullet before the compositing mapping gets massive, and just flatten things out a bit tomorrow. I think I will have “raw”, “comp”, and “final” … where:

raw = r3d

comp = nukex & maya aligned to r3d’s (minus all huge weird numbers).

“final” = basically nuke studio & protools, in reel sections (I'll probably keep the .mov files in this directory with the stems)

Also the top disk will have a name, but there won’t be sub versions.
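The remap itself is mostly mechanical; a sketch of the idea (the mount prefixes below are hypothetical examples, not my real paths):

```python
# Sketch of remapping composite file paths from old mac mounts to the
# flattened raw/comp/final layout. The prefixes here are hypothetical
# stand-ins, not my actual directory names.
REMAP = {
    "/Volumes/old_mac_mount/": "D:/pudls/comp/",
    "/Volumes/r3d_raw/":       "D:/pudls/raw/",
}

def remap_path(path):
    for old, new in REMAP.items():
        if path.startswith(old):
            return new + path[len(old):]
    return path  # already-windows paths pass through untouched

print(remap_path("/Volumes/old_mac_mount/P029_C010/plate.exr"))
# -> D:/pudls/comp/P029_C010/plate.exr
```

Running something like this over every script before the mapping gets massive is the "bite the bullet" part.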

I’m my own system support for Pudls

I (pat) am doing all the systems support for pudls, which is definitely not optimal.

I'm rebuilding one disk array right now (24 terabytes) on my primary server (which has 75 terabytes total).  Also just finished connecting a 24-terabyte disk array to my windows superish computer box, and started a 12-terabyte data transfer over the 10 gigabit network I manage.
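Back-of-envelope on that transfer (the 70% effective-throughput figure is my guess, not a measurement):

```python
# How long should 12 TB take over a 10 gigabit link?
bits = 12 * 1e12 * 8                 # total bits to move
link_bps = 10e9                      # 10 GbE line rate
ideal_s = bits / link_bps            # seconds at wire speed
real_s = ideal_s / 0.70              # assumed protocol/disk overhead
print(round(ideal_s / 3600, 2), "h ideal,",
      round(real_s / 3600, 2), "h realistic")   # 2.67 h ideal, 3.81 h realistic
```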

I also just moved protools over to the windows box, went through the 5.1 options, and have that figured out with the m903 amplifier I use to mix.  So I'm moving all my sound and final editing over to windows.  I was just having weird disk array problems with protools that were driving me crazy.  I'm starting to sound mix the first reel; the surround sound mic setup is working out well.

Linking scenes

A really important thing I work on a lot is how the characters problem-solve, and how that problem solving is linked scene to scene.  I was just having the Painter open the refrigerator and see it was empty, but that wasn't creating a compelling problem.  I now have the Painter looking into the fridge and seeing the sugar drinks, with one drink missing (the one that was given to the child in the previous scene).  The Painter now decides not to drink the sugar drink, and goes over to get some water from the faucet (this is really the starting turn).

This scene is now connected to People going into the underground area, where he is handed a drink and a smoke, which the Painter doesn't know about.

2nd Reel starting shot & adding shots

Going through all my footage, I need to get a 2nd-reel starting shot of the Painter looking into an empty refrigerator, then requesting that People order food.  I'm doing this today, so I will have to clean the refrigerator out also.  (I don't think Leah has ever seen me clean the refrigerator …)

Another shot which I have is a gas-powered weeder destroying a large weed as it moves toward People.  I need a transition shot from that to an empty bed where the dog and squirrel jump up at the distant noise of the gas engine starting.  (This starts reel 4.)

I also have some shots somewhere in reels 4 & 5 of the house being undermined by a cave where the squirrel and the children are trapped; this is leading up to my “battle” reel 6, so I need to figure that out much better.  I did have the battle underground in the backyard, but having People undermine/erode the home is a better progression.  Reel 6 is primarily vfx, so I have been pushing that a bit as I master maya better.  I'm actually taking a great maya class right now; need to keep up on my classes while I'm working.

I've also added a bunch of math scenes where I explain my zero-based reference time in detail.  I need to get those scenes more fleshed out; not sure if they are in reel 2 or reel 3.

Warping time and editing

I'll upload the shot later today, but I was having problems with the FIRST cut in the film … I know … drives me crazy that the first cut in the film wasn't working right … I'm like … wow … this first cut of the film isn't working … it's going to take me forever to edit this film.

In almost all my shots I'm warping time in multiple directions. For example, in the first shot I think it's about 20 minutes of film compressed to about 15 seconds, composited with a static shot of the Painter's leg, along with all the vfx, which is realtime of me in the Xsens suit.  That's just for the first shot; the 2nd shot is more or less realtime of the Painter lying in bed (r3d) and Xsens realtime of me playing the squirrel with physics simulations (with unusual gravity settings).  Anyway, the cut between shots 1 and 2 just wasn't working for me.

So last night I was reading an old book called “On Film Editing”, which is primarily about editing on real film and doing real physical cutting.  Anyway, even though the book is ancient, since it talks about analog time quantized by frame, it completely explained my problem and how to fix it.  (I'll need to have the leg in shot 1 move a bit more, and in shot 2 I will shift time then compress it, so the leg movement overlaps between shots 1 and 2 by 5 frames.)
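The rough numbers behind that fix, assuming a 24 fps timeline (my estimates from the cut, not measured values):

```python
# Rough figures for the shot-1 retime and the 5-frame overlap fix.
fps = 24
shot1_src_s = 20 * 60                # ~20 minutes of source footage
shot1_dst_s = 15                     # compressed to ~15 seconds
speed = shot1_src_s / shot1_dst_s    # retime factor; also source frames
print(speed)                         # 80.0   consumed per output frame

# The edit fix: shift shot 2 so the leg action overlaps the cut.
overlap_frames = 5
print(round(overlap_frames / fps, 2))  # 0.21 seconds of overlapping action
```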

If I were editing in a “normal” editing package this would btw be hopeless, but since I'm editing in nuke studio, I can drop into nukex, which has complex time operations that I program via its node interface.

My Indie viewpoint on budgeting

I'm in the Indie segment where basically the money runs out and I have to be prepared to do anything to complete a job. So having floating licenses from the foundry and autodesk makes cents 😉 I btw think there are two totally different segments of the indie crowd, which creates a lot of confusion on the boards. One segment is where the director/instigator doesn't have any technical skills, scrapes up the money via scams/begging/pleads/bogus_requests/manipulation, and needs to rely on others to get things done (often by not paying them what I think is a fair wage).

The 2nd segment/cluster is where the director/instigator has some technical skills, spreads things out as far as the budget permits, but will do any task (for sure not perfectly) to complete the job. I think there are pluses and minuses to both segments, but the second segment is significantly more financially successful (i.e. Rodriguez, Cameron, Jackson). I also think RED is much more aligned with the 2nd segment than the 1st, which creates tension on this board.

As a member of this 2nd segment, I have about 6k of licenses for my 2016 year, which allows me to control my budget/funding rather than spend most of my time begging. At an emotional level, I really find the first segment's viewpoint incomprehensible.

Model’s introduction scene and central problem

In the scene where I'm introducing the Model, I had her sitting on the chair in front of the black painting.  I decided to put her in the painting behind a veil, and start off the scene by having the Painter look behind the veil.  Somehow I need to have the Painter comment to her that the child in front of the shed needs some food and water … so she gets it from the mountains behind the veil (I'm using that classic mountain in china btw).

Also, when she comes back from the mountains behind the veil and hands the food & water to the Painter, the food turns to dust (i.e. he can't provide for the children).  As the Painter now has to walk past the starving child without food, this gives a chance for the snake to come in.

When the Painter goes back into the house, after leaving the child alone at the shed door, the snake slithers out from the house and gives the child “food/death”.  When the People/Snake comes back into the house, the Painter then asks the People/Snake to order more food, thus inadvertently feeding death to the children. This now sets up the central problem of the Painter.

Merged two characters

I was having problems with two characters, “People” and the “Snake”: there were plot conflicts, and I had a couple of scenes in the first reel that were not making sense.  I'll shoot a new scene at the back door with People and the dog, which will pull everything together. Huge sigh of relief.  I'll also have to do some Xsens captures for this; thinking about it, since the dog and People are fully animated, all I will need to do is shoot some cold plates.  Actually, from the same angle I should have the Painter go through the door and put his shoes on, so I will also shoot footage for that.  I'll also do a laser scan of the area [remember to put the shades down for the laser scan so I don't get weird measurements].

Been going through the extended edition of the hobbit, learning a ton from it (especially how they dealt with plot problems and re-did things in the extended edition).

Anamorphic Flow with Maya

I prefer outputting my EXR files as anamorphic rectangles, and was having problems doing that.  Now I set in the camera shape:

  1. Camera aperture at 18.96, 15.8
  2. Lens Squeeze 2
  3. fit resolution gate: fill (hmm … seems “overscan” works better? … need to look this up)
  4. on the image plane, set the placement tab to “fit: to size” and then click on “film gate”
  5. click on the render tabs, go to the render settings, type in 2048 then 1716, and make sure device aspect ratio is 2.39 and pixel aspect ratio is 2

Hmmm … saving this as an asset … I think I have it worked out.  It's nice because now I'm basically doing 4k deep pixel renders at, let's see … around 5 seconds per frame.
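A quick sanity check on those camera numbers (plain arithmetic, nothing Maya-specific; the small mismatch between the two ratios is worth knowing about):

```python
# Sanity-check the anamorphic camera settings above with plain arithmetic.
ap_h, ap_v = 18.96, 15.8     # camera aperture
squeeze = 2                  # lens squeeze
res_w, res_h = 2048, 1716    # render resolution
pixel_aspect = 2

# Aspect implied by the film back after the 2x squeeze:
print(round(ap_h * squeeze / ap_v, 2))        # 2.4

# Device aspect implied by resolution x pixel aspect (slightly
# different from the aperture figure; a mismatch like this tends to
# show up as a sliver of overscan or cropping at the gate):
print(round(res_w * pixel_aspect / res_h, 2)) # 2.39
```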

Had some exr errors on the mac; no idea what went on, but I ran nukex on windows (actually rendered to the maya directory) on P031_C007 (painter downstairs, squirrel jumps onto People's head).

Got the people shot down

Though I have other finished composites, this was a tricky one to finish and perhaps the most important one.  It contains the protagonist and the antagonist, and had to be compelling.  There are 3 Xsens actors for the antagonist, and I did the protagonist with my normal roto painting technique.  Nice to have at least one shot “finished” all the way through compositing.

Starting to “get” editing

I definitely wasn't getting editing.  I think I was confusing editing with a bunch of stuff; it's especially easy to confuse things when you're doing all the processes yourself.  I started “getting” it when I split sound editing from sound mixing.  For sound editing I don't worry about the exact frequency layout; I'm mainly concerned with getting the correct section of sound composited in the timeline.  I think of sound editing much like I think of video compositing.  So, at least for my brain, editing is moving non-linear media into a linear space.  Mixing, on the other hand, is moving a linear space into a non-linear space; thus color grading, in how I think of it, is “mixing”.

I was messing with my monitors, going a bit crazy, till I realized I could get an awesome headset that would be great for video & sound editing/compositing.  (In my mind I kind of merge the words compositing and editing, and since I do sound with color there's really not that much of a difference.)  Anyway, without spending over $100k I just couldn't get my compositing sound side working right, and I found these awesome headphones that are almost as good as a $100k sound room (Audeze).

On the compositing side, I am doing “People”.  I had the shot where the camera was moving, but I can't do my “normal U/V” drawing while the camera is moving, and I don't have enough time to mess with moving the camera solution to maya.  Also, if the camera solution goes to maya, it restricts me from seriously messing with the composite.  Anyway, I'll need to do a cut in the People-falling-out-of-the-painting scene.  I actually have cameras A & B going for that exact shot, so I will just do a cut. Actually, doing the cut will be kind of nice since it shows more pathing to the kitchen.

Performance Capture Somersaults … feeling a bit over the top

Was doing some somersaults in the xsens performance capture suit; man, I have not done those in about 20 years, and I'm feeling a bit icky.  Doing performance capture is a huge health benefit for me, but something I need to work on much more.  Read a book on tibetan yoga; I probably need to re-read it a bit more and do some of those exercises before I do flips and such.

As a side thing, I'm normally doing performance capture around cameras and have to make sure I don't trip and kill myself.  I'm doing everything: the acting, the computer monitor trigger, camera setup, light setup, green screen background … a ton of junk, and I have to make sure I don't royally destroy myself.

Got my “brain” rendering going well …

Got the rendering going without using a uv base, just me rotoscoping; looks really nice.  For “People” falling/washing out of the painting, I need to do that with the Xsens suit though.  I'll do that tomorrow.

Did another class today on transform alignment; really helped me make it look pro.  I'm doing a big nukex run right now, and will kick off another run in 7 minutes … then go to sleep.

hmmm … hand painting UV normals for 3D rendering?

I was dreaming a bit, and was painting normals (U/V) onto a plane.  It would also make sense as a thematic element (i.e. since the “Painter” is a VR painter).   Who knows how my weird brain works, but if I could paint normals quickly it would drastically speed up my workflow (like a factor of 10).   Doing a shot today just as a test, I was having problems with facial tracking, but if I could paint the normals on top of the facial track it would make the shot fairly “artistic” yet real, and most importantly get things done.
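To make the idea concrete: a painted normal map is just RGB encoding a direction per pixel, and relighting it is a dot product against a light direction. A minimal numpy sketch of the idea only (not my actual nuke setup):

```python
import numpy as np

# Relight-from-painted-normals sketch: decode an RGB normal map from
# [0,1] to [-1,1] direction vectors, then Lambert-shade with one light.
def relight(normal_rgb, light_dir):
    n = normal_rgb * 2.0 - 1.0                     # [0,1] -> [-1,1]
    n = n / np.linalg.norm(n, axis=-1, keepdims=True)
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return np.clip(n @ l, 0.0, 1.0)                # Lambert term per pixel

# A tiny "painted" map: every normal pointing straight at camera (0,0,1).
flat = np.full((2, 2, 3), [0.5, 0.5, 1.0])
print(relight(flat, [0, 0, 1]))                    # every pixel fully lit
```

Painting the normals by hand just means authoring that RGB image with a brush instead of rendering it from geometry.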

For the shoot, I'm using a 35mm MP anamorphic in 6:5 crop with the RED Dragon; focus is on the top of the stairs, and I will walk down the stairs.  I'll do a quick camera move forward and backward to get a 3D solve in case I need it.  I'll also do a clean plate between the two primary focus points (the hands of the Painter at the top of the stairs and the hands of the “People” coming out of the painting).

Leah is watching a daytime soap thing while I'm trying to set up the shot; the hardest thing for me to do is block that out … I think I will try to find my noise-canceling earplugs.

Thinking about this a bit more, the Disney guys did this “normals painting” a bit on the water shots in Snow White; it probably took them a long time to do, though.


12:30 … the planar tracker is ok on things that aren't drastically moving and/or changing shape, but I tend to do long takes, and the plane starts falling apart fairly quickly.   I think me drawing the normals will be fine, but I can't draw the normals every frame; it would just go too slow.  I think I'll just have to put drawing normals in my “toolbox” and use it when the plane isn't changing drastically.  I might look at the nuke drawing package again, but that was a pain and needed a nice mesh.  Another thing is my mesh is always moving, so things with static meshes don't work too well.  I'll pound this out tomorrow.


Didn't give up. I had my alpha at like 8000, when it should have been 1 … it was causing crashes and messing up all sorts of things.  I am writing the comp out right now; looks good.

Classes I take …

I take 3 classes from FXPHD.COM a semester, and have been doing this for about 3 years.  I also take a bunch of ad hoc classes per month based on what I'm doing: around 4 classes on , 3 classes on , and 2 classes on .  For this facial tracking, in the last 2 days I have gone through 2 classes completely and am going through my third class right now.

I think taking these classes is worth it. I can make it through a class in about 3 hours, and those three hours could save me weeks of doing the wrong thing (i.e. using after effects for facial tracking) and speed me up months toward professional results (i.e. using nukex 9 for facial tracking and painting).  It's not obvious which package I should be using for various things: sometimes AE, sometimes Maya, sometimes Houdini, sometimes MotionBuilder, sometimes NUKE.

For pudls, unless there is a good reason, I try to do everything in either nuke or maya; there is a certain mastery you get by just completely knowing a package.  Also, I need a package that integrates well with other packages so I don't get “stuck”.  Getting stuck to me means the shot is 90% there, but getting that pro look in the last 10% becomes increasingly impossible.  Normally I waste 90% of the time before I realize that the last 10% is impossible, so that gets very frustrating.  It's just a good mental backup knowing I can complete the shot with the tools I have, and actually knowing how to complete the shot.  A lot of times I actually have no idea how I'm going to complete a shot and have to go into research mode … which can be fun, but when I'm trying to get something finished it gets really frustrating.

Rotopaint in nuke is probably becoming my most goto weapon for dealing with problems; it's just amazing what I can do with it.  I think getting to the point of mastering just a single node in Nuke, like rotopaint, has taken about a year.  You'd think it would be quicker, and it probably would be if I worked next to someone who knew what they were doing, but I go down a lot of false paths … plus it just takes time with a tool; you need a certain number of hours on a tool to gain mastery.  If I'm jumping between a ton of different tools, I don't gain that mastery, thus my recent focus on just a few tools.


6 pm … I can't believe I haven't been using the new nukex 9 planar tracker.  It is way improved over the old tracker, and will literally save me about 1,000 hours of work.  This is a good example of me needing to focus on a smaller set of tools, so I keep current with the improvements in workflow.  I'm also sure some major guys gave input on these changes, because the new planar tracker is amazing for A-level-quality feature film work.

Facial Tracking Workflow

All shots of faces in the film are fully modeled; I want the facial shots to be driven off of the actors' facial movement.  I also want this to be driven off of the RED R3D anamorphic files, so I will overlay the model-generated faces (which will be manipulated via facial tracking) right over the actual R3D footage.  I want this to look really high quality.

So the basic flow is:

  1. clean plate the r3d to remove face
  2. track the face on the original r3d in AE
  3. put in 3d model of face in maya, and import the mouth, eye, chin and cheek information
  4. manipulate the full model (xsens suit plus facial data)
  5. render full model in maya, output to nuke using normals
  6. render in nuke
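Step 6's overlay ultimately reduces to a premultiplied "over" of the rendered face onto the cleaned plate. A minimal numpy sketch of that operator (in practice this happens inside nuke's Merge node, not hand-rolled code):

```python
import numpy as np

# Premultiplied "over": out = fg + bg * (1 - fg_alpha).
# This is the compositing math behind dropping the rendered face
# back over the clean plate.
def over(fg_rgb, fg_a, bg_rgb):
    return fg_rgb + bg_rgb * (1.0 - fg_a[..., None])

bg = np.full((2, 2, 3), 0.2)                  # clean plate
fg = np.zeros((2, 2, 3)); fg[0, 0] = 0.8      # one rendered-face pixel
a = np.zeros((2, 2));     a[0, 0] = 1.0       # its alpha
out = over(fg, a, bg)
print(out[0, 0], out[1, 1])   # face pixel wins; elsewhere the plate shows
```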

Alternatively, I’m pondering a slightly different workflow

  1. track face in AE, map facial sections to target points in AE
  2. render the facial sections in AE as normal maps (only the facial sections)
  3. composite facial sections in nuke

I really like the AE render of the sections; it will allow me to do more “cheats” in nuke.  Not exactly as clean, but I can get a bit more artistic in nuke, which has worked out so far.  I'll be working on this for the next couple of days; I'll do some test footage tomorrow of me singing, to work with this.  I also have to make some decisions on how much dialog vs actual singing will be in this. In my brain it is leaning a bit more toward singing than I was expecting … it also has the “bad” guys singing … which I was totally not planning on.

If I do the singing part, it opens up that the “Model” is also singing.  I generally write for duets; it's impossible for me to sing the female part, so I need to ponder who would sing the female parts (and I'd probably need to shoot her head and get the tracking data while she sings).


Hmmm … the output of AE keyframes to nuke or maya is pure crap.  So the basic flow of AE to maya is a no-go … I hate doing things in some hacky way for a feature film (it could be doable for a short, but I'd go insane doing things 100 times for a film).  I did look at the second workflow alternative, but it seems a bit constrained.  My third approach, which I'm looking at now, is using the planar tracker in nuke 9, which has really been improved.  10 PM btw.

More Roto'ing & Compositing, pondering my foley

I was doing a lot more roto'ing and compositing today.  I think I may need to tweak some tensional elements (twist in two directions) on the Painter going down the stairs.

I have been planning out my exact foley approach this weekend, and I think I have a plan … it’s kind of tricky but I think it will work fine.  I was thinking of doing the foley in 5.1, but I think I’ll just record it in 5.1 and mix it in stereo.  Actually, what I’m going to do is a “base” recording in the exact room of the shot, with the painter being the primary foley target (who will walk in respect to the 5.1 mic setup).  The animals will be mic’ed in separate rooms and done with sound design, moved to mono stems … so there will be a mono foley track for each of the animals that goes to the dubstage.
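For the record-in-5.1, mix-in-stereo part, the fold-down I’d reach for is the common ITU-style one (-3 dB on center and surrounds, LFE dropped).  A minimal sketch, assuming channel order L, R, C, LFE, Ls, Rs:

```python
import numpy as np

# Sketch of a 5.1 -> stereo fold-down for the base foley recording, using
# the common ITU-style coefficients: center and surrounds at -3 dB, LFE
# dropped.  Assumed channel order: L, R, C, LFE, Ls, Rs.

def downmix_51_to_stereo(ch):
    """ch: array of shape (6, n_samples). Returns (2, n_samples) stereo."""
    g = 0.7071  # -3 dB
    L, R, C, _lfe, Ls, Rs = ch
    lo = L + g * C + g * Ls
    ro = R + g * C + g * Rs
    return np.stack([lo, ro])
```

Anything panned to center lands equally (at -3 dB) in both stereo channels, which is what I’d want for the painter walking relative to the mic setup.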

Performance Capture

I got a lot out of my performance capture training yesterday.  I’m starting to get it.

I need to do a bunch of performance capture for scene 2; I can’t re-use some things.  Here is the performance capture I need to do:

  • dog jumping from bed, then running down hall
  • squirrel running down hall, jumping to rail, then jumping on the people’s heads, then down the stairs
  • chicken flying down the stairs

I already did the shoot and have the performance capture of the painter somewhere (I need to get more organized on performance capture files).  I did the clean plate for the painter’s shot, and roto’d the painter in the shot already (really I don’t need the xsens file for the painter, I can do that as a reference from the roto).  Here’s the clean plate:



Hmmm … ate processed foods for two days when I went to LA, so it’s too “dangerous” to put the xsens suit on today.

Prepping the Rushes Today

I need to get things packaged up today, here’s what I need to do:

  • complete the rerun of scn1a (I had a batch conflict that was messing up color)
  • scn1b add the pad and the egg falling on it
  • scn1 get a nice prores 4k ana
  • scn1a do some kind of vr side shot
  • scn1a export as full 4:1 aspect ratio (prores)
  • record temp track for scn1, also mix it in protools
  • merge audio and prores in studio, output in 4k ana
  • do a quick write-up of the film
  • move rushes to beefy windows machine
  • move rushes to dropbox


Still messing around with the color space; sometimes doing things in cineon looks better, but it just gets confusing in my workflow as it’s hard to see mess-ups.  I’m just going to do the screen monitoring and prores in rec 709.  When final output happens “I may” put it to cineon for the “pro” color graders, unless I do it myself, where I’d probably be better off at rec 709.  So REC 709 is my standard now.
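Part of why cineon keeps confusing me is that it’s a log encoding, not a display curve.  As I understand it, the standard Kodak Cineon 10-bit curve (which is what nuke’s “Cineon” colorspace implements) puts reference white at code 685 and black at code 95, with 0.002 density per code value and a 0.6 gamma:

```python
# The Cineon 10-bit log curve as I understand it (and as nuke's "Cineon"
# colorspace defines it): 0.002 density per code value, 0.6 gamma,
# reference white at code 685, reference black at code 95.

def cineon_to_linear(cv):
    black = 10 ** ((95 - 685) * 0.002 / 0.6)
    lin = 10 ** ((cv - 685) * 0.002 / 0.6)
    # normalize so code 95 -> 0.0 and code 685 -> 1.0
    return (lin - black) / (1 - black)
```

Code 685 maps to linear 1.0 and code 95 to 0.0, with codes above 685 going over 1.0 (the film “super-white” headroom), which is exactly the thing that makes mess-ups hard to see on a rec 709 monitor.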

Instead of grading a section to create focus, I have this new way of stretching light into a space to create focus.  It’s working better than messing around with grading, and it fits how my brain thinks … and it is real quick for me.  I have to do it inside nuke though.

Just did a music recording, and also started a new song.  Still trying to figure out how to copy to a “CF” card and load that into my Mac Pro (which has protools).  I need to do more “dry runs” before I actually have to get something done.  I’ll try to take notes next time I do that and put them in my production notes (i.e. here).

arghh … 3:30 am … distribution formats drive me crazy.  Also still rendering a side wall.

Got a music track going.  It sounds good, but it’s obvious that I need a full film track to go with the music track (i.e. foley, ambient, vox).  Also I’m finding the spotting of the music is something I’m going to have to think about more.  I’m leaning toward a bit less music; I’ll have to practice some different cues for the footage.

What I’m doing today …

Hmm … what do I have to do today (it’s 9 am right now):

  • UPDATE EGG drop from the chicken (maya physics -> nukex -> prores 444)
  • PEOPLE CRAWLING out of painting (xsens performance capture -> mvn -> motion builder -> maya -> nukex)
  • VR DEMO (I’ll take the first scene shot 1, then use my current capture to render one side).  I’ll use the left side of that shot, which has the painter sitting up in bed (head shot) with the squirrel hopping around (the squirrel moves to the center on the next shot).  The right side would be the dog, but I don’t think I can get that together by the end of Monday.
  • ponder COLOR CORRECTION for my prores 444 rushes
  • Record TEMP TRACK for rush

[Jack & Laura on an iPad for the VR test; the master prime anamorphic flare was totally accidental 😉 ]



VR DEMO: Planned out my VR shot and made sure it made sense with the film.  I’m also adding in a re-entrant product placement for the VR shot (i.e. the VR that will be used by consumers will also be used in shots in the film).  Did a quick storyboard mentally to get this done.

UPDATE EGG: having a lot of problems with maya physics on the egg (2 hours wasted).  I’m just going to give up on maya physics and move to bullet physics.  I’ll do the animation of the egg in nuke with a hold on frame zero, then retime the bullet physics in NUKE when I want the egg to drop.  I’m going to stop using maya physics, it’s just too flaky.  Also there is a general problem with meshes, so I’ll create boxes where things collide (and deform the box a bit if I need a roll).  Arghhh … I have nice laser scanned meshes of the objects for collision, which I’ll now have to rebuild by hand as boxes.  Got it all working with bullet physics, rendering now and sending to nuke.  … Got the render done, it looks good in nuke; I’ll need to do the compositing, probably while I’m integrating the placement of the game inside the film.
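Since I’m holding on frame zero and retiming the drop anyway, it’s worth sanity-checking the timing the bullet sim should roughly match: plain free fall before the bounce.  The numbers here (a 1.2 m drop, 24 fps) are stand-ins, not measured from the set:

```python
# Quick sanity check for the egg drop timing: plain free fall, which the
# bullet sim should roughly match before any bounce.  Drop height and
# frame rate below are stand-in numbers, not measured from the set.

FPS = 24.0
G = 9.81  # m/s^2

def drop_height(frames):
    """Distance fallen after `frames` frames: h = 0.5*g*t^2."""
    t = frames / FPS
    return 0.5 * G * t * t

def frames_to_fall(height_m):
    """Invert h = 0.5*g*t^2 to get the fall duration in frames."""
    t = (2 * height_m / G) ** 0.5
    return t * FPS

# a 1.2 m drop takes about 0.49 s, i.e. roughly 12 frames at 24 fps
```

That gives me a quick check that the retimed sim isn’t reading floaty or too fast.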

VR DEMO: setting up the camera; Jack is asleep in the room, so I don’t want to wake him up.  I was able to get that shot, both for the film and for examples of the VR game.  Need to get the composite done for the film; I’ll probably do that late tonight.  Got the compositing done for a first cut, had to play around with the edit a bit.

PEOPLE CRAWLING: couldn’t get that done today.

COLOR CORRECTION: I slightly adjusted the “tint” of one cut, just so short term the edit is ok.

11:57 PM … time to start to relax.  Major concept improvement with the re-entrant VR in the plot.

1 AM … crap … didn’t do everything cineon on the writes … going through and making sure everything is cineon before it goes to grading.

Maya Physics

Worked on physics today and did more animation for the squirrel, which jumped onto the painter’s leg and dropped the egg.  I moved from bullet physics to maya physics, but for some reason the physics engine didn’t work out when I ran it.  I needed to save all the components as maya ascii via select, then re-import the necessary components, and it all worked.  Kind of frustrating, but I made it through.

Cycled the machines after windows couldn’t see the mac system.  Doing compositing now, dragging in openexr files into a nuke composite – actually looked good really quick.  Did some vertical scale squishing on some animated water and I think that helped a bit.  My roto skills are also getting much faster.

Tomorrow I need to do some xsens suit stuff.  I’m also doing training on xsens next week.

… around 11 pm now …

Just finished scene 1.  It’s taken forever, but I’m feeling good about it – finally the footage is good, the story is good, the vfx is good, the compositing is good.  I’ve probably reshot/re-vfx’d/re-composited/re-music’d scene one about 20 times.  It was starting to feel like it was not coming together, so it is a big relief to have it done.  I have the other scenes of the film shot but need to do a lot of vfx and compositing work.  I think all the performance capture animation is done through reel 1.  I need to start on the sound track for reel 1, so I get a full reel finished.

… LMAO … I forgot to composite the dog in scene 1 … well … I can do that on the side while working on scene 2.  Made a decision to cut 10 seconds from scene 1 and just introduce the dog in scene 3.

Clean Plating … a lot of hard work

I’m getting ok at clean plating.  Basically there are a lot of scenes where I (i.e. the painter) am in the scene but I need to remove my physical arms and put on the “water” arms.  In order to do this, I need to create a “fake” of what you’d see if you were looking through my skin (i.e. if my top half was invisible), then do the water distortions on top of that.  The first step (to make it look like there are pants walking with no feet or torso) is all clean plating.


I’ve really never done clean plating before; I’ve barely walked through it in some tutorials, and doing clean plating on walking people is a major step.  There is a lot of tedious work, and you have to really think through how you’re going to clean plate.  I think I’ve done ok so far, so I am fairly happy with the work; if anything I’m going way too slow and need to speed up my clean plating process.

Weirdly my animation is now going fairly quickly; it’s things like clean plating and rotoing that take my time, and there aren’t really any good shortcuts.

Another thing I’ve been just “dealing” with is managing the 10g network.  Every now and then it slows down, and then I have to debug what’s going on.  I really depend on the 10g network; moving 1 terabyte of data is no problem on a 10g network, but if it drops to 1g it’s like forever … so it is worth me debugging what’s going on.  Sometimes it’s a windows 7 vs windows 8 issue, other times it’s a windows vs mac issue.  Basically now if anything goes wrong I just reboot the computers affected.  Hacky, but it is the quickest way for me to fix things.
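The “like forever” part is easy to put numbers on: a terabyte at line rate (ignoring protocol overhead, so real SMB/AFP copies are slower still) is minutes at 10G and hours at 1G:

```python
# Why a fallback from 10G to 1G is "like forever": 1 TB at raw line rate,
# ignoring protocol overhead (real-world file copies are slower still).

def hours_to_copy(terabytes, gigabits_per_sec):
    bits = terabytes * 8e12          # 1 TB = 8e12 bits
    seconds = bits / (gigabits_per_sec * 1e9)
    return seconds / 3600.0

# 1 TB: about 0.22 h (~13 minutes) at 10G vs about 2.2 h at 1G
```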

Doing a lot of batch UV maps tonight of the painter’s feet; it takes a bit of playing till they “jiggle” around how I like them.  I think around 900 frames is just about an hour.  That’s about 4 seconds a frame, which also includes the water simulation, meshing the water, and creating the UV map per frame.  I think that’s a pretty good time; it’s running on 20 intel 3+ ghz cores with a nvidia k7000.

… ARGHH … finished the run … and the network slowed down again … not going to reboot and just going to relax for an hour while it copies the files.

… ARGHH #2 … it bombed out … I then recopied and it went fine at 10G.  One thing I’ve noticed is that if there is any lock or anything in a mac directory, the windows copy seems to go slower than crap.

Playing with time in a painting

In scene 4, the painter opens up a painting, and the Model in P033 shot 9 drops her veil to enter.  I’m playing with time a lot for these shots; I’m summing frames, then vibrating the painting in time as she enters it.  In shot 9 you can’t see the painter’s pants, just the sweeping hands … so it shouldn’t be too hard to match up.

I’m not sure exactly how the Model should enter the painting.  In scene 2 the people leave the painting, but that’s a lot easier since they stretch into normal space.  For scene 3 we are looking into an artificial space; I think I may need to bend the painting’s space into an artificial space, then wrap the external space into that warped space … hmmm.


Did scene 4 today, mixed reality … so far so good.

Trying to keep track of stuff.  As an example, the r3d file for walking out the door is PO32_C004, which is at 24p, and the file from the performance capture done at the same time is sept15_2015-001.mvn.  The cut on that scene is P032_C007, with the xsens capture being sept15_2015-002.mvn.  There are also 3d capture cloud runs; hopefully I don’t need to use them since I’m kind of crappy at camera solves.
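To stop this pairing from living only in my head, even a tiny manifest would do.  A sketch with just the two pairs above (the fps and note fields are placeholders for whatever else ends up mattering):

```python
import json

# Minimal take manifest so the r3d <-> mvn pairing stops living in my head.
# Only the two pairs from today's shoot; "fps" and "note" are placeholder
# fields, not a final schema.

manifest = {
    "PO32_C004": {"mvn": "sept15_2015-001.mvn", "fps": 24, "note": "walking out the door"},
    "P032_C007": {"mvn": "sept15_2015-002.mvn", "fps": 24, "note": "the cut on that scene"},
}

def mvn_for_take(take):
    """Look up the performance-capture file paired with an r3d take."""
    return manifest[take]["mvn"]

# dump it as json alongside the footage so it travels with the project
print(json.dumps(manifest, indent=2))
```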

I ended up using a 35mm anamorphic; it just seemed right.  Also filmed this at 6/5 2:1 camera ratio, so I don’t have any extra space on the side to recenter in case I mess up the framing of the shot.  So far it looks good moved into nuke studio.

I did shoot at 400 iso, but I find I like the shot at 200 iso.  I btw also really like to do everything in linear, even looking at the shots in linear.  Most people I talk to like to watch and grade in an exponential color space, which doesn’t really make sense to me – so I’ll continue to do everything in linear.  I might grade in rec2020, but I’m holding off on that.
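For reference, the “exponential” curve everyone else watches and grades through is the Rec. 709 OETF from the spec: a straight 4.5x slope below linear 0.018 and a 0.45 power curve above it.  It explains why mid-gray looks so different between my linear viewers and a 709 monitor:

```python
# The Rec. 709 OETF as defined in the spec: linear light below 0.018 gets
# a straight 4.5x slope; above that it's a 0.45 power curve.

def rec709_oetf(lin):
    if lin < 0.018:
        return 4.5 * lin
    return 1.099 * lin ** 0.45 - 0.099

# linear 0.18 (mid gray) encodes to roughly 0.41
```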


Setting up an outside shoot tomorrow

I was able to reframe the hall shot so I didn’t have to fix the alignment of the “painter” character’s xsens data to the r3d file.  I basically can’t waste time reshooting every time I make a little mess-up.  In the future I’ll always be wearing the xsens suit on things like that.

I’ll be doing an important shoot tomorrow; it’s when the painter walks by and ignores a malnourished child while going into the art shed.  I’ll definitely do it with the xsens suit on and make sure I get the angles I need to really hit home.  I think it will have to be done with two angles.  I think I’ll use a 50 mm anamorphic on both, no rack focus.  I’ll also make sure I do a 3d laser scan of where the child will be sitting.  I also “play” the child, so I will do the performance capture of the child sitting.

I btw love the DreamWorks modification to openexr 2.2; it reduces file size 5x.  Unfortunately nuke won’t support it for a while – I was thinking of recompiling it myself for nuke.

Starting human integration into the flow

Starting to integrate the human motion into the r3d sequences.  I’m a bit chicken doing this; it’s almost all my steps, software and hardware to pull this off.  It actually runs on 5 connected machines doing different things.  arghh … it also has to work, as it will be a major concept in the movie.  I do have some of the motion capture down, but the motion capture for the stair sequence needs to be matched up now, and I didn’t do the capture at the exact same time as the footage, so I’m resyncing in post … arghhh.  I should have just had the suit on, but it’s so complex doing a shoot and the performance capture at the same time.  I might reshoot it tomorrow with the suit on.

I’m thinking on the reshoot to just hold the camera constant on the human movement while the suit is reporting at 240 fps, then cut to the animated animals, then lock the shot again and walk down with the suit.  arghh … I did a bunch of the camera solve for this already, but I’ll have to redo that.  Also I think I need some more points for the camera solve, so I’ll probably put something on the door that is currently just pure white (and thus didn’t give a very good solve).  I might even lock the jib after the squirrel moves, then do a focus change.  hmm … so let’s see what I will have to do: 3 rack focus pulls while I’m acting, setting up a computer for the performance capture, moving the jib perfectly, and also operating the camera … no sweat.

Liquid Mesh Workflow in Maya

I didn’t write this down anywhere again, and I need to copy/paste these steps.  I btw do things at the exact size of my body so that the maya liquid is aligned to the physical size coming from the xsens suit.  So even though things like the squirrel are small, I do the water and meshing at my physical size.  Here are my steps:

  1. create a 3d fluid (3d container, base resolution 176, size 300,300,300; fluid density dissipation and diffusion at 5)
  2. create an emitter (select the thing, like the squirrel, then do a create-from-object emitter with the surface option; density measure on the emitter needs to be “replace”)  [ctrl select the fluid first]
  3. set more of the fluid attributes (fluid shape -> autoresize)
  4. select the fluid, then go to modify > convert > fluid to polygons
  5. set the render range not so huge, then just do a render (remove the skeleton from the display)
  6. IF it looks a bit “slicey”, then on the dynamic simulation go to the navier-stokes solver, all grids, with 3 sub steps and quality at 50; also make sure “emit in sub steps” is checked
  7. play with attributes till it looks right

hmmm … rerunning a batch for the squirrel … it will be running all night (900 frames at about 10 seconds per frame for the full water mesh simulation and render … which I don’t think is that bad … but I need to make sure I don’t rerun the simulation a ton).