A & B camera setup for Building the Greenhouse

There’s a ton of vfx for building the greenhouse, but it’s a mix of vfx and real shots.

When viewed from the virtual world (i.e. by the girl), the construction is seen through a spherical puddle, so the “B” camera covers the VR shooting. That rig is a Movi M15 with a 5.8mm lens and a RED Weapon Magnum at 6K.

The perspective from the “real” world is shot through an Arri Master Prime Anamorphic 35mm lens (at a 3.0 aspect ratio, but I crop that to 2.4).


So I’m prepping the rigs right now for Sunday and Monday’s shoot; it will be a ton of work since it also has a bunch of “weed” clearing shots.

Getting the Dialog pacing down

Still working on ADR.  I also find that sometimes, after I see the dialog “hooking” together, I change the dialog and redo the ADR.  It is all sounding fairly nice; getting the dialog pacing right is now the trick.  I also moved to an M149 in hypercardioid at 96kHz for ADR, which cuts well with a CMIT 5u.  Right now I’m doing all the voices with simulation software, so that makes it much easier to align the pacing.

Lens mm compromises for VR work

I like using 21mm for VR work, the composites are nice and clean … but it’s just too many shots to get full coverage (a minimum of 18 shots for what I’m doing).

The 6-8mm range is convenient, but the image looks like crap and the VR solves are trash.

I btw did a test using the 35mm master prime anamorphic; it was way too much of a beast and needed tons of thin slices (looks nice though).

So I think I’m going to do the rest of the film’s VR at 15mm when I’m doing a 360 shot.  It’s a nice compromise and the widest I can go while still staying more or less rectilinear (at least I’m not getting a spherical image like I do with a 6mm lens).
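Just to make the tradeoff concrete, here’s a rough back-of-the-envelope sketch (Python) of how field of view and shots-per-ring scale with focal length. The sensor size and stitch overlap are assumptions for illustration, and a full sphere needs several rings plus zenith/nadir, so the counts won’t line up exactly with my 18-shot number; the scaling is the point.

```python
import math

# Assumed sensor size (mm) for the 6K frame; illustrative, not my exact crop.
SENSOR_W, SENSOR_H = 30.7, 15.8

def fov_deg(size_mm, focal_mm):
    """Rectilinear field of view for one sensor dimension."""
    return math.degrees(2 * math.atan(size_mm / (2.0 * focal_mm)))

def shots_per_ring(focal_mm, overlap=0.3):
    """Shots needed for one 360-degree ring, leaving `overlap` for stitching."""
    usable = fov_deg(SENSOR_W, focal_mm) * (1.0 - overlap)
    return math.ceil(360.0 / usable)

for f in (6, 15, 21):
    print(f"{f}mm: hfov {fov_deg(SENSOR_W, f):.0f} deg, "
          f"vfov {fov_deg(SENSOR_H, f):.0f} deg, ring of {shots_per_ring(f)} shots")
```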

Editorial to Sound Workflow

Editorial is in Premiere Pro CC while sound is in Pro Tools HD 12.4.  I’m just going to lay out the spec for the interface between the two:

AAF format from editorial to sound

Leader in front of 1:00:00:00 is 15 seconds, with a single lead-out beep/frame at the end of the reel

240 frames of audio handles pre/post for each cut (i.e. there are way too many audio takes, so editorial is just giving the 10 seconds before and after the cut)

Surround sound & Sound Design is done outside editorial, so the “sound” starts in protools for FX

Temp tracks only on MX from editorial to sound (so music goes from Pro Tools to Premiere only)

Dialog DX in Pro Tools is mono, with a DX track per actor (painter, snake/people, woman, boy, girl)

Dialog DX/FX: this is for the snake/squirrel/chicken … there’s a type of dialog for them, so those are a bit tricky.  DX/FX start in Pro Tools and go to editorial as DX.  These DX in Premiere are sent back to Pro Tools for final DX sound design.  Basically the dailies need the DX/FX for the animals or the dailies don’t make sense.

RX stem – a 4.0 stem with L/R/LR/RR at even levels (needs to be mixed to 5.1); this is the surround sound room tone stem (for VR this is a separate stem, for the film final it is embedded in the 5.1 DX)

Stems back from Pro Tools – RX, DX, MX, FX all come back from Pro Tools as 5.1 stems to Premiere.  Any video edits that change these 5.1 stems need to be communicated back to sound with exact frame changes and a separate 5.1 video & audio ProRes file that includes the mismatch (nice to have a marker on where the problems are).
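Since this spec gets referenced a lot, here’s a minimal sketch of it as a machine-readable table (Python). The field names are just my own labels mirroring the points above, not anything Premiere or Pro Tools actually reads.

```python
# Editorial <-> sound turnover spec, mirroring the notes above.
# Field names are illustrative only; nothing here is a Premiere/Pro Tools format.
TURNOVER_SPEC = {
    "interchange_format": "AAF",
    "leader_seconds_before_hour": 15,            # leader in front of 1:00:00:00
    "tail_beep": "single beep/frame at end of reel",
    "audio_handles_frames": 240,                 # ~10 seconds pre/post each cut at 24fps
    "fx_and_sound_design_origin": "protools",
    "temp_tracks_from_editorial": ["MX"],        # music otherwise goes Pro Tools -> Premiere
    "dx_tracks": {
        "channel_format": "mono",
        "per_actor": ["painter", "snake/people", "woman", "boy", "girl"],
    },
    "dx_fx_animals": {
        "characters": ["snake", "squirrel", "chicken"],
        "origin": "protools",
        "roundtrip": "to editorial as DX, back to protools for final design",
    },
    "rx_stem": {
        "layout": "4.0 (L/R/LR/RR, even)",
        "mix_target": "5.1",
        "vr": "separate stem",
        "film_final": "embedded in 5.1 DX",
    },
    "stems_back_from_protools": ["RX", "DX", "MX", "FX"],  # all as 5.1 to Premiere
}
```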

VR Shooting

I just finished a transcode of a VR scene that took 24 hours; I really need to figure out how to make more of the VR workflow run in parallel.  It’s been raining real nice outside, and I have also been using the new plasma lights, so I can basically shoot 24×7 now.  Here’s a shot of me doing my VR scenes this week (this shot is where the little girl merges into one of the water drops to enter the “real” world, from inside the painting).
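On the parallel question, the likely direction is just splitting each transcode into frame-range chunks and running them across cores. A minimal sketch, where transcode_range() is a hypothetical stand-in for whatever tool actually does one chunk:

```python
from concurrent.futures import ProcessPoolExecutor

def transcode_range(first, last):
    """Placeholder for whatever actually transcodes frames [first, last]
    (e.g. one command-line render/transcode invocation per chunk)."""
    print(f"transcoding frames {first}-{last}")

def parallel_transcode(first, last, chunk=200, workers=8):
    """Split a long transcode into frame-range chunks and run them across processes."""
    ranges = [(f, min(f + chunk - 1, last)) for f in range(first, last + 1, chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(transcode_range, a, b) for a, b in ranges]
        for fut in futures:
            fut.result()  # surface any per-chunk failure

if __name__ == "__main__":
    parallel_transcode(1, 2400)
```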

On the sound design, I really need to get the chicken and the squirrel conversations done … it just “looks”/“feels” weird in the dailies when only the painter is talking and you don’t hear the chicken and squirrel as part of the conversation.  I also do the sound design in a VR/augmented reality sound system that is connected via Pro Tools.


This is then worked in with today’s foley session (the squirrel is the sponge, the chicken is the green thing next to the cup):

[Screenshot: foley session]

Finally this shot that I am in the process of doing today will be mixed into the fray, to bring the whole thing together.

[Screenshot: shot in progress]



Reshot & Finnished a Problematic Scene

There’s this critical scene, where the painter leaves the house for the first time: there are two “door” transitions, the “boy” is also introduced, and he is sitting behind some weeds against a second door.  There is even a reflection sequence that has to get merged with the ray tracing.  Anyway, I originally messed with this scene a lot, and got back some comments from reel reviewers – basically this scene stood out as being lower quality and disconnected.

So I reshot the scene on Sunday and just finished compositing it today.  It’s only about 10 seconds long, but it requires just about all my skills … I think I’m finally feeling confident in my compositing, especially on very difficult scenes.  I also didn’t need to redo the ray rendering, so I feel good about mapping 3d composites now (and this render sits between a background and weeds that are moving in the wind).

Dialog is going fairly quickly, yet …

Well, organizing VR dialog is kind of a pain, especially when I’m trying to match performance capture sequences coming back from VFX that are hopelessly unaligned to the reference camera (I do have a scratch track on that camera).  I’m going to match things up in a somewhat hacky way for the first reel.  I need a better method for organizing performance capture dialog for the 2nd reel.
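For the hacky matching, one option is to slide the returned sequence’s audio against the reference camera’s scratch track and take the best correlation. A rough sketch with numpy/scipy, assuming both tracks are already loaded as mono float arrays at the same sample rate:

```python
import numpy as np
from scipy.signal import correlate

def align_offset(scratch, returned, sr=48000, fps=24.0):
    """Find where the returned sequence's audio best lines up against the
    reference camera's scratch track.  Returns (lag_samples, lag_frames);
    a positive lag means the returned audio starts later in the scratch track."""
    corr = correlate(scratch, returned, mode="full", method="fft")
    lag = int(np.argmax(corr)) - (len(returned) - 1)
    return lag, lag * fps / sr

# e.g. lag_samples, lag_frames = align_offset(scratch_wav, perf_capture_wav)
```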

I was planning on doing some more ADR today, but there are all sorts of impact wrenches and guns going off in the neighborhood … would be so nice to have a sound stage.  I’ll have to do the ADR tonight.

First ADR Session Completed

I had to redo my ADR session from yesterday, but was able to shoot and do the ADR to completion today (all the way from capturing ADR through compositing the moving mouth into the puddles).  The composite is completing right now; I need to do the eyeglasses, so I was writing to ProRes … I think I might restart the job going to EXR DWAB since adding the glasses will be easier from EXR and I don’t like multiple read/writes to ProRes.  I’ll do the kitchen scene tomorrow as well as mix in the actual audio.  So that will be my first time having full dialog in the reel; I hope to have that done tomorrow.

I btw liked the main actor, the painter, to have a bit more “abstract” face.  I was doing the face so it matched things down to the hairs and it just didn’t feel right (probably that “uncanny valley” people talk about).

Performance Capture ADR

I have it working now, generating full geometries and facial tracking without using Maya rigs, just reflecting and transforming spaces to create the geometries.  It is super CPU intensive; I hope people enjoy that the ADR is done by directly manipulating the pixels from the ADR session.  I really like the concept of blending the sound with the spatial tension coming from the camera.  I btw tried to use a shotgun mic for ADR, but it just didn’t feel right as I was transforming the space, so I am using a 4-mic array for ADR.  I need to figure out how to move the 5.1 sound around so it doesn’t lose sync with the puddles, which are now moving.
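For the moving-puddle problem, one way to keep things locked is to drive the pan directly from the puddle’s position each frame: compute an azimuth relative to the camera and do a constant-power pan between the two nearest speakers. A minimal sketch of that idea (the azimuth source and the exact speaker angles here are assumptions):

```python
import math

# Nominal 5.1 speaker azimuths in degrees (0 = straight ahead, positive = right).
SPEAKERS = {"L": -30.0, "C": 0.0, "R": 30.0, "LR": -110.0, "RR": 110.0}

def pan_51(azimuth_deg):
    """Constant-power pan of a mono source across the two nearest 5.1 speakers.
    azimuth_deg would come from the puddle's position relative to the camera."""
    names = sorted(SPEAKERS, key=lambda n: SPEAKERS[n])
    angles = [SPEAKERS[n] for n in names]
    gains = {n: 0.0 for n in names}
    # Find the speaker pair bracketing the source (wrapping around the back).
    for i in range(len(names)):
        a0, a1 = angles[i], angles[(i + 1) % len(names)]
        span = (a1 - a0) % 360.0
        rel = (azimuth_deg - a0) % 360.0
        if rel <= span:
            t = rel / span if span else 0.0
            gains[names[i]] = math.cos(t * math.pi / 2)
            gains[names[(i + 1) % len(names)]] = math.sin(t * math.pi / 2)
            break
    return gains

# e.g. a puddle drifting left to right across the frame:
# for az in (-25, -10, 0, 15, 40): print(az, pan_51(az))
```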

I should probably film myself doing an ADR session one of these days; it looks fairly insane.  I btw added glasses to the painter, it will make sense later on.

First performance capture ADR sessions this week

Doing the first ADR sessions this week.  I will also be doing performance capture on the face during ADR, then mapping that back to the film.  I haven’t done performance capture ADR before, so I’m working through a bunch of options.  I was thinking of tracking facial features then mapping that to a Maya model, but I think I’d really prefer a tensional map of the actual performance that directly manipulates the pixels of the film.  I think this method would capture the nuances of the performance much more than tracking a few features that rig a face in some shallow way.  I’ll be testing this out tomorrow 😉

Mixing 5.1 Audio in a VR Room

Since I record production in 5.1 (normally 6 mics: L, C, R, LR, RR – an M149 for the lows, CCM41s for L/R/LR/RR, and the center is a KU4/OP6 because I’m old school), I really like to hear the dailies and reels in 5.1 even when I’m not at a nice location.  In Pro Tools, using a mix of Spanner and the Waves DX, I can at least validate that the 5.0 production audio acquisition is solid.  I’m also getting fairly decent with Pro Tools 12.4 doing 5.1 mixing; for some reason I find 5.1 mixing a lot easier to do than stereo in a simulated 5.1 room.  Of course this will all go to a dub stage and most of my temp mixes will just be used as a reference.

I btw really like how the DX simulates a “real” 5.1 space by using the camera to modify the sound coming out of the headphones.  The little realtime camera feed of a box around my head as I’m doing my mixing in Pro Tools seems appropriate, since I’m editing performance capture and matching that to surround sound foley.

Traditional to VR transitions

I just finished the cut from traditional (i.e. 4K DCP) to VR.  It required a fairly tricky mix of Nuke nodes, but overall the node count was pretty small.  Real glad I have that out of the way.  I need to do the exit from VR back to a rectangular frame (360 to 4K DCP) today.

A super important shot I need to get started on has the child (of light, who is inside the VR world) moving around puddles of light in the real world.  That is also the shot with the dried-weed to green-weeds transition.

VR Cinema Edit/Cuts & Camera movement

I think it’s real important in VR Cinema to minimize cuts and have smooth yet interesting camera moves.  I did have a bunch of camera cuts in some heavy VR scenes, and didn’t like them.  I just finished redoing the cuts using some real serious 3d processing, kind of a hassle but I think it is worth it.

The reason I think it’s important to minimize cuts and jerky camera moves is that in a normal experience the screen is only a small part of what your eye sees, let’s say 20%.  But in VR the screen can cover around 80% of your vision, so cuts can be much more jarring, up to the point of creating motion sickness.

Anyway, my approach is to shoot for VR; then if someone wants a bunch of jump cuts and/or jerky camera stuff, they can add that in post.  I try to have as clean a camera move as I can while also giving depth to the experience (i.e. doing camera moves that allow depth cues even if the person is not seeing it in 3D).

VR 360 camera rig

Here’s a shot of my 360 video camera rig.  I do a lot of complex stuff in the sphere, so this doesn’t exactly get 360, but I’m able to get a consistent spherical image from it, then I center it on what I’m really interested in (and the rest of the sphere is actually doing loops).

It’s kind of nice since everything is run off of the battery (it’s below and to the right of the screen).  So I can quickly move the rig around.  Also it’s on a ball joint, so I can quickly put it on a large tripod or even a big jib.


I’ve also been working a bit on the 2nd reel plot twist.  It has worked out real nice; I have this quantum approach to plot development … where I go along with a wave of different story lines, then with an observation it creates a story state (i.e. a pivot point in the story plot).

It was btw raining when I was doing my VR shot; the super nice thing about this rig approach is I could move my umbrella around so I got a “360” degree experience in the rain … without any rain drops hitting the lens – now that’s a trick.

hmmmm …. VR cinematography

hmmm … I’m attempting to speed up my live shooting to VR workflow.

One thing I wasn’t expecting is how far out in front of the camera the actual no-parallax point (the center of rotation around which the camera can turn without parallax) sits.  For the master prime anamorphic it’s a huge 171mm; for a CP2 21mm I measured it this morning at about 107mm.  These are real icky numbers for my Kessler motion control, since the camera weight would be hugely off axis (not even sure I could do a full rotation given the cradle size).

I do like my Nikon 24mm lens a lot; I think I’m going to try and see how that will work today, and the no-parallax distance there should be only around 24mm (plus that lens has basically no weight, which is good for my Kessler system).  If I do that, it will mean matching my master prime single angle (possibly cylinder mapped) to the Nikon spherical map.  Also the nodal points of the two rigs would have to be close – the master prime rig on 19mm rails and the Nikon lens inside the Kessler motion control system – and I’m not sure how best to get exact camera nodal point placement between two completely different camera rigs.

I could probably be off by a few centimeters between the Kessler rig and the 19mm rig for the lens nodal point; that would be workable and I could roto out any edge parallax issues.  That’s probably what I’ll do today, get this all figured out …. hmmm …. will tax my Nuke expertise once again I guess. hmmm …
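To sanity-check whether a few centimeters of nodal error really is roto-able, the rough geometry looks like this (Python). The sensor width, resolution, and subject distances are assumptions, just to get an order of magnitude:

```python
def parallax_px(offset_m, near_m, far_m, focal_mm=21.0,
                sensor_w_mm=24.9, image_w_px=4096):
    """Approximate relative image shift (pixels) between a near and far object
    when the entrance pupil is translated sideways by offset_m during a pan.
    Sensor width and resolution are assumed values, not my exact rig."""
    f_px = focal_mm / sensor_w_mm * image_w_px          # focal length in pixels
    return f_px * offset_m * (1.0 / near_m - 1.0 / far_m)

# e.g. a 3 cm nodal error, near object at 2 m, background at 20 m:
# parallax_px(0.03, 2.0, 20.0) -> roughly 47 px of relative drift to roto out
```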

My Nikon AF-S f/1.4 24mm is also a fairly large lens compared to the CP2 21mm, and the Nikon has almost the same pupil point as the CP2 21.  I much prefer the CP2 21 for matching to the master prime; I will next attempt to remove the XL module to see if I can get this rig to fit on the Kessler system.

I just tried an 8mm Lensbaby; with that I wouldn’t mess with a motion control rig, and it covers a 180-degree area.  On the Lensbaby I’d do 4 quick rotational shots, then just have the Lensbaby fixed on the general area the master prime is shooting.  The 8mm sphere would then be defocused based on the close focus distance of the master prime.

As I look at all this I probably need to do a mix of all these techniques and just keep my metadata extremely well documented on the shoot.  Also scene based lidar with exact camera placements would be mandatory.


btw the leaf spherical render just got approved as production, so that is nice. LOL … so I can get back to what I was supposed to do this week, which is finish the trailer.


I truly hate the feel of stitched 8mm fisheye, so that will be a NO (it is OK for reference though).  …. it’s 1 am … doing a 9:1 aspect ratio with full sides is pretty easy with the anamorphic master primes.  As I transition the immersion I’ll move from 2.4 to 8:1 on the real footage for the 3rd reel story turn, while staying 360 on the paintings.  Around the 5th reel there will start being a merge, which creates a mixed 8:1 & 360, with the final in a complex 360 format (which I’ll do with the anamorphic; a pain in the butt, but it will be worth it for those ending scenes).

4k vs 1080p

I did the leaf run at 1080p, and man, it just looks like crap.  I’m rerunning it now at 4K.  I also noticed a small math problem where the leaves were hit by the wind too much before they hit the water, and the water flow wasn’t hitting the leaves enough after they did hit the water.  Only 4 hours, so it isn’t a horrible render.  I am mainly working on cinematography stuff today too.  Also I have to figure out Facebook integration today.
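Back to the leaf math problem: the fix is conceptually just gating the two forces by whether a leaf is above or below the water surface, with a small blend so there is no pop. A plain-Python sketch of the idea (standing in for the expressions inside the particle setup, with made-up force values):

```python
def leaf_force(pos_y, wind, current, water_y=0.0, blend=0.1):
    """Blend wind vs. water-flow forces based on the leaf's height above the water.
    Above the surface the wind dominates; once the leaf is in the water the
    current takes over.  `blend` softens the transition so there is no pop."""
    # t is 0.0 well above the water, 1.0 once fully submerged
    t = min(max((water_y - pos_y) / blend + 0.5, 0.0), 1.0)
    return tuple((1.0 - t) * w + t * c for w, c in zip(wind, current))

# e.g. leaf_force(0.2, wind=(1.0, 0.0, 0.0), current=(0.1, 0.0, 0.4)) -> mostly wind
```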

With principal photography it is hard stitching when there are things in the background that are moving.  My main problem with movement is wind in the leaves & bushes.  At the same time I don’t like stitching stills with motion.  Also the feel and blur of the master prime anamorphic lens is great; having that match the blurring of a still from something like a CP2 21mm is a major pain.  I think if the entire shot has no anamorphic blur, that’s easy, or if the shot has an extremely shallow depth of field on the anamorphic and everything else is blown out … that’s doable.  Rotating the anamorphic lens through 360 is kind of a hassle due to weight; the CP2 is a much lighter lens for quick rotational mapping.

I need to get my transitions worked out between spherical shots and flat shots; I just might do a jump cut.  At least in the start of the film do jump cuts between immersive and non-immersive filming, then around the midpoint of the film start merging the flat and spherical cuts.  This also makes sense from a story viewpoint, so that’s what I’ll go with.

Doing the Spherical Frame render of the leaves

Doing the adaptive spherical frame renders of the leaves today; I also have the leaves moving through the water.  The Nuke particle system is pretty powerful; it took me 3 days to learn to do the more complicated stuff (i.e. I needed to use expressions with the more complicated nodes).  The way you can bounce particles with projections is very powerful.

My leaf render run is 100 seconds of spherical frames against complex 3d models; it is taking about 24 hours on two 10-core Intel CPUs & an M6000 (and the CPUs are at 100% most of the time).  We will see how it’s doing tomorrow; I might send it to the render farm tomorrow morning, but it’s a bit too late tonight for me to package the run up for the farm (I’m doing an interactive render on it now; I probably shouldn’t do such a massive render interactively, but I had to make some changes around 7 pm and am a bit burned out tonight).  I’m btw on Nuke 9.0v8, with no memory leaks on this job at least 😉

I also need to do a cloud run; I’ll get that going tomorrow.  I really want Film Trailer 1 out Friday (at least viewable by the exec team).

hmmm … 9:50 pm … the render was going too slow, so I stopped the run and played with the aliasing and normal options.  I think it was the aliasing option on the render, so I moved it to no aliasing with no normal output – the run length changed from 24 hours to 2 hours.  Done by 7 am; I did notice a 20 gig memory leak, but nothing horrible, and it wouldn’t be a problem if it was running as a command line batch.  Doing a conversion to a “.mov” file so I can validate the render was good and move on today.

MOZVR and a-frame

I really like the HTML markup for VR called A-Frame by the Mozilla group.  It is really clean and flexible; I can even do point-based sound using the markup tags (which I think is pretty amazing).  Lots of exciting things are happening in this area; anyone doing something right now is dealing with alpha and beta code, so it’s not for the faint of heart.

I also like the Nuendo/Wwise integration that works with Lumberyard – way more powerful, but a major step up in complexity compared to just using tags.  I think I will be supporting both, along with Google’s YouTube 360 thing (the YouTube approach is an extremely easy conform from my Adaptive Spherical Frames but isn’t as flexible as the other approaches).  I’ll try and complete the YouTube one first this week, then do the MOZVR/A-Frame … then just start learning the Lumberyard/Wwise/Nuendo approach.

It’s 1 am; I just finished my first “final” run of the leaves on the trees (did this for compositing for 1080p 360 video; will need a separate run for feature distribution).

This weeks to do list …

  1. add leaves to the ASF (currently the tree in the spherical frame doesn’t have leaves)
  2. map out my social network for the film, update pudls.com
  3. Do “Trailer 1” for pudls (video)
  4. Do “Trailer 1” sound
  5. Do “trailer 1” lettering (the “pudls.com” starts in front then moves to the back of the 360)
  6. Do youtube upload
  7. Do Aframe upload
  8. Integrate with facebook
  9. Noticed that the Model’s feet aren’t sinking into the water, need to do a depth merge on the Model and the water.

Live Action AFS filming

Though the primarily vfx-generated Adaptive Spherical Frames are working well with my VR cinema system, for live action it hasn’t been so smooth.  I now have two Weapon cameras with motion capture that are building up my adaptive spherical frames for live action sequences.  So far it’s been a bit easier than I expected; the first major live action shot with an AFS frame is when, in the film, the Model hands a cup of water to the Painter and the painter walks outside.  I will actually have to reshoot that sequence; I will do that this week with the two RED Weapon cameras.  Technically I should be genlocking and timecode-syncing everything for that shot, but next week I will align the times by hand.  Long term I think this AFS for live action will actually speed up shooting rather than slow it down (it currently is reducing the number of shots I can do per day).

Just got my first 360 test file loaded for Network Distribution

I just got my first 360 video test file loaded into YouTube.  It basically represents a culmination of my testing with my adaptive frames for the film.  Each frame of the film is an adaptive frame which can then be distributed as VR (fully depth mapped), normal 360 (pannable in 360 on a flat screen), quad HD normal, and/or anything else that makes sense.  The YouTube test was my first “distribution” test, just to make sure it all works.  After I talk it around, I’ll start working on 360 trailers for the film.

Hmmm … it’s working in Chrome … it worked once on my Android phone, but then didn’t run the 2nd time.  I don’t think this is my issue … probably just a lot of stuff on the network side that isn’t fully stable for VR video.  Though long term I think VR over the network is the way to go, short term it’s probably best to do things with downloads.  Of course this isn’t working with the Apple browser at all; must be some kind of angle Apple is pushing so things are not compatible?

Finishing last VFX sequence of reel one

I can’t wait till reel 1 is done.  Doing renders now on the last of the major vfx sequences needed for reel 1.  This will be the first “completed” section of the film, around 12 minutes … it’s really the first time someone can watch a section of the film and “get it”.  Previous to this, my mix of live action, performance capture, and complex story … was so far out there it was hard for people to really understand what I was doing.  With the last major vfx sequences in reel 1 running, it’s also an emotional confirmation to myself that I can do an “A” level live action film … up to this point it was 90% bravado.

It’s going to take 12 hours to render this sequence.  I’m unfortunately under NDA on the most critical piece of software I’m running to do this, so I can’t do a screen dump.

Spherical Bessel Function Re-mapping

Walking from the coffee shop, and after doing some super complex math this morning, it “hit” me how to re-map spherical Bessel functions as a fundamental theme for my 4th film.  “Pudls” is a four-film sequence; these notes are primarily about the first film, which the script denotes as “Puddles of Light”.

Sometimes I think about and connect with a new concept with a general “positive” feeling.  Other times I know something is correct, but it’s more of a tumbling-backwards feeling.  The spherical Bessel function remapping for quantum gravity is more of a punched-in-the-stomach than falling-backwards-into-an-abyss feeling.  It will take me 4 years to get it done.

Transforming spherical spaces

I can take a shot from one camera, then transform that 3d space to a perspective shot from another camera’s location.  I’m doing this in frames, and having a bit of a problem when a spatial object wraps around the frame, hmmm … I’ll probably cheat right now (I’m sure there is some expression I could write, but I need to ponder that out more).  Here’s my test of an AF camera 1 to AF camera 2 node:

[Screenshot: AF camera 1 to AF camera 2 node test]
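Conceptually the transform lifts each pixel of the AF frame into world space using its depth, then projects it back through the second camera. A numpy sketch of that core math (plain pinhole cameras, and no handling of the wrap-around case, which is exactly the part I still have to cheat on):

```python
import numpy as np

def reproject(depth, K1, cam1_to_world, world_to_cam2, K2):
    """Re-project camera 1's pixels (with per-pixel depth) into camera 2's image.
    K1/K2 are 3x3 intrinsics, the extrinsics are 4x4 matrices.
    Returns the (u, v) coordinates in camera 2 for every pixel of camera 1."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K1) @ pix                      # back-project to camera-1 rays
    pts_c1 = rays * depth.reshape(1, -1)                # scale by depth -> camera 1 space
    pts_c1h = np.vstack([pts_c1, np.ones((1, pts_c1.shape[1]))])
    pts_c2 = world_to_cam2 @ cam1_to_world @ pts_c1h    # move into camera 2 space
    proj = K2 @ pts_c2[:3]                              # project with camera 2 intrinsics
    uv = proj[:2] / proj[2]                             # perspective divide
    return uv.reshape(2, h, w)
```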

Deep Lighting From the Audience’s Perspective

For my vfx I don’t use “normal” lights; volumetric lighting drives all my caustics.  In my lighting setup what I do is shoot light rays down into a virtual audience that has fog around them (and who sit within a panorama); the light reflected from the fog and audience then drives my vfx and scenes.  The light rays projected down into the audience are normally constructed from real footage (from R3D files), so the color space is consistent between primarily vfx shots and primarily “real” shots – giving a consistent color space in mixed reality.

This is a fairly CPU-expensive way of doing things, but it is plot/theme driven, so it’s something I just need to do.

Trying to decide on the little girl “look”

I know, I know, I should have decided on the little girl “look” for rendering a long time ago.  But I have learned a lot from rendering the woman, and I want the little girl to have a completely different look than the woman.  Also there is some “plot” stuff that has come up where doing some complex meshing based on particles could work, I think.  I of course do the performance capture for the little girl myself; lmao, the animation feels a bit more like a monkey than a little girl … but what do you expect.  Here’s a shot from Maya of binding the new girl mesh to the performance capture skeleton (this is her going down the stairs):


I have also improved my VR cinema jobs by a factor of about 10, so that frees me up to do some things that I wouldn’t have tried.  I still need to finish the little girl scene this week.

AF frames and CPU


Well, my Adaptive frames can do caustics as well as reflections.  But the caustics do take a ton of CPU, and often can’t be parallelized.  I had some bubbles of paint that were not moving enough in a shot, so I had to redo them … it was about 24 hours on 20 cores for 1000 frames.

Woman in Water: Generated a normal pass from an AF frame – light ray merging

Did some routines that are able to add normals to AF frames that were missing them.  It wasn’t horrible code; doing 1000 frames is taking about 8 hours along with a bunch of other processing (about 10 grades, a few distorts, and a couple of blurs).  Doing all this at 8K frame sizes, so pretty big.
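Conceptually the routine is simple: with a world-position pass per pixel, a normal is just the normalized cross product of the horizontal and vertical position gradients. A numpy sketch of just that math (not the actual in-comp setup):

```python
import numpy as np

def normals_from_position(P):
    """Build a per-pixel normal pass from a world-position pass P (H x W x 3).
    The normal is the normalized cross product of the screen-space gradients."""
    dPdx = np.gradient(P, axis=1)   # change in world position across columns
    dPdy = np.gradient(P, axis=0)   # change in world position across rows
    n = np.cross(dPdx, dPdy)
    length = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(length, 1e-8)
```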


This should look really nice with 3d glasses on, but I don’t have any set up today.  And here’s a pre-shot of the woman in water (she has a black veil covering her in the movie):


Huge batches …

Doing a lot of huge batches in Nuke.  I really like how the water and paint are flowing; it has taken me a while to get it where I want it.  Here are today’s batch runs:


The batches just finished; it’s 10:16 pm.  I really need to get back to post work so I can kick off some batches so they are ready in the morning.  I also need to do my tax stuff for tomorrow.  I pretty much work back-to-back 12-14 hour days. …. ok … got some of the taxes done and started the job, and it’s only 11:13.


Rolled over my toe with my chair … ouch …

Zooming along this morning, switching between screens … I’m barefoot and kind of push myself on my chair between different workstations.  During the push-off I rolled over my left big toe, and it’s like killing me … I can put weight on it so I don’t think I broke a bone, but I could have bruised a bone and/or something.  I think rushing around is good, and stubbing a toe during a rush is just something I have to live with.

Also, I just worked out my Adaptive Frame to 3D VR transforms; it looks pretty clean, but there will be some code for each platform to do it quickly.

Doing finance stuff all day …

Needed to do finance stuff all day today.  I probably need to work on finance stuff tomorrow for a half day, like getting a list of all things purchased in 2015 for the film for taxes.

I also need to complete an entire multi-shot sequence using the adaptive frames.  I need to get that organized, since it involved a bunch of experimentation; I think I have 8 Maya scene files (programs) for that and at least 3 different Nuke programs – so I need to get that written up as it relates to the AF pre-composite files.

Taking advice from industry veterans

I don’t like the word “professional”; it’s just too vague.  But to me the word veteran means someone that has gone through the whole process more than once.  So I talk with a lot of industry veterans, and they all have battle scars.  I think it’s real important to listen to these veterans; the trick is how to apply their experiences to what I am doing – which is often very different from the experiences they have had.

I had this experience recently with my cloth animation: part of what they said concerning how I did some things was very negative (i.e. crazy talk), and on those things they were totally wrong.  On other things, which were barely mentioned, they were totally on the money – and I had to redo some things and get way more fanatical about my process.

So my experience is that industry veterans sometimes express themselves out of negative experiences they have had on productions that have nothing to do with mine.  They normally don’t mention those productions exactly – and they are extremely vocal on these points, which, if I listened, would totally mess up my film.

On the other hand, I try to find as many of the standard practices they use as I can, which often they do not mention except in passing without any details – I find this information golden.

Modeling the Model

I have a bunch of steps for doing the scenes with the model.  Here’s the current list of stuff that all needs to be “imported” from /vr/maya/scenes/womem_build_cloth.00xx.mb:

  • Load the model from /vr/maya/data/women.0001.fbx (this is a tweaked model from Modo, with a lot of geometry removed)
  • Load the hat mesh from ./vr/maya/data/woman_hat.obj (this is a modeled mesh created from cloth, then select-exported)
  • Load the dress and veil, which are fully nCloth modeled

From here, load the animated skeleton file (i.e. model_at_base_of_stairs-03.mvn.fbx); going from the T-pose to the first “real” pose is requiring me to put in about 48 frames – so the start time is minus 48 frames.

Bind the woman’s skeleton to the “body”.

Mess with the constraints by running the animation (this is tough, it just takes time).  I need to do this with a cache; it just goes way too slow without using caches.  Also, since the fabric is moving, I can’t do this at some low resolution.

Output the entire mesh sequence as an Alembic file.

Load the Alembic file into Nuke.  Scale/rotate the file to fit the scene.  Sometimes animate things when the scene and model don’t line up.

Run the scene through the Adaptive Frame from the camera positions (i.e. go through all the camera positions, creating adaptive frames for each “camera shot”).  Name the AF sequences by camera number/shot ID.


Some steps in this script are erroring out.  I think I’m going to run it in the ranges I had in the other post.  Also I’m finding it’s better to just drape it at the location where it’s needed.  For the girl I’m going to use a different approach, but for the model/woman I want this nice drapery.
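For reference, here is roughly what those import/bind/export steps look like scripted with maya.cmds. This is a sketch only: it assumes the FBX and Alembic plugins are loaded, the skeleton root, mesh name, and end frame are placeholders, and the nCloth/constraint pass is still done by hand.

```python
import maya.cmds as cmds

# Import the tweaked Modo body, the cloth-derived hat, and the animated skeleton.
cmds.file("/vr/maya/data/women.0001.fbx", i=True, type="FBX")
cmds.file("/vr/maya/data/woman_hat.obj", i=True, type="OBJ")
cmds.file("/vr/maya/data/model_at_base_of_stairs-03.mvn.fbx", i=True, type="FBX")

# T-pose to first real pose needs ~48 frames of lead-in, so start at -48.
start, end = -48, 1000          # end frame is a placeholder
cmds.playbackOptions(minTime=start, maxTime=end)

# Bind the body mesh to the imported skeleton (node names here are placeholders).
cmds.skinCluster("Hips", "women_body_mesh", toSelectedBones=True)

# After the nCloth/constraint pass (done interactively), export the whole
# animated mesh sequence as Alembic for Nuke.
cmds.AbcExport(j="-frameRange {} {} -root |women_body_mesh -uvWrite "
               "-file /vr/maya/cache/women_cloth.abc".format(start, end))
```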

Had to learn nCloth in Maya

During my scene reviews, the naked woman and young girl, even though they were made out of water, were just too … out there.  I have been learning nCloth yesterday and today; I need to clothe those two characters – it will make the film better using nCloth though.  It is a bit frustrating going over a scene, then realizing I need to learn a bunch of new stuff.  I do have a bunch of online classes on nCloth, and there are just so many things that are not intuitive about putting “cloth” on characters – that getting lessons from industry professionals is priceless and mandatory.

Performance Capture Synchronization

Well this is how I am doing the performance capture breakdown.

OK … let’s try to lay out the syncs per shot.

Shot 1 – painter looking at the model, with a camera move through the pool

Performance capture 3 (my Maya lingo is Namespace 3, or NS3), frames 1132-1890

The camera pan is around frame 1316 from NS3.

From NS2 (the girl) it starts at NS2 frame 200, with the camera following around frame 1962.

Shot 2 – girl fills up the cup

Really a continuation shot from over the shoulder of the model to over the shoulder of the girl (1962), which is frame 780 from NS2 [just do a straight cut, let’s not get too fancy] because of the skip/stretch needed on NS2 due to the 2X size difference of the model.

Shot 3 – hand over cup from girl to model, then model to painter

NS3 2264 to 2686 (hand over)

Shot 4 – back to real space
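One way to keep this breakdown usable is as a small lookup table so offsets can be computed rather than eyeballed each time. A sketch using the numbers above (shot 4 left empty since it is not broken down yet):

```python
# Performance-capture sync breakdown per shot; namespaces are the Maya namespaces
# (NS2 = the girl, NS3 = the painter/model session).  Frame numbers are from the notes above.
SYNC = {
    "shot1": {
        "NS3_range": (1132, 1890),
        "NS3_camera_pan": 1316,
        "NS2_start": 200,
        "camera_follow": 1962,
    },
    "shot2": {
        "NS2_start": 780,   # over-the-shoulder continuation (the 1962 point above)
        "note": "straight cut; skip/stretch NS2 for the 2X size difference",
    },
    "shot3": {"NS3_range": (2264, 2686), "note": "cup handover"},
    "shot4": {"note": "back to real space"},
}

def shot_relative(shot, ns_key, frame):
    """Convert an absolute namespace frame to a frame relative to that shot's start."""
    start = SYNC[shot][ns_key][0]
    return frame - start

# e.g. shot_relative("shot1", "NS3_range", 1316) -> 184 (the camera pan, relative to the cut)
```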