Doing finance stuff all day …

Needed to do finance stuff all day today. I'll probably need to work on finance stuff for another half day tomorrow, like getting a list of everything purchased in 2015 for the film for taxes.

I also need to complete an entire multi-shot sequence using the adaptive frames.  I also need to get that organized: since it involved a bunch of experimentation, I think I have 8 Maya scene files and at least 3 different Nuke scripts for it – so I need to get that written up as it relates to the AF pre-composite files.

Taking advice from industry veterans

I don’t like the word “professional”, it’s just too vague.  But to me the word “veteran” means someone who has gone through the whole process more than once.  So I talk with a lot of industry veterans, and they all have battle scars.  I think it’s really important to listen to these veterans; the trick is how to apply their experiences to what I am doing – which is often very different from the experiences they have had.

I had this experience recently with my cloth animation. Part of what they said concerning how I did some things was very negative (i.e. crazy talk), and on those things they were totally wrong.  On other things, which were barely mentioned, they were totally on the money – and I had to redo some things and get way more fanatical about my process.

So my experience is that industry veterans often express themselves out of negative experiences they have had on productions that have nothing to do with mine.  They normally don’t name those productions exactly – and they are extremely vocal on these points, which, if I listened, would totally mess up my film.

On the other hand, I try to find as many of the standard practices they use as I can, which they often mention only in passing without any details – I find this information golden.

Modeling the Model

I have a bunch of steps for doing the scenes with the model.  Here’s the current list of stuff that all needs to be “imported” into /vr/maya/scenes/womem_build_cloth.00xx.mb:

  • Load the model from /vr/maya/data/women.0001.fbx (this is a tweaked model from Modo, with a lot of geometry removed)
  • Load the hat mesh from ./vr/maya/data/woman_hat.obj (this is a modeled mesh created from cloth, then exported with Export Selected)
  • Load the dress and veil, which are fully nCloth modeled

From here, load the animated skeleton file (i.e. model_at_base_of_stairs-03.mvn.fbx). Going from the T-pose to the first “real” pose is requiring me to put in about 48 frames – so the start time is minus 48 frames.

Bind the woman's skeleton to the “body”.

Mess with the constraints by running the animation (this is tough, it just takes time).  I need to do this with a cache; it's just going way too slow without caches.  Also, since the fabric is moving, I can’t do this at some low resolution.

Output the entire mesh sequence as an Alembic file.
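
Below is a minimal sketch of that Alembic export step in Maya Python, assuming the AbcExport plugin; the frame range, DAG paths and output path are placeholders for whatever the scene actually uses.

    import maya.cmds as cmds

    cmds.loadPlugin('AbcExport', quiet=True)   # Alembic exporter plugin

    start, end = -48, 1890                     # includes the 48-frame T-pose ramp-in
    roots = ['|woman_body', '|woman_dress', '|woman_veil', '|woman_hat']   # hypothetical DAG paths

    job = '-frameRange {0} {1} -uvWrite -worldSpace -dataFormat ogawa {2} -file {3}'.format(
        start, end,
        ' '.join('-root ' + r for r in roots),
        '/vr/maya/cache/woman_scene.abc')      # placeholder output path
    cmds.AbcExport(j=job)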

Load the Alembic file into Nuke.  Scale/rotate it to fit the scene.  Sometimes animate things when the scene and model don’t line up.

Run the scene through the Adaptive Frame step from the camera positions (i.e. go through all the camera positions, creating adaptive frames for each “camera shot”).  Name the AF sequences by camera number/shot ID.

————

Some steps in this script are erroring out.  I think I’m going to run it in the ranges I had in the other post.  Also, I’m finding it’s better to just drape it at the location where it’s needed.  For the girl I’m going to use a different approach, but for the model/woman I want this nice drapery.

Had to learn nCloth in Maya

During my scene reviews, the naked woman and young girl, even though they were made out of water, were just too … out there.  I have been learning nCloth yesterday and today; I need to clothe those two characters – using nCloth will make the film better though.  It is a bit frustrating going over a scene, then realizing I need to learn a bunch of new stuff.  I do have a bunch of online classes on nCloth, and there are just so many things that are not intuitive about putting “cloth” on characters that getting lessons from industry professionals is priceless and mandatory.
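
For reference, the core setup from the lessons boils down to a couple of commands; this is only a sketch with placeholder object names – the real work is all in tuning the nucleus, constraint and collision attributes.

    import maya.cmds as cmds
    import maya.mel as mel

    cmds.select('dress_mesh')            # hypothetical dress geometry
    mel.eval('createNCloth 0')           # turn the selection into an nCloth object

    cmds.select('body_mesh')             # hypothetical character body
    mel.eval('makeCollideNCloth')        # make the body a passive collider on the same nucleus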

Performance Capture Synchronization

Well this is how I am doing the performance capture breakdown.

OK … let's try to lay out the syncs per shot.

Shot 1 – painter looking at model with camera move through pool

Performance capture 3 (my Maya lingo is Name Space 3, or NS3), frames 1132-1890.

The camera pan is around frame 1316 in NS3.

From NS2 (the girl), it starts at NS2 frame 200, with the camera following around 1962.

Shot 2 – girl fills up cup

Really a continuation shot from over the shoulder of the model to over the shoulder of the girl (1962), which is 780 in NS2 [just do a straight cut, let's not get too fancy], because of the skip/stretch needed on NS2 due to the 2X size difference of the model.

Shot 3 – hand-over of the cup from girl to model, then model to painter

NS3 frames 2264 to 2686 (hand-over)

Shot 4 – back to real space

Better than I expected … Adaptive Frame looks great

The water pass I ran overnight in Maya is looking good. It was output as an Alembic cache file of the water going over a section of the wall.  I just loaded the water pass into Nuke, then minutes ago did a scanline render sequence into an Adaptive Frame.  The frame looks great, better than I expected.  That means I’m good to go with doing the performance capture.
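
Here's roughly what that render step looks like as Nuke Python, assuming the 3D scene and the ScanlineRender node are already wired up in the script; the format name, node name, output path and frame range below are placeholders.

    import nuke

    nuke.addFormat('8192 4096 1.0 af_8k')        # 8K by 4K Adaptive Frame resolution
    nuke.root()['format'].setValue('af_8k')

    render = nuke.toNode('ScanlineRender1')      # existing render node (placeholder name)

    write = nuke.nodes.Write(inputs=[render],
                             file='/vr/nuke/renders/water_af.%04d.exr',
                             file_type='exr',
                             channels='all')     # keep the color, alpha, Z and normals layers

    nuke.execute(write, 1, 1000)                 # the 1000-frame water pass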

I’ve memorized the scene in my head from looking at it a billion times in Maya, so at least the actor/scene problem will be well defined for that scene.  I’m also going to memorize the script; I think it will help me on timing for the performance capture.  I probably should record the sound in 3D while I’m doing the performance.  I’ll have to redo the ADR for it, but it could help me on sequencing; maybe just one too many things to think about though.

… Nuke crashed on the render; it's 1000 frames. I turned down how many cores run at the same time, with a 50 GB RAM limit for that job.  It made it past the frame that crashed, so everything is good (Adaptive Frames are 8K by 4K EXR files with 4 major layers: color, alpha, Z, normals).  About 6 hours to do 1000 frames, so I’ll have to think things through a bit more when I do the AF composites (compositing multiple streams of AF frames into one stream of AF).

… crap … went to take a morning nap. As I was dreaming I was thinking about some of the hard edges I was seeing on some of the renders, then how a 180° shutter sample (probably 50 samples) would take care of it, then had a wake-up moment when I realized I hadn’t turned the GPU motion blur off on the render – which I’m sure is what created the crash. Yep, just checked, I have the distance vector set accidentally.  I’ll restart this batch after I make my “dream” changes. [This run will take about 7 hours but should be a lot cleaner.]

… hmmm, the 8K by 4K frames are taking a long time; I’m thinking of moving to 8K by 2K frames.  Also, I just thought up a few tricks that would make an 8K by 2K (or even a 4K by 2K AF animated background) just as good as an 8K by 4K: moving the camera spherically while a shallow depth-of-field lens changes its focus through the shot (I can do depth blur and spherical camera moves on the Adaptive Frames, so a background AF could probably be 4K by 2K and not be noticeable if I used these two tricks).  I’ll need to use tricks like this or the scene will never finish. The final AF composite will still be 8K by 4K, but during the final AF composite down to a normal 2K frame, the pre-comp AF files don’t all need to be as high res if I use these tricks.

I btw turned off all the heat and have a fan blowing into the computer room – it's 55°F outside, so with the fan blowing in it should keep the room at around 70 – ick … just checked, it’s 73 degrees in the room with 86 degrees F coming out of the back of the racks. I’m moving around half a petabyte of disk; the backups should be finished soon … so I will shut some stuff off.

—-

Hmm, the hard disk that has the virtual memory for the operating system cache crashed.  I need to convert the Alembic files to FBX; .abc files are just too big for jumping around (it's one multi-gig file for an entire scene shoot).  I’m going to kick off that job, then get back to the AF pre-comps. … Running this … it's going real fast … hmmm, it ran fast until it crashed. Hmmm … I see the job climbing over 175 GB of RAM, it's for sure bound to crash.  I don’t think the way Nuke does sampling will work; they must be RAM-caching the samples and then adding them together rather than just merging the samples on each iteration.  I’ll have to do it how my dream told me to, duh … why should I have even tried anything else. OK … I have the render time down to about 35 minutes; I’m not going to mess with that anymore today and am going to focus on my performance capture system.  The workflow is still crashing about every 80 frames; at least it doesn’t crash on the same frame, so I just do restarts during the day.

—-

Taking a walk, I remembered Deke from the Nuke users group told me to use anisotropic instead of cubic on the ScanlineRender node. I just went back and checked and I had it on cubic; I’m switching it to anisotropic, which is how I did it in my high-performance testing – hopefully that fixes the crashes (I kind of shot myself in the foot on that one).  I have no idea what anisotropic means, btw. [That didn’t help with the crashes, btw.]

—-

I needed to fully open the windows in the winter; the backplane of the grid was getting to 90 degrees.

Software Versions and Service Packs

Going crazy this morning, I think my Maya service packs were out of sync.  When this happens it’s not like things “stop”, but you get all sorts of little problems.  Like in Maya, my Bifrost mesh sometimes wasn’t working because Mental Ray (which I think is from a different company than Autodesk) wasn’t on exactly the same service pack.

I have a ton of software spread primarily across 5 systems. In a “real” shop I’m sure they would know the exact version and service pack for each piece of software on each machine.  I’ve only recently started putting all my software images on a shared network drive, then installing from that.

I've got a headache; it’s the small stuff that kills me.  Here’s the critical list & flow of software I use in my workflow, with version numbers: Red Weapon firmware ?v?, XSENS MVN and software (I’m not even sure of the name, they have a weird naming convention for their software), Nuke 9.0.8? (testing 10), Maya 2016 SP5, Mental Ray SP5, PlantFactory v?, DaVinci Resolve 12.4.

Ick, I don’t even know half of the version numbers on my primary machines, and I doubt the versions are the same across all 5 machines … my headache is getting worse.  LMAO … I even forgot about Pro Tools, which is 12.4 (I think I confused the Pro Tools version with the DaVinci version); Pro Tools apps and plugins also have a trillion versions.
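
A tiny sketch of the kind of version log I should probably be keeping on the shared drive; the path is a placeholder and the version strings are typed in by hand per machine (nothing here queries the installers).

    import csv
    import socket
    from datetime import date

    installed = {                      # filled in manually on each machine
        'Nuke': '9.0.8',
        'Maya': '2016 SP5',
        'DaVinci Resolve': '12.4',
    }

    with open('/mnt/shared/software_versions.csv', 'a', newline='') as f:
        w = csv.writer(f)
        for app, version in installed.items():
            w.writerow([date.today().isoformat(), socket.gethostname(), app, version])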

—-

Cool, that update on the service packs got it.  I btw had to clean up some of my network disk, which broke for a bit … everything is good right now … I think I need to go get a latte.

—–

9 PM … Bifrost is still exploding/not working/breaking … all at different spots.  I’m going to have to move the entire simulation to Windows … I’ll do that tomorrow.  The tutorials on it are just a bit bogus; they don’t take into consideration how flaky it is.  All this flaky vfx software drives me insane; I should have just coded everything myself.  If I don’t get this done by tomorrow afternoon I’ll have to learn Houdini water, which is a major learning effort – not what I wanted to do tomorrow.

I’m going to go insane; the Houdini license server is a total pain in the axx. Trying to install the recent version and the license server is messed up.

11:20 pm, I have a solution.  I’ll do small patches of the Bifrost simulation, then composite them in Nuke.  Nuke btw is the greatest piece of software I have; it's simply amazing, and mastering it has been key to getting my film done.

Working on Bifrost and doing practical textures

I need to get the water flow simulations going for the scenes.  I changed my water flow simulator and cut over to mostly doing things in Maya – so using a mix of nParticles and Bifrost.  I really need to get a critical scene out today.  Also I need to do some performance capture in the Xsens suit today.

Kind of messing around, but I just love doing the practicals for the vfx; did a bunch of textures for the water scenes.  I btw really like using the Red Prime Pro 50mm PL lens for practicals.  It’s a fairly cheap spherical PL lens – gosh, it cost I think about 20x less than my Arri Master Prime Anamorphic – but for practicals it does a great job.  Also, from the textures it creates I can build 12K by 12K mosaics which I can smoothly scroll and morph onto 8K plates.  Here’s a link to the R3D for those that like to play around with things: R3D link of a practical texture shot (used for vfx)

8K adaptive frames working finally

My adaptive frames (AF) are now 8K; it was kind of tough getting it all working and getting my 8K AF compositing and texturing going.  I like the texture of the house (which is projected), but I’m working on the look of the tree.  Also I need to do the performance of the woman and little girl; I’ll ponder doing that tomorrow.

I btw really like the Red Weapon; it’s a lot easier to use than the Epic Dragon.  I don’t like ProRes out of it, just not enough bits – so I will be sticking with 6K R3D files.  For the 8K adaptive frame textures I’m finding I need max resolution, so I might also go to a 3/2 anamorphic aspect ratio with 2:1 REDCODE (i.e. film at a 3:1 ratio rather than 2.4).  I have no idea when the 8K Red Weapon will arrive; it can’t be soon enough.

Adding a Deep Compositing workflow to the AMPV workflows

For adaptive multi point of view (AMPV) compositing I really need to use deep pixels.  This is in OpenEXR 2.0, but I haven't had much experience in deep compositing.  I’m doing a few tests this morning on how to composite AMPV files using deep nodes.  It really just takes one extra node to turn my standard AMPV image into a deep image, then the compositing works fine in deep.  I need to finish this step to determine how to do fog, then I have enough to start the actual compositing for the vr scenes – getting out of test/trial mode.
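
The test in Nuke looks roughly like this (node names are placeholders): promote the flat AMPV render to deep, merge it with an element that's already deep, and flatten at the end.

    import nuke

    ampv  = nuke.toNode('Read_AMPV1')         # standard AMPV EXR carrying a depth channel
    other = nuke.toNode('DeepRead1')          # an element that is already deep

    to_deep = nuke.nodes.DeepFromImage(inputs=[ampv])    # the "one extra node"
    merged  = nuke.nodes.DeepMerge(inputs=[other, to_deep])
    flat    = nuke.nodes.DeepToImage(inputs=[merged])    # back to a flat image for the rest of the comp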

Well, I needed to do depth of field before I did the fog.  So I just finished the depth of field (took an hour).  Hmmm … there’s a bunch to this deep compositing.  AMPV mixed with deep compositing workflows is something I’ll be messing with for the next few years, so I need to learn enough to get my scenes done yet not get buried in it.  I’m ending up doing a lot of this with Nuke expressions (I just do the math myself per channel; it's easier than fighting with things and much faster).

Projection Mapping the Growth

The projection mapping, even on the 144-megabyte models of growth, is pretty quick.  I’m running it in about 30 GB of memory and it’s taking about 1/4 of a second, so you can move the cameras around in near real time and see the result.  Some of this might go a bit underground for a while, so I'm not sure how many shots I’ll show.  It's seriously impressive.

Without shading, I’m getting full 30 to 60 fps updates, so fully real-time, which allows quick creative decisions.  So the “final” render should take about 1 second per frame.  That, I think, will be one 4K file and three 1080p files.  Not sure how to do color grading on the 1080p files (they will need to go through a separate grading pipeline).

I btw got my Adaptive Multi Point of View file format specified.  It's real nice: it has a full 360° view and depth (so it can be re-created for 3D).  It does take about 1 minute per frame, so it isn’t cheap to do (I might need to get another computer just to render the AMPV files – only about half the film is in AMPV format though, which btw is in an OpenEXR container).  I just looked at the performance curve … all 20 cores go to 100% for about half of the job per frame, and that is with an M6000.  I think it will only need about 64 GB of RAM. Hmmm … I need to ponder which machine this will run on (it’s too big of a job for my older Windows machines; it could go onto my “Maya”/VUE machine).  I’ll probably just kick these runs off at night.  Let's see: 1 minute per frame is about 2 seconds of a shot per hour, which would be about 16 seconds per night.  Hmmm … that’s way too slow.  I’m going to need to reduce this time by a factor of 4 (i.e. I think I could live with 1 minute of film finished per night).
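
A quick sanity check on that overnight throughput math (assuming 24 fps playback; the ~16 seconds figure works out to roughly a 6–7 hour render window per night):

    def film_seconds_per_night(render_sec_per_frame, render_hours, fps=24):
        return render_hours * 3600.0 / render_sec_per_frame / fps

    print(film_seconds_per_night(60, 6.5))   # ~16 s of finished footage per night at 1 min/frame
    print(film_seconds_per_night(15, 6.5))   # ~65 s per night if the per-frame time drops by 4x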

This is what the end result looks like during my AMPV render testing inside of NukeX.  So by playing around a bit and doing some speed tests on the AMPV format, I got it to 20 seconds per frame – that’s on the reasonable-speed side.  That’s about 100 seconds of a shot per night, which really is the max on how much I can finish per day.  So I will stick with that (using a weird 4K anamorphic-ish format; it's kind of like an 8K anamorphic, which btw I should have a camera for later this year).  As a side note, I need to run it with a notch filter on one of the projection steps.

Growing Home

OK … I’ll give a bit of a blow-by-blow of growing the home.  Here’s my first generation of the home with vines.  (Btw, I like how the mid vine grew, so I changed where the opening to the painter's shed was; it’s closer to the “clock” – which I will now have to lidar today and grow some vines around.)

To the side is my 7th version of the vines. Versions 1-6 I attempted yesterday and they were not working, because I had set up my scale completely wrong – so my sleep learning approach worked (I only did one cycle of sleep learning).


OK … after a bunch of processing, this is what I have.  It will be good enough for the first shot since there’s a ton of fog and mostly close-ups:

vr_17


Hmmm … looking at this a bit … too boring.  Trying to do something a bit more interesting but getting a lot of crashes. OK, got something cool … trying to do the export in FBX … crossing fingers …
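
The FBX export attempt is basically this (a sketch only; the group name and path are placeholders, and the fbxmaya plugin has to be loaded first):

    import maya.cmds as cmds

    cmds.loadPlugin('fbxmaya', quiet=True)
    cmds.select('house_and_vines_grp')                 # hypothetical group with the grown vines
    cmds.file('/vr/maya/data/house_and_vines.fbx',
              force=True, options='v=0;', type='FBX export',
              preserveReferences=True, exportSelected=True)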


Well, the master Maya file for this scene is around 150 megabytes, which I think is a bit too big for Nuke to deal with.  I’ll try that tomorrow, but here is the master Maya file that is now ready (vr_house_and_tree_v0032.mb):

vr_house_and_tree_v0032

Inside the PlantFactory

Growing the plants inside the built landscapes.  I was going to procedurally create all the plants, but I just decided to do a bunch of them manually (I just don’t like the automatic movement of the vines).  It will take tomorrow to do that.  Here’s a starting screenshot inside PlantFactory, where I’ll be adding the plants.
ScreenShotplantfactoryHouse
I’ll also be putting the lynda.com “Up and Running with PlantFactory” tutorial up on the wall, and I’ll be watching it over and over as I sleep.  It’s kind of my sleep training (after the training loops like 4-7 times it just gets stuck in your head).  It’s a somewhat inefficient way of learning, since it’s a bit irritating hearing it while you sleep, but it’s just something I do.

Creating the world for about half of the film

I’ve created the world for about half of the film: the world from the model and little girl's viewpoint.  It took me awhile; the entire “theory” is so obvious now that I've finished it, but it was a struggle getting there.  Basically the girl's world is an exact physical reflection of the painter's world, but rotated 90 degrees spatially from the doorstep.  While the painter's world is closed off by the house, the girl's world has most walls removed, with waterfalls and rivers all around.  The primary interface is the painter's current painting (in the shed), which is a small reflecting pool/puddle at the base of the girl's world (which is next to a river).

Here’s a Maya render inside the real-life blocking for the Model and the Girl (i.e. it’s aligned to the real house).  I will load in the VUE world tomorrow, which will be mapped over the performance capture blocking (blocking, in film talk, is where the actors need to move).  Then when I do the performance capture, the performance capture will align with the “VUE” world (which is really cool).  When the VUE world is then loaded into NukeX (along with particle meshes of water flows), NukeX does projection mapping from R3D files onto the VUE world objects.  Finally the performance capture is UV-placed over the projection mappings, to create the liquid forms.

Mathematically automated about 50% of my rotos

I was spending a huge amount of my time rotoing shots.  I think this is fairly common in vfx.  So for every 8 hours of rotoing, I now run a job overnight that takes about 10 hours (about 1 hour per every 10 seconds of a shot).  It looks a lot nicer, plus it will speed up the production of my entire movie by about 30%.  It's doing some serious GPU CUDA math with overlaid motion vectors and time differencing.
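
The real job is CUDA, but the general idea is simple enough to sketch on the CPU: combine a temporal difference with the motion-vector magnitude and threshold both, which gives a rough matte a roto pass can start from (the thresholds and inputs here are placeholders).

    import numpy as np

    def rough_matte(prev_frame, curr_frame, flow_x, flow_y,
                    diff_thresh=0.05, motion_thresh=0.5):
        # temporal difference: pixels that changed since the last frame
        diff = np.abs(curr_frame - prev_frame).mean(axis=-1)
        # motion magnitude: pixels that are actually moving
        motion = np.sqrt(flow_x ** 2 + flow_y ** 2)
        return ((diff > diff_thresh) & (motion > motion_thresh)).astype(np.float32)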

Got the shot, forgot the lidar

Here’s the lidar of shot P041 that I went back to get. I also included the model for those wanting to play with it: model file

—-

Got the shot of the painter walking toward the Model (who is inside the painting) in the shed.  Did it in one take; with all the techie stuff it’s too hard for me to get into “acting” mode, so I do the setup of the camera & lights & everything … then think things through and just do the acting.  I actually added a few plot points when I did the acting, points which are now in the back of my mind with small visual triggers captured via camera.  I think it’s important to capture the performance during the shot, not create the performance in post.

I did forget to do the lidar of that shot. Fortunately I didn’t move the tripod, so I think I’ll be able to go get an accurate 3D configuration of the room and camera angle.  I did the shot with a 35mm Master Prime Anamorphic at T1.9 (wide open).

My daily work pattern, of sorts – need to get into a cycle

Since I shoot, do the vfx, and also do the performance capture, it’s hard to get a daily work plan going.  So I’m going to focus on just getting one major piece done per day, not try to do every facet of filmmaking in one day (that just makes me mentally fractured and I can’t finish anything).  I’ll be reshooting P033 tomorrow, then Wednesday do the vfx for it.

P033 is the painter entering the shed with the vfx “painting” in the background.  I need to reshoot P033; the framing of the vfx painting isn’t in the right place (the shot was framed too low).  P033 was blocked with the Model sitting on a chair in front of a real painting.  The shot is now the Model looking out from a vfx-generated painting that she is inside of.  Currently the painter's pants aren’t creating a clean cut from the previous shot (P032), so I had to reverse the entire plate on P033 for the cut from P032.  Tomorrow it’s going to be raining, and that shot is designed with the camera looking from outside into the shed (which will not work in the rain).  What I will probably do is go with a 35mm anamorphic lens and have the camera inside.  Also, the shot needs to start with the painter blocking the right 1/4 of the camera frame.

I need to do full motion capture with the Xsens setup.  Aside from doing the scene downstairs with the painter in the shed, I still haven’t got the motion capture of the little girl and female model inside the painting. I’ve been eating a bit too much ice cream this weekend and feel bloated; doing a lot of motion capture is a fairly athletic thing, which also mixes with me moving the cameras and such.

Nice doing some photography for a change, rather than doing vfx

I did a lot of practical photography today, for the painting sequences and “china” gold land areas.  It was nice getting away from the computer and actually moving stuff around, doing a cinematography shoot (even though it was for vfx practicals).  I’ll post some of the plate strips.  Mostly 4K by 12K pixel strips, with some 24K by 12K merges (so some huge plates).  I’ll be using these plates as projections into a VUE-built scene.

I need to figure out how to stitch these guys together.  I’m pondering getting even closer and getting a 36K plate.  Another approach is doing multiple projections and rendering it in Nuke.  The Nuke beta has some new stuff, but I don’t think the new scanline renderer is out.  I might just do it the old way with multiple projections, and convert each slice into a 6K by 1.5K projection slice onto a 3D terrain.
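
One projection slice set up the old way looks something like this in Nuke Python (a sketch only; paths, node names and camera values are placeholders, and it would be repeated per slice):

    import nuke

    plate   = nuke.nodes.Read(file='/vr/plates/china_strip_01.%04d.dpx')
    cam     = nuke.nodes.Camera2()                    # matched to where the strip was shot
    terrain = nuke.nodes.ReadGeo2(file='/vr/geo/terrain.obj')

    proj = nuke.nodes.Project3D(inputs=[plate, cam])          # project the plate through its camera
    tex  = nuke.nodes.ApplyMaterial(inputs=[terrain, proj])   # stick it onto the terrain geometry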

Adding in the dog …

I noticed in the performance capture I was petting a dog, but the dog wasn’t added into the scene … lol … good spur-of-the-moment action, so I'm adding in the dog for that cut.

Also, Nuke 10 seems real nice so far; it will speed up my workflow and is extremely stable for a new release.

I finally worked out “the shot” for introducing the female “model” and also entering into the first painting.  I’ll have to do a pickup shot in the shed tomorrow, and might reshoot the painter part of things.  Also I need to do the performance capture of that scene for the Model and the Girl.  I’m pondering the blocking for that right now as I finish up the dog in Nuke, so I will have a good idea of what to do in that performance capture session on Friday/tomorrow.

wrong merge node …

While copying p032_c007 to p032_c003 and editing c003, I noticed I had the wrong normal in a primary merge node, which prevented some of the water from having nice edges.  I need to remember to go back to node Merge5 in p032_c007 and re-render that entire scene.

In p032_c003, I think the boy is sitting too far up on the shed; I’ll need to re-render and bring him down a bit.  Also he really needs a shadow pass to tie him to the ground.  I think the boy needs to be about 20% bigger too.

OK, I used the ambient pass from the Xsens boy and it gave a nice shadow; rendering now … 650 frames, which takes about 9 minutes (rendering over the 10G network).  Under a second per frame to render 4K vfx, not too bad 🙂  This also allows me to do a lot of quick little fixes to get a good feel for the shot. I keep messing with the shadow; here’s a jpeg of the boy sitting next to the shed:

4k jpeg of the boy sitting waiting at the shed door (he is scooting over right before this frame)

Antagonists that are Sociopaths & Psychopaths

The antagonists in Pudls are sociopaths and psychopaths.  I’ve noticed various definitions for both, and I like to merge them a bit for my character.  I simplify this a bit: the antagonist is someone who is trying to kill society; they are themselves inherently inefficient and do not want to compete on a fair playground; they attack society by moving up social chains, telling people exactly what they want to hear to move up the social hierarchy, and ultimately controlling others in a social hierarchy that is inefficient.  I started off the film with basically “eat whole foods that are primarily vegetables”, and was finding pretty much everyone knows that, but there are systems in place that have been running for a long time that make this very hard.

One interesting thing about sociopaths/psychopaths is how they gain social power.  Whether at the individual relationship level, such as some men abusing women, or at the political level, where a person erodes the rights of others in order to control them … for both of these I study exactly how the sociopath gains social power from otherwise “normal” people.  As a plot progression: the sociopath has a hole in their emotional system; realizes that and hides it; studies emotional weaknesses in others and uses that to figure out how to degrade them; manipulates the emotional stress/weakness of others to gain wider social impact; and finally controls a wider group as the sociopath has “gained” in a social network.

From a personal viewpoint, I think it’s really important to recognize sociopaths and psychopaths, and not give them power over you.

Been messing with a shot forever, going to nail it today

I’d hate to think how much time I’ve spent on this single shot … it’s basically one of the early shots I had of the painter moving down a hallway, with animal movement.  I went to do a conform on it yesterday and found I couldn’t, because a bunch of intermediary files were missing and I had changed the nodes significantly.

The nodes on this comp are hideous looking. I’m going to clean up this shot, and if I can’t clean it up today I’m going to drop it from the film. (p031_c007)

——

Cool, it’s 5:30 pm and I finished it – it's looking real nice, and I deleted about 80% of the nodes in Nuke.  I added a frame cut in the Nuke composite which I think feels real good; my edit/frame cuts are starting to work.  The entire render for the 271 frames was around 8 minutes, which I think is pretty quick (that is a direct read in Nuke from the R3D all the way to the final composite which goes into DaVinci).

Did a lot of math today

I got a lot of math done today; it's extremely solid … can’t believe I didn’t have this worked out before.  I’m moving away from xyz spatial coordinate systems.  The new system is really powerful.  I’ll explain it throughout the film (it's connected to the plot and premise).

Every day a pyramid

Pulling a sound isolation box upstairs – it seems like every day I’m building a pyramid from scratch.  It wouldn’t be so bad with a crew, fairly quick in fact, but lifting it up myself I have to be smart, and it takes me a bunch of time.

isobox

Moved from P3 to Rec 709 color grading

I am doing all my editing and finishing in DaVinci Resolve right now.  My temp reel cuts were becoming a hassle to do in P3, and I didn’t like the conversion look when I uploaded them for reviews.  So I moved my screen and all finishing to Rec 709.  When the movie goes to the “final” finishing step, it will then get a P3 grade (probably just using the grade I’m doing now as a reference but otherwise not using it).

I’m cleaning up a lot of my cuts right now, and I'm also a lot more accurate on the start/stop frames that need to be animated.  This speeds everything up a lot from a schedule viewpoint, since I’m only animating exactly what is necessary.

I need to start a music track soon. I should be getting a sound isolation box in, so my room won’t be so noisy … and I can do more with Pro Tools 12 (which btw is working very well).

Into the frying pan

I just finished the rotoing and the egg breaking into the frying pan; it looks pretty good so far.  I need to add the chicken.  In the past I really haven’t liked the chicken, so I think tomorrow I will focus on “the chicken”; there are a bunch of scenes where I need to update and replace the chicken.

A plan

I’m putting the lidar scans in the folder of the R3D. Also, when I take the lidar scan the camera is part of the scan, so I have exact camera placement for the scan. So this morning's pickup shots are:

P039_C001 was shot with a 35mm anamorphic (putting on shoes)

P039_C002 was shot with a 50mm anamorphic (close up cracking egg)

P039_C003 was shot with a 50mm anamorphic (different ways of walking away from the pan, locked-off shot from C002)

—-

Doing the vfx and compositing for these shots now.  I think I’ll do P039_C001 first, so I can finish off the back door/snake shots.  Then I’ll do C002/C003; the goal is to shoot the pickup shot and complete it all the way through vfx in one day – TODAY.  (Turning my phone off btw, if anyone’s trying to get hold of me, to complete this work.)

Having some ProRes 444 speed problems on the Mac; just going to flip to DPX to get stuff done. LOL … now the mouse isn’t working on the 12-core Mac; the entire keyboard config I guess isn’t reading.  Plugged in a mechanical keyboard, it’s working.  I end up having a lot of problems with Mac keyboards btw, they are just way too wimpy.

Man, I just found a super cool way of clean plating that was built into Nuke 9, a for-sure speed-up on what I was doing.  6:29 pm, still rotoing the knees.  Well, it’s 9 pm; I do really like the look of putting on the shoes, and the Nuke script is btw real clean.  I think doing a pickup shot and getting it through vfx and even compositing is doable in a day; I'm not sure if it’s possible to do two scenes all the way from shooting with the camera to final output in a day.  Perhaps as I get quicker at things.

What’s happening next week …

Well, I got the scenes on the first reel done from the bed to the back door.  Still to do is a pickup shot on the back door, with performance capture.  I’ll do that Monday.  I also got the color grading system going, which wasn’t part of my plan … also a bunch of business stuff came up which I needed to get done.

Next week is finishing the compositing of the painter going into and out of the shed, and the huge task is the rough of the introduction of the model in the full VUE environment.  I have been working on the blocking and general scene look for the “pool” environment that the Model lives in (which btw is at the base of the golden hills).

Clean and Catching up …

Going to go through all my tasks this week and see which ones I can finish up by Sunday, to get as many of the elements completely done as possible.

Things I need to do are:

  1. Roto out the painter in cut 2 (I btw am using the Tangent key system, which is drastically improving my roto speed). This roto just isn’t working out; good practice, but the shot is looking down too much and the angle is confusing as I look at the dog.  I’m going to reshoot this, also do performance capture of it, and do the dog in the same take group.  I didn’t wear the Xsens suit when I did this shot, and that was a mistake.
  2. Move the HP computer so my knees aren’t hitting it
  3. Get a mocha (almond milk, Dutch cocoa with no sugar, two shots espresso, whipped cream) [DONE on the mocha]
  4. Got my DaVinci Studio, so installing that … this is my kind of “fun” task [I got DaVinci and the panels working … I’ll do some training on it tomorrow]

Got Nuke on track, using grid operations today

Got my changes working in Nuke.  The “AppendClip” and retiming stuff has to be done a certain way; I tend to transform space and time all over the place, and Nuke doesn’t like that.  I will be using grid-based transforms today. I haven’t used those significantly before, but they're required for the snake to go through the door in water form (most people would have probably done this in Maya, but I do a lot of the “Maya” type of things in Nuke for turnaround speed reasons).  I just brought up my performance monitor while doing a Nuke batch and noticed everything is 100% maxed – all 20 cores are maxed and I think the GPU is also maxed, and disk speeds are high too (around 500 megabytes per second).  It’s nice that I finally have my workload processing balanced.  The run I’m doing is mostly on EXR, since I have both normal transforms and velocity transforms per pixel I need to use; the result goes to a ProRes file though.

I just received my Tangent Element keyboard, so I will be setting my color correction system up Monday 😉  Trying to decide the best way to do it. Initially I had my Windows machine doing the color correction inside Nuke Studio, but Nuke Studio is just too buggy right now, so I recently moved over to DaVinci.  With DaVinci, the way I have my setup, it needs to run on a Mac, but on my Mac I’m not sure I can use the 4K display.  So I really need to test whether the Mac can output to my 4K display through DaVinci; the Mac does go correctly to my 2K color correction screen, so I should be OK with 4K.  The way I have my desks laid out, this would mean moving my computers slightly.

I am running a batch command in Nuke using grid transforms on liquid; it's working out well, but I think I’ll need to roto-paint it a bit for the final.  The Nuke GridWarp node has a nice feature in that I can turn it on/off within a range, so most of the time it isn’t hitting the CPU very hard.  When the grid transform is active I did notice a significant slowdown in frames rendered per second.
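
The on/off-within-a-range part is just an expression on the node's disable knob; a sketch with a placeholder node name and frame range:

    import nuke

    gw = nuke.toNode('GridWarp1')
    # only pay for the warp while the snake is actually going through the door
    gw['disable'].setExpression('frame < 1050 || frame > 1210')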

Working on the dog now. Actually, I'm going to mess with my Tangent panel; I need a mental break and will do the dog after 8pm.  On the panel, I did some research and I’ll need to use the most recent version of the Mac OS to use my color grading monitors, but the most recent version of OS X works flaky with DaVinci, so I’ll do my grading on the Windows 7 20-core system that already has my 4K grading monitor.  I’ll start doing more Nuke batch jobs on my 12-core Mac Pro; I have that set up already anyway.  So all I need to do is plug my color grading interface into my Windows machine and I’m good to go 😉  GOT IT WORKING, at least got it working with Maya, which I think is a bit harder to do since I need to map the keys through this program I downloaded.

On the dog, which is pretty much the painter's spirit guide, I wonder if I should just stretch the image of a man into the dog.  It would line up much better with the skeletons, since I’m doing the performance capture of the dog with my skeleton – and need to match up my skeleton with the dog every time I need to animate the dog.  It would make the dog a bit weird looking, but I was looking at meshes … a kind of deformed dog would be pretty freaky as a spirit guide – it definitely would be visually interesting.  I might even cut off the back leg of the dog; it would make it hop around more, and would actually line up better with my performance capture since I don’t move very well as a dog.  OK … so I’m not going to have a back leg for this dog – I hope this makes plot sense later on, since I do not want to go back and redo a bunch of animations.

Man, this dog’s looking pretty gnarly.  I have the dog animated now; it actually emits emotion and is kind of creepy, mangled … so the fact that the character comes across is good, I guess, from an “animation” viewpoint … emotionally I wish it looked a bit nicer – I might have to blur the normals a bit.  OK, doing the render; I’ll do the Nuke step tomorrow morning … and it’s only 11:54!  OK, 2 am, kicked off the Nuke run that has People->Snake with the dog.  Going to bed.


Working through the nights

Trying to get the kitchen and door/People scenes done.  Working through the nights and taking naps while batches are going.  It's progressing and the performance capture is looking good.  Converting People to the Snake is up today, so I need to get that going.  I’m going to kick off a rough of the kitchen scene in about 5 minutes, then get some coffee or something.  Did a quick check and the kitchen scene is looking good; I like how People is merging out from the Painter in the “background” (i.e. a clue).  I didn’t have a floor shadow on People; that shot really needs me to add some floor shadows.

OK, started on P037_C001. Fortunately it uses the performance capture continuation of P029_C010, so the Maya runs are set up fairly well.  I’m setting up the P037_C001 directory on the NAS right now; I need to search for the lidar scans that will match that scene, though.  Also I need to bring the performance capture of the Dog into Maya; I do have the dog mesh around somewhere though.  I actually haven’t animated a dog yet, so I need to put that together today as well.  Got the conform going now for cut1 and cut2, doing ProRes in the Nuke chain.  I actually could have shot on ProRes, but I did mess with the R3D a bit on ISO, so if I didn’t have a raw file and had shot it directly in ProRes I would be lacking some depth in the highlights.

It’s cold outside with my window fully open, and it feels nice with all the computers going … I really need to finish the vfx up before the summer hits.

—-

OK … to-dos for tonight

  1. Find the lidar of the back door [found; all my lidars are labeled by the download day of the email they arrived in!!! arghh!!! that is hopeless – from now on all LIDAR will be done on the same day as the shot, and will be labeled first by the primary camera R3D, secondly by the scene section]
  2. Find the mesh of the dog
  3. Bend the dog mesh into a T-pose
  4. Pick the best dog take for the Xsens MVN, convert that to FBX (remember to do the new 10x reduction trick I learned)
  5. Position the camera for P037_C001, then the lidar to the camera, then People to the lidar
  6. Position the dog to the lidar
  7. Do a rough render pass with both the dog and People in Maya; should be separate EXR files for People and the dog
  8. Do a rough render of cut1 of People and the dog in Nuke

… I keep getting crashes in Maya on the LIDAR mesh of the back door … arghh … I’ve redone the material and also cleaned the mesh.  It seems fine on the Mac side but explodes on the Windows side.  Got it: I just had to be fanatical about the paths to the lidar textures.  Classic how programs like this basically just crash when things aren’t perfect.

Hmmm … it’s only 7 pm, and I have the first Maya UV render running; I might just finish by 2 am?  My rendering speeds are now subsecond on complex 4K renders of caustics (i.e. liquids), real good times – without these types of speed-ups I’d never finish this film (I’d say normal films are around 2 minutes per render on things like this, so they need hundreds of computers to render, going way slower than my process).  I just use the one 20-core computer with I think around 3000 CUDA cores, and have a second Mac Pro to edit on since I have two licenses of Maya 2016. Well, the run was fairly good in Nuke, but LOL, I was getting way too much bum on People, so even though it’s not the exact place where the performance capture angle took place, I moved things around.  Rerunning the Maya batch, then I’ll try a full run of the Nuke.  Also, cut1 I moved from lasting 100 frames to 300 frames; as I think about it, 3 seconds for a major scene is just too short, 12 seconds seems about right.  I can probably do some Nuke rotoing while the Maya is going, but I’m starting to burn out.

In P037_C001 I didn’t get a lidar of the shoes in the scene, which is turning into a bit of a pain in the butt.  I actually could do a camera solve on it, since I did a tripod rock on that clip (it's the first 150 frames of the clip).  Also, as People turns into the snake, with my “not showing his bum” modification, the snake is getting too small – so I need to raise the Maya camera a bit and reshoot that sequence (I really need to learn how to do a 2nd Maya camera; hopefully I’ll do that tonight without messing it up).  Kicking off the snake Maya run now; it only took me a few minutes to tweak it, and I don’t think I messed the camera up (I need to remember to save this session as a new version).

The Dog will have to be done in the early morning, since it’s 11 PM now.

Finalizing the look of People/Snake

The character People/Snake needs a specific look.  I was cleaning up P031_C007, the first scene that has People in it, and the feel is just “off”.  I need that look finalized for the kitchen scene and the conversion-to-snake scene.  So I'm doing some runs on that.  I want a watery, blocky scale for it; I visualized what I want last night while meditating … so I have to figure out how to get there in Nuke.

I btw have my color correction station coming in soon: full DaVinci Resolve with a large Tangent panel (the real huge one).  Been trying to figure out where in the room to put it.

Also organizing my performance capture shots. I take nowhere near enough notes when I do performance capture; I am starting to put that in the metadata … but the file system doesn’t show the metadata without me opening the files.

For the People-to-Snake scene at the door I have a nice performance, but no idea which exact file it is.  I just finished moments ago moving all the performance capture data to the SAN; it took about an hour.

LMBO … I actually did name the performance capture files decently after November; small miracles in my organizational skills.

On P031_C007 (establishing shot of walking from the bedroom then downstairs): it's a fairly fast scene … I was having the chicken land and some interaction with the squirrel, but it's just too cluttered … so it will be just the Painter and People in that scene. (Every scene's post time is pretty much dependent on how many actors are in the scene; I’m going to have fewer actors per scene, as it's cluttering the shot plus adding too much post vfx time.)

Still on P031_C007, doing a lot of cutting … there will be, btw, a CUT1 and CUT2 (they actually look like two different shots with a cut). It ran like 1600 frames, but the timing was just off.  Now it’s straight time: P031_C007_CUT1 and P031_C007_CUT2.  Also the chicken Maya runs were just intermixed with stuff, and btw looked like chicken crap, so I need to redo those Maya runs.  Also the camera solve was horrible; with CUT1 and CUT2 there's no need for camera solves. … Well, I did the clean-up, but the Maya run of People coming out of the painting will need to be redone.

Using people_to_snake_003.mvn for the ending of the kitchen shot and the dog-scaring shot. Wondering if I should move it from 240 fps to 24 fps; it has been such a pain to reduce the fps on the animation by 10x … the MotionBuilder filters are messing up on this for some reason.  I’m working on the files across the SAN; it's real clean so far.  In MVN I did an export frame skip of 10, crossing my fingers this will work better – hmmm … that worked perfectly, so I think I’m going to skip MotionBuilder for now.  The MotionBuilder package btw cost me a TON and I’m not finding it very useful.  It’s a lot about which programs to master, and which things turn into a time sink.
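
The frame-skip export is effectively just decimation; a toy sketch of the idea (MVN does this at export time, this is only to show what “skip 10” means – the array shape is made up):

    import numpy as np

    mocap_240 = np.random.rand(2400, 23, 3)   # placeholder: 10 s of 240 fps data, 23 joints, xyz
    mocap_24  = mocap_240[::10]               # keep every 10th sample -> 24 fps
    print(mocap_24.shape)                     # (240, 23, 3)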

Btw, dealing with RED is a time sink.

Updated the shaders and integrating more performance capture

P029_C0010, the first kitchen scene, is the first scene to have R3D tightly integrated with the 240 fps Xsens suit.  I have that in other scenes, but in that scene it’s all done by the book and is fully timed.  I need to do the egg-cracking part on that; I'm not exactly sure how to do it.  Also there’s a switch back and forth in that scene as People scares the dog and flips to a snake, so I haven’t exactly determined the edit timing.  C010 is 700 frames, which is split.  What I think I might want to do is the medium shot of C010, the close shot of the malformed egg, then the People/dog shot, then cut back to C010 as the painter walks out (which then recuts back to the dog shot area).  I think this is the sequence I’ll be working on Monday and Tuesday.

Things are going fairly smoothly, but the cable modem seems to be having problems.  Went out and got another modem; had to do all sorts of stuff registering it.  The network didn’t work for a while.  The Exablox SAN just came up, so that is fine.  Rebooting everything else … crossing fingers – well, everything is working … miracle … but that took like 3 hours of the day (it’s 3:30 pm).  Only 8 more hours I’ll be working today, so going to take a little break.

Dec 7-11th Weekly Plan

Here’s the scene list

  1. Add the chicken flying out of the starting scene as the bed pans (finished render)
  2. Modify the chicken on the stairs scene; it just doesn’t look right (finished render)
  3. The kitchen scene, all actors (including chicken, People, painter) … the painter is already looking good (but use the new chaotic shader for the water effect) (finished render)
  4. Do the door scene … all shot and all the performance capture done (rough render)
  5. Do a rough of the door-to-shed scene (R3D done, and performance capture done); need to ponder the dog (rough render)
  6. Need to reblock the inside-shed scene, maybe storyboard that a bit, as the painter goes to the black painting (go through the R3D, might have something decent there) (finished render)
  7. Complete a rough render of the woman-inside-the-painting scene; need everything from performance capture to Maya/VUE.  Also need to storyboard this. (proto render)
  8. Get a rough of reel 1 and spot it for the music recording session.

ProRes 4444 and always learning

I really like ProRes 4444; I've been flipping a lot of my files over to 4444, and also using the alpha channel.  I’ve also been modifying my water shader (my vfx routine that makes things look like water in the film); adding a bit of noise animation within the shader gives it a moving-water feel and a feeling of plasticity.
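
The noise-animation part amounts to something like this in Nuke Python (a sketch only; the node names, mix amount and noise values are placeholders, not the actual shader network):

    import nuke

    water = nuke.toNode('water_element')              # placeholder for the water branch

    noise = nuke.nodes.Noise(size=80, octaves=4)
    noise['zoffset'].setExpression('frame * 0.05')    # drift the noise through time

    mix = nuke.nodes.Merge2(inputs=[water, noise], operation='soft-light', mix=0.2)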

I need to finish up some xStream lessons today.  I think I’ve caught up on my FXPHD classes, but I’ll check on those too.  So today's an experimenting day.

Had a lot of weird “stressful” type dreams last night; I meditate around 2 to 4 hours a day.  Somewhat as a result of this, I will start meditating part-time in the garden.  I also do quite a bit of breathing exercises while meditating; I’ll move those to the garden too.

Dec 4th Weekly Recap

I think I need to do a scrum-ish recap/close every week; Friday seems a good day to do this.  Also going to do a weekly plan; Sunday seems a good day for that.

I just finished another class from Victor Perez where he was going over file referencing and linking in Nuke.  It came at a perfect time, and I’ll switch to his method today (I was doing full-path referencing, and that doesn’t allow me to move my Nuke jobs quickly over to my 12-core PC when I need to do Maya/VUE stuff on my 20-core PC).
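
One common way to make a Nuke script relocatable (not necessarily the exact method from the class) is to point the project directory at wherever the script lives and keep Read/Write paths relative to that:

    import nuke

    nuke.root()['project_directory'].setValue('[python {nuke.script_directory()}]')

    # paths like this now resolve against the project directory, wherever the job is copied
    read = nuke.nodes.Read(file='renders/p031_c007/comp.%04d.exr')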

So here’s my recap

———————–

The vfx scenes where the R3D will not be used as a base were becoming problematic to build all by hand.  I researched options for doing fully vfx-created world scenes, and bought Vue xStream and PlantFactory on Black Friday/Cyber Sunday.  I will be merging the world textures with R3D of the paintings, so I'm researching that methodology.  Finishing learning the basics of Vue and PlantFactory this week, so I can start the dark puddles scene Monday.  Completed previs and story development on the first two Vue-based scenes. Completed the move to shared-nothing disk management, and the conversion of Nuke and Maya scripts to indirect references.  Retimed the starting shots of the film in Nuke, and also redid the rotos for the first scenes of reel 1.

Another huge thing I’ve found through testing this week is that by doing my colorspace workflow correctly, I can do almost all my compositing in ProRes 4444 and be around 4 to 6 times faster in compositing.  Even the disk arrays go faster, since the files are large continuous blocks.  So by going with my rectangular pixels and ProRes 4444 workflows, I have a 4K workflow now that is significantly faster (at least 4x faster and often 10x) than my previous workflows.  It’s a lot easier to organize too, since the file count is almost 1000x fewer files.

Plowing through the compositing to get the 1st reel done

I’m plowing through the compositing to get the first reel done.  I might need 1 pickup shot to finish the reel.  There's also a big effort to get the first “inside the painting” shot done (which is needed in the first reel).  A kind of cool thing about the first reel is that it introduces the whole world and conceptual framework for the series (i.e. Pudls is a 4-film series).  So I'm really focusing this next month on getting the reel done.

Blocked off the heater for this room, so no central heating comes in.  With the computers going 24×7 there's no reason to heat upstairs in the wintertime.

Performance Capture Editing

First off, 50% of what people tell you about live action editing is totally wrong for performance capture editing, and the other 50% is so trivial compared to performance capture editing that it should be used as a starting suggestion only.

I’ve wasted a lot of time doing editing and story development like a live action film; performance capture editing is much more like animation editing.  In animation editing the editor is involved throughout the entire film, completely involved in the whole film's process and music.  But I’d say classical cel animation editing is about 10x easier than performance capture editing, since performance capture editing is around 240 frames per second while cel editing is really 12 FPS with inbetweening (frames between primary frames for motion blur and movement).

So first off, why is performance capture so many FPS? Well, I think an easy way of thinking about it is: if you're simulating a bouncing ball, and the simulation steps are every 24th of a second, that ball would often be hitting the ground not exactly on a 24th of a second, so it would frequently go “under” the floor.  So in order to both simulate and capture the physics of the world, you need a sample frequency of about 10 times your viewing frequency in order to model complex behavior.  (I’m actually doing some simulations at under 1 billionth of a second.)
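
A tiny illustration of that sampling argument, assuming a ball dropped from 2 m under gravity: the first sample that lands below the floor overshoots it noticeably at 24 samples per second, and barely at 240.

    def floor_penetration(rate_hz, drop_height=2.0, g=9.81):
        dt = 1.0 / rate_hz
        y, v = drop_height, 0.0
        while y > 0.0:          # step until the first sample lands below the floor
            v += g * dt
            y -= v * dt
        return -y               # how far under the floor that sample is

    print(floor_penetration(24))    # ~0.04 m under the floor for this drop (can be ~0.25 m worst case)
    print(floor_penetration(240))   # ~0.007 m under the floor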

Also, the story development process is totally different in performance capture in that it’s much more about game theory: the actor is given tools and problems to solve; the story is the actor stretching to solve these problems, which creates a character arc.  So performance capture story development isn’t just telling an actor what to do at 240 fps; it’s about giving actors tools and problems to solve in a sequence in order to stretch the actor and create a character arc.

There are probably about 20 other fundamentally different things about performance capture editing; the only advice I can generally give is to watch some of the behind-the-scenes material for films like “Avatar” and “The Hobbit”.  Also really understand animation editing; the “Inside Out” behind-the-scenes material I think does a great job on that.  Finally, ignore anyone who is telling you to do something in a specific order, and really listen to people who are giving you tools to help you solve problems.  The book by Murch, In the Blink of an Eye, even though it’s an old-school book, is more about understanding people, their problems and tools – so some of the old-school stuff is a great reference.  The how-to, step-by-step people should be avoided.


Rebooting ….

90% of the time my local RAID (which is Thunderbolt on Windows 7) isn’t seen by the OS the first time I boot in the morning.  So I just don’t worry about it anymore (plus I keep my Exablox SAN up to date), and reboot every morning.  Arghh … Windows 7 also isn’t seeing my SAN shares, though it can see the SAN servers. I really don’t want to move to Windows 10, and I have no idea if that would fix these weird Windows disk issues (which are happening on my SAN and also my local Thunderbolt RAID … totally different manufacturers and connections … so it's the OS).  Setting up my Autodesk floating license server on Windows 7 was totally hacky, so I also don’t want to get into one of those license re-install & go-crazy-for-10-days things. Hmmm … the Windows shares started magically showing up … I have a love/hate relationship with Windows.  I might just keep all my computers ON from December through February.

Going to put the tea on. Also just decided not to turn off any of my computers for a few months.  Here’s my computer setup; it’s laid out kind of like 5.1 surround sound:

Left Side: license server

Left Front: NukeX, Nuke Studio, Maya, MVN, VUE, PlantFactory

Center: Pro Tools, NukeX, Nuke Studio

Right Front: email, Maya, Photoshop, training material

Right Side: MVN (primary box for performance capture), license server

LFE: two Exablox units with a 10G switch over them

I have to keep my headphones on all the time, it’s so noisy.  I am ordering an isobox (a sound isolation rack) this week to put the LFE (the two Exablox units and the 10G switch) in.