For adaptive multi point of view (AMPV) compositing I really need to use deep pixels. This is in OpenEXR 2.0, but I haven’t had much experience in deep compositing. I’m doing a few tests this morning on how to composite AMPV files using deep nodes. It really just takes one extra node to turn my standard AMPV image into a deep image, and then the compositing works fine in deep. I need to finish this step to determine how to do fog, then I have enough to start the actual compositing for the VR scenes – getting out of test/trial mode.
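For anyone curious what that “one extra node” is doing under the hood, here’s a small sketch of the math as I understand it (my own reconstruction, not NUKE’s actual implementation): a flat premultiplied RGBA pixel plus a depth value becomes a one-sample deep pixel, and deep pixels then flatten by sorting samples front-to-back and applying the usual “over” operation.

```python
def flat_to_deep(rgba, z):
    """Wrap a flat premultiplied RGBA pixel as a single deep sample."""
    return [(z, rgba)]

def deep_flatten(samples):
    """Flatten a deep pixel: sort samples near-to-far, then 'over' them."""
    out = [0.0, 0.0, 0.0, 0.0]
    for z, (r, g, b, a) in sorted(samples):
        w = 1.0 - out[3]          # remaining transparency at this depth
        out[0] += w * r
        out[1] += w * g
        out[2] += w * b
        out[3] += w * a
    return out

# Merging two elements is just concatenating their sample lists:
near = flat_to_deep((0.2, 0.2, 0.2, 0.5), z=1.0)  # 50% grey card up close
far  = flat_to_deep((0.8, 0.0, 0.0, 1.0), z=5.0)  # opaque red behind it
print(deep_flatten(near + far))  # → [0.6, 0.2, 0.2, 1.0]
```

The nice part is that merge order stops mattering: the depth sort handles occlusion, which is what makes the rest of the deep workflow “just work”.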
Well, I needed to do depth of field before I did the fog. So I just finished the depth of field (took an hour). Hmmm … there’s a bunch to this deep compositing. AMPV mixed with deep compositing workflows is something I’ll be messing with for the next few years, so I need to learn enough to get my scenes done yet not get buried in it. I’m ending up doing a lot of this with NUKE expressions (I just do the math myself per channel; it’s easier than fighting with things, and much faster).
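As an example of the kind of per-channel math I mean, here’s roughly what the fog boils down to, sketched in Python rather than NUKE expression syntax (a standard exponential-extinction model under my assumed `density` parameter, not necessarily the exact formula I’ll ship):

```python
import math

def fog_channel(c, fog_c, z, density):
    """Blend one channel toward the fog colour by distance-based extinction."""
    f = 1.0 - math.exp(-density * z)   # 0 at the camera, approaches 1 far away
    return c * (1.0 - f) + fog_c * f

# At the camera the pixel is untouched; far away it converges on the fog colour.
print(fog_channel(1.0, 0.5, z=0.0,   density=0.1))  # → 1.0
print(fog_channel(1.0, 0.5, z=100.0, density=0.1))  # ≈ 0.5
```

In an expression node this is one line per channel with `z` coming from the deep sample’s depth, which is exactly why doing the math by hand beats wrestling with a node graph.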