I have a list of steps for building the scenes with the model. Here's the current set of things that all need to be imported into /vr/maya/scenes/womem_build_cloth.00xx.mb:
- Load the model from /vr/maya/data/women.0001.fbx (this is a tweaked model from modo, with a lot of geometry removed).
- Load the hat mesh from ./vr/maya/data/woman_hat.obj (a mesh modeled from cloth, then exported as a selection).
- Load the dress and veil, which are fully nCloth modeled.
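Since these import paths are fixed, the imports could be scripted. A minimal sketch, assuming a mapping from file extension to Maya import type (the `FILE_TYPES` table and the idea of driving `cmds.file` from it are my assumptions, not part of the original steps):

```python
import os

# Assumed extension-to-Maya-import-type mapping; adjust for your plugins.
FILE_TYPES = {".mb": "mayaBinary", ".fbx": "FBX", ".obj": "OBJ"}

# The asset paths from the list above.
ASSETS = [
    "/vr/maya/data/women.0001.fbx",   # tweaked modo body model
    "./vr/maya/data/woman_hat.obj",   # hat mesh modeled from cloth
]

def import_jobs(paths):
    """Pair each asset path with the Maya file type it should import as."""
    jobs = []
    for path in paths:
        ext = os.path.splitext(path)[1].lower()
        jobs.append((path, FILE_TYPES[ext]))
    return jobs

# Inside Maya each pair would drive cmds.file(path, i=True, type=ftype).
for path, ftype in import_jobs(ASSETS):
    print(ftype, path)
```

The loop body is the only Maya-specific part; everything else runs in plain Python, which makes the manifest easy to test outside Maya.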
From here, load the animated skeleton file (e.g. model_at_base_of_stairs-03.mvn.fbx). Going from the T-pose to the first "real" pose requires me to put in about 48 frames, so the start time is minus 48 frames.
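That 48-frame lead-in can be computed rather than hard-coded each time. A small sketch (the example frame numbers are assumptions):

```python
TPOSE_LEADIN = 48  # frames needed to settle from T-pose into the first real pose

def playback_range(first_pose_frame, last_frame, leadin=TPOSE_LEADIN):
    """Return (start, end) so the sim starts `leadin` frames before the pose."""
    return (first_pose_frame - leadin, last_frame)

# e.g. an animation whose first real pose lands on frame 0:
print(playback_range(0, 240))  # (-48, 240)
```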
Bind the woman's skeleton to the body mesh.
Adjust the constraints by running the animation (this is tedious; it just takes time). I need to do this with a cache, since it's far too slow without one. And because the fabric is moving, I can't do this at a lower resolution.
Output the entire mesh sequence as an Alembic file.
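For the Alembic export it helps to build the AbcExport job string up front. A sketch assuming a root DAG path and output name of my own invention (`|woman_grp`, the cache path); verify the flags against your Maya version's AbcExport help:

```python
def abc_job(start, end, root, out_path, uv_write=True):
    """Build the job string passed to Maya's AbcExport command."""
    flags = ["-frameRange {} {}".format(start, end)]
    if uv_write:
        flags.append("-uvWrite")          # carry UVs through to the .abc
    flags.append("-root {}".format(root))  # DAG path of the mesh group
    flags.append("-file {}".format(out_path))
    return " ".join(flags)

# In Maya:
#   cmds.AbcExport(j=abc_job(-48, 240, "|woman_grp", "/vr/maya/cache/woman.abc"))
print(abc_job(-48, 240, "|woman_grp", "/vr/maya/cache/woman.abc"))
```

Starting the range at -48 matches the T-pose lead-in above, so the cloth is already settled when the "real" animation begins.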
Load the Alembic file into Nuke. Scale/rotate it to fit the scene, and animate the transform where the scene and model don't line up.
Run the scene through the Adaptive Frame from the camera positions, i.e. go through all the camera positions, creating adaptive frames for each "camera shot". Name the AF sequences by camera number/shot ID.
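A consistent naming helper keeps the per-camera AF sequences sortable. A sketch; the exact pattern (`cam##_shot<id>_AF.####`) is my assumption, not an established convention:

```python
def af_sequence_name(camera_num, shot_id, frame, pad=4):
    """Name an Adaptive Frame file by camera number, shot ID, and frame."""
    return "cam{:02d}_shot{}_AF.{:0{p}d}".format(camera_num, shot_id, frame, p=pad)

print(af_sequence_name(3, "A010", 12))  # cam03_shotA010_AF.0012
```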
Some steps in this script are erroring out, so I think I'm going to run it in the ranges I had in the other post. I'm also finding it's better to just drape the cloth at the location where it's needed. For the girl I'm going to use a different approach, but for this model (the woman) I want this nice drapery.