The Thirdrez Kinetiq Engine exists to balance speed and believability. We use deep learning to generate performances in minutes, yet we guarantee that every take passes through human refinement before it reaches the marketplace. Here is the anatomy of that loop, with links to the research that inspires our approach.
1. Synthesis architecture inspired by research
Our architecture uses a temporal encoder-decoder with prompt conditioning and contact descriptors. It builds on work such as the phase-functioned neural networks introduced by Daniel Holden and colleagues, adapted for long sequences and with an emphasis on complete cycles. We extend that idea to support:
- Multimodal conditioning (text, tags, LoRA style embeddings).
- Contact awareness for foot locking and props.
- Energy curve guidance (useful for athletic movements and idle loops).
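To make the phase-functioned idea concrete, here is a minimal sketch (hypothetical, not the production Kinetiq architecture) of how network weights can be blended as a function of a cyclic phase variable, so that different "experts" govern different points in a motion cycle:

```python
import numpy as np

def phase_blend(expert_weights, phase):
    """Blend K expert weight matrices by a cyclic phase in [0, 1).

    Linear interpolation between the two experts adjacent to the phase
    value; published phase-functioned networks use a smoother cubic
    spline, but the wrap-around blending principle is the same.
    """
    k = len(expert_weights)
    pos = phase * k                # position on the circle of experts
    i = int(pos) % k               # lower expert index
    j = (i + 1) % k                # next expert (wraps around the cycle)
    t = pos - int(pos)             # interpolation factor between the two
    return (1.0 - t) * expert_weights[i] + t * expert_weights[j]

# Two toy experts: phase 0 selects expert 0, phase 0.5 selects expert 1,
# and intermediate phases blend between them.
experts = [np.zeros((2, 2)), np.ones((2, 2))]
blended = phase_blend(experts, 0.25)  # halfway between the two experts
```

In a full model the blended weights parameterize the decoder at each frame, which is what lets a single network specialize smoothly across a locomotion cycle.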
2. Retraining with human-polished data
Every engine iteration passes through feedback alignment with clips polished by Thirdrez animators. We track approval markers for each stage (blocking, polish, retarget) and reincorporate those takes into the dataset while maintaining version history via MDM. The model learns to repeat the corrections artists would otherwise make manually.
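A simplified sketch of how approval markers might gate which takes re-enter the training set (stage names and field names here are hypothetical, chosen to mirror the blocking/polish/retarget stages above):

```python
from dataclasses import dataclass, field

# Hypothetical stage names matching the approval markers in the pipeline.
REQUIRED_STAGES = {"blocking", "polish", "retarget"}

@dataclass
class Take:
    clip_id: str
    engine_version: str
    approvals: set = field(default_factory=set)  # stages an artist signed off

def training_candidates(takes):
    """Keep only takes approved at every stage, grouped by engine version.

    Grouping by engine version preserves provenance, so retrained models
    can be traced back to the iteration that produced their data.
    """
    by_version = {}
    for take in takes:
        if REQUIRED_STAGES <= take.approvals:  # all stages approved
            by_version.setdefault(take.engine_version, []).append(take.clip_id)
    return by_version

takes = [
    Take("run_01", "2.1.3", {"blocking", "polish", "retarget"}),
    Take("idle_07", "2.1.3", {"blocking"}),  # not yet fully polished
]
```

Only fully approved takes survive the filter, which is the gate that keeps half-finished clips out of the retraining corpus.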
The cycle ties directly into:
- Kinetiq Engine v2.1.3, which details the hybrid pipeline.
- The Motion Ops Playbook, responsible for versioning datasets and retrained LoRAs.
- The panel described in Changelog-Driven Quality, where we communicate every relevant update.
3. Cross-platform validation
Generating clean curves is not enough; we need to prove they work in UE5, Unity, Roblox, and Second Life. We automate tests with:
- Percentage difference in root displacement between synthesis and post-polish.
- Analysis of penetration and torque on key joints.
- Execution on reference rigs (UE5 Mannequin, Unity Humanoid, Bento, R15), logging applied offsets.
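The first of those checks can be sketched as a small metric function (a hypothetical illustration of the idea, not the production test harness), comparing the total root path length of the synthesized take against its polished version:

```python
import numpy as np

def root_displacement_diff(synth_root, polished_root):
    """Percentage difference in total root displacement.

    Each input is an (N, 3) array of root positions per frame; the metric
    compares the total path length travelled by the root, so a clip whose
    polish pass shortened or lengthened its trajectory scores higher.
    """
    def path_length(root):
        # Sum of frame-to-frame distances along the trajectory.
        return float(np.linalg.norm(np.diff(root, axis=0), axis=1).sum())

    synth_len = path_length(np.asarray(synth_root, dtype=float))
    polish_len = path_length(np.asarray(polished_root, dtype=float))
    if polish_len == 0.0:
        return 0.0 if synth_len == 0.0 else float("inf")
    return 100.0 * abs(synth_len - polish_len) / polish_len

synth = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]          # 2 units of travel
polished = [[0, 0, 0], [1.25, 0, 0], [2.5, 0, 0]]  # 2.5 units of travel
diff = root_displacement_diff(synth, polished)     # 20.0 percent
```

A per-axis or per-frame variant would catch lateral drift that total path length hides; the choice depends on what the polish pass is allowed to change.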
These checks feed the workflow described in From Prompt to BVH/FBX/ANIM and the stability guide Root Motion and Foot Locking.
4. Operational metrics
We monitor:
- Iteration time between the initial prompt and final delivery.
- Manual correction rate (how many frames required human intervention).
- Retarget confidence by platform, exposed inside the Motion Ops dashboard.
- Production usage: every time a client downloads a clip via API or marketplace, the system records the engine version that generated the file.
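As one example, the manual correction rate above reduces to a small computation (a hypothetical sketch; the real dashboard aggregates this across clips and versions):

```python
def manual_correction_rate(edited_frames, total_frames):
    """Percentage of frames that required human intervention.

    `edited_frames` is an iterable of frame indices an animator touched;
    duplicate indices (a frame edited twice) are counted once.
    """
    if total_frames <= 0:
        raise ValueError("total_frames must be positive")
    return 100.0 * len(set(edited_frames)) / total_frames

# An animator touched frames 3, 7 (twice), and 12 of a 100-frame clip.
rate = manual_correction_rate([3, 7, 7, 12], 100)  # 3.0 percent
```

Tracked per engine version, a falling correction rate is direct evidence that retraining on artist-polished takes is paying off.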
These metrics help answer "why Thirdrez?" when clients compare plans in Which Thirdrez Plan Fits You.
Deep learning accelerates production, but artists and engineers close the loop. By combining state-of-the-art synthesis with human refinement and production-grade metrics, Thirdrez keeps the Kinetiq Engine evolving without sacrificing what matters: believable motion at scale.