Welcome to Frostember FaceSync, the ultimate audio-driven facial animation ecosystem for Unity. Whether you are building AAA cinematic cutscenes, highly optimized VR games, or live VTubing avatars, FaceSync delivers accurate lip-sync and procedural facial movements with zero hassle.
Stop wasting hours manually animating blendshapes or fighting with heavy real-time CPU loads. FaceSync bridges the gap between high-end realism and perfect optimization.
DOCUMENTATION + ROADMAP | FORUM | YOUTUBE
🔥 Key Features
- Zero-CPU Offline Baking: Real-time audio analysis kills frame rates on Mobile and VR. Our built-in Baker simulates the audio at 60 FPS in the Editor, baking visemes, procedural head noise, and blinks into a highly optimized FaceSyncAnimationAsset. Play it back at runtime with zero CPU cost!
- Native Timeline Integration: Drag and drop your baked data into our custom FaceSync Track. Smoothly blend animations, scrub through time, and dynamically override eye and head behaviors directly from the Timeline Inspector.
- Procedural Eyes & Head Motion: Bring dead stares to life. FaceSync automatically calculates realistic blinking, saccadic eye darts, and micro-jitters. The head and neck procedurally react to the character's voice intensity using Perlin noise.
- 1-Click ARKit & CC3/CC4 Setup: Stop manually assigning dozens of blendshapes. Our VisemeMappingProfile features 1-click auto-generation for standard ARKit (52 blendshapes) and Character Creator rigs, creating highly accurate mouth shapes instantly.
- Live Microphone (VTuber Ready): Switch to Live Mode and use the included Frostember_MicrophoneInput component to drive your character's face in real time using any connected microphone. Perfect for multiplayer voice chat and VTubers.
- Setup Wizard: Get your character fully rigged and talking in under 30 seconds using the automated Setup Wizard.
- Emotion Profiles: A dedicated module inside FaceSyncController that lets developers seamlessly blend custom facial expressions (e.g., Happy, Sad, Angry) with active lip-sync data.
- IK Look Tracking: Automatically make your character track any object or player in the scene! The robust inverse kinematics system seamlessly blends with procedural speaking noise and animations, giving your character a dynamic focus without unnatural rotation snapping.
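As a rough illustration of how the Emotion Profiles feature is driven from a script, here is a hedged sketch. The method name `SetEmotion` and its blend-weight parameter are assumptions for illustration, not the documented API; check the FaceSync documentation for the actual signatures.

```csharp
using UnityEngine;

// Hypothetical sketch: blends an "Angry" emotion profile on top of
// active lip-sync. Method names and signatures below are assumed,
// not taken from the FaceSync API reference.
public class AngryGreeting : MonoBehaviour
{
    [SerializeField] private FaceSyncController faceSync;

    void Start()
    {
        // Blend the custom "Angry" expression at 75% weight over
        // whatever lip-sync data is currently playing.
        faceSync.SetEmotion("Angry", 0.75f);
    }
}
```

Because emotions blend with, rather than replace, the viseme stream, the mouth keeps talking while the rest of the face holds the expression.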
💻 API & Scripting
Triggering dialogue dynamically via code is incredibly easy using the FaceSyncPlayer component. Just assign the baked asset and call .Play(). Perfect for custom RPG dialogue and quest systems.
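A minimal sketch of that workflow, assuming the baked asset is assigned via a serialized field (the `clip` field name is an assumption; `.Play()` is the documented entry point):

```csharp
using UnityEngine;

// Plays a baked FaceSyncAnimationAsset when an NPC interaction fires.
// The asset-assignment field name is assumed for illustration; see the
// FaceSync documentation for the exact property.
public class DialogueTrigger : MonoBehaviour
{
    [SerializeField] private FaceSyncPlayer player;
    [SerializeField] private FaceSyncAnimationAsset greetingLine;

    // Hook this up to your dialogue or quest system.
    public void OnNpcInteract()
    {
        player.clip = greetingLine; // field name assumed
        player.Play();
    }
}
```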
Questions?
contact@frostemberstudios.com