Crystal LipSync | 1-Click Solution incl. Game Creator 2 Integration

===| 💬 DISCORD | 📖 DOCUMENTATION | 🎥 SETUP VIDEO |===



CrystalLipSync is a lightweight, real-time lip sync and eye blink solution for Unity. It ships as pure C# source code - no DLLs, no native plugins, no black boxes. You get full access to every line of code, so you can inspect, modify, and extend it to fit your project.


  • Why Crystal LipSync?
    • CrystalLipSync analyzes audio in real time using FFT spectral analysis, runs entirely on the CPU with zero external dependencies, and works on every platform Unity supports. For projects without voice-over audio, it can even animate the mouth directly from dialogue text - no audio files needed at all.
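As a rough illustration of what a per-frame, CPU-only spectral read looks like in Unity, here is a minimal sketch built on the engine's own AudioSource.GetSpectrumData. The buffer size and band boundaries are assumptions for the example, not CrystalLipSync's actual values:

```csharp
using UnityEngine;

// Illustrative only: reads one FFT frame from an AudioSource on the CPU.
// Band edges below are assumptions for this sketch, not the asset's values.
public class SpectrumProbe : MonoBehaviour
{
    [SerializeField] AudioSource source;
    readonly float[] spectrum = new float[512]; // allocated once, reused every frame

    void Update()
    {
        source.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        // Example: compare energy in a low band (roughly vowels) against a high
        // band (roughly sibilants) to get a crude mouth-openness signal.
        float low = SumBand(0, 32);
        float high = SumBand(128, 511);
        Debug.Log($"low={low:F4} high={high:F4}");
    }

    float SumBand(int from, int to)
    {
        float sum = 0f;
        for (int i = from; i <= to; i++) sum += spectrum[i];
        return sum;
    }
}
```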

Features

  • Real-Time Audio-Driven Lip Sync
    • Feed it any AudioSource - voice-over, procedural speech, or live microphone input - and CrystalLipSync produces smooth, frame-accurate mouth animation. There is no pre-processing step, no baking, and no waiting. Audio goes in, mouth shapes come out, every single frame. That means you can swap audio clips at runtime, stream dialogue, or drive speech from text-to-speech engines without ever touching an animation timeline.
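Because there is no bake step, swapping dialogue is just assigning a clip. A minimal usage sketch - the component below is plain Unity; only the idea that the lip sync follows whatever the AudioSource plays comes from the description above:

```csharp
using UnityEngine;

// Sketch only: no pre-processing means a runtime clip swap is the whole pipeline.
public class DialoguePlayer : MonoBehaviour
{
    [SerializeField] AudioSource voice; // the AudioSource the lip sync component listens to

    public void SayLine(AudioClip line)
    {
        voice.clip = line; // swap dialogue mid-scene, no baking or timeline edits
        voice.Play();      // mouth shapes follow the live signal from this frame on
    }
}
```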

  • 15-Viseme System
    • CrystalLipSync maps audio to the industry-standard 15-viseme set used by professional animation studios. From the closed lips of a P sound to the wide-open mouth of an A, every major mouth shape is covered. This gives your characters expressive, believable speech animation that holds up in close-up shots and cinematic dialogue sequences.
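The "industry-standard 15-viseme set" most commonly refers to the Oculus-style convention sketched below; the names CrystalLipSync uses internally are an assumption here, not confirmed API:

```csharp
// Assumption: the widely used Oculus-style 15-viseme convention.
// CrystalLipSync's internal names may differ.
public enum Viseme
{
    Sil, // silence, lips at rest
    PP,  // p, b, m  - lips pressed closed
    FF,  // f, v     - lower lip to teeth
    TH,  // th       - tongue between teeth
    DD,  // d, t     - tongue to ridge
    KK,  // k, g     - back of tongue raised
    CH,  // ch, j, sh
    SS,  // s, z
    NN,  // n, l
    RR,  // r
    AA,  // wide-open "a"
    E,
    IH,
    OH,
    OU
}
```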

  • Text-Driven Lip Sync
    • No voice-over budget? No problem. CrystalLipSync can animate the mouth directly from dialogue text. It converts each character and common letter combinations (th, sh, ch, ee, oo) into the correct mouth shape and plays them in sequence, synchronized to your typewriter reveal speed. Your characters look alive even in fully text-based dialogue - no audio files required.
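A plausible shape for the digraph-first lookup this describes, reusing the Viseme enum from the sketch above; the actual table and assignments in the asset may differ:

```csharp
using System.Collections.Generic;

// Sketch of a digraph-first text-to-viseme lookup; illustrative, not the
// asset's actual table.
public static class TextVisemes
{
    static readonly Dictionary<string, Viseme> Digraphs = new()
    {
        { "th", Viseme.TH }, { "sh", Viseme.CH }, { "ch", Viseme.CH },
        { "ee", Viseme.E },  { "oo", Viseme.OU },
    };

    public static Viseme ForText(string text, ref int i)
    {
        // Try two-letter combinations before falling back to single characters.
        if (i + 1 < text.Length &&
            Digraphs.TryGetValue(text.Substring(i, 2).ToLowerInvariant(), out var v))
        {
            i += 2;
            return v;
        }
        char c = char.ToLowerInvariant(text[i++]);
        return c switch
        {
            'a' => Viseme.AA, 'e' => Viseme.E, 'i' => Viseme.IH,
            'o' => Viseme.OH, 'u' => Viseme.OU,
            'p' or 'b' or 'm' => Viseme.PP,
            'f' or 'v' => Viseme.FF,
            _ => char.IsLetter(c) ? Viseme.DD : Viseme.Sil, // crude consonant/pause fallback
        };
    }
}
```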

  • Live Microphone Lip Sync
    • Got a microphone? Your characters can lip sync in real time. CrystalLipSync captures live microphone audio - from a VR headset, a desktop mic, or any input device - and feeds it directly into the same FFT spectral analyzer that drives pre-recorded speech. Six frequency bands, spectral centroid, and high-frequency ratio analysis work together to map your voice to 15 mouth shapes every single frame. No machine learning models, no cloud services, no latency-heavy phoneme recognition - just fast, lightweight frequency analysis that runs entirely on-device. Mute the playback so players never hear their own voice echoed back, or leave it on for monitoring. Perfect for VRChat-style avatars, social VR, live streaming overlays, and any project where the player's real voice should drive a character's mouth.
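Capturing a live microphone into an AudioSource is standard Unity, and presumably what the asset wraps internally; a minimal sketch:

```csharp
using UnityEngine;

// Sketch of standard Unity microphone capture feeding an AudioSource.
public class MicFeed : MonoBehaviour
{
    [SerializeField] AudioSource source;

    void Start()
    {
        // null = default device; 1-second looping ring buffer at 44.1 kHz.
        source.clip = Microphone.Start(null, loop: true, lengthSec: 1, frequency: 44100);
        source.loop = true;

        // Wait until the device has produced samples before playing,
        // otherwise playback starts on silence. (Blocks briefly; a real
        // implementation would time out if no microphone is present.)
        while (Microphone.GetPosition(null) <= 0) { }
        source.Play();

        // To keep players from hearing themselves, a common Unity approach is
        // routing this AudioSource to a muted AudioMixer group, since zeroing
        // the source volume can also zero the data available for analysis.
    }
}
```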

  • Intelligent Auto-Mapping
    • Setting up blendshapes manually is tedious and error-prone. CrystalLipSync's auto-mapper uses a multi-tier scoring system that recognizes blendshape naming conventions from VRChat, DAZ3D, ARKit, VRM, UniVRM, and custom rigs. Drag in your character, click one button, and the correct blendshapes are matched to the correct visemes automatically. It even picks the best SkinnedMeshRenderer when your character has multiple meshes.
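A multi-tier scoring pass might look like the sketch below; the tiers, weights, and alias examples are illustrative, not the asset's actual tables:

```csharp
// Sketch of multi-tier blendshape-name scoring. For each viseme, the
// highest-scoring blendshape on the mesh wins; aliases for AA might include
// "aa", "viseme_aa", and "vrc.v_aa". All values here are illustrative.
static class VisemeAutoMapper
{
    public static int ScoreBlendshapeName(string shapeName, string[] aliases)
    {
        string n = shapeName.ToLowerInvariant();
        foreach (string alias in aliases)
        {
            if (n == alias) return 100;                 // exact match: "vrc.v_aa"
            if (n.EndsWith("." + alias) || n.EndsWith("_" + alias))
                return 75;                              // suffix match: "head_aa"
            if (n.Contains(alias)) return 50;           // loose substring match
        }
        return 0; // unmatched - left for manual assignment
    }
}
```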

  • One-Click Setup Wizard
    • A single editor window provisions everything your character needs: AudioSource, Controller, BlendshapeTarget with auto-mapped visemes, Eye Blink, and Text Lip Sync. The entire operation is one Undo action. Run it again later - it skips components that already exist. Setup that used to take minutes now takes seconds.
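For the curious, Unity's editor API makes the "one Undo action, skip what exists" behavior straightforward. This sketch (which must live in an Editor folder) shows the pattern, not the asset's actual wizard code:

```csharp
using UnityEditor;
using UnityEngine;

// Editor-side sketch of idempotent, single-Undo provisioning; menu path and
// the trimmed component list are illustrative.
public static class OneClickSetupSketch
{
    [MenuItem("Tools/LipSync Setup (sketch)")]
    static void Provision()
    {
        GameObject go = Selection.activeGameObject;
        if (go == null) return;

        Undo.SetCurrentGroupName("LipSync Setup");
        int group = Undo.GetCurrentGroup();

        // Skip anything that already exists, so re-running is safe.
        if (!go.TryGetComponent<AudioSource>(out _))
            Undo.AddComponent<AudioSource>(go);
        // ...same pattern for the controller, blendshape target,
        // eye blink, and text lip sync components.

        Undo.CollapseUndoOperations(group); // everything reverts as one action
    }
}
```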

  • Natural Eye Blink
    • Static eyes break immersion faster than a static mouth. CrystalLipSync includes a standalone eye blink system with randomized intervals, double blinks, half blinks, and configurable open/close speeds. It auto-detects blink blendshapes across ARKit, VRM, DAZ3D, and custom naming conventions. Characters feel alive even when they are not speaking.
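A randomized blink loop on a blendshape can be as small as the coroutine below; the timings, double-blink odds, and blendshape index are illustrative values:

```csharp
using System.Collections;
using UnityEngine;

// Sketch of randomized blinking driven through a blendshape weight.
public class BlinkSketch : MonoBehaviour
{
    [SerializeField] SkinnedMeshRenderer face;
    [SerializeField] int blinkShapeIndex; // e.g. an auto-detected "eyeBlinkLeft"

    IEnumerator Start()
    {
        while (true)
        {
            yield return new WaitForSeconds(Random.Range(2f, 6f)); // random interval
            yield return Blink();
            if (Random.value < 0.2f) yield return Blink();          // occasional double blink
        }
    }

    IEnumerator Blink(float closeTime = 0.06f, float openTime = 0.12f)
    {
        for (float t = 0; t < closeTime; t += Time.deltaTime) // lids close fast
        {
            face.SetBlendShapeWeight(blinkShapeIndex, 100f * (t / closeTime));
            yield return null;
        }
        for (float t = 0; t < openTime; t += Time.deltaTime)  // and open slower
        {
            face.SetBlendShapeWeight(blinkShapeIndex, 100f * (1f - t / openTime));
            yield return null;
        }
        face.SetBlendShapeWeight(blinkShapeIndex, 0f);
    }
}
```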

  • Emotional Mood System
    • A character who is angry should not move their mouth the same way as one who is happy. CrystalLipSync supports four emotional moods - Neutral, Happy, Angry, and Sad - each with its own blendshape mapping set. Switch moods at runtime from script or through visual scripting. The same audio produces different mouth shapes depending on the character's emotional state.
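A conceptual call site for a runtime mood switch; the interface and method name below are stand-ins, not the asset's documented API (the shipped source is the authority):

```csharp
// Conceptual sketch: this interface is NOT the asset's API - it just names
// the operation the text describes so the call site is concrete.
public enum Mood { Neutral, Happy, Angry, Sad }

public interface IMoodLipSync
{
    void SetMood(Mood mood); // swap the active per-viseme mapping set at runtime
}

public static class MoodExample
{
    public static void OnSceneTurnsHostile(IMoodLipSync lipSync)
    {
        lipSync.SetMood(Mood.Angry); // same audio, angrier mouth shapes
    }
}
```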

  • Shareable Lip Sync Profiles
    • Create ScriptableObject profiles that store analysis settings and per-viseme multipliers. Assign the same profile to every character with a similar voice type - deep voices, high-pitched voices, whispery voices - and tune once instead of per character. Profiles override controller settings cleanly, so you can experiment without losing your original values.
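The profile pattern described here maps naturally onto a Unity ScriptableObject; the field names and menu path in this sketch are illustrative, not the asset's actual profile type:

```csharp
using UnityEngine;

// Sketch of a shareable voice profile as a plain Unity asset.
[CreateAssetMenu(menuName = "LipSync/Voice Profile (sketch)")]
public class VoiceProfileSketch : ScriptableObject
{
    [Header("Analysis")]
    [Range(0f, 1f)] public float sensitivity = 0.5f;
    public float smoothing = 12f;

    [Header("Per-viseme multipliers")]
    [Range(0f, 2f)] public float aa = 1f;  // open vowels
    [Range(0f, 2f)] public float ss = 1f;  // sibilants
    [Range(0f, 2f)] public float pp = 1f;  // closed lips
    // ...one multiplier per viseme in the full set
}
```

Because profiles are plain assets, a single "deep male voice" profile can be assigned to every character sharing that voice type and tuned in one place.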

  • Full Game Creator 2 & Dialogue Integration
    • CrystalLipSync integrates deeply with Game Creator 2 via IK rigs and visual scripting instructions. Add lip sync and eye blink to any GC2 Character through the IK rig system - components are provisioned automatically at runtime and re-scanned on model change. Seven visual scripting instructions let you play speech, stop speech, change moods, swap audio sources, toggle eye blink, and control text lip sync - all without writing a single line of code. Custom GC2 properties let you trigger text lip sync directly from an Actor's Typewriter Effect, get the current dialogue text at runtime, and play audio on specific sources through the polymorphic property picker.

  • Audio Priority System
    • Characters can have both audio and text lip sync at the same time. When voice-over audio is playing, audio-driven lip sync takes priority automatically. When the audio stops, text-driven lip sync takes over. You can mix voiced and unvoiced lines in the same dialogue without any manual switching.
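The priority rule reduces to a single per-frame check, sketched here with illustrative method names:

```csharp
using UnityEngine;

// Sketch of the stated priority rule: audio wins while it is playing,
// text takes over the moment it stops.
public class PrioritySketch : MonoBehaviour
{
    [SerializeField] AudioSource voice;

    void Update()
    {
        bool audioActive = voice != null && voice.isPlaying;
        if (audioActive) DriveMouthFromAudio();  // voiced line
        else DriveMouthFromText();               // unvoiced line falls back to text
    }

    void DriveMouthFromAudio() { /* FFT-driven path */ }
    void DriveMouthFromText()  { /* typewriter-driven path */ }
}
```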

  • Duration-Matched Text Animation
    • Text lip sync automatically normalizes its timing to match your typewriter reveal speed. Whether your typewriter runs at 5 or 50 characters per second, the mouth animation starts and ends in perfect sync with the text on screen. The relative rhythm between vowels, consonants, and pauses is preserved - fast text feels snappy, slow text feels deliberate.
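The normalization amounts to rescaling relative durations so they sum to the typewriter's total reveal time; a sketch, with the relative-weight table assumed:

```csharp
// Sketch of duration normalization: per-viseme relative weights (vowels long,
// stops short - values assumed) preserve rhythm, then the whole sequence is
// rescaled to the typewriter's total reveal time.
static class TextTiming
{
    public static float[] NormalizeDurations(
        float[] relativeWeights, int charCount, float charsPerSecond)
    {
        float totalTime = charCount / charsPerSecond; // total typewriter reveal time
        float weightSum = 0f;
        foreach (float w in relativeWeights) weightSum += w;

        var durations = new float[relativeWeights.Length];
        for (int i = 0; i < durations.Length; i++)
            durations[i] = totalTime * relativeWeights[i] / weightSum; // rhythm kept, span rescaled
        return durations;
    }
}
```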

  • Pure Source Code - No DLLs
    • CrystalLipSync ships entirely as readable, well-documented C# source code. There are no precompiled DLLs, no native plugins, no obfuscated binaries, and no platform-specific libraries. You can read every algorithm, step through every function in the debugger, and modify anything you need. This also means full compatibility with IL2CPP, AOT compilation, and every build target Unity supports - including WebGL, consoles, and mobile - without worrying about missing native dependencies.

  • Lightweight and Self-Contained
    • No external dependencies. No cloud services. No API keys. No runtime downloads. CrystalLipSync runs entirely within Unity's existing audio and rendering systems. It adds minimal overhead - a single FFT pass and a handful of blendshape writes per frame - and never allocates memory during playback after initialization.

  • Who Is This For?
    • Indie developers who need professional-quality lip sync without professional-quality budgets for voice acting or animation.
    • Visual novel and RPG creators who want their dialogue characters to feel alive, even without full voice-over.
    • Game Creator 2 users who want lip sync that integrates natively with their existing workflow - IK rigs, visual scripting, and the Dialogue system.
    • DAZ3D, VRChat and VRM creators who need automatic blendshape detection that understands their naming conventions out of the box.
    • Developers who value transparency and want to own, understand, and extend every piece of code in their project.