Character AI launches AvatarFX, a new video tool for realistic character animation in beta
Character.AI has unveiled AvatarFX, an advanced AI video generation tool currently in closed beta, designed to animate virtual characters created on Character.AI with realistic movements and voices. The model supports a variety of styles, from lifelike humans to 2D cartoon animals, and is already accessible in the platform’s experimental Lab, with plans for integration into the main app soon.
AvatarFX stands out by animating existing images, allowing users to upload a single photo to create photorealistic videos featuring synchronized facial expressions, hand gestures, body movements, and audio, including both speech and singing. Powered by a flow-based diffusion transformer (DiT) pipeline and a custom text-to-speech engine, AvatarFX manages long-duration clips, multi-speaker dialogues, and user-defined keyframes, offering creators precise control over scene animations. This makes it particularly useful for scripted content.
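The announcement doesn't include implementation details, but flow-based DiT pipelines of this kind are typically sampled by integrating a learned velocity field from noise toward video latents, conditioned on the reference image, the audio track, and any user-defined keyframes. The sketch below is a hypothetical illustration of that sampling loop; `velocity_model`, the embedding inputs, and all shapes are placeholder assumptions, not the actual AvatarFX API.

```python
import torch

# Hypothetical sketch of flow-matching (rectified-flow) sampling, the family
# of techniques "flow-based DiT" usually refers to. None of these names come
# from Character.AI's announcement; everything here is illustrative only.

@torch.no_grad()
def sample_video_latents(
    velocity_model,     # placeholder DiT predicting velocity v(x_t, t, cond)
    ref_image_embed,    # embedding of the single uploaded photo
    audio_embed,        # embedding of the driving speech/singing track
    keyframe_embeds,    # optional user-defined keyframes for scene control
    num_frames=48,
    latent_shape=(4, 32, 32),
    num_steps=30,
):
    """Integrate the learned flow from Gaussian noise to video latents."""
    x = torch.randn(num_frames, *latent_shape)  # start from pure noise
    cond = {
        "image": ref_image_embed,
        "audio": audio_embed,
        "keyframes": keyframe_embeds,
    }
    # Uniform time grid from t=1 (noise) to t=0 (data); production pipelines
    # often use shifted schedules, but plain Euler steps show the idea.
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((num_frames,), 1.0 - i * dt)
        v = velocity_model(x, t, cond)  # predicted velocity toward the data
        x = x - dt * v                  # one Euler step along the flow
    return x  # latents a video decoder would turn into frames
```

In audio-driven models of this type, the audio embedding is commonly injected via cross-attention at each step so lip and body motion track the soundtrack frame by frame, which is consistent with the announcement's description of movement generated from an audio sequence.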
Early access will be available to Character.AI+ subscribers, with broader availability expected in the coming months. A public waitlist is currently open for those interested in accessing the tool on both web and mobile platforms.



Comments
From its announcement: "[blah blah blah] that allows the diffusion model to generate realistic lip, head, and body movement based on an audio sequence." Sadly, the eye movement is wrong (a well-known problem, mostly in robotics), and since the eyes are the most important part of the face in social interaction, it looks uncanny. As for the voice, it's uneven when not outright painful to listen to, but that's the easier part to improve. Let's wait until it's ready.
yeah, agreed about the voice. Hopefully it’ll be a bit more polished for the public beta/launch.