At CES 2024, NVIDIA announced production microservices for the NVIDIA Avatar Cloud Engine (ACE), developed in collaboration with developers to bring lifelike digital characters to games and applications. The ACE microservices deliver generative AI models that mark a significant shift in how users engage with digital avatars.
These production microservices let game, tool, and middleware developers incorporate state-of-the-art generative AI models into the digital avatars in their projects. Featured ACE microservices include NVIDIA Audio2Face™ (A2F), which generates expressive facial animation from an audio source, and NVIDIA Riva Automatic Speech Recognition (ASR), for building customizable multilingual speech and translation applications with generative AI.
Key Takeaways:
- Prominent developers adopting ACE include Charisma.AI, Convai, Inworld, miHoYo, NetEase Games, Ourpalm, Tencent, Ubisoft, and UneeQ.
- The Audio2Face and Riva Automatic Speech Recognition microservices are available now, allowing interactive avatar developers to integrate these models directly into their development pipelines.
To showcase how ACE can transform NPC interactions, NVIDIA collaborated with Convai to enhance the NVIDIA Kairos demo. The latest version incorporates Riva ASR and A2F, significantly improving NPC interactivity.
Convai’s new framework enables NPCs to engage in conversations, exhibit awareness of objects, pick up and deliver items, guide players to objectives, and traverse virtual worlds.
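Conceptually, the loop described here runs player speech through ASR, generates an NPC response, and drives facial animation from that response. The sketch below illustrates this flow with local Python stubs; the function names and `NPC` class are illustrative assumptions, not Convai's or NVIDIA's actual APIs — a real integration would call the Riva ASR and Audio2Face services over their network interfaces.

```python
from dataclasses import dataclass, field

def speech_to_text(audio: bytes) -> str:
    """Stub for an ASR call (e.g. Riva ASR) transcribing player speech.
    Hypothetical: here the 'audio' is simply UTF-8 text for illustration."""
    return audio.decode("utf-8")

def text_to_facial_animation(line: str) -> dict:
    """Stub for an audio-driven animation call (e.g. Audio2Face).
    Returns a placeholder animation descriptor."""
    return {"visemes": len(line.split()), "line": line}

@dataclass
class NPC:
    name: str
    known_objects: set = field(default_factory=set)

    def respond(self, player_utterance: str) -> str:
        # A production framework would query a generative model; this stub
        # mirrors the object-awareness feature with a canned reply.
        for obj in self.known_objects:
            if obj in player_utterance.lower():
                return f"I see the {obj}. I can bring it to you."
        return "How can I help you?"

def interact(npc: NPC, audio: bytes) -> dict:
    text = speech_to_text(audio)            # 1. transcribe (ASR)
    reply = npc.respond(text)               # 2. generate a response
    return text_to_facial_animation(reply)  # 3. animate the reply (A2F)

npc = NPC("Jin", known_objects={"ramen"})
anim = interact(npc, b"Where is the ramen?")
print(anim["line"])  # → I see the ramen. I can bring it to you.
```

The three stages are deliberately decoupled, matching the microservice design: each stub could be swapped for a network call to the corresponding ACE service without changing the surrounding loop.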
NVIDIA ACE breathes life into game characters, overturning the historical limitations of NPCs with predetermined responses and canned facial animations. Player interactions that were once transactional and short-lived become dynamic, individualized experiences.