Soul App Unveils 3D Virtual Human and Multimodal AI Interaction at GITEX

Soul App is demonstrating its latest innovations in 3D virtual avatars and multimodal AI interaction at GITEX GLOBAL 2024 in Dubai, held from October 14-18. The platform enables users to create personalized 3D virtual avatars that replicate behavior, preferences, and memories, offering a novel way to interact and form connections in the digital world.

Now in its 44th year, GITEX GLOBAL continues to be a hub for tech-driven innovation, attracting major tech corporations, startups, governments, and investors from across the globe. With over 6,700 exhibitors, including Soul, the 2024 edition is the largest to date, offering groundbreaking insights into AI, smart connectivity, and digital entertainment.

At the event, visitors can experience Soul’s self-developed multimodal AI model, which supports real-time motion capture, voice and text conversations, multilingual communication, and realistic human-like avatars. Through interactive displays, attendees can create 3D digital versions of themselves and engage in smooth, immersive interactions.

Soul App’s CTO Tao Ming shared, “We look forward to showcasing our innovations in social technology and digital interaction at GITEX GLOBAL. This event allows us to highlight our latest advancements and connect with global industry leaders to explore new possibilities for the future of social engagement.”

3D Digital Twins: Bridging the Physical and Digital Worlds

As one of China’s leading platforms incorporating AI into social interactions, Soul is showcasing its cutting-edge AI-powered multimodal interaction model integrated with 3D virtual avatars. Attendees can explore this AI-driven technology first-hand, experiencing the power of real-time interaction through personalized 3D digital avatars.

Since its inception in 2016, the platform has been a pioneer in integrating AI into social experiences, helping users interact more freely by letting them create avatars rather than use real photos.

In 2022, Soul introduced the NAWA engine, which supports the creation of personalized 3D avatars and immersive social experiences. These avatars allow for more dynamic and expressive interactions, removing the pressures often associated with real-world social dynamics.

Tao Ming told Pandaily, “We want users to create a new persona in our space, with their own unique appearance, and be able to generate that image with just one click.”

Real-Time 3D Modeling and Immersive Interaction

Visitors at GITEX can explore Soul’s real-time 3D modeling capabilities, which generate highly detailed virtual avatars in just seconds. By analyzing over 90 facial shape parameters, Soul’s AI quickly recreates users’ facial features in the 3D world. This is combined with real-time motion tracking, enabling natural interaction with these digital avatars.
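
To make the idea concrete, the toy Python sketch below shows one common way a parameter-driven avatar pipeline can be structured: an encoder maps a photo to roughly 90 shape coefficients, which then deform a neutral base mesh through linear blendshapes. All names, sizes, and the `encode_face` stand-in are illustrative assumptions, not Soul's actual model.

```python
# Hypothetical sketch of parameter-driven avatar generation (not Soul's actual code).
# A face encoder predicts ~90 shape coefficients from a photo; the coefficients
# then deform a neutral base mesh via linear blendshapes.

import numpy as np

NUM_SHAPE_PARAMS = 90          # assumption based on "over 90 facial shape parameters"
NUM_VERTICES = 5_000           # toy mesh size for illustration


def encode_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a learned encoder: map an RGB image to shape coefficients."""
    # A real system would run a neural network here; we fake it with seeded noise.
    rng = np.random.default_rng(seed=int(image.mean() * 1e6) % 2**32)
    return rng.normal(scale=0.1, size=NUM_SHAPE_PARAMS)


def build_avatar(base_mesh: np.ndarray, blendshapes: np.ndarray,
                 params: np.ndarray) -> np.ndarray:
    """Apply linear blendshapes: vertices = base + sum_i params[i] * delta_i."""
    return base_mesh + np.tensordot(params, blendshapes, axes=1)


if __name__ == "__main__":
    base_mesh = np.zeros((NUM_VERTICES, 3))                      # neutral face
    blendshapes = np.random.default_rng(0).normal(
        scale=0.01, size=(NUM_SHAPE_PARAMS, NUM_VERTICES, 3))    # per-parameter offsets
    selfie = np.random.default_rng(1).random((256, 256, 3))      # placeholder photo
    params = encode_face(selfie)
    avatar = build_avatar(base_mesh, blendshapes, params)
    print(avatar.shape)  # (5000, 3) -> personalized vertex positions
```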

With its multimodal AI model, Soul allows for voice, text, and physical interactions to take place simultaneously, creating an engaging and fluid user experience.

Tao Ming also explained to Pandaily how their end-to-end AI system significantly reduces latency. “Our delay is under 200 milliseconds,” he said. “It’s no longer a serial process like before, where we first generated text and images, then converted them into speech. Now, we have unified speech and NLP into one integrated process, which eliminates the issue of lag.”
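
As a rough illustration of why an end-to-end design cuts latency, the toy Python sketch below compares a serial pipeline (speech recognition, then a language model, then speech synthesis, where times to first output add up) against a single speech-to-speech model. The stage names and millisecond values are assumptions chosen for illustration, not Soul's measured figures.

```python
# Illustrative latency comparison (toy numbers, not Soul's architecture or measurements).
# In a serial pipeline, each stage must emit its first chunk before the next stage
# can start, so first-response latency accumulates. In an end-to-end model,
# audio-in -> audio-out is one streaming pass, so the first audible reply depends
# only on that single model's time to first output.

from dataclasses import dataclass


@dataclass
class Stage:
    name: str
    time_to_first_output_ms: float   # time before the stage emits its first chunk


def serial_first_response(stages: list[Stage]) -> float:
    """Stages run back-to-back: latencies to first output add up."""
    return sum(s.time_to_first_output_ms for s in stages)


def unified_first_response(model: Stage) -> float:
    """One end-to-end model: latency is just its own time to first output."""
    return model.time_to_first_output_ms


if __name__ == "__main__":
    serial = [Stage("ASR", 250), Stage("LLM", 400), Stage("TTS", 200)]   # toy values
    unified = Stage("speech-to-speech model", 180)                       # toy value

    print(f"serial pipeline : {serial_first_response(serial):.0f} ms to first audio")
    print(f"unified pipeline: {unified_first_response(unified):.0f} ms to first audio")
```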

Multimodal AI Models and Enhanced Social Interaction

Soul’s ongoing development in AI focuses on improving communication and social interaction through multimodal AI models. Since 2020, the platform has introduced various models, such as its own language model Soul X, and voice-based AI technologies to support intelligent conversation and voice interactions.

In 2024, Soul launched its latest end-to-end multimodal AI model, which integrates text, voice, and visual interaction, enhancing the way users engage in social scenarios. The platform’s “digital twin” feature allows users to create virtual versions of themselves, based on past interactions or custom settings, for more personalized engagement.

Looking ahead, Soul plans to introduce full-duplex video calling by the end of 2024, allowing users to communicate through text, voice, and video within the same seamless interaction model. This advancement will further enhance the immersive experience and provide a more comprehensive and natural way to engage in virtual environments.

Tao Ming also gave an example to Pandaily, illustrating how conversations with these digital avatars aren’t just one-off interactions. “For instance, if the avatar senses that you have a cold, it might ask you on the third day if you’re feeling better. This creates a completely different emotional experience,” he explained. “In the same space and time, strengthening AI’s ability to perceive is the most important thing,” Tao added.
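
A hypothetical sketch of how such follow-up memory could work is shown below: the avatar stores a dated note (for example, that the user mentioned a cold) together with a follow-up prompt that becomes due a few days later. The class and field names are invented for illustration and do not describe Soul's implementation.

```python
# Minimal sketch of long-lived conversational memory with scheduled follow-ups
# (hypothetical design, not Soul's implementation).
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class Memory:
    fact: str
    noted_at: datetime
    follow_up_after: timedelta
    follow_up_prompt: str
    done: bool = False


@dataclass
class AvatarMemoryStore:
    memories: list[Memory] = field(default_factory=list)

    def note(self, fact: str, days: int, prompt: str, now: datetime) -> None:
        """Record a fact from the conversation and when to follow up on it."""
        self.memories.append(Memory(fact, now, timedelta(days=days), prompt))

    def due_follow_ups(self, now: datetime) -> list[str]:
        """Return follow-up prompts whose waiting period has elapsed."""
        due = []
        for m in self.memories:
            if not m.done and now - m.noted_at >= m.follow_up_after:
                m.done = True
                due.append(m.follow_up_prompt)
        return due


if __name__ == "__main__":
    store = AvatarMemoryStore()
    day0 = datetime(2024, 10, 14)
    store.note("user mentioned having a cold", days=3,
               prompt="Are you feeling better from your cold?", now=day0)
    # Three days later, the avatar checks its memory before speaking.
    print(store.due_follow_ups(now=day0 + timedelta(days=3)))
```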
