Meet Virtual Frank (made in less than a minute)
5 February 2021, by HCI-UHH

Photo: HCI-UHH
In our latest journal article, we introduced the concept of intelligent blended agents [1], which combine mixed reality technologies, artificial intelligence, and the Internet of Things. We have extended this work to include multimodal virtual doppelgangers: intelligent virtual agents (IVAs) that replicate both the visual appearance and the voice of real humans. In the video example, you can see a simple 3D reconstruction of the head of our group, Prof. Dr. Frank Steinicke, using a paGAN-generated rigged 3D head model. The reconstruction takes less than a minute and is based on a selfie captured with a smartphone. The voice was cloned from a 15-minute training data set using Descript's Overdub technology. Please click the video for a simple demo. Further details can be found in the paper: https://www.mdpi.com/2414-4088/4/4/85
While Virtual Frank was generated in less than a minute, we are also working on high-end 3D reconstructions such as the one illustrated in the image below, which includes an IBM Watson integration for natural language processing and computer vision that can extract emotions, gestures, faces, and objects.
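To give an impression of what such an integration can look like in practice, here is a minimal, hypothetical Python sketch that uses the ibm-watson SDK to extract emotion scores from a transcribed utterance with Watson Natural Language Understanding. The API key, service URL, and example sentence are placeholders, and the full agent described in the paper combines further speech, gesture, and vision components that are not shown here.

```python
# Minimal sketch: extracting emotion scores from a transcribed utterance with
# IBM Watson Natural Language Understanding. Credentials, service URL, and the
# input text are placeholders; speech-to-text, gesture and face detection from
# the full agent pipeline are not shown.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, EmotionOptions

authenticator = IAMAuthenticator("YOUR_IBM_CLOUD_API_KEY")  # placeholder credential
nlu = NaturalLanguageUnderstandingV1(version="2021-08-01", authenticator=authenticator)
nlu.set_service_url("YOUR_NLU_SERVICE_URL")  # placeholder service endpoint

# Analyze one utterance and read back the detected document-level emotions.
response = nlu.analyze(
    text="I really enjoyed talking to Virtual Frank today!",
    features=Features(emotion=EmotionOptions()),
).get_result()

emotions = response["emotion"]["document"]["emotion"]
print(emotions)  # e.g. {'joy': 0.91, 'sadness': 0.02, ...}
```

In a complete pipeline, such emotion scores could, for example, inform the agent's facial animation or dialogue strategy.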
Reference:
[1] Susanne Schmidt, Oscar Ariza, Frank Steinicke: Intelligent Blended Agents: Reality–Virtuality Interaction with Artificially Intelligent Embodied Virtual Humans, Multimodal Technol. Interact. 2020, 4(4), 85.