
Chinese developer launches multimodal model unifying video, image, text

Xinhua | Updated: 2024-10-22 11:03

BEIJING -- The Beijing Academy of Artificial Intelligence (BAAI) on Monday released Emu3, a multimodal world model that unifies the understanding and generation of text, image, and video modalities with next-token prediction.

Emu3 successfully validates that next-token prediction can serve as a powerful paradigm for multimodal models, scaling beyond language models and delivering state-of-the-art performance across multimodal tasks, said Wang Zhongyuan, director of BAAI, in a press release.

"By tokenizing images, text, and videos into a discrete space, we train a single transformer from scratch on a mixture of multimodal sequences," Wang said, adding that Emu3 eliminates the need for diffusion or compositional approaches entirely.

Emu3 outperforms several well-established task-specific models in both generation and perception tasks, according to BAAI, which has open-sourced the key technologies and models of Emu3 to the international technology community.

Technology practitioners say the model opens a new opportunity to explore multimodality through a single unified architecture, removing the need to combine complex diffusion models with large language models (LLMs).

"In the future, the multimodal world model will promote scenario applications such as robot brains, autonomous driving, multimodal dialogue and inference," Wang said.
