Wu Dao 2.0

The Beijing Academy of Artificial Intelligence (BAAI, 北京智源人工智能研究院) recently launched the latest version of Wu Dao (悟道), an advanced deep learning model that the lab has billed as "China's first" and "the world's largest ever," with 1.75 trillion parameters. It is designed to simulate conversational speech, write poems, understand images and even generate recipes. The model was trained on 4.9 terabytes of image and text data, including 1.2 terabytes of Chinese text and 1.2 terabytes of English text, making it larger than Google's Switch Transformer (about 150 billion fewer parameters) and roughly ten times the size of OpenAI's GPT-3.

Wu Dao 2.0 already counts 22 partners, including smartphone maker Xiaomi and short-video giant Kuaishou, who believe this pre-trained multimodal, multitask model can bring us closer to artificial general intelligence (AGI). However, there is still debate about whether such hybrid models are the right path to AGI, or whether embodied AI should be adopted instead, rejecting traditional disembodied neural networks entirely.

Hua Zhibing is Wu Dao 2.0's "child": a virtual student capable of continuously learning tasks such as composing poetry, drawing pictures and, in the future, writing code, without forgetting what it has learned previously. According to Tang Jie of the BAAI research group, this brings the AI closer to human-like memory, and he claimed that Hua Zhibing has "some ability in reasoning and emotional interaction." Hopes for Wu Dao 2.0 are high, but they should be treated with caution until further evidence supports these claims, much as the excitement around GPT-3 ran well ahead of any proof that it actually understands anything at all.
