DeepMind has created a new language model, called Gopher, that is more accurate than existing ultra-large language models on certain tasks. These include answering questions about specialized topics such as science and the humanities, and Gopher also matches or exceeds those models in other areas such as logical reasoning and mathematics. Gopher is larger than OpenAI's GPT-3, with 280 billion parameters to GPT-3's 175 billion, though smaller than some other recent systems. DeepMind also claims that its retrieval-based Retro model can outperform systems up to 25 times its size: the 7-billion-parameter Retro achieved results comparable to GPT-3's. Moreover, because Retro exposes which sections of its training text it drew on when producing output, users can more easily detect potential bias or misinformation inherited from that text.
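The retrieval idea behind Retro can be illustrated with a toy sketch. This is not DeepMind's implementation: Retro retrieves neighbours from a trillion-token database using learned embeddings, whereas the hypothetical example below uses simple word-overlap similarity. The point it demonstrates is the provenance property described above: the retrieved chunks, and their indices in the database, are visible alongside the answer, so the influence of specific training text can be audited.

```python
# Toy sketch of retrieval with provenance, loosely in the spirit of Retro.
# Assumption: word-overlap (Jaccard) similarity stands in for the learned
# nearest-neighbour retrieval a real system would use.

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieve(query: str, database: list[str], k: int = 2):
    """Return the k database chunks most similar to the query,
    together with their indices so the output can be audited."""
    scored = sorted(enumerate(database),
                    key=lambda item: jaccard(query, item[1]),
                    reverse=True)
    return scored[:k]

# A miniature "training text" database of chunks.
database = [
    "The Eiffel Tower is located in Paris, France.",
    "Gophers are burrowing rodents found in North America.",
    "Paris is the capital city of France.",
]

query = "Where is the Eiffel Tower located?"
hits = retrieve(query, database)
for idx, chunk in hits:
    # The chunk index is the provenance: a reader can trace the answer
    # back to exactly this piece of the database.
    print(f"chunk {idx}: {chunk}")
```

A real retrieval-augmented model would prepend the retrieved chunks to the prompt before generating; the key design choice shown here is returning indices along with text, which is what makes the output traceable.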