Xiaomi has officially launched MiMo-7B, the company's first open-source LLM for reasoning and coding, built by its newly assembled Big Model Core Team.
The 7-billion-parameter model, Xiaomi says, delivers performance on par with much larger systems, including OpenAI's o1-mini and Alibaba's Qwen-32B-Preview. MiMo-7B outperforms both models in mathematical reasoning (AIME 2024-25) and competitive coding (LiveCodeBench v5).
MiMo-7B's strength comes from a tight pre-training regimen: the company says it compiled a dense dataset of 200 billion reasoning tokens and fed the model 25 trillion tokens in total across three training phases. Xiaomi also says it used a multiple-token prediction objective during training.
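Multiple-token prediction means the model is trained to predict several upcoming tokens at each position rather than only the immediate next one. A minimal, framework-free sketch of how such training targets differ from standard next-token targets (the function name and token IDs are illustrative, not from Xiaomi's codebase):

```python
def build_targets(tokens, k):
    """For each position, pair the prefix tokens[:i+1] with the
    next k tokens as prediction targets (multi-token prediction).
    With k=1 this reduces to standard next-token prediction."""
    pairs = []
    for i in range(len(tokens) - k):
        context = tokens[: i + 1]
        targets = tokens[i + 1 : i + 1 + k]
        pairs.append((context, targets))
    return pairs

# Illustrative toy sequence of token IDs
seq = [11, 12, 13, 14, 15]

next_token = build_targets(seq, k=1)   # standard next-token objective
multi_token = build_targets(seq, k=2)  # MTP-style: predict 2 tokens ahead

print(next_token[0])   # ([11], [12])
print(multi_token[0])  # ([11], [12, 13])
```

The extra targets per position give the model a denser training signal and, at inference time, can enable speculative decoding for faster generation.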