Microsoft Researchers Unveil High-Efficiency AI Model Capable of Running on CPUs

Posted 5 days ago by Anonymous

Microsoft’s research team has developed the largest-scale 1-bit AI model to date, dubbed BitNet b1.58 2B4T. This model, now openly available under an MIT license, can operate on standard CPUs, including Apple’s M2 chip.

Unlike conventional AI models, bitnets are highly compressed and designed to run efficiently on lightweight hardware. Traditional models often employ quantization, lowering the precision of their weights (the numerical values within the model) to cut memory consumption and speed up processing. Bitnets take this further by restricting weights to just three possible values: -1, 0, and 1. Since encoding three states takes log2(3) ≈ 1.58 bits per weight, this is the "b1.58" in the model's name. The restriction drastically simplifies computation, making bitnets highly efficient in both memory and processing power.
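The idea can be illustrated with a minimal sketch of "absmean" ternary quantization: scale each weight by the mean absolute value of the tensor, round, and clamp to {-1, 0, 1}. The function names here are illustrative, not part of Microsoft's actual code.

```python
# Hypothetical sketch of ternary (absmean-style) quantization to {-1, 0, 1}.
# Illustrative only -- not the actual BitNet implementation.
def quantize_ternary(weights, eps=1e-8):
    # Scale factor: mean absolute value of the weight vector.
    scale = sum(abs(w) for w in weights) / len(weights)
    # Divide by the scale, round to the nearest integer, clamp to [-1, 1].
    q = [max(-1, min(1, round(w / (scale + eps)))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Approximate reconstruction of the original weights.
    return [v * scale for v in q]

w = [0.9, -0.05, -1.2, 0.4]
q, s = quantize_ternary(w)   # q contains only -1, 0, or 1
```

Storing each weight in under two bits, instead of 16 or 32, is where the memory savings come from.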

BitNet b1.58 2B4T is the first bitnet to reach 2 billion parameters (a model's learnable weights). Trained on a massive dataset of 4 trillion tokens (roughly equivalent to 33 million books), it outperforms similar-sized traditional models on key benchmarks, according to the researchers. In tests, BitNet b1.58 2B4T surpassed models like Meta's Llama 3.2 1B, Google's Gemma 3 1B, and Alibaba's Qwen 2.5 1.5B on tasks requiring mathematical reasoning (GSM8K) and physical common sense (PIQA).

While not universally superior to all rivals, BitNet b1.58 2B4T stands out in efficiency—running up to twice as fast as comparable models with significantly lower memory usage. This makes it particularly suitable for AI deployments in resource-constrained environments.
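Part of the speed advantage follows directly from the weight format: when every weight is -1, 0, or 1, a dot product needs no multiplications at all, only additions and subtractions. A toy sketch (illustrative, not bitnet.cpp code):

```python
# Illustrative: a dot product against ternary weights uses only add/subtract,
# which is cheap on ordinary CPUs. Not actual bitnet.cpp code.
def ternary_dot(q_weights, x):
    acc = 0.0
    for w, xi in zip(q_weights, x):
        if w == 1:
            acc += xi       # weight +1: add the activation
        elif w == -1:
            acc -= xi       # weight -1: subtract the activation
        # weight 0: skip entirely
    return acc

y = ternary_dot([1, 0, -1], [2.0, 5.0, 3.0])  # 2.0 - 3.0 = -1.0
```

Optimized kernels exploit this by packing ternary weights densely and replacing multiply-accumulate loops with additions, which is why CPUs without AI accelerators can keep up.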

However, achieving this performance currently requires Microsoft's custom bitnet.cpp inference framework, which supports only a limited range of hardware. Notably, it does not run on GPUs, the dominant processors for AI training and inference. This limitation may slow widespread adoption, though the technology holds promise for future edge AI applications where resource efficiency is critical.