Qwen QwQ-32B Released: A Strong Challenger to DeepSeek R1 and o1-mini

Alibaba's Qwen team has released its reasoning model QwQ-32B under the open Apache 2.0 license.

Despite its modest size of 32 billion parameters, the model is competitive with the massive DeepSeek R1, which has 671 billion parameters. It also significantly outperforms the distilled versions DeepSeek-R1-Distill-Llama-70B (Llama 70B fine-tuned on R1's reasoning data) and DeepSeek-R1-Distill-Qwen-32B (Qwen 32B fine-tuned the same way).

Hands-on testing shows that the model responds well in Russian. It also correctly answered trick questions that typically stump neural models lacking reasoning capabilities and that often appear in such informal evaluations:

Which is larger — 3.11 or 3.9?

Olia has two brothers and three sisters. How many sisters does Olia's brother have?

(The correct answers are 3.9 and four sisters: Olia herself counts among her brothers' sisters.)

Overall, the model is an intriguing proposition: unlike the powerful but enormous 671-billion-parameter DeepSeek R1, a 32B model can run on a personal computer at reasonable speed, since with quantization its weights fit in the memory of a single consumer GPU or in system RAM. A minimal example of running it locally is sketched below.
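The sketch loads the model with Hugging Face transformers in 4-bit quantization via bitsandbytes and asks it one of the trick questions above. The model id Qwen/QwQ-32B and the generation settings here are assumptions rather than an official recipe; adjust them to your hardware.

```python
# Minimal sketch: running QwQ-32B on a single machine with Hugging Face transformers.
# Assumptions: the model repository "Qwen/QwQ-32B" and 4-bit quantization via
# bitsandbytes, so the weights fit in roughly 20 GB of GPU/CPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/QwQ-32B"  # assumed Hugging Face repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",  # spread layers across the available GPU(s) and CPU
)

# One of the trick questions from the article.
messages = [{"role": "user", "content": "Which is larger — 3.11 or 3.9?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models emit long chains of thought, so allow plenty of new tokens.
output = model.generate(input_ids, max_new_tokens=4096)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For machines without a large GPU, quantized GGUF builds served through llama.cpp or Ollama are another common way to fit a 32B model on consumer hardware.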