
Alibaba’s new open-source model QwQ-32B matches DeepSeek-R1 with far smaller compute requirements
Qwen Team, a division of Chinese e-commerce giant Alibaba that develops its growing family of open-source Qwen large language models (LLMs), has introduced QwQ-32B, a new 32-billion-parameter reasoning model designed to improve performance on complex problem-solving tasks through reinforcement learning (RL). The model is available as open weights on Hugging Face and on ModelScope under an Apache 2.0 license.