Add QwQ-32B model
under review
Chin Keong Ang
The QwQ-32B model, launched in March 2025, is a 32-billion-parameter reasoning model developed by Alibaba's Qwen Team. It achieves performance comparable to much larger models such as DeepSeek-R1 (671B parameters), thanks to advanced reinforcement learning techniques. It excels at tasks such as mathematical reasoning, coding, and logical problem-solving, and has outperformed competitors on benchmarks like AIME24 (79.5%) and LiveBench (73.1%). It also supports a 131k-token context window and is efficient to deploy, requiring only 24GB of GPU memory, which makes it practical to integrate into applications.
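For reference, here is a minimal sketch of how the model could be loaded locally with Hugging Face Transformers. The repo id "Qwen/QwQ-32B", the prompt, and the generation settings are assumptions for illustration, not details confirmed in this post.

```python
# Minimal sketch (illustrative only): loading QwQ-32B with Hugging Face Transformers.
# The repo id "Qwen/QwQ-32B" and the settings below are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B"  # assumed Hugging Face repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let Transformers pick an appropriate precision
    device_map="auto",    # spread layers across available GPU memory
)

# Build a chat-style prompt and generate a response.
messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```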
endu
Hi Chin Keong Ang, QwQ-32B sounds impressive! Is an API available for it? If yes, we’d definitely look into including it, though we’re not sure how expensive it might be. Thanks for sharing! 😊